(This post continues the misplaced post below, so that post should be read first.)
"At p. 7, at the bottom of the second paragraph of Section `Removing the Element of Surprise` you seem to provide a sort of proof that the usage of FMs would tend to eliminate undesirable outcomes. I feel that the crucial passage of the argument is: `However, truly undesirable outcomes that are feasible for a civilization to avoid would strongly invite a VCO, but in such cases there could be no explanation as to why a VCO does not occur.` I could not grasp the logic here. In particular, why would that strongly invite a VCO? Also, words such as `strongly` and `overwhelmingly` suggest a probabilistic argument, while the general context would seem more that of a definite, yes/no type of argument. In any case, given the limited space you had, I could not find clear and convincing argumentation for either."
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
I again commend you, Tommaso, for another excellent set of observations. The argument is essentially intended to be probabilistic, but insofar as it is, it is a probabilistic argument of the strongest variety. In looking over what I wrote, I see one thing I might have changed to clarify this. The corresponding sentence in my paper could have instead read:
"This amazing effect would emerge because any outcome that a civilization with foreknowledge machines would wish to avoid, which would also be feasible for that civilization to avoid, [would overwhelmingly tend] ...not [to] emerge as viewer foreknowledge in the first place."
To see that the meaning of what I wrote and the meaning of the above sentence are nearly indistinguishable in the context of the extreme probabilities involved, let's look at an analogy. Remember the steamroller scene from Austin Powers? (YouTube it before reading further if you don't know it.) The guy clearly doesn't want to be flattened by the steamroller, as he is yelling "noooo!" Yet, even though he has plenty of time to avoid the steamroller, he ends up flat.
That scene provides a good analogy because the man is faced with an outcome that he wishes to avoid, which is also feasible for him to avoid. Now, could what happened to him in the movie happen to anyone in real life? Yes, such an event is logically possible. Accordingly, one can, at most, make a probabilistic argument to conclude that it would overwhelmingly tend not to happen. However, most people would say that such a thing simply could not happen, and leave it at that--after all, the fact that it is so unrealistic and so absurd as to be essentially impossible explains why it is so funny.
Now, with that said, "essentially impossible" and "impossible" are two different things. You were very right to characterize the argument I seemed to be presenting as a probabilistic one. As such, the words "could not emerge as viewer foreknowledge in the first place" are incongruent, since they signal a deductive argument.
However, there is another way to look at this. A civilization cannot see an outcome in viewer foreknowledge that they will ultimately wish to avoid, and that will be feasible for them to avoid, because the combination of those two things means they would act to avoid that outcome. But if they did so act and did avoid the outcome (because avoiding it had indeed been feasible), then they must not have received viewer foreknowledge of the avoided outcome in the first place. So, in this context, the statement "any outcome that a civilization with foreknowledge machines would wish to avoid, which would also be feasible for that civilization to avoid, could not emerge as viewer foreknowledge in the first place" is actually true, by the deductive chain just described. I now remember that this is why I chose that wording, rather than something like the wording of the modified sentence above (which I also considered).
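If it helps, the deductive reading can be put in the form of a little consistency filter. The following is a minimal toy sketch in Python (entirely my own construction for this reply; the Outcome class, the self_consistent function, and the example outcomes are all hypothetical, not anything from the paper): an outcome can appear as viewer foreknowledge only if foreseeing it would not cause the civilization to prevent it.

    # Toy model of the self-consistency argument above. Illustrative
    # sketch only; names and example outcomes are my own inventions.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Outcome:
        name: str
        unwanted: bool    # would the civilization wish to avoid it?
        avoidable: bool   # is avoiding it feasible for them?

    def self_consistent(outcome: Outcome) -> bool:
        # A foreseen outcome is self-contradictory exactly when seeing
        # it would cause the civilization to successfully prevent it.
        return not (outcome.unwanted and outcome.avoidable)

    candidates = [
        Outcome("deflectable asteroid strike", unwanted=True, avoidable=True),
        Outcome("distant supernova", unwanted=True, avoidable=False),
        Outcome("bumper harvest", unwanted=False, avoidable=True),
    ]

    # Only self-consistent outcomes can ever emerge as viewer
    # foreknowledge; the deflectable strike is excluded deductively.
    print([o.name for o in candidates if self_consistent(o)])
    # -> ['distant supernova', 'bumper harvest']

The point of the sketch is only that the exclusion is a matter of logical consistency rather than probability: the unwanted-but-avoidable outcome is removed by definition, not by chance.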
Your skills of observation are keen: there is indeed an issue here, and I will endeavor to sort it all out.
After all of this clarification, I do not know how much of your question remains, so I will wait for you to absorb my two posts and ask any further questions before continuing.
Thank you very much, Tommaso, for writing to me to seek clarification. I look forward to another interaction with you when you can find the time, and I will read and rate your paper, as well as comment about it on your page soon.
Warmly,
Aaron