Thanks. I will take a look at your essay. Unfortunately, I do not really understand the meaning of:

"I noticed that you did not mentioned how these probabilities can be distinguished as such without defining aspects of certainty"

There is probably something being lost in translation here, but what exactly do you want me to distinguish probabilities from, and what do you mean by "aspects of certainty"? If you can clarify, then I will do my best to address your concerns.

Hi Matthew,

Your essay is too technical for me to discuss in detail, given my lack of expertise in that field. But it is one of the advantages of the contest that we can spot something interesting to learn. Nevertheless, I would like to address some general and, in my subjective opinion, important issues.

Jochen Szangolies complains about "troll votes", but we know that the contest is not only a scientific event but also a kind of game where players rate the opposing players. Jochen tries to counteract non-honest appraisals. But how? By also using non-honest appraisals, albeit in good will. I think that all appraisals and their sum can be explained in terms of the degree of belief in entrants' own concepts, independent of any evidence (not quite a Bayesian interpretation?). People are extremely stubborn in their beliefs. So I appreciate your answer to Hoang: "I think the only honest thing to do is to admit that we don't have all the answers and to try and rigorously narrow down the possibilities by converting seemingly philosophical questions into concrete mathematical ones." To the mathematics I would necessarily add an experiment based on its predictions, or mathematics based on the experimental outcomes, depending on which comes first.

You say: 'the sense of "it" used in "it from bit" is different from the sense used in "bit from it".'

To me it is one of many proofs that physicists do not have a common language and are not able to agree on one. That would possibly mean an endless and pointless discussion.

As the noncontextuality issue is explained in the reply to Jochen's post, I would need only some clarification of the conclusion of your essay: "On the subjective Bayesian view, "it from bit" implies that probability theory needs to be generalized, which is in accord with the observation that quantum theory is a generalized probability theory." Logically, according to the latter part of the statement, we already possess such a generalization, so why do we still need it?

Best regards

    Regarding "troll votes" I am inclined to believe that the opinions of the judging panel hold a lot more weight than the community ratings, so I am not too worried about it. Also, troll voters probably give everybody's essays low ratings, so they probably all cancel out in the end.

    As for different senses of "it", I do think that physicists share a common language, which is the language of mathematics and empirical observation. However, when we are constructing speculative theories and explanations our views are colored by philosophical prejudice just as in any other area. Experiment will eventually determine who is right, but we need some idea of what the best directions to explore are in the meantime. It is important to note that "it from bit" is not an accepted principle of mainstream physics, but a speculative idea proposed by Wheeler. I think it is clear from his writings that it was not intended to mean "everything is discrete" or "everything is made of information", but rather "everything that appears to us to be real is a result of our interventions into nature". If people choose to interpret it a different way, either because they have not read Wheeler or because they prefer a more radical interpretation then they are entitled to do so. However, I chose to restrict attention to the principle actually proposed by Wheeler because otherwise the essay topic becomes too broad.

    Regarding quantum theory as generalized probability there are a couple of points to make. Firstly, whilst it is known that quantum theory can be viewed as a generalization of probability, not everyone views this as significant, preferring to treat it as just another dynamical theory of physics. Quantum theory can be viewed in multiple different ways and this is what leads to the whole debate over interpretations. However, if we can argue, on independent grounds, that a generalization of probability is to be expected, then the fact that quantum theory can be viewed in this way may take on new significance. Secondly, there is the issue of probability as a formal mathematical theory vs probability as a theory of how to reason in a world of uncertainty. It is pretty easy to write down mathematical axioms for probability, essentially it is a theory of sets of positive numbers that add up to one, but much harder to say what it has to do with the real world. Similarly, it is easy to say that quantum theory is a formal generalization of probability theory, but that does not tell us anything of foundational significance unless we can say why our reasoning about physical systems should obey that theory. My essay is really about how to fill that gap.
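    To make the contrast concrete, here is a minimal sketch (my own toy illustration, with invented numbers) of the classic Dutch book argument connecting the formal axioms to betting behaviour: an agent whose betting quotients on an exhaustive set of mutually exclusive events fail to sum to one can be made to lose for certain.

```python
def dutch_book_loss(quotients):
    """Guaranteed loss to an agent whose betting quotients on an
    exhaustive, mutually exclusive set of events sum to s != 1.

    If s > 1, the bookie sells the agent a unit bet on every event:
    the agent pays s up front and collects exactly 1 whichever event
    occurs, losing s - 1. If s < 1, the bookie buys every bet instead,
    and the agent loses 1 - s. Either way the sure loss is |s - 1|,
    independent of the outcome."""
    s = sum(quotients)
    return round(abs(s - 1), 12)  # rounded to absorb float noise

print(dutch_book_loss([0.5, 0.4, 0.3]))  # quotients sum to 1.2: sure loss 0.2
print(dutch_book_loss([0.2, 0.3, 0.4]))  # quotients sum to 0.9: sure loss 0.1
print(dutch_book_loss([0.25, 0.75]))     # coherent quotients: no sure loss
```

    Only coherent quotients, those satisfying the probability axioms, escape a sure loss; that is the operational content behind "sets of positive numbers that add up to one".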

    Matthew ,

    You are right, the opinions of the judging panel hold a lot more weight; however, that applies only to finalists. Never mind. A perfect system does not exist.

    For you and me it is clear from Wheeler's writings that it was not intended to mean "everything is discrete" or "everything is made of information", but rather "everything that appears to us to be real is a result of our interventions into nature". But I have read all the essays in the contest and I can assure you that we are in the minority.

    The language of mathematics and empirical observation is really a beautiful common language. E.g. the mathematical formulations of quantum mechanics give a rigorous description, and nevertheless we have got so many interpretations...

    Thanks for the clarifications.

    Regards

    Dear Matthew,

    Very nice essay with a lot of reductio ad absurdum type arguments. As a realist myself, let me play Wheeler's game of twenty questions with you...

    My chosen word is "non-existence".

    It is clear that if you start with questions such as, "Is it a living object?" No. "Is it here on earth?" No. "Is it red?" No. "Is it round?" No, etc., you will never get the answer "Yes" and you must fail.

    That being the case, I suspect that the first question, the question at "the very bottom" (Wheeler), that which "lies at the ontological basement" (Paul Davies), that which must first be asked and to which we must first get a No or Yes answer depicted by the binary digits 0 and 1, will be: is it existing (1) or not-existing (0)? It is after you get the answer depicted 1 that you then continue. Hope

    I explore the meaning of that first question and its digital answers (Yes/No) here, agreeing with Julian Barbour that the 0 and 1 cannot be abstract symbols but must stand for something ontologically concrete.

    But you redeem the situation somewhat in your conclusion, "We have arrived at the conclusion that noncontextuality must be derived in terms of an analysis of the things that objectively exist. This implies a realist view of physics..."

    Best regards,

    Akinbo

      ``It is clear that if you start with questions such as, "Is it a living object?" No. "Is it here on earth?" No. "Is it red?" No. "Is it round?" No, etc., you will never get the answer "Yes" and you must fail.''

      Why?

      To clarify my position, I want to make it clear that I am also a realist, as you can tell from the last section and conclusion of my essay. I am just trying to argue for this in a different style --- one that I hope is more effective against anti-realists. One can develop all sorts of a priori arguments against "it from bit" based on the idea that we should be realists. However, if you are an anti-realist then your response to this would be "What do I care? I am an anti-realist so I do not buy these arguments". To argue effectively against this, one has to start from a position that an anti-realist can support and then argue for realism on those grounds. Now, "it from bit" was posited by Wheeler as a foundational principle within a broadly anti-realist or neo-Copenhagen framework. If we can show that this principle cannot do the work required of it without being backed up by a realist conception of physics, then that should be a much more compelling argument for an anti-realist than any a priori argument for realism. That is the sort of argument I was trying to construct.

      Dear Matthew,

      I welcome your essay not only because it is written professionally, but because I find in it something very important for my point: the right physical science cannot be built without realism. I am hopeful hearing this from professionals (it is true, there are not too many of them at present, or they are afraid to talk openly!). Let us connect your realism with your colleague Ben Dribus' demand to return to a causality principle, taking into account also Lee Smolin's conclusion about the necessity of finding a weightier interpretation of QM phenomena; then we will come to one complex approach to how to reconstruct physics. I hope you will find some useful things on this matter in my essay, and then we can continue the talk if you see it as reasonable. I appreciate your work on (9)

      Regards,

      George

      Essay

      Dear Matthew,

      I enjoyed the first part of your essay, but I couldn't follow the second part, where you use a language that is unusual for me, although I have my own baggage of philosophy of probability. As you know I am a Bayesian, but my way of thinking seems to differ from yours more than from that of a frequentist. I really cannot capture your meaning of "context".

      For me things are quite simple. Probability theory is an extension of logic; precisely, it is the algebra of probable inference of Richard T. Cox. I don't care about games where the player can decide whether to play or not: I just consider the situation of relevance for physics (and not for finance), where the player has no choice but to play. Then probability theory is the set of rules for making a rational choice starting from a belief. The probabilities always depend on the context, and the agent knows (or has a belief about) the context. To be precise: there exists a joint probability for the full context, and all local events are marginals. The agent uses a Theory in forming his belief. A Theory is a set of rules that associates a joint probability to a full context. The only real things are the data and the procedure to build up the context. Everything else is subjective. End of story.
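      To make this concrete, here is a minimal sketch (toy numbers of my own, not from the post) of a joint probability assigned to a full context, with the local events recovered as marginals:

```python
# A toy joint probability over a "full context": two binary local
# events a and b, with probabilities P(a, b) summing to one.
joint = {
    (0, 0): 0.4, (0, 1): 0.1,
    (1, 0): 0.1, (1, 1): 0.4,
}
assert abs(sum(joint.values()) - 1.0) < 1e-12

# Each local event is a marginal of the joint distribution.
p_a = {a: sum(p for (x, _), p in joint.items() if x == a) for a in (0, 1)}
p_b = {b: sum(p for (_, y), p in joint.items() if y == b) for b in (0, 1)}
print(p_a, p_b)  # each local outcome gets marginal probability 0.5
```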

      Sorry, I may look naïve, I don't have your philosophical training, but I like synthesis for starting productive discussions.

      Regarding realism, I personally find "philosophically schizophrenic" your last sentence:

      "A subjective Bayesian analysis of noncontextuality indicates that it can only be derived within a realist approach to physics. At present, this type of derivation has only been carried out in the many-worlds interpretation, but I expect it can be made to work in other realist approaches to quantum theory, including those yet to be discovered."

      It looks funny that you say "other realist approaches", as if you take "many-worlds" as a realistic one!

      With my best regards and wishes

      Mauro

        Hi Mauro,

        I am flattered that you think I have philosophical training. I don't. I just read a lot of books about the philosophy of probability.

        If you are a follower of Cox then it is definitely true that there is a wide gap between your position and mine. I am a subjective Bayesian in the vein of Ramsey, de Finetti, Savage, Jeffrey et al., and I think that Cox's derivation of probability theory is one of the silliest things I have ever seen. Debating the relative merits of the two approaches could occupy a lot of space, so I will confine myself to a couple of comments.

        Firstly, Cox's approach contains a lot of arbitrariness. For example, he starts from the idea that degrees of belief have to be represented by real numbers, with no real justification other than simplicity. Why do they have to be totally ordered rather than just partially ordered? Weak analogies with measuring distance with a ruler just don't cut it for me, especially since the approach does not explain how one would construct an analogous device for measuring someone's belief that would yield a real number.

        Secondly, and relatedly, I believe that a viable approach to the foundations of probability has to be operational, i.e. it must say what things in the world correspond to probabilities and how to measure them. Subjective Bayesianism does this, i.e. it explains how to measure probabilities in terms of an agent's actions, but no other approach to probability really does.

        It is a bit complicated to explain why I think operationalism is needed here given that I am not an operationalist. Indeed, I don't actually think that probabilities ultimately should be defined in a purely operational way. It is just that, when you are confused about why a theory works, i.e. you cannot quite derive the results you need to justify the way it is applied, then it is a good idea to try to analyse the problematic concept in terms of something else and then use agreed upon facts about that other thing to see if you can find a better justification. Directly measurable things are the type of things about which we have a lot of agreed upon facts that anyone can verify, so operational definitions are the most useful for this purpose. I don't view operational definitions as "the" definition of the concept in question, but they provide a very useful rigging when there is a controversy to be resolved.

        As an aside, this is how I reconcile Einstein's approach to special relativity with his later statements on physics. It is not that he wanted to define spacetime operationally, but rather that he knew something had to change about the nature of space and time. The concepts of space and time come in a tight package with all the rest of the concepts of classical physics and it is very difficult to see how to unpick that package when you want to make some fundamental change. One way of getting around this is to redefine the problematic concepts, temporarily, in an operational fashion. However, after we are finished we can go back to being straightforwardly realist, e.g. viewing the structure of spacetime as the fundamental thing that accounts for the way that light rays behave rather than the other way around.

        It is the same with probability. We can't agree why statistics works so there must be something wrong with our usual concepts and derivations. However, probabilities are tied up with the whole theory in a tight package so it is best to temporarily define them in terms of something directly measurable. By the way, in the context of quantum theory, I think this is what Lucien means when he says that we should adopt an "operational methodology" without necessarily being operationalists.

        Regarding the meaning of "context", I presume you understand that in quantum theory I intend it to be synonymous with the choice of measurement. In general, a context is the thing that determines the set of bets that can be jointly resolved. Now, of course, if we already have probability theory then we could say that there is a probability for each context and then a conditional probability for each measurement outcome given the context. Multiply the two together and you have a joint probability distribution over contexts and outcomes, which is just an ordinary classical distribution. However, the point is that we are trying to derive probability theory rather than assuming it so we have to ask what would force our beliefs about the context to be described by a classical probability distribution. I suppose you could write down an exhaustive list of all contexts and then allow bets to be made on the context as well as the measurement outcomes. Then you could apply a Dutch book to the bets on context. That would be reasonable in the 20 questions game the way I have described it in which a third party is doing the questioning. However, I also want to allow for the possibility that the bookie might be the person choosing the context and they might choose the context adversarially after you have announced your probabilities (or similarly it might be you choosing the context after making your bets and putting the bookie at a disadvantage). It might have been clearer if I had described things this way in the essay. In this case, the choice of context is not something that you can assign a probability to. Instead, you have to do a worst case analysis and hedge against all possible contexts. This type of setup is the Bayesian way of fleshing out what it means for the choice of context to be a "free choice" that we cannot assign probabilities to. Practically it just means that it might be determined adversarially so we have to do a worst case analysis.
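        To make the worst-case analysis concrete, here is a toy sketch (the numbers and names are invented purely for illustration): the agent announces beliefs, the bookie then picks the context adversarially, so the agent hedges by minimizing the worst case rather than averaging over contexts.

```python
# Toy numbers: the agent's expected loss in each context, for two
# candidate belief assignments (both hypothetical).
losses = {
    "beliefs_A": {"context_1": 0.3, "context_2": 0.9},
    "beliefs_B": {"context_1": 0.5, "context_2": 0.5},
}

# The bookie chooses the context adversarially after the probabilities
# are announced, so no probability can be assigned to the context...
worst = {b: max(ctx.values()) for b, ctx in losses.items()}

# ...and the agent instead hedges, minimizing the worst-case loss.
hedged = min(worst, key=worst.get)
print(hedged, worst[hedged])  # prints: beliefs_B 0.5
```

        Beliefs_A does better on average over contexts but is exploitable in context_2; the hedged choice is the Bayesian rendering of the context being a "free choice" we cannot assign probabilities to.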

        Regarding many-worlds, I do not currently think it is a "realistic approach", but hopefully we can agree that it is a realist one (important distinction there). Although I do not advocate the theory, it remains the only interpretation of quantum theory in which a fully subjective Bayesian derivation of the Born rule along the lines I suggest has been carried out, so it would be unfair of me not to mention it. However, it is not too surprising that they are able to do this, since they start from the premise that the quantum state is real and that is the thing that carries all the information about the probabilities in the first place. It would not be too hard to derive classical probability theory if you started from the premise that reality was described by an object isomorphic to a probability distribution, and I hope we would all reject such a derivation as silly. As it happens, I am toying with a version of many worlds in which the wavefunction is not real but I still think you can derive the Born rule. I am not taking this too seriously, since it is just meant as a counterexample to the PBR theorem showing that you can have a realist theory with an epistemic quantum state if you broaden the ontology in some way. I don't think many-worlds is the best way of broadening the ontology, but one has to start somewhere and it is a more concrete suggestion than vague talk about retrocausality or "relational degrees of freedom" that you might hear from me and Rob Spekkens on other days.

        Dear Matt,

        In view of your deep knowledge of what (non-)contextuality means in the different approaches to quantum theory, you may be interested in my approach, which is quite orthodox (in the Bohr sense) but pushes the meaning of observables towards graphs, finite geometries and algebraic curves (you would call them epistemic concepts).

        Going to your essay, and the related publications, I realize how deep the problem is and I certainly learn a lot by reading you.

        Best wishes.

        Michel

        ps/ I completely agree with Jochen Szangolies about the troll votes.

        Dear Matthew

        I need to re-read your answer more closely, though there are many points that will remain open. We would need a real conversation in person. I hope we will someday be able to return to the old days of our first Cambridge meeting.

        The only thing I want to stress here, where it seems that I may have been misunderstood, is that for me too the context is a "parameter" for which it makes no sense to provide a probability. The second simple thing is that I remain convinced that all your line of research is motivated by the realist's epistemic interpretation of probability. Even though I admit that we cannot live without a personal interpretation, I try to stay above interpretations, looking just for the minimization of the axioms and seeking clear relations between different axiomatizations and theories: things that have a much more general value than pursuing just a single viewpoint. Instead of marking the differences between us, as a rule we should try to understand the relations and the common points.

        It is always a pleasure to discuss with you.

        My best regards

        Mauro

        Dear Matthew,

        I see you are too busy with interesting discussions.

        For this reason, I am just asking you to check my essay at your convenience (see my post above) and to write your opinion in just a few words (it will be valuable for me, coming from a professional scientist). Let me say that it does not contain any quantitative reasoning, and it will take only a short time to study.

        With best wishes,

        George

          Dear George,

          There are a lot of interesting essays to read and unfortunately I do not have time to read all of them. I hope you will understand that I cannot guarantee to read someone's essay just because they mention it here. If the abstract looks interesting to me then I will read it and rate it.

          This thread is supposed to be for comments about my essay, so if you have something specific to say about it then I would be glad to respond.

          Matthew,

          Given the time and the wits to evaluate over 120 more entries, I have a month to try. My seemingly whimsical title, "It's good to be the king," is serious about our subject.

          Jim

          Dear Matthew! Let me note that my previous post mainly concerns your work (1 Jul, see above). I did not get a response, even a small thanks, which would have been enough for me. Of course, for each of us our own work comes first; that is clear to everyone.

          Regards

          George

          Dear Matt,

          I've read your essay and many of your comments. I particularly appreciate your sober approach: "All too often people take a very strong stance on the interpretation of quantum theory and see their job as defending their view against all comers."

          As part of writing my essay, I studied ET Jaynes "Probability Theory: the Logic of Science". Your "Dutch book" discussion is a fascinating follow-on to his work. As you know he viewed it more as a theory of how to reason than as formal mathematical theory. He said, "...we choose a model for Bayesian analysis; this amounts to expressing some prior knowledge-or some working hypothesis-about the phenomenon being observed. [...] If the extra hypotheses are true, then we expect that the Bayesian results will improve on maximum entropy; if they are false, the Bayesian inferences will likely be worse. On the other hand, maximum entropy is a non-speculative procedure, in the sense that it involves no hypotheses beyond the sample space and the evidence that is in the available data. Thus it predicts only observable facts [...] rather than values of parameters which may exist only in our imagination."

          One question I'd like to ask is "where one starts" to interpret QM. For example in an earlier essay, I 'derive' Born probability for the wave function from the partition function. In your mind is this a legitimate starting place, or must one go all the way back to tossing coins (or placing bets)? As you note, "there's nothing in logic that tells you what premises you have to start with."

          In response to Mauro you declare "context" is synonymous with the choice of measurement. Would it be possible for you to say what you think should come to mind when one hears the word 'non-contextual' in QM? [As Einstein said: "as simple as possible, but no simpler".] I read your P(E|B) = P(F|B'), which, as you note, requires acceptance of counterfactual(ity). Nevertheless, I invite you to try to provide a simple 'standalone' definition of non-contextuality.

          Like you, I'm a realist who feels the need for 'quantum stuff'. My previous essay, The Nature of the Wave Function, provides my general approach, and my current essay, I believe, strengthens those arguments.

          But Bell gets in the way of being a 'local realist' and that is where my instincts and intuition lead me. So I'm interested in all analyses of Bell that are less than worshipful. In particular, Gordon Watson has an essay in this contest that questions a step in Bell's logic. An integral tends to fold things together that may not fold so well in a sum. Gordon re-expresses Bell's integral as a sum and finds a problem. He may be making a simple mistake, but, if so, I don't see it. I would hope you would look at his first two pages (and perhaps our discussion in the comments) and give your opinion. In my opinion, much of this contest is based on Bell's non-locality, and I've asked others the following question: if Bell had never lived, what is it in experimental data or quantum theory that would prove non-locality? So far no one has answered this. It seems to me that the entire move away from local realism is based solely on his inequality.

          Of course I invite you to read my essay and comment, but I'm most interested in your opinion of Gordon's initial results. Thanks for the long comments you have contributed to various blogs.

          My best regards,

          Edwin Eugene Klingman

            Regarding Jaynes, I made some less than sympathetic comments about the Cox approach to probability in my response to D'Ariano and of course Jaynes relies on that approach as his foundation. As with Cox, I find Jaynes a little too simple minded for my taste, as he ducks some major issues with the choice of prior and makes too many arguments based on simplicity. Of course, Jaynes was working in a time when Bayesian methodology was disavowed by the vast majority of statisticians and scientists, so he has to be read in context. He was mainly concerned with Bayesian statistics as a practical tool that could give greater insight than classical statistics and with ways of rendering the theory practical in a time before computer simulation was available. In this he succeeded admirably. In those days, the two issues of how we should think about probability and whether we should adopt Bayesian or classical statistics were often conflated. Now that Bayesian statistics has a large community of supporters, we can afford to think more carefully about the former problem, and I think we would do better to move beyond the Jaynes-Cox maxent dogma. I should note that the subjective approach I favour predates Jaynes by several decades, but its founders were a bit more careful about conceptual issues.

            ``One question I'd like to ask is "where one starts" to interpret QM. For example in an earlier essay, I 'derive' Born probability for the wave function from the partition function. In your mind is this a legitimate starting place, or must one go all the way back to tossing coins (or placing bets)? As you note, "there's nothing in logic that tells you what premises you have to start with."''

            If you want to come up with a fullblown interpretation of quantum theory then I think you need to start from a well-defined ontology. You need to say what things would exist in reality if quantum theory were literally true. This sounds like a realist position and it is naturally interpreted that way, but it would be OK by me if you want to say that the only things that exist are measurement outcomes or something along those lines, so long as you have started from a clear statement to that effect and deal with the conceptual problems entailed by that. You need to start from such a clear statement if you want to derive the quantum probability rule because you need to say what the quantum probabilities are probabilities of. Depending on your choice, it may be that you do not need to view quantum probabilities as constituting a different theory from classical probability, so you may not need to go all the way down to the foundations of probability. For example, it is like this in Bohmian mechanics where the probabilities are just ordinary classical probabilities similar to those of statistical mechanics.

            If you don't start from a clear statement about reality then deriving the Born rule becomes a mathematical game with no clear conceptual meaning and of course there are already a lot of formal mathematical reasons for adopting the Born rule.

            On the other hand, it may be that you are happy not to have a fullblown interpretation of quantum theory at the moment but still think that some argument you have come up with is suggestive as a way to proceed. Most of the best work on the foundations of quantum theory is probably going to be like that until we are lucky enough to hit on the right ideas. Therefore, suggestive arguments are fine with me so long as you are honest about their status.

            Regarding noncontextuality, there are two senses of this word in quantum theory, which can cause some confusion. I am using it in the sense of "noncontextual probability assignment", which simply means that the same projector receives the same probability regardless of which measurement containing it is performed. There is also the sense of "noncontextual hidden variable theory", which is ruled out by Kochen-Specker and related results. Some people say that the impossibility of a noncontextual hidden variable theory should be shortened to "quantum theory is contextual" in the same way that we say "quantum theory is nonlocal" as a shorthand for the implications of Bell's theorem. This leaves us with the awkward statement "quantum theory is contextual, but it has noncontextual probability assignments", but in fact I think this is a rather good way of stating what the central puzzle of this area is. If the world really is described by a contextual theory then it is very puzzling that the quantum probabilities are noncontextual. If you just put an arbitrary probability distribution over a set of contextual states then generically you would get contextual probability assignments. Therefore, a fine tuning would be required to get exactly the quantum probability rule. This is similar to the fine tuning required to prevent signalling if the world is really described by a nonlocal hidden variable theory. The fine tuning is the real issue and what it indicates to me is that we need to look for an alternative kind of ontology to which the fine tuning does not apply.
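            To illustrate the first sense, here is a minimal sketch (my own toy example, not from the essay): the Born rule assigns the same probability to a shared projector in two different measurement contexts.

```python
import math

# A state |psi> in a 3-dimensional real Hilbert space (unit norm).
psi = [1/3, 2/3, 2/3]

def born(v):
    """Born probability of the rank-1 projector onto unit vector v."""
    return sum(vi * pi for vi, pi in zip(v, psi)) ** 2

e0, e1, e2 = [1, 0, 0], [0, 1, 0], [0, 0, 1]
plus  = [0, 1/math.sqrt(2),  1/math.sqrt(2)]
minus = [0, 1/math.sqrt(2), -1/math.sqrt(2)]

ctx1 = [e0, e1, e2]       # one complete measurement containing |e0><e0|
ctx2 = [e0, plus, minus]  # a different complete measurement containing it

p1, p2 = born(ctx1[0]), born(ctx2[0])
print(p1, p2)  # the shared projector gets the same probability in both
assert p1 == p2
```

            A generic assignment of probabilities to (state, context) pairs would not have this property; the Born rule's noncontextuality is exactly the coincidence the paragraph above calls puzzling.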

            I haven't read Gordon's essay, but I have rather had my fill of skepticism about Bell's theorem this year and the way you describe it does not sound promising. We can already write Bell's theorem in terms of sums rather than integrals once we realize that the case of a stochastic hidden variable theory can be reduced to that of a deterministic one by convexity. Once we have done that, we then realize that the only thing that matters about the hidden variables for the purposes of the argument is what measurement results they predict. Since there are only a finite number of lists of possible measurement outcomes for most Bell inequality setups we are at this point dealing with finite sets so we already have sums rather than integrals. This cannot possibly be the place where things go wrong.
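            As a sketch of that reduction (my own minimal illustration), the CHSH case becomes a finite sum over the 16 deterministic local strategies, which can be enumerated directly:

```python
from itertools import product

# Once stochastic hidden-variable models are reduced to deterministic
# ones by convexity, all that matters is the list of predicted outcomes:
# a strategy fixes +/-1 outcomes (a0, a1) for Alice's two settings and
# (b0, b1) for Bob's two settings.
def chsh(a0, a1, b0, b1):
    return a0*b0 + a0*b1 + a1*b0 - a1*b1

best = max(abs(chsh(*s)) for s in product([-1, 1], repeat=4))
print(best)  # 2: the local deterministic bound (quantum theory reaches 2*sqrt(2))
```

            Since the bound holds for every one of the finitely many deterministic strategies, it holds for every convex mixture of them, so nothing hinges on integrals versus sums.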

            Of course, there are still ways around Bell's theorem that involve questioning the basic setup rather than the mathematical result itself. I think it is here where we will find the solution. Like Ken Wharton and Huw Price, I am rather partial to the idea that retrocausal theories should be investigated, but there are other possibilities.

            Dear Matt,

            Thanks for the answers and explanations. I'm not familiar with Cox, so did not make the connection. Nor am I deeply focused on probability, but instead on the underlying ontology. I may return with a comment or question on the partition function as the basis for a Born interpretation of the wave function, after digesting your answer. You say, "If you want to come up with a fullblown interpretation of quantum theory then I think you need to start from a well-defined ontology. You need to say what things would exist in reality if quantum theory were literally true." I do this, and use the partition function to explain why the wave function yields probability. I guess you would call this a suggestive argument, but it works, and you seem to be happy with that as a start.

            You say, "Once we have done that, we then realize that the only thing that matters about the hidden variables for the purposes of the argument is what measurement results they predict." The measurement results predicted by my model yield the cosine result Bell said is impossible -- as long as Alice and Bob make independent choices!

            I would hate to steer you away from Gordon's essay because I misstated or badly summarized it. I understand "had your fill", so I won't push the issue, but hope you change your mind.

            Your remarks about Ken's approach surprise me. Perhaps I should re-read your exchange with him.

            I'm not sure whether you answered the question as to whether non-locality would be obvious had Bell never invented his inequality. If you'd care to clarify it, I'm still interested.

            Thanks again for your detailed answers to my questions.

            Best wishes,

            Edwin Eugene Klingman

            "Your remarks about Ken's approach surprise me. Perhaps I should re-read your exchange with him."

            Well, I am critical not because I believe that his models are a bad idea, but because I believe that the conceptual framework needs to be developed more carefully. After all, there are ways of writing down classical theories that make them look like they involve weird causality, such as the Wheeler-Feynman absorber theory, but we know in that case that there are alternative ways of writing things down that have conventional causality. Therefore, it is not enough to say "look, I have derived a model starting from a block picture of spacetime". One also has to prove that there are obstacles to understanding the theory in any other way. To achieve this we need an analysis of the possibilities for such models at least as rigorous as Bell's analysis of theories with conventional causality. For this reason, I am putting his work in the "suggestive argument" category for now and, as I said, there is nothing wrong with being in that category.

            "I'm not sure whether you answered the question as to whether non-locality would be obvious had Bell never invented his inequality. If you'd care to clarify it, I'm still interested."

            Sorry, I didn't realize that this was one of your questions. My answer is a definitive no. Without Bell's analysis, I think the EPR reasoning would stand (or better, Einstein's earlier arguments, which are less confusingly tied up with the uncertainty principle) and the best response would have been to look for a local hidden variable theory. In fact, I think there are only a few results that point to fundamental difficulties in interpreting quantum theory. These are:

            - Bell's theorem

            - Contextuality (This starts with Kochen-Specker, but I prefer Spekkens' more general definition)

            - Results about the reality of the wavefunction (PBR theorem et al.)

            - Excess baggage theorems (results about how the size of the ontic state space must scale exponentially with the number of systems)

            Each one of these theorems points to an explanatory gap. Namely,

            - Ontological models must be nonlocal, but they must also be nonsignalling.

            - Ontological models must be contextual, but the probabilities must be noncontextual.

            - Ontological models must have the wavefunction as part of their ontology, but many quantum phenomena are most naturally interpreted in terms of an epistemic wavefunction.

            - n qubits must carry O(2^n) bits of information, but however you define the operational information content of n qubits, it comes out as O(n) bits.
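            The scaling gap in the last point can be made concrete with a simple parameter count (a standard illustration of my own, not tied to any particular excess baggage theorem): a pure state of n qubits requires 2^n complex amplitudes to specify, while results such as the Holevo bound limit the classical information retrievable from n qubits to n bits.

```python
# Real parameters needed to specify a pure n-qubit state: 2^n complex
# amplitudes give 2 * 2^n real numbers, minus 2 for normalization and
# the irrelevant global phase.
def ontic_parameters(n):
    return 2 * 2**n - 2

# Compare against the O(n) operational information content (Holevo bound).
for n in [1, 2, 10, 30]:
    print(n, "qubits:", ontic_parameters(n), "real parameters vs", n, "retrievable bits")
```

            The exponential column is what the excess baggage theorems say any ontological model must carry; the linear column is all that any measurement can ever extract.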

            All other phenomena that I know of can be modelled quite straightforwardly so long as one does not stick to the dogma that reality must be described by particles travelling along definite trajectories. This is why I am not impressed by arguments based on basic interferometry experiments like the double slit.

            What the explanatory gaps indicate to me is that there is something wrong with our basic framework for realist models of quantum theory. The right framework, whether that involves retrocausality or some other exotic thing, should close all these gaps, e.g. it should reveal that quantum theory is not nonlocal after all and similarly for the other gaps.

            Matt,

            Thanks again for the reply. I repeat that I am very impressed with your sober quality and the well-thought-out replies you give. I very much appreciate your answering the question about the absolutely central place Bell's inequality holds for 'non-locality'. That agrees with my opinion on the issue and, despite your weariness of attacks on Bell, I've read too many 'almost convincing' analyses of problems with Bell to conclude that he is bulletproof. I find your last answer gratifyingly succinct and complete; a mini-paper within a comment. Probably the best answer I've received in years, so thanks again.

            If you want exotic, read my two essays. I strongly resist non-locality, but I believe the "mechanism" is built into my current essay, I just do not wish to invoke it! To see how it applies to QM, you must read my previous essay.

            Edwin Eugene Klingman