Dear Matthew,

I enjoyed the first part of your essay, but I couldn't follow the second part, where you use a language that is unfamiliar to me, even though I carry my own baggage of philosophy of probability. As you know, I am a Bayesian, but my way of thinking seems to differ from yours more than it does from that of a frequentist. I really cannot grasp your meaning of "context".

For me things are quite simple. Probability theory is an extension of logic; precisely, it is Richard T. Cox's algebra of probable inference. I don't care about games where the player can decide whether or not to play: I just consider the situation relevant for physics (and not for finance), where the player has no choice but to play. Then probability theory is the set of rules for making a rational choice starting from a belief. The probabilities always depend on the context, and the agent knows (or has a belief about) the context. To be precise: there exists a joint probability for the full context, and all local events are marginals. The agent uses a Theory in forming his belief. A Theory is a set of rules that associates a joint probability with a full context. The only real things are the data and the procedure for building up the context. Everything else is subjective. End of story.

Sorry, I may look naïve; I don't have your philosophical training, but I like synthesis as a starting point for productive discussions.

Regarding realism, I personally find your last sentence "philosophically schizophrenic":

"A subjective Bayesian analysis of noncontextuality indicates that it can only be derived within a realist approach to physics. At present, this type of derivation has only been carried out in the many-worlds interpretation, but I expect it can be made to work in other realist approaches to quantum theory, including those yet to be discovered."

It looks funny that you say "other realist approaches", as if you take many-worlds to be a realistic one!

With my best regards and wishes

Mauro

    Hi Mauro,

    I am flattered that you think I have philosophical training. I don't. I just read a lot of books about the philosophy of probability.

    If you are a follower of Cox then it is definitely true that there is a wide gap between your position and mine. I am a subjective Bayesian in the vein of Ramsey, de Finetti, Savage, Jeffrey et al., and I think that Cox's derivation of probability theory is one of the silliest things I have ever seen. Debating the relative merits of the two approaches could occupy a lot of space, so I will confine myself to a couple of comments.

    Firstly, Cox's approach contains a lot of arbitrariness. For example, he starts from the idea that degrees of belief have to be represented by real numbers, with no real justification other than simplicity. Why do they have to be totally ordered rather than just partially ordered? Weak analogies with measuring distance with a ruler just don't cut it for me, especially since the approach does not explain how one would construct an analogous device for measuring someone's belief that would yield a real number.

    Secondly, and relatedly, I believe that a viable approach to the foundations of probability has to be operational, i.e. it must say what things in the world correspond to probabilities and how to measure them. Subjective Bayesianism does this, i.e. it explains how to measure probabilities in terms of an agent's actions, but no other approach to probability really does. It is a bit complicated to explain why I think operationalism is needed here given that I am not an operationalist. Indeed, I don't actually think that probabilities ultimately should be defined in a purely operational way. It is just that, when you are confused about why a theory works, i.e. you cannot quite derive the results you need to justify the way it is applied, then it is a good idea to try to analyse the problematic concept in terms of something else and then use agreed-upon facts about that other thing to see if you can find a better justification. Directly measurable things are the type of things about which we have a lot of agreed-upon facts that anyone can verify, so operational definitions are the most useful for this purpose. I don't view operational definitions as "the" definition of the concept in question, but they provide a very useful rigging when there is a controversy to be resolved.

    As an aside, this is how I reconcile Einstein's approach to special relativity with his later statements on physics. It is not that he wanted to define spacetime operationally, but rather that he knew something had to change about the nature of space and time. The concepts of space and time come in a tight package with all the rest of the concepts of classical physics and it is very difficult to see how to unpick that package when you want to make some fundamental change. One way of getting around this is to redefine the problematic concepts, temporarily, in an operational fashion. However, after we are finished we can go back to being straightforwardly realist, e.g. viewing the structure of spacetime as the fundamental thing that accounts for the way that light rays behave rather than the other way around.

    It is the same with probability. We can't agree why statistics works, so there must be something wrong with our usual concepts and derivations. However, probabilities are tied up with the whole theory in a tight package, so it is best to temporarily define them in terms of something directly measurable. By the way, in the context of quantum theory, I think this is what Lucien means when he says that we should adopt an "operational methodology" without necessarily being operationalists.

    Regarding the meaning of "context", I presume you understand that in quantum theory I intend it to be synonymous with the choice of measurement. In general, a context is the thing that determines the set of bets that can be jointly resolved. Now, of course, if we already have probability theory then we could say that there is a probability for each context and then a conditional probability for each measurement outcome given the context. Multiply the two together and you have a joint probability distribution over contexts and outcomes, which is just an ordinary classical distribution. However, the point is that we are trying to derive probability theory rather than assuming it, so we have to ask what would force our beliefs about the context to be described by a classical probability distribution.

    I suppose you could write down an exhaustive list of all contexts and then allow bets to be made on the context as well as the measurement outcomes. Then you could apply a Dutch book to the bets on context. That would be reasonable in the 20 questions game the way I have described it, in which a third party is doing the questioning. However, I also want to allow for the possibility that the bookie might be the person choosing the context and they might choose the context adversarially after you have announced your probabilities (or similarly it might be you choosing the context after making your bets and putting the bookie at a disadvantage). It might have been clearer if I had described things this way in the essay.

    In this case, the choice of context is not something that you can assign a probability to. Instead, you have to do a worst case analysis and hedge against all possible contexts. This type of setup is the Bayesian way of fleshing out what it means for the choice of context to be a "free choice" that we cannot assign probabilities to. Practically, it just means that it might be determined adversarially, so we have to do a worst case analysis.
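    The contrast between the two setups can be sketched in a few lines. This is a toy illustration with made-up numbers (the loss values and context weights are mine, not from the essay): when a third party with a known bias chooses the context, the agent averages over contexts; when an adversarial bookie chooses after seeing the announced odds, the agent must hedge against the worst case, with no probability over contexts at all.

```python
# Toy sketch (hypothetical numbers): an agent has announced betting odds for
# the outcomes in each of two incompatible contexts. The resulting expected
# loss to the agent in each context is taken as given.
expected_loss = {"context_A": -0.10, "context_B": 0.25}

# Third-party questioner with a known bias: contexts get probabilities,
# and the relevant quantity is the average loss.
p_context = {"context_A": 0.5, "context_B": 0.5}
average_loss = sum(p_context[c] * expected_loss[c] for c in expected_loss)

# Adversarial bookie choosing the context after the odds are announced:
# no probability over contexts; do a worst-case analysis instead.
worst_case_loss = max(expected_loss.values())

print(average_loss)     # 0.075
print(worst_case_loss)  # 0.25
```

    The point is just that the second quantity is well defined without assigning any probability to the choice of context.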

    Regarding many-worlds, I do not currently think it is a "realistic approach", but hopefully we can agree that it is a realist one (important distinction there). Although I do not advocate the theory, it remains the only interpretation of quantum theory in which a fully subjective Bayesian derivation of the Born rule along the lines I suggest has been carried out, so it would be unfair of me not to mention it. However, it is not too surprising that they are able to do this, since they start from the premise that the quantum state is real and that is the thing that carries all the information about the probabilities in the first place. It would not be too hard to derive classical probability theory if you started from the premise that reality was described by an object isomorphic to a probability distribution, and I hope we would all reject such a derivation as silly. As it happens, I am toying with a version of many worlds in which the wavefunction is not real but I still think you can derive the Born rule. I am not taking this too seriously, since it is just meant as a counterexample to the PBR theorem showing that you can have a realist theory with an epistemic quantum state if you broaden the ontology in some way. I don't think many-worlds is the best way of broadening the ontology, but one has to start somewhere and it is a more concrete suggestion than vague talk about retrocausality or "relational degrees of freedom" that you might hear from me and Rob Spekkens on other days.

    Dear Matt,

    In view of your deep knowledge of what (non-)contextuality means in the different approaches to quantum theory, you may be interested in mine, which is quite orthodox (in Bohr's sense) but pushes the meaning of observables towards graphs, finite geometries and algebraic curves (you would call them epistemic concepts).

    Turning to your essay and the related publications, I realize how deep the problem is, and I am certainly learning a lot from reading you.

    Best wishes.

    Michel

    ps/ I completely agree with Jochen Szangolies about the poll-votes.

    Dear Matthew

    I need to re-read your answer more closely, though there are many points that will remain open. We would need a real conversation in person. I hope we will someday be able to come back to the old days of our first Cambridge meeting.

    The only thing I want to stress here, where it seems that I may have been misunderstood, is that for me too the context is a "parameter" to which it makes no sense to assign a probability. The second simple point is that I remain convinced that your whole line of research is motivated by the realist's epistemic interpretation of probability. Even though I admit that we cannot live without a personal interpretation, I try to stay above interpretations and look only for the minimization of the axioms, seeking clear relations between different axiomatizations and theories: things that have a much more general value than pursuing a single viewpoint. Instead of marking the differences between us, as a rule we should try to understand the relations and the common points.

    It is always a pleasure to discuss with you.

    My best regards

    Mauro

    Dear Matthew,

    I see you are too busy with interesting discussions.

    For this reason, I am just asking you to check my essay at your convenience (see my post above) and to write your opinion in just a couple of words (it would be valuable to me, coming from a professional scientist). Let me say that it contains no quantitative reasoning, so it will take only a short time to study.

    With best wishes,

    George

      Dear George,

      There are a lot of interesting essays to read and unfortunately I do not have time to read all of them. I hope you will understand that I cannot guarantee to read someone's essay just because they mention it here. If the abstract looks interesting to me then I will read it and rate it.

      This thread is supposed to be for comments about my essay, so if you have something specific to say about it then I would be glad to respond.

      Matthew,

      Given the time and the wits needed to evaluate over 120 more entries, I have a month to try. My seemingly whimsical title, "It's good to be the king," is serious about our subject.

      Jim

      Dear Matthew! Let me note that my previous post (1 Jul, see above) mainly concerns your work. I did not get a response, not even a small thanks, which would have been enough for me. Of course, for us our own work is most important; that is clear for everyone.

      Regards

      George

      Dear Matt,

      I've read your essay and many of your comments. I particularly appreciate your sober approach: "All too often people take a very strong stance on the interpretation of quantum theory and see their job as defending their view against all comers."

      As part of writing my essay, I studied E. T. Jaynes's "Probability Theory: The Logic of Science". Your "Dutch book" discussion is a fascinating follow-on to his work. As you know, he viewed it more as a theory of how to reason than as a formal mathematical theory. He said, "...we choose a model for Bayesian analysis; this amounts to expressing some prior knowledge-or some working hypothesis-about the phenomenon being observed. [...] If the extra hypotheses are true, then we expect that the Bayesian results will improve on maximum entropy; if they are false, the Bayesian inferences will likely be worse. On the other hand, maximum entropy is a non-speculative procedure, in the sense that it involves no hypotheses beyond the sample space and the evidence that is in the available data. Thus it predicts only observable facts [...] rather than values of parameters which may exist only in our imagination."

      One question I'd like to ask is "where one starts" to interpret QM. For example in an earlier essay, I 'derive' Born probability for the wave function from the partition function. In your mind is this a legitimate starting place, or must one go all the way back to tossing coins (or placing bets)? As you note, "there's nothing in logic that tells you what premises you have to start with."

      In response to Mauro you declare "context" is synonymous with the choice of measurement. Would it be possible for you to say what you think should come to mind when one hears the word 'non-contextual' in QM? [As Einstein said: "as simple as possible, but no simpler".] I read your P(E|B) = P(F|B'), which, as you note, requires acceptance of counterfactual(ity). Nevertheless, I invite you to try to provide a simple 'standalone' definition of non-contextuality.

      Like you, I'm a realist who feels the need for 'quantum stuff'. My previous essay, The Nature of the Wave Function, provides my general approach, and my current essay, I believe, strengthens those arguments.

      But Bell gets in the way of being a 'local realist' and that is where my instincts and intuition lead me. So I'm interested in all analyses of Bell that are less than worshipful. In particular, Gordon Watson has an essay in this contest that questions a step in Bell's logic. An integral tends to fold things together that may not fold so well in a sum. Gordon re-expresses Bell's integral as a sum and finds a problem. He may be making a simple mistake, but, if so, I don't see it. I would hope you would look at his first two pages (and perhaps our discussion in the comments) and give your opinion. In my opinion, much of this contest is based on Bell's non-locality, and I've asked others the following question: if Bell had never lived, what is it in experimental data or quantum theory that would prove non-locality? So far no one has answered this. It seems to me that the entire move away from local realism is based solely on his inequality.

      Of course I invite you to read my essay and comment, but I'm most interested in your opinion of Gordon's initial results. Thanks for the long comments you have contributed to various blogs.

      My best regards,

      Edwin Eugene Klingman

        Regarding Jaynes, I made some less than sympathetic comments about the Cox approach to probability in my response to D'Ariano, and of course Jaynes relies on that approach as his foundation. As with Cox, I find Jaynes a little too simple-minded for my taste, as he ducks some major issues with the choice of prior and makes too many arguments based on simplicity. Of course, Jaynes was working in a time when Bayesian methodology was disavowed by the vast majority of statisticians and scientists, so he has to be read in context. He was mainly concerned with Bayesian statistics as a practical tool that could give greater insight than classical statistics and with ways of rendering the theory practical in a time before computer simulation was available. In this he succeeded admirably. In those days, the two issues of how we should think about probability and whether we should adopt Bayesian or classical statistics were often conflated. Now that Bayesian statistics has a large community of supporters, we can afford to think more carefully about the former problem, and I think we would do better to move beyond the Jaynes-Cox maxent dogma. I should note that the subjective approach I favour predates Jaynes by several decades, but its founders were a bit more careful about conceptual issues.

        ``One question I'd like to ask is "where one starts" to interpret QM. For example in an earlier essay, I 'derive' Born probability for the wave function from the partition function. In your mind is this a legitimate starting place, or must one go all the way back to tossing coins (or placing bets)? As you note, "there's nothing in logic that tells you what premises you have to start with."''

        If you want to come up with a full-blown interpretation of quantum theory then I think you need to start from a well-defined ontology. You need to say what things would exist in reality if quantum theory were literally true. This sounds like a realist position and it is naturally interpreted that way, but it would be OK by me if you want to say that the only things that exist are measurement outcomes or something along those lines, so long as you have started from a clear statement to that effect and deal with the conceptual problems entailed by that. You need to start from such a clear statement if you want to derive the quantum probability rule because you need to say what the quantum probabilities are probabilities of. Depending on your choice, it may be that you do not need to view quantum probabilities as constituting a different theory from classical probability, so you may not need to go all the way down to the foundations of probability. For example, it is like this in Bohmian mechanics, where the probabilities are just ordinary classical probabilities similar to those of statistical mechanics.

        If you don't start from a clear statement about reality then deriving the Born rule becomes a mathematical game with no clear conceptual meaning and of course there are already a lot of formal mathematical reasons for adopting the Born rule.

        On the other hand, it may be that you are happy not to have a full-blown interpretation of quantum theory at the moment but still think that some argument you have come up with is suggestive as a way to proceed. Most of the best work on the foundations of quantum theory is probably going to be like that until we are lucky enough to hit on the right ideas. Therefore, suggestive arguments are fine with me so long as you are honest about their status.

        Regarding noncontextuality, there are two senses of this word in quantum theory, which can cause some confusion. I am using it in the sense of "noncontextual probability assignment", which simply means that the same projector receives the same probability regardless of which measurement that includes it is made. There is also the sense of "noncontextual hidden variable theory", which is ruled out by Kochen-Specker and related results. Some people say that the impossibility of a noncontextual hidden variable theory should be shortened to "quantum theory is contextual" in the same way that we say "quantum theory is nonlocal" as a shorthand for the implications of Bell's theorem. This leaves us with the awkward statement "quantum theory is contextual, but it has noncontextual probability assignments", but in fact I think this is a rather good way of stating what the central puzzle of this area is. If the world really is described by a contextual theory then it is very puzzling that the quantum probabilities are noncontextual. If you just put an arbitrary probability distribution over a set of contextual states then generically you would get contextual probability assignments. Therefore, a fine tuning would be required to get exactly the quantum probability rule. This is similar to the fine tuning required to prevent signalling if the world is really described by a nonlocal hidden variable theory. The fine tuning is the real issue, and what it indicates to me is that we need to look for an alternative kind of ontology to which the fine tuning does not apply.
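        The first sense, "noncontextual probability assignment", is easy to check numerically. Here is a small sketch (the particular bases and state are illustrative choices of mine, not from the essay): a qutrit projector |e0><e0| appears in two different measurements, and the Born rule gives it the same probability in both.

```python
import numpy as np

# Two measurements (orthonormal bases) that share the projector |e0><e0|:
# M1 = {e0, e1, e2} and M2 = {e0, (e1+e2)/sqrt(2), (e1-e2)/sqrt(2)}.
e0, e1, e2 = np.eye(3)
M1 = [e0, e1, e2]
M2 = [e0, (e1 + e2) / np.sqrt(2), (e1 - e2) / np.sqrt(2)]

# An arbitrary (illustrative) pure state as a density matrix.
psi = np.array([1.0, 2.0, 3.0])
rho = np.outer(psi, psi) / np.dot(psi, psi)

def born_probs(basis, rho):
    # Born rule: probability of the projector |v><v| is <v|rho|v> = tr(rho P).
    return [float(v @ rho @ v) for v in basis]

p1 = born_probs(M1, rho)
p2 = born_probs(M2, rho)

# The shared projector gets the same probability in either context,
# even though the other outcome probabilities differ between M1 and M2.
print(p1[0], p2[0])  # both equal 1/14
```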

        I haven't read Gordon's essay, but I have rather had my fill of skepticism about Bell's theorem this year and the way you describe it does not sound promising. We can already write Bell's theorem in terms of sums rather than integrals once we realize that the case of a stochastic hidden variable theory can be reduced to that of a deterministic one by convexity. Once we have done that, we then realize that the only thing that matters about the hidden variables for the purposes of the argument is what measurement results they predict. Since there are only a finite number of lists of possible measurement outcomes for most Bell inequality setups we are at this point dealing with finite sets so we already have sums rather than integrals. This cannot possibly be the place where things go wrong.
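        For the CHSH setup, this reduction to finite sums can even be verified by brute force. The sketch below (my own illustration, not Gordon's argument) enumerates all 16 deterministic local strategies and confirms that the Bell bound is a maximum over a finite set, so no subtlety about integrals can arise:

```python
from itertools import product

def chsh(a, b):
    # a and b are tuples giving the +/-1 outcomes for settings 0 and 1.
    # CHSH combination: E(0,0) + E(0,1) + E(1,0) - E(1,1) for a
    # deterministic strategy, where E(x,y) = a[x] * b[y].
    return a[0]*b[0] + a[0]*b[1] + a[1]*b[0] - a[1]*b[1]

# All deterministic local strategies: 4 for Alice times 4 for Bob.
strategies = list(product([-1, 1], repeat=2))
local_bound = max(chsh(a, b) for a in strategies for b in strategies)

print(local_bound)  # 2, versus the quantum value 2*sqrt(2)
```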

        Of course, there are still ways around Bell's theorem that involve questioning the basic setup rather than the mathematical result itself. I think it is here where we will find the solution. Like Ken Wharton and Huw Price, I am rather partial to the idea that retrocausal theories should be investigated, but there are other possibilities.

        Dear Matt,

        Thanks for the answers and explanations. I'm not familiar with Cox, so did not make the connection. Nor am I deeply focused on probability, but instead on the underlying ontology. I may return with a comment or question on the partition function as the basis for a Born interpretation of the wave function, after digesting your answer. You say, "If you want to come up with a full-blown interpretation of quantum theory then I think you need to start from a well-defined ontology. You need to say what things would exist in reality if quantum theory were literally true." I do this, and use the partition function to explain why the wave function yields probability. I guess you would call this a suggestive argument, but it works, and you seem to be happy with that as a start.

        You say, "Once we have done that, we then realize that the only thing that matters about the hidden variables for the purposes of the argument is what measurement results they predict." The measurement results predicted by my model yield the cosine result Bell said is impossible -- as long as Alice and Bob make independent choices!

        I would hate to steer you away from Gordon's essay because I misstated or badly summarized it. I understand "had your fill", so I won't push the issue, but hope you change your mind.

        Your remarks about Ken's approach surprise me. Perhaps I should re-read your exchange with him.

        I'm not sure whether you answered the question as to whether non-locality would be obvious had Bell never invented his inequality. If you'd care to clarify it, I'm still interested.

        Thanks again for your detailed answers to my questions.

        Best wishes,

        Edwin Eugene Klingman

        "Your remarks about Ken's approach surprise me. Perhaps I should re-read your exchange with him."

        Well, I am critical not because I believe that his models are a bad idea, but because I believe that the conceptual framework needs to be developed more carefully. After all, there are ways of writing down classical theories that make them look like they involve weird causality, such as the Wheeler-Feynman absorber theory, but we know in that case that there are alternative ways of writing things down that have conventional causality. Therefore, it is not enough to say "look, I have derived a model starting from a block picture of spacetime". One also has to prove that there are obstacles to understanding the theory in any other way. To achieve this we need an analysis of the possibilities for such models at least as rigorous as Bell's analysis of theories with conventional causality. For this reason, I am putting his work in the "suggestive argument" category for now and, as I said, there is nothing wrong with being in that category.

        "I'm not sure whether you answered the question as to whether non-locality would be obvious had Bell never invented his inequality. If you'd care to clarify it, I'm still interested."

        Sorry, I didn't realize that this was one of your questions. My answer is a definitive no. Without Bell's analysis I think the EPR reasoning would stand (or better Einstein's earlier arguments which are less confusingly tied up with the uncertainty principle) and the best response would have been to look for a local hidden variable theory. In fact, I think there are only a few results that point to fundamental difficulties in interpreting quantum theory. These are:

        - Bell's theorem

        - Contextuality (This starts with Kochen-Specker, but I prefer Spekkens' more general definition)

        - Results about the reality of the wavefunction (the PBR theorem et al.)

        - Excess baggage theorems (results about how the size of the ontic state space must scale exponentially with the number of systems)

        Each one of these theorems points to an explanatory gap. Namely,

        - Ontological models must be nonlocal, but they must also be nonsignalling.

        - Ontological models must be contextual, but the probabilities must be noncontextual.

        - Ontological models must have the wavefunction as part of their ontology, but many quantum phenomena are most naturally interpreted in terms of an epistemic wavefunction.

        - n qubits must carry O(2^n) bits of information, but any way you define the operational information content of n qubits, it comes out as O(n) bits.
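        The last of these gaps is easy to make quantitative. As a rough sketch (the parameter counting is my own illustration; the O(n) side is the Holevo bound):

```python
def ontic_parameters(n):
    # A pure state of n qubits has 2^n complex amplitudes, i.e. 2 * 2^n real
    # parameters, minus 2 for normalisation and global phase.
    return 2 * 2**n - 2

def holevo_bits(n):
    # The Holevo bound: at most n classical bits are retrievable from n qubits.
    return n

for n in [1, 2, 10]:
    print(n, ontic_parameters(n), holevo_bits(n))
# n = 10 already needs 2046 real parameters to specify the ontic state,
# while only 10 bits are operationally accessible.
```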

        All other phenomena that I know of can be modelled quite straightforwardly so long as one does not stick to the dogma that reality must be described by particles travelling along definite trajectories. This is why I am not impressed by arguments based on basic interferometry experiments like the double slit.

        What the explanatory gaps indicate to me is that there is something wrong with our basic framework for realist models of quantum theory. The right framework, whether that involves retrocausality or some other exotic thing, should close all these gaps, e.g. it should reveal that quantum theory is not nonlocal after all and similarly for the other gaps.

        Matt,

        Thanks again for the reply. I repeat that I am very impressed with your sober quality, and the well-thought-out replies you give. I very much appreciate your answering the question about the absolutely central place Bell's inequality holds for 'non-locality'. That agrees with my opinion on the issue and despite your weariness of attacks on Bell, I've read too many 'almost convincing' analyses of problems with Bell to conclude that he is bulletproof. I find your last answer gratifyingly succinct and complete; a mini-paper within a comment. Probably the best answer I've received in years, so thanks again.

        If you want exotic, read my two essays. I strongly resist non-locality, but I believe the "mechanism" is built into my current essay, I just do not wish to invoke it! To see how it applies to QM, you must read my previous essay.

        Edwin Eugene Klingman

        Dear Matt,

        Well argued and mostly very agreeable. I appreciate your views on Bayesianism, sample space, psi, probabilism and the Born rule, all of which I too discuss. I found your noncontextuality definition fascinating and analogous to a unique new realist derivation of QM. I present this by invoking 'IQbits', which, using orbital angular momentum in 3D space + t, can contain more than O(2^n) bits (closely linked to quantum computing).

        I find your logic and specification for a new realist approach to QM immaculate except for the odd hidden assumption. I challenge some fundamental premises, including your view that uniqueness is not a useful scientific postulate. I justify this, then show how an ontology can be constructed to derive SR directly from a real and local quantum mechanism. Tested against Bell's theorem, it resolves the EPR paradox as von Neumann, and indeed Bell, believed was likely.

        Using reality: may I offer you a skewer of 4 doughnuts? All spin; the inner two are chirally handed and also add up to 2, but each may vary inversely from 1. They stay vertical and move out to interact with the outer pair. Each may be seen as an asymmetric rotating dipole, so describing a helix. Then the outer pair (A, B) may tilt at will, independently. They may then intersect anywhere on the 'orbit' with each of the outer pair. Now we find something strange happens that is not anticipated by binary mathematics. Sure, we can ask where the contact took place, say 'up' or 'down', and get simple yes/no answers. But we then find more intelligence is available. We can also get quantitative answers to 'how high/low' from both A and B. On the vertical (or any) axis these are certain; near horizontal (or perpendicular) they become entirely uncertain, and they are relative to each other, subject to A and B's relative angle!

        As Edwin has agreed, the cosine curve emerges at each detector in accordance with Malus' law, as well as in the numerical correlation. I've also tracked down these "anomalous orbital asymmetries" in Aspect's data, most of which he discarded as no theory existed to explain it! That theory is the 'discrete field' model (DFM) of quantum-particle-based dielectric 'inertial systems' in relative motion. I believe Gordon Watson's mathematics is uniquely applicable.

        I hope I've tempted you enough to read my essay. You'll find the references to quantum iPADS etc quite brief, but all the key elements are assembled. I greatly look forward to your comments and advice. Well done and thank you for yours.

        Peter

          Peter,

          I wrote this on your blog [Jun. 9, 2013 @ 22:45]: "As for Bell's so-called 'proof' that no locally real model can produce the cosine squared result, I produce it in my previous essay (link above). Joy has complained that it only works when Alice and Bob make independent selections, but I believe that is implied by Bell's formulation. I'm communicating with MJW Hall who specializes in determining the limiting cases implied by Bell assumptions. So I need to study your EPR discussion more before I form an opinion about it, although I am favorably disposed to your argument that there is a mismatch between statistics and 'real physical interactions' at each detector. I plan to study your approach further."

          Thus I agree that the cosine curve can be reproduced, contrary to Bell, but I am not yet committed to the Malus' Law derivation. I still need to study it.

          Sorry to intrude on Matt's blog, but he is clearly a stickler for details, so I wanted to clear up that detail.

          Edwin Eugene Klingman

          This discussion thread is supposed to be about my essay. I am happy for people to talk about their own essays provided this is done in the context of commenting on something specific that I have said in my essay, or something specific that has come up in the subsequent discussion. If you want to make a comparison between something you have said in your essay and something I said in mine then that is perfectly fine, especially if you do so in order to raise a specific question that we can then go on to discuss.

          Posts that are essentially just a summary of your own essay and/or a request for me to read your essay are against the FQXi terms of use http://fqxi.org/community/forum/intro#terms which state that posts should not be outside the scope of the forum topic.

          I do not mean to offend anyone. It is obviously a judgement call as to whether or not a particular post contains substantive comment on my essay and the subsequent discussion, but I will report posts that seem inappropriate to me. It is then up to FQXi to decide whether or not they are appropriate.

          Finally, let me reiterate that I cannot read all the essays but I will read those that I think look interesting. I will do this primarily based on reading the abstracts of the essays. Posting a summary of your essay here will not make me more likely to read it.

          Dear Matt,

          I want to be clear and to be able to quote you on this, as I am not sure you gave a definitive answer above:

          Question: Is existence/non-existence on the list of possible binary choices? That is, 'is IT existing or not existing' among the questions that can be asked in the game of twenty questions?

          Regards,

          Akinbo

            Within the context of the essay, there are two possible positions that make sense to me. One could be completely operational about things and say that the only things that exist are the readings on our experimental apparatus, e.g. detector clicks, instrument readings and so forth. In that case, we are assuming that our macroscopic apparatus exist, but not that there is necessarily any deeper reality underlying them. However, the position that I prefer is to assume that there are things called quantum systems that exist independently of us and we are just asking questions about the properties of those systems. In that sense, I am assuming that "it" in the sense of "a quantum system" exists and is not being questioned.

            Of course, one can also consider situations in which you are not sure whether a particular quantum system exists, e.g. a laser typically emits a superposition of different numbers of photons. In that case, it sometimes makes sense to include a question about whether the system exists if you can ask it without destroying the system. For example, there are heralded single photon sources that produce two photons in an entangled state and then one can use a measurement on the first photon as evidence that the second photon exists.

            I hope that answers your question, but of course it depends on precisely what you mean by IT.

            6 days later

            Dear Matthew,

            You have presented a very thoughtful and at times surprisingly subtle analysis of the argument that even a subjective Bayesian view of quantum mechanics must be grounded in some sort of external reality, although this reality need not be understood in terms of classical concepts.

            It seems to me that your argument could provide a starting point through which quantum Bayesianism could attain wider acceptance. As far as I can tell, a major point of criticism is that QBism does not really offer a clear ontology (if any at all), and I imagine most physicists do not see themselves in the business of figuring out degrees of belief. Your approach reminds me a bit of what I read in Howard Barnum's entry, to try to find some sort of neat integration between the subjectivist approach and an objective reality (at least at some level).

            It would seem, then, that the next step is to try to find out how to map the external requirement for non-contextuality onto any purported features of the external reality. Is that correct? If it is, how do you intend to accomplish this? Also, your entry whetted my appetite for learning more about the philosophy of probability, about which I know next to nothing (other than what I learned in your article). Can you recommend one or more introductory texts?

            I have developed an original framework myself, and while it seems that it can reproduce major features of QM, I do not at present know how to get the Born Rule out of it. I strongly suspect that it is due to my ignorance in precisely this area, so I intend to change that.

            I enjoyed your essay and wish you all the best,

            Armin

              Thanks for your kind comments about my essay.

              It is important to distinguish two things:

              1) Believing in the subjective Bayesian interpretation of quantum theory and wanting to employ it in understanding the probabilities that arise in quantum theory.

              2) Being a QBist, i.e. believing in what Caves, Fuchs and Schack have called quantum Bayesianism.

              Quantum Bayesianism, or QBism, is a very specific work-in-progress interpretation of quantum theory, in the category that I call neo-Copenhagen. It employs a subjective Bayesian interpretation of probabilities, but goes way beyond that in denying the existence of a deeper reality underlying quantum theory.

              In contrast, just believing in subjective Bayesian probability does not imply any of that. As I mentioned in the essay, you can apply it in the many-worlds interpretation, and it would also be unproblematic in Bohmian mechanics, at least if the probabilities of statistical mechanics can be interpreted subjectively, because they are of the same type.

              My view is quite far from QBism because it ends up accepting realism of a fairly straightforward type. Fuchs and company are dead against this. It is rather unfortunate that they have co-opted the phrase "quantum Bayesianism" to refer to their approach, since by rights that should refer to any attempt to apply Bayesian probability and statistical methodology to quantum theory. As it happens, I did obtain my first exposure to Bayesian probability from Fuchs et al. I am very grateful for that and probably would not be doing quantum foundations were it not for Fuchs' eloquent papers and talks, but my view is definitely not a form of QBism. If my view gains more acceptance than QBism then I would be happy about that, but I doubt it would please the QBists.

              You are right that the next step is to try to construct an ontology in which noncontextuality emerges naturally. This is not true in Bohmian mechanics, for example, since a generic probability distribution over the positions of particles would be contextual. The reason that I mentioned many-worlds in the essay is that I believe that the Deutsch-Wallace argument does achieve noncontextuality naturally. However, this is not too surprising because they assume the objective existence of the wavefunction and only the wavefunction, and this is the thing that encodes all the information about the quantum probabilities in the first place. As a proponent of the epistemic interpretation of the wavefunction, this would not be where I would start. Instead, I would look at more exotic single-world ontologies than are usually encountered in the literature, such as those involving retrocausality.

              Finally, regarding reading on the foundations of probability, I wrote a blog post with a reading list some time ago: http://mattleifer.info/2010/11/01/a-reading-list-on-the-foundations-of-probability-and-statistics/ I update it periodically with mini-reviews as I read things, but I haven't done so for some time, meaning that I have now read a lot of the things marked UNREAD. For the philosophy of probability, I would recommend starting with Donald Gillies' textbook and then the collection of essays edited by Antony Eagle. The former is the best overview, but it is not perfect and lacks coverage of many contemporary issues. The latter remedies this, as well as containing many of the classic papers.