Regarding Jaynes, I made some less-than-sympathetic comments about the Cox approach to probability in my response to D'Ariano, and of course Jaynes relies on that approach as his foundation. As with Cox, I find Jaynes a little too simple-minded for my taste, as he ducks some major issues with the choice of prior and makes too many arguments based on simplicity. Of course, Jaynes was working in a time when Bayesian methodology was disavowed by the vast majority of statisticians and scientists, so he has to be read in context. He was mainly concerned with Bayesian statistics as a practical tool that could give greater insight than classical statistics, and with ways of rendering the theory practical in a time before computer simulation was available. In this he succeeded admirably. In those days, the two issues of how we should think about probability and whether we should adopt Bayesian or classical statistics were often conflated. Now that Bayesian statistics has a large community of supporters, we can afford to think more carefully about the former problem, and I think we would do better to move beyond the Jaynes-Cox maxent dogma. I should note that the subjective approach I favour predates Jaynes by several decades, but its founders were a bit more careful about conceptual issues.

``One question I'd like to ask is "where one starts" to interpret QM. For example in an earlier essay, I 'derive' Born probability for the wave function from the partition function. In your mind is this a legitimate starting place, or must one go all the way back to tossing coins (or placing bets)? As you note, "there's nothing in logic that tells you what premises you have to start with."''

If you want to come up with a full-blown interpretation of quantum theory then I think you need to start from a well-defined ontology. You need to say what things would exist in reality if quantum theory were literally true. This sounds like a realist position and it is naturally interpreted that way, but it would be OK by me if you want to say that the only things that exist are measurement outcomes or something along those lines, so long as you start from a clear statement to that effect and deal with the conceptual problems it entails. You need to start from such a clear statement if you want to derive the quantum probability rule, because you need to say what the quantum probabilities are probabilities of. Depending on your choice, it may be that you do not need to view quantum probabilities as constituting a different theory from classical probability, so you may not need to go all the way down to the foundations of probability. For example, it is like this in Bohmian mechanics, where the probabilities are just ordinary classical probabilities similar to those of statistical mechanics.

If you don't start from a clear statement about reality then deriving the Born rule becomes a mathematical game with no clear conceptual meaning, and of course there are already a lot of formal mathematical reasons for adopting the Born rule.

On the other hand, it may be that you are happy not to have a full-blown interpretation of quantum theory at the moment but still think that some argument you have come up with is suggestive as a way to proceed. Most of the best work on the foundations of quantum theory is probably going to be like that until we are lucky enough to hit on the right ideas. Therefore, suggestive arguments are fine with me so long as you are honest about their status.

Regarding noncontextuality, there are two senses of this word in quantum theory, which can cause some confusion. I am using it in the sense of "noncontextual probability assignment", which simply means that the same projector receives the same probability regardless of which measurement containing it is performed. There is also the sense of "noncontextual hidden variable theory", which is ruled out by Kochen-Specker and related results. Some people say that the impossibility of a noncontextual hidden variable theory should be shortened to "quantum theory is contextual" in the same way that we say "quantum theory is nonlocal" as a shorthand for the implications of Bell's theorem. This leaves us with the awkward statement "quantum theory is contextual, but it has noncontextual probability assignments", but in fact I think this is a rather good way of stating what the central puzzle of this area is. If the world really is described by a contextual theory then it is very puzzling that the quantum probabilities are noncontextual. If you just put an arbitrary probability distribution over a set of contextual states then generically you would get contextual probability assignments. Therefore, a fine tuning would be required to get exactly the quantum probability rule. This is similar to the fine tuning required to prevent signalling if the world is really described by a nonlocal hidden variable theory. The fine tuning is the real issue, and what it indicates to me is that we need to look for an alternative kind of ontology to which the fine tuning does not apply.
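To illustrate the first sense concretely, the Born rule automatically yields noncontextual probability assignments: Tr(ρP) depends only on the projector P, not on which complete measurement it is embedded in. A minimal numerical sketch (the state and bases here are arbitrary illustrative choices, not from any particular experiment):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary density matrix on C^3 (illustrative choice)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho /= np.trace(rho)

# Two orthonormal bases (measurement contexts) that share the vector e0,
# i.e. both contexts contain the projector |e0><e0|
e0, e1, e2 = np.eye(3, dtype=complex)
context1 = [e0, e1, e2]
context2 = [e0, (e1 + e2) / np.sqrt(2), (e1 - e2) / np.sqrt(2)]

def prob(rho, v):
    # Born-rule probability of the outcome with projector |v><v|
    return float(np.real(v.conj() @ rho @ v))

# The shared projector receives the same probability in both contexts...
assert np.isclose(prob(rho, context1[0]), prob(rho, context2[0]))
# ...and each context's probabilities still sum to one
for context in (context1, context2):
    assert np.isclose(sum(prob(rho, v) for v in context), 1.0)
```

The remaining outcomes of the two contexts generically get different probabilities; it is only the shared projector whose probability is forced to agree, which is exactly what "noncontextual probability assignment" means here.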

I haven't read Gordon's essay, but I have rather had my fill of skepticism about Bell's theorem this year, and the way you describe it does not sound promising. We can already write Bell's theorem in terms of sums rather than integrals once we realize that the case of a stochastic hidden variable theory can be reduced to that of a deterministic one by convexity. Once we have done that, we then realize that the only thing that matters about the hidden variables for the purposes of the argument is what measurement results they predict. Since there are only a finite number of lists of possible measurement outcomes for most Bell inequality setups, we are at this point dealing with finite sets, so we already have sums rather than integrals. This cannot possibly be the place where things go wrong.
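For the CHSH setup, this reduction to finite sums can be made completely explicit: each deterministic hidden variable simply assigns a definite outcome of ±1 to each of the four measurement settings, so there are only 16 assignments to enumerate. A quick sketch:

```python
from itertools import product

# Each deterministic hidden variable assigns an outcome of +1 or -1 to
# Alice's settings (a0, a1) and Bob's settings (b0, b1).
chsh_max = max(
    a0 * b0 + a0 * b1 + a1 * b0 - a1 * b1
    for a0, a1, b0, b1 in product([+1, -1], repeat=4)
)
print(chsh_max)  # 2, the local bound; quantum theory reaches 2*sqrt(2)
```

Any stochastic local model is a convex mixture of these 16 deterministic assignments, so its CHSH value is also bounded by 2; no integrals over a continuum of hidden variables are needed anywhere in the argument.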

Of course, there are still ways around Bell's theorem that involve questioning the basic setup rather than the mathematical result itself. I think it is here where we will find the solution. Like Ken Wharton and Huw Price, I am rather partial to the idea that retrocausal theories should be investigated, but there are other possibilities.

Dear Matt,

Thanks for the answers and explanations. I'm not familiar with Cox, so did not make the connection. Nor am I deeply focused on probability, but instead on the underlying ontology. I may return with a comment or question on the partition function as the basis for a Born interpretation of the wave function, after digesting your answer. You say, "If you want to come up with a fullblown interpretation of quantum theory then I think you need to start from a well-defined ontology. You need to say what things would exist in reality if quantum theory were literally true." I do this, and use the partition function to explain why the wave function yields probability. I guess you would call this a suggestive argument, but it works, and you seem to be happy with that as a start.

You say, "Once we have done that, we then realize that the only thing that matters about the hidden variables for the purposes of the argument is what measurement results they predict." The measurement results predicted by my model yield the cosine result Bell said is impossible -- as long as Alice and Bob make independent choices!

I would hate to steer you away from Gordon's essay because I misstated or badly summarized it. I understand "had your fill", so I won't push the issue, but hope you change your mind.

Your remarks about Ken's approach surprise me. Perhaps I should re-read your exchange with him.

I'm not sure whether you answered the question as to whether non-locality would be obvious had Bell never invented his inequality. If you'd care to clarify it, I'm still interested.

Thanks again for your detailed answers to my questions.

Best wishes,

Edwin Eugene Klingman

"Your remarks about Ken's approach surprise me. Perhaps I should re-read your exchange with him."

Well, I am critical not because I believe that his models are a bad idea, but because I believe that the conceptual framework needs to be developed more carefully. After all, there are ways of writing down classical theories that make them look like they involve weird causality, such as the Wheeler-Feynman absorber theory, but we know in that case that there are alternative ways of writing things down that have conventional causality. Therefore, it is not enough to say "look, I have derived a model starting from a block picture of spacetime". One also has to prove that there are obstacles to understanding the theory in any other way. To achieve this we need an analysis of the possibilities for such models at least as rigorous as Bell's analysis of theories with conventional causality. For this reason, I am putting his work in the "suggestive argument" category for now and, as I said, there is nothing wrong with being in that category.

"I'm not sure whether you answered the question as to whether non-locality would be obvious had Bell never invented his inequality. If you'd care to clarify it, I'm still interested."

Sorry, I didn't realize that this was one of your questions. My answer is a definitive no. Without Bell's analysis I think the EPR reasoning would stand (or, better, Einstein's earlier arguments, which are less confusingly tied up with the uncertainty principle) and the best response would have been to look for a local hidden variable theory. In fact, I think there are only a few results that point to fundamental difficulties in interpreting quantum theory. These are:

- Bell's theorem

- Contextuality (This starts with Kochen-Specker, but I prefer Spekkens' more general definition)

- Results about the reality of the wavefunction (PBR theorem et al.)

- Excess baggage theorems (results about how the size of the ontic state space must scale exponentially with the number of systems)

Each one of these theorems points to an explanatory gap. Namely,

- Ontological models must be nonlocal, but they must also be nonsignalling.

- Ontological models must be contextual, but the probabilities must be noncontextual.

- Ontological models must have the wavefunction as part of their ontology, but many quantum phenomena are most naturally interpreted in terms of an epistemic wavefunction.

- n qubits must carry O(2^n) bits of information, but however you define the operational information content of n qubits, it comes out as O(n) bits.
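To put rough numbers on that last gap (a standard back-of-the-envelope count, not part of the theorems themselves): a pure state of n qubits is specified by 2^n complex amplitudes, i.e. 2·2^n real parameters minus one each for normalization and global phase, while the Holevo bound caps the retrievable classical information at n bits.

```python
def ontic_parameters(n):
    # Real parameters of an n-qubit pure state: 2^n complex amplitudes
    # give 2 * 2**n real numbers, minus normalization and global phase.
    return 2 * 2**n - 2

def operational_bits(n):
    # Holevo bound: at most n classical bits can be extracted from n qubits.
    return n

for n in (1, 2, 10):
    print(n, ontic_parameters(n), operational_bits(n))  # e.g. n=10: 2046 vs 10
```

The exponential-versus-linear mismatch is the "excess baggage": almost all of the ontic state space is operationally inaccessible.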

All other phenomena that I know of can be modelled quite straightforwardly so long as one does not stick to the dogma that reality must be described by particles travelling along definite trajectories. This is why I am not impressed by arguments based on basic interferometry experiments like the double slit.

What the explanatory gaps indicate to me is that there is something wrong with our basic framework for realist models of quantum theory. The right framework, whether that involves retrocausality or some other exotic thing, should close all these gaps, e.g. it should reveal that quantum theory is not nonlocal after all and similarly for the other gaps.

Matt,

Thanks again for the reply. I repeat that I am very impressed with your sober quality, and the well-thought-out replies you give. I very much appreciate your answering the question about the absolutely central place Bell's inequality holds for 'non-locality'. That agrees with my opinion on the issue and despite your weariness of attacks on Bell, I've read too many 'almost convincing' analyses of problems with Bell to conclude that he is bulletproof. I find your last answer gratifyingly succinct and complete; a mini-paper within a comment. Probably the best answer I've received in years, so thanks again.

If you want exotic, read my two essays. I strongly resist non-locality, but I believe the "mechanism" is built into my current essay, I just do not wish to invoke it! To see how it applies to QM, you must read my previous essay.

Edwin Eugene Klingman

Dear Matt,

Well argued and mostly very agreeable. I appreciate your views of Bayesianism, sample space, psi, probabilism and the Born rule, all of which I too discuss. I found your noncontextuality definition fascinating and analogous to a unique new realist derivation of QM. I present this by invoking 'IQbits', which, using orbital angular momentum in 3D space +t, can contain more than O(2^n) bits (closely linked to quantum computing).

I find your logic and specification for a new realist approach to QM immaculate except for the odd hidden assumption. I challenge some fundamental premises, including your view that uniqueness is not a useful scientific postulate. I justify this, then show how an ontology can be constructed to derive SR directly from a real and local quantum mechanism. Tested against Bell's theorem, it resolves the EPR paradox as von Neumann, and indeed Bell, believed was likely.

Using reality: may I offer you a skewer of 4 doughnuts. All spin; the inner two are chirally handed and also add up to 2, but each may vary inversely from 1. They stay vertical and move out to interact with the outer pair. Each may be seen as an asymmetric rotating dipole, so describing helices. The outer pair (A, B) may tilt at will, independently. They may then intersect anywhere on the 'orbit' with each of the outer pair. Now we find something strange happens, not anticipated by binary mathematics. Sure, we can ask where the contact took place, say 'up' or 'down', and get simple yes/no answers. But we then find more intelligence is available. We can also get quantitative answers to 'how high/low' from both A and B. On the vertical (or any) axis these are certain; near horizontal (or perpendicular) they become entirely uncertain, and are relative to each other subject to A and B's relative angle!

As Edwin has agreed, the cosine curve emerges at each detector in accordance with Malus' Law, as well as in numerical correlation. I've also tracked down these "anomalous orbital asymmetries" in Aspect's data, most of which he discarded as no theory existed to explain it! That theory is the 'discrete field' model (DFM) of quantum-particle-based dielectric 'inertial systems' in relative motion. I believe Gordon Watson's mathematics are uniquely applicable.

I hope I've tempted you enough to read my essay. You'll find the references to quantum iPADS etc quite brief, but all the key elements are assembled. I greatly look forward to your comments and advice. Well done and thank you for yours.

Peter

    Peter,

    I wrote this on your blog [Jun. 9, 2013 @ 22:45]: "As for Bell's so-called 'proof' that no locally real model can produce the cosine squared result, I produce it in my previous essay (link above). Joy has complained that it only works when Alice and Bob make independent selections, but I believe that is implied by Bell's formulation. I'm communicating with MJW Hall who specializes in determining the limiting cases implied by Bell assumptions. So I need to study your EPR discussion more before I form an opinion about it, although I am favorably disposed to your argument that there is a mismatch between statistics and 'real physical interactions' at each detector. I plan to study your approach further."

    Thus I agree that the cosine curve can be reproduced, contrary to Bell, but I am not yet committed to the Malus' Law derivation. I still need to study it.

    Sorry to intrude on Matt's blog, but he is clearly a stickler for details, so I wanted to clear up that detail.

    Edwin Eugene Klingman

    This discussion thread is supposed to be about my essay. I am happy for people to talk about their own essays provided this is done in the context of commenting on something specific that I have said in my essay, or something specific that has come up in the subsequent discussion. If you want to make a comparison between something you have said in your essay and something I said in mine then that is perfectly fine, especially if you do so in order to raise a specific question that we can then go on to discuss.

    Posts that are essentially just a summary of your own essay and/or a request for me to read your essay are against the FQXi terms of use (http://fqxi.org/community/forum/intro#terms), which state that posts should not be outside the scope of the forum topic.

    I do not mean to offend anyone. It is obviously a judgement call as to whether or not a particular post contains substantive comment on my essay and the subsequent discussion, but I will report posts that seem inappropriate to me. It is then up to FQXi to decide whether or not they are appropriate.

    Finally, let me reiterate that I cannot read all the essays but I will read those that I think look interesting. I will do this primarily based on reading the abstracts of the essays. Posting a summary of your essay here will not make me more likely to read it.

    Dear Matt,

    I want to be clear and to be able to quote you on this, as I am not sure you gave a committal answer above:

    Question: Is existence/non-existence on the list of possible binary choices? That is, 'is IT existing or not existing' among the questions that can be asked in the game of twenty questions?

    Regards,

    Akinbo

      Within the context of the essay, there are two possible positions that make sense to me. One could be completely operational about things and say that the only things that exist are the readings on our experimental apparatus, e.g. detector clicks, instrument readings and so forth. In that case, we are assuming that our macroscopic apparatus exist, but not that there is necessarily any deeper reality underlying them. However, the position that I prefer is to assume that there are things called quantum systems that exist independently of us and we are just asking questions about the properties of those systems. In that sense, I am assuming that "it" in the sense of "a quantum system" exists and is not being questioned.

      Of course, one can also consider situations in which you are not sure whether a particular quantum system exists, e.g. a laser typically emits a superposition of different numbers of photons. In that case, it sometimes makes sense to include a question about whether the system exists if you can ask it without destroying the system. For example, there are heralded single photon sources that produce two photons in an entangled state and then one can use a measurement on the first photon as evidence that the second photon exists.

      I hope that answers your question, but of course it depends on precisely what you mean by IT.

      6 days later

      Dear Matthew,

      You have presented a very thoughtful and at times surprisingly subtle analysis of the argument that even a subjective Bayesian view of quantum mechanics must be grounded in some sort of external reality, although this reality need not be understood in terms of classical concepts.

      It seems to me that your argument could provide a starting point through which quantum Bayesianism could attain wider acceptance. As far as I can tell, a major point of criticism is that QBism does not really offer a clear ontology (if any at all), and I imagine most physicists do not see themselves in the business of figuring out degrees of belief. Your approach reminds me a bit of what I read in Howard Barnum's entry, to try to find some sort of neat integration between the subjectivist approach and an objective reality (at least at some level).

      It would seem, then, that the next step is to try to find out how to map the external requirement for non-contextuality on any purported features of the external reality. Is that correct? If it is, how do you intend to accomplish this? Also, your entry whetted my appetite for learning more about the philosophy of probability about which I know next to nothing (other than what I learned in your article). Can you recommend one or more introductory texts?

      I have developed an original framework myself, and while it seems that it can reproduce major features of QM, I do not at present know how to get the Born Rule out of it. I strongly suspect that it is due to my ignorance in precisely this area, so I intend to change that.

      I enjoyed your essay and wish you all the best,

      Armin

        Thanks for your kind comments about my essay.

        It is important to distinguish two things:

        1) Believing in the subjective Bayesian interpretation of quantum theory and wanting to employ it in understanding the probabilities that arise in quantum theory.

        2) Being a QBist, i.e. believing in what Caves, Fuchs and Schack have called quantum Bayesianism.

        Quantum Bayesianism, or QBism, is a very specific work-in-progress interpretation of quantum theory, in the category that I call neo-Copenhagen. It employs a subjective Bayesian interpretation of probabilities, but goes way beyond that in denying the existence of a deeper reality underlying quantum theory.

        In contrast, just believing in subjective Bayesian probability does not imply any of that. As I mentioned in the essay, you can apply it in the many-worlds interpretation, and it would also be unproblematic in Bohmian mechanics, at least if the probabilities of statistical mechanics can be interpreted subjectively, because they are of the same type.

        My view is quite far from QBism because it ends up accepting realism of a fairly straightforward type. Fuchs and company are dead against this. It is rather unfortunate that they have co-opted the phrase "quantum Bayesianism" to refer to their approach, since by rights that should refer to any attempt to apply Bayesian probability and statistical methodology to quantum theory. As it happens, I did obtain my first exposure to Bayesian probability from Fuchs et al. I am very grateful for that and probably would not be doing quantum foundations were it not for Fuchs' eloquent papers and talks, but my view is definitely not a form of QBism. If my view gains more acceptance than QBism then I would be happy about that, but I doubt it would please the QBists.

        You are right that the next step is to try and construct an ontology in which noncontextuality emerges naturally. This is not true in Bohmian mechanics for example, since a generic probability distribution over the positions of particles would be contextual. The reason that I mentioned many-worlds in the essay is that I believe that the Deutsch-Wallace argument does achieve noncontextuality naturally. However, this is not too surprising because they assume the objective existence of the wavefunction and only the wavefunction, and this is the thing that encodes all the information about the quantum probabilities in the first place. As a proponent of the epistemic interpretation of the wavefunction this would not be where I would start. Instead, I would look at more exotic single-world ontologies than are usually encountered in the literature, such as those involving retrocausality.

        Finally, regarding reading on the foundations of probability, I wrote a blog post with a reading list some time ago: http://mattleifer.info/2010/11/01/a-reading-list-on-the-foundations-of-probability-and-statistics/ I update it periodically with mini reviews as I read things, but I haven't done so for some time, meaning that I have now read a lot of the things marked UNREAD. For the philosophy of probability, I would recommend starting with Donald Gillies' textbook and then the collection of essays edited by Antony Eagle. The former is the best overview, but is not perfect and is lacking in coverage of many contemporary issues. The latter remedies this as well as containing many of the classic papers.

        Dear Matthew,

        Thank you for clarifying the distinctions. I looked at the reading list and it is far more extensive than I imagined. Thank you!

        Armin

        Hi Matthew -

        Your discussion of subjectivity is crucial to the subject of this contest. It is interesting to consider how we might apply the concept to an evolving observer, one who makes decisions at every moment, and over a very long period of time, during which his relation to the physical world - his own biological configuration, if you will - is continuously altered.

        All conceivable decision scenarios are then possible, as you say in referring to It from Bit - 'with only certain subsets of all possible bets being jointly resolvable'.

        If evolution affects us at every moment (and it is impossible to argue that it doesn't) then It from Bit is true: We live in a Species Cosmos that is being evolved from ourselves. However, it can and should be countered that we do seem to possess a certain objectivity - that Bits appear to be founded upon a reality greater than the continually evolving Species Cosmos - a reality where our logical and scientific parameters are less applicable, and often not applicable at all.

        You might be interested to see how I treat this evolutionary argument as a realist interpretation of the field of reality, thus expanding the definitions of It and Bit far beyond those signified by Wheeler. I think you touch on this when you allude to '"it from bit from it", where the first "it" refers to classical ontology and the second refers to quantum stuff.'

        I give this It-Bit-It sequence a structure you might find useful.

        All the Best,

        John.

          Dear Matthew,

          Excellent essay. I like the realist views and how you back up this approach. Moreover I really think you raise an excellent point and are dead right that we also need to consider the likes of wave functions, so it isn't as straightforward as It from Bit or vice versa.

          Hopefully you'll get a chance to look at my essay, which I hope you find of interest too.

          Best wishes & congratulations on your essay,

          Antony

            Hi Matt,

            Excellent essay! I never know quite what to make of it when someone carefully reasons out an argument that (in the end) agrees with my reasoning, but clearly is far more detailed and careful than my own considerations. Did I really just get lucky through sloppy reasoning? Are both arguments merely rationalizations, trying to justify a previously-held common conclusion? Or had I really worked out a parallel, simpler argument?

            In your case, at least, I'm pretty confident it's not the middle option... But I'm going back and forth between the other two.

            To the extent I think I might have a parallel argument, it basically revolves around the concept of "scientific explanation". By the end of section 6, after all, I think that your argument has basically boiled down to 1) noncontextuality as a necessary explanation for the Born rule, and 2) physical reality as a necessary explanation for noncontextuality. If I'm right, then the simpler, parallel argument would be 3) physical reality as a necessary explanation for the Born rule (and for that matter, any observed correlations in our universe). I guess the question then for you is this: Is there *any* set of observable (and reproducible) correlations that wouldn't need to have an explanation grounded in a physical reality? I would instinctively say no, but am curious as to your take on this.

            My only complaint is that you threw too large of a bone to the "It from Bit" concept at the very end... If classical ontology is put together in our heads, then it's not really an "it", is it? :-)

            Now, you just need to write up that general definition of contextuality you were telling me about, so that you can better explain the seeming disconnect between the (non)contextuality of "value assignments" and "probability assignments"!

            Cheers!

            Ken

            PS: After just reading a few of your responses to the above questions, I'm guessing that your answer to my above question will be that it depends on whether one is a "realist" or not! So maybe I did just 'get lucky' by assuming realism from the outset. I applaud your efforts to convert anti-realists, as I don't seem to have the faintest clue where to begin on that front...

              When attempting to argue against anti-realists, I think it is more effective to try and take all their assumptions on board and then show that it leads to a contradiction than to try and argue for realism a priori on the basis of what constitutes a "scientific explanation" or something like that. Of course, most realists will agree with such a priori arguments, but that is just preaching to the choir. An anti-realist is, by definition, someone who does not agree with such arguments and thinks that they have valid counterarguments, so they can just go ahead and adopt something Copenhagen-like anyway. If you want to persuade them then you have to argue on their home turf.

              Regarding correlations that don't require a physical explanation, I would say that, technically speaking, the probabilities on a "semi-classical test space" are of this sort. This is where you just have N different betting contexts, each of which gets assigned its own independent classical probability distribution and there are no additional constraints. In this case, all the constraints are coming from the Dutch book argument, so they don't depend on any assumptions about a pre-existing reality. You may be inclined to reject such cases as uninteresting because they are only a slight modification of classical probability theory, but, as I mentioned in a footnote in the essay, such a system can be thought of as half of a nonlocal box, so interesting things can happen with such systems when we compose them. In fact, I would argue that the nonsignalling condition does have a viable physical explanation if one accepts fundamental Lorentz invariance, so in that case we could regard the PR-box correlations as an example of a set of probabilities that does not require additional physical explanation beyond what we already have.
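              For readers unfamiliar with PR boxes, the correlations in question are easy to write down and check: on inputs x, y the box outputs bits a, b uniformly at random over the pairs satisfying a XOR b = x AND y. A short sketch verifying that these correlations are nonsignalling while satisfying the CHSH winning condition with certainty:

```python
from itertools import product

def pr_box(a, b, x, y):
    # PR-box distribution: uniform over the outcome pairs with a XOR b = x AND y
    return 0.5 if (a ^ b) == (x & y) else 0.0

# Nonsignalling: Alice's marginal for each outcome a is independent of
# Bob's setting y (the box is symmetric, so the converse holds too)
for x, a in product([0, 1], repeat=2):
    marginals = {y: sum(pr_box(a, b, x, y) for b in (0, 1)) for y in (0, 1)}
    assert marginals[0] == marginals[1] == 0.5

# The condition a XOR b = x AND y holds with probability 1 for every (x, y),
# which corresponds to the algebraically maximal CHSH value of 4
for x, y in product([0, 1], repeat=2):
    win = sum(pr_box(a, b, x, y)
              for a in (0, 1) for b in (0, 1) if (a ^ b) == (x & y))
    assert win == 1.0
```

Each wing on its own sees only a fair coin, which is the sense in which half of a nonlocal box is just a semi-classical test space.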

              Thanks for your comments. Being an advocate of the epistemic interpretation of the wavefunction, I don't actually think that the wavefunction is a viable contender for the fundamental ontology, but I mentioned it in the essay because it is an obvious counterexample to the idea that the "it" has to be made of things that we view as real in classical physics, like particles and fields. I believe that the "quantum stuff" is something more exotic that does not yet appear in contemporary interpretations of quantum theory.

              Thanks for your comments. I think this is related to the longstanding debate about diachronic coherence in the literature on subjective Bayesian probability. Basically, the issue is about whether one can regard "you now" and "you at some point in the future" as one and the same agent. If you answer yes then it would be irrational to do something now that you believe with certainty that the future version of you will regard as irrational. Such kinds of argument are necessary to derive Bayesian updating as a rationality constraint. I have always been in the camp that regards diachronic coherence as unfounded. At most you can derive constraints on what you now expect that the future version of you will do, and not on what the future version of you actually should do. If this is the case then constraints on how probabilities evolve need to be grounded in physical reality rather than just rationality, but I believe that this point is already made just by considering alternative measurements at a single instant of time.

              Hi Matt,

              I really enjoyed reading your essay. While you defined the event space of subjective Bayesian probability as a discrete set, you mentioned that Newtonian mechanics seems to require a continuous event space. I am a little confused about this point. How do you explain that? Also, are there any technical or mathematical problems in extending from the discrete set to the continuous one? For example, in functional analysis there are many nontrivial results that hold in the continuous setting but not in the discrete one, and vice versa.

              Best wishes,

              Yutaka

                The example I gave from Newtonian mechanics does employ a continuous ontological state space, although we might imagine betting on some coarse-graining of that, in which case the event space would be discrete.

                Nevertheless, there are of course additional issues that come up in the case of a sample space of infinite cardinality, even for a countably infinite space let alone the continuum. In classical probability, the main additional requirement is Kolmogorov's axiom of countable additivity. There are Dutch book arguments for countable additivity, but of course they involve considering bets on an infinite sequence of events. The status of these arguments depends on whether you view the Dutch book argument as the true operational definition of probability or merely as useful window dressing to help us understand why degrees of belief must satisfy probability theory. In the former case, there could be some trouble with considering sequences of bets that it is not practical for someone to actually enter into. Bayesians of this stripe, which include de Finetti, typically argue that probabilities should only be required to be finitely additive.
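                To be explicit about the axiom at issue, the two versions of additivity differ only in whether countably infinite disjoint unions are allowed:

```latex
% Finite additivity: for pairwise disjoint events A_1, ..., A_n
P\!\left(\bigcup_{i=1}^{n} A_i\right) = \sum_{i=1}^{n} P(A_i)

% Kolmogorov's countable additivity: for pairwise disjoint A_1, A_2, ...
P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i)
```

                De Finetti's position is then that only the first, finite version is forced by Dutch book arguments, since those involve only bets an agent could actually enter into.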

                Whatever you think about this, the point is that it is equally an issue with classical probability theory and has nothing to do with the specific application to quantum theory. It is worth noting that most other interpretations of probability theory also have problems with the Kolmogorov axioms. For example, von Mises' version of frequentism, which is probably the most popular one, also does not support countable additivity.

                I don't think there really are any issues of this sort that are unique to quantum theory, at least at the level I am discussing it in my essay. Of course, to get quantum theory, the things we are assigning probabilities to must have the structure of the closed subspaces of Hilbert space, the projections in a von Neumann algebra, or something of that sort. The major part of a derivation of quantum theory would be to derive those structures and assumptions in functional analysis will come up there. However, in my essay I focussed on how probabilities should be assigned once we know the structure of the betting contexts, in which case those structures are already being assumed.