Essay Abstract

The famous theorem of Bell (1964) left two loopholes for determinism underneath quantum mechanics, viz. non-local deterministic hidden variable theories (like Bohmian mechanics) or theories denying free choice of experimental settings (like 't Hooft's cellular automaton interpretation of quantum mechanics). However, a precise analysis of the role of randomness in quantum theory and especially its undecidability closes these loopholes, so that, accepting the statistical predictions of quantum mechanics, determinism is excluded full stop. The main point is that Bell's theorem does not exploit the full empirical content of quantum mechanics, which consists of long series of outcomes of repeated measurements (idealized as infinite binary sequences). It only extracts the long-run relative frequencies derived from such series, and hence merely asks hidden variable theories to reproduce certain single-case Born probabilities. For the full outcome sequences of a fair quantum coin flip, quantum mechanics predicts that these sequences (almost surely) have a typicality property called 1-randomness in logic, which is definable via computational incompressibility à la Kolmogorov and is much stronger than e.g. uncomputability. Chaitin's remarkable version of Gödel's (first) incompleteness theorem implies that 1-randomness is unprovable (even in set theory). Combined with a change of emphasis from single-case Born probabilities to randomness properties of outcome sequences, this is the key to the above claim.

Author Bio

Klaas Landsman (1963) is a professor of mathematical physics at Radboud University (Nijmegen, the Netherlands). He was a postdoc at DAMTP in Cambridge from 1989 to 1997. He mainly works in mathematical physics, mathematics (notably non-commutative geometry), and foundations of physics. His book Foundations of Quantum Theory: From Classical Concepts to Operator Algebras (Springer, 2017, Open Access) combines these interests. He is associate editor of Foundations of Physics and of Studies in History and Philosophy of Modern Physics and is a member of FQXi.


Dear Prof. Landsman,

This is a very exciting essay! I have only given it a first pass, but as far as I understand, you propose to extend the scope of Bell's theorem from the statistics of ensembles of measurement outcomes to the characteristics of individual outcome strings, thus uncovering the incompatibility of quantum mechanics with (a certain notion of) determinism.

I think this is a highly original way to think about these issues; certainly, most treatments never leave the level of statistical analysis, but of course, the statistical properties of an ensemble don't suffice to fix those of its members. I'm reminded of the old joke: the average human has one testicle and one breast, features which a 'theory' of beings that have one testicle and one breast each may well replicate; but that theory would fail badly at reproducing the characteristics of humans on an individual basis.

Again, if I understand you correctly, your main argument is that the deterministic replacements of quantum mechanics fail to replicate the typicality of individual outcome strings, while meeting the requirements posed by the Born rule. That outcome sequences of quantum mechanics must be Kolmogorov random has been argued before, in various ways: Yurtsever has argued that computable pseudorandomness would lead to exploitable signalling behavior (https://arxiv.org/abs/quant-ph/9806059), echoed by Bendersky et al., who explicitly prove that non-signalling deterministic models must be noncomputable if they are to recapitulate the predictions of quantum mechanics (https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.118.130401). Likewise, Calude and Svozil have argued for non-computability from a generalization of the Kochen-Specker theorem (https://arxiv.org/abs/quant-ph/0611029). (And of course, there are my own efforts, see https://link.springer.com/article/10.1007/s10701-018-0221-9, which also make use of Chaitin's incompleteness theorem, and my contribution to this contest.)

Thus, the randomness of any quantum outcome sequence can't be produced by any effective means, and hence, any deterministic theory must either fail to reproduce these outcome sequences, or otherwise incorporate this randomness by fiat (as in the Bohmian equilibrium hypothesis), which renders its origin essentially mysterious.

It seems to me that at the heart of this is really the observation that you can write any noncomputable function as a finite algorithm that has access to an infinite reservoir of (algorithmic) randomness. In this way, Bohmian mechanics can play the role of the algorithmic part, which has to be augmented by an infinite random string in order to replicate the quantum predictions.

There is, however, another way that's quite popular at present: you can also just compute any sequence whatsoever in parallel, by an 'interleaving' algorithm that just outputs all possible bit strings, running forever. A measure-1 subset of the strings produced in this way will be typical, but the overall operation is, of course, quite deterministic. This is basically the sense in which the many-worlds interpretation is deterministic: if we just look at any given bitstring as a single 'world', then in general one would expect to find oneself in a world that's algorithmically random.
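In code, this interleaving idea might be rendered by something like the following toy enumeration (just my own sketch of 'output all bit strings, forever', nothing more):

# Toy illustration of the 'interleaving' idea: one deterministic loop which,
# run forever, emits every finite bit string exactly once (by length, then
# lexicographically). Every finite prefix of every infinite sequence,
# including the algorithmically random ones, eventually shows up.
from itertools import count, product

def all_bit_strings():
    for length in count(1):                      # lengths 1, 2, 3, ...
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = all_bit_strings()
for _ in range(12):                              # first few of the endless output
    print(next(gen))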

Another observation of yours I found highly interesting is that in principle the non-signalling nature of quantum mechanics should be considered as a statistical notion, like the second law of thermodynamics. In the limit of infinitely long strings, the non-signalling principle will hold with probability 1, but for finite lengths, deviations may be possible. However, one probably couldn't get any 'useful' violations of non-signalling out of this, because one could probably not certify these sorts of violations (although perhaps one could provide a bound in such a way that Alice and Bob will agree on a certain message with slightly higher probability than merely by guessing, with that probability approaching the guessing probability as the length of the message grows).

Anyway, thanks for this interesting contribution. I wish you the best of luck in this contest!

Cheers

Jochen

    Dear Prof. Landsman,

    I am no mathematician, but it strikes me that all attempts to define randomness as an inherent objective property of sequences in math, or of events in nature, are doomed to failure precisely because randomness (or its opposite, determinism) is the assertion of an agent (mathematician or physicist). This fact aligns it with prediction and computation, which are the actions of agents. The concept of determinism as an objective state of affairs is an unfortunate result of an ambiguity in language and thought. It confuses the purported ability of one external thing to fix the state of another (causality) with the ability of an agent to ascertain the state in question (knowledge), that is, to "determine" what has actually occurred. I hold that determinism is a property of human artifacts, such as equations and machines, because they are products of definition. Physicists have found it convenient to ascribe it to nature in the macroscopic realm, and some would like to extend that to the micro realm. But strictly speaking, neither determinism nor non-determinism can be properties of nature itself.

    On another note, as you point out, one completed or pre-existing STRING of binary digits is no more probable than another, as an arbitrary selection from the set of all strings ("an apparently 'random' string like σ = 0011010101110100 is as probable as a 'deterministic' string like σ = 111111111111111"). In contrast, as a product of individual coin flip events, the latter sequence is certainly less probable than the former. I would point out that completed sequences (and the set of all strings) are concepts created by agents, not things existing in nature. The same must apply to the notion of prior probability as a property inhering in individual events (independent of observers).

    I suspect that many dilemmas in physics would be viewed differently if the role of the observer or theorist were better taken into account. I hope these comments might be of some interest, and I apologize if they are tangential to your very impressive argument.

    Cheers,

    Dan Bruiger

      Dear Jochen, Thank you for this kind and detailed post, which summarizes my paper well. I am well aware of the literature you cite (including your own 2018 paper, which I studied in detail with an honours student), but the rules for this essay contest exclude an extensive bibliography. I cite most of these papers in my "Randomness? What randomness?" paper, ref. [20] in my essay, available Open Access at https://link.springer.com/article/10.1007/s10701-020-00318-8. (Your 2018 paper dropped off the longlist of references included in earlier drafts, which became almost book-length, so even in [20] I ended up citing only papers I directly relied on. I will comment on your paper in a separate post about your own essay in this contest later today.)

      You write: "It seems to me that at the heart of this is really the observation that you can write any noncomputable function as a finite algorithm that has access to an infinite reservoir of (algorithmic) randomness." This observation is the Kucera-Gacs Theorem (this is Theorem 8.3.2 in ref. [8] of my essay (Downey & Hirschfeldt), which states that every set is reducible to a random set (acting as an oracle). Phrased in this way, your point on Bohmian mechanics ("Bohmian mechanics can play the role of the algorithmic part, which has to be augmented by an infinite random string in order to replicate the quantum predictions.") is, as you suggest, exactly the point I make in my essay, implying that precisely because of this dependence on a random oracle (which has to come from "nature" itself? or what?) it cannot be a deterministic underpinning of quantum mechanics. And likewise for 't Hooft's or any other serious hidden variable theory.

      Finally, as to your last point, that "in principle the non-signalling nature of quantum mechanics should be considered as a statistical notion, like the second law of thermodynamics": I proposed this in my Randomness? What randomness? paper but did not include it in my current essay, although all these things are closely related. In any case, as I learnt from my friend Guido Bacciagaluppi, it was Antony Valentini who first made this point, long ago. But it should be much more widely known!

      Keep it up! All the best, Klaas

      Dear Professor Landsman (Beste Klaas),

      First: Thank you for the very clear and understandable historical introduction.

      Quote "I conclude that deterministic hidden variable theories compatible with quantum mechanics cannot exist; Bell's theorem leaves two loopholes for determinism (i.e. nonlocality or no choice) because its compatibility condition with quantum mechanics is not stated strongly enough.34" unquote. I agree you're your conclusion only for different reasons:

      1. Our past (as downloaded in our memory) is the only deterministic entity (the seeming cause-and-event line).

      2. The NOW moment is still in the future for an emergent conscious agent (the flow of processing time in the emergent phenomenon reality).

      3. ALL the experienced emergent time-lines are, as I argue, ONE (eternal compared to our emergent reality) dimensionless Point Zero in Total Simultaneity, from which an infinity of Choices (hidden variables) can be made by the partial consciousness of an agent. So, the future is NON-Deterministic as we are approaching Point Zero. Point Zero, however, is unreachable for any emergent phenomenon out of Total Simultaneity.

      Quote: "Now a proof of the above true statement about deterministic hidden variable theories should perhaps not be expected to show that all typical quantum mechanical outcome sequences violate the predictions of the hidden variable theory, but it should identify at least an uncountable number of such typical sequences-for finding a countable number, let alone a mere finite number, would make the contradiction with quantum mechanics happen with probability zero." Unquote. I think you are quite right here (This conclusion makes me think of the super-asymmetry theory of Sheldon Cooper...I hope I am not offending you), everything we are making a LAW of doesn't mean that it will be always so in a future that emerge, the more information we are gathering (in trying to complete our models), the more changes we will encounter.

      I liked your approach and conclusions, but there are different ways to come to the same conclusions, so I hope that you can find some time to read my approach.

      Thank you very much and good luck in the contest.

      Wilhelmus de Wilde

        Dear Dan,

        Thanks for your comments. I would agree that (in)determinism is not a property of "Nature" but of physical theories, which indeed are assertions of agents. However, this does not seem to block the possibility of randomness of sequences or other mathematical structures.

        I do not understand your remark that "In contrast, as a product of individual coin flip events, the latter sequence is certainly less probable than the former." According to standard probability theory they _are_ equally probable.

        Your comments are not at all merely tangential to my topic: they go to the heart of it!

        Best wishes, Klaas

        Dear Klaas,

        Thanks for your reply. I seem to have gotten the gist of your argument right. I agree that, from the point of view you suggest, the 'determinism' of Bohmian mechanics (I can't comment too much on 't Hooft's cellular automata) looks sort of question-begging: sure, you can shunt all the randomness into the initial conditions, but that doesn't really make it disappear.

        I did have a quick look at your 'Randomness' paper, which will surely make for some interesting reading once I get the time to go through it in depth; having only skimmed it so far is why I got confused about the source of the 'statistical' nature of the non-signalling constraint. The Valentini paper you mention is 'Signal-Locality in Hidden-Variables Theories'?

        Finally, please tell the poor honors student who had to wade through my bloviations that I'm sorry. ;)

        Cheers

        Jochen

        Hi, again

        I simply mean that a series of coin tosses would look much more like mixed zeros and ones than like a sequence of all ones (about half-and-half heads and tails, as opposed to all heads). I think what you mean by "standard probability theory" is that ANY sequence, itself considered as a pre-existing entity arbitrarily drawn from an infinite set of possible such sequences, is equally probable (or rather improbable)? But my point is that such pre-existing sequences don't exist. There is only each event of a coin toss and the records we keep of them. In a series of such records, more of them will accumulate mixed ones and zeros than a straight run of ones, for example.
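        For concreteness, both statements can be put side by side with a little arithmetic (a minimal sketch; the 16-flip length is chosen only to echo the strings quoted earlier): each individual sequence is equally improbable, yet the mixed-looking sequences vastly outnumber the single run of all heads.

# Quick arithmetic check for 16 fair coin flips (illustration only).
from math import comb

n = 16
p_specific   = 2 ** -n                     # any one particular sequence
p_all_heads  = 2 ** -n                     # the single all-heads sequence
p_half_heads = comb(n, n // 2) * 2 ** -n   # 12,870 sequences with exactly 8 heads

print(f"P(one specific sequence) = {p_specific:.6%}")    # ~0.001526%
print(f"P(all heads)             = {p_all_heads:.6%}")   # ~0.001526%
print(f"P(exactly half heads)    = {p_half_heads:.2%}")  # ~19.64%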

        Thanks for your patience.

        Dan

        6 days later

        Professor Landsman,

        I must admit your approach took me back in time to the ancient Greek insight that cosmos/order/certainty came out of or is grounded on chaos/disorder/uncertainty.

        If randomness (as an apparent lack of patterns and predictability in events) is a measure of uncertainty, and if outcomes over repeated trials of the same event follow a probability distribution, then the relative frequency over many trials is predictable (so in a way von Mises was right to derive probability theory from randomness, even though he failed in that attempt but helped Kolmogorov succeed). In other words, randomness could be more fundamental than the probability theory that has permeated QM and statistical mechanics since Boltzmann, even though your concept of randomness is in the mathematical sense of Chaitin-Kolmogorov, not the sense of von Mises.

        Coming back to our theme: if Gödel's theorems tell us that the fields of mathematics and physics (according to Hilbert's axiomatic programme) cannot be grounded on logic (in the classical and symbolic sense; but who knows, maybe one day they could be grounded on a different type of logic, say Brouwer-Heyting intuitionism or Moisil-Łukasiewicz many-valued logic), and Bell's theorem tells us that QM cannot be grounded on classical determinism or any underlying hidden variable theory à la de Broglie-Bohm-Vigier, then how do we know that we haven't been using undecidable results to prove our theorems in both mathematics and physics throughout the millennia? (Like the ones we found in Euclidean geometry, so that Hilbert had to re-axiomatize it.)

        Does this mean that, ultimately, randomness and chaos could be the ground for both mathematics and physics, with their logical necessity and deterministic or indeterministic laws, and that the Greeks were right after all?...

          Hi Mihai,

          Thank you for these interesting comments. I agree with your last point: ultimately, all laws derive from randomness! A point made repeatedly by my colleague Cristian Calude is that randomness cannot imply lawlessness, since any (infinite) sequence necessarily possesses some arithmetic regularities (Baudet's Conjecture / van der Waerden's Theorem). It should be stressed that random sequences of the kind studied in Kolmogorov complexity theory are far from lawless in the sense of Brouwer: they are pretty regular in satisfying all kinds of statistical laws that follow from 1-randomness, as I explain in my essay. I am less sure about your observation that the theorems of mathematics we use in physics are grounded on undecidable results; e.g., the derivation of the incompleteness results by Gödel and Chaitin is itself based on decidable propositions only (at least as far as I know). Also, I would not say that Gödel's theorems imply that mathematics cannot be grounded on logic, except when you mean "grounded" in Hilbert's sense, namely a proof of consistency. Without knowing that e.g. ZFC is consistent, it is still a logical language in which we do our mathematics, most of which is decidable in ZFC.

          Best wishes, Klaas
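          PS: As a small illustration of the van der Waerden point (a toy check of my own, not from the essay): every binary string of length nine or more contains three equally spaced equal bits, so even a 1-random sequence obeys this combinatorial law.

# Toy check of the van der Waerden regularity: every binary string of length
# >= 9 contains a monochromatic 3-term arithmetic progression (three equally
# spaced equal bits), so no sequence, however random, is lawless in this sense.
from itertools import product
import random

def has_mono_3ap(s: str) -> bool:
    n = len(s)
    return any(s[i] == s[i + d] == s[i + 2 * d]
               for i in range(n) for d in range(1, (n - i - 1) // 2 + 1))

# Exhaustive check: no 9-bit string avoids such a progression...
assert all(has_mono_3ap("".join(bits)) for bits in product("01", repeat=9))
# ...whereas some 8-bit strings still do (e.g. '00110011').
assert not has_mono_3ap("00110011")

# A long 'random' sequence therefore necessarily contains one as well:
print(has_mono_3ap("".join(random.choice("01") for _ in range(1000))))  # True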

          Thank you for your reply, Prof. Landsman. Indeed, I meant consistency of ZFC in Hilbert's sense: according to Gödel's second incompleteness theorem, the consistency of ZFC cannot be proved within ZFC itself (unless it is actually inconsistent). In other words, if ZFC is taken as a deductive formal axiomatic system of set theory, foundational for most of classical mathematics, then its consistency cannot be demonstrated within this system alone. So far, ZFC has proved immune to the classical set-theoretic paradoxes of Cantor, Russell and Burali-Forti, but this of course does not imply that ZFC is absolutely (categorically) free of any potential inner contradiction that might come up one day, wouldn't you agree?...

          The relativity of set-theoretic concepts, such as the ones used in ZFC, was signalled quite early on by the Löwenheim-Skolem theorem, which subsequently led to Skolem's paradox (1922): if ZFC is consistent, then its axioms must be satisfiable within a countable domain, even though they prove the existence of uncountable sets in Cantor's sense. Nowadays we know that there are many mathematical statements undecidable in ZFC, and other axioms need to be added in order to prove results in branches of mathematics such as category theory or algebraic geometry, whose theorems are currently being used in some modern theories of physics, and which work with, for instance, Tarski-Grothendieck set theory, one of many extensions of ZFC. Best wishes, Mihai

          Dear Klaas Landsman,

          Interesting essay, I like it, especially the fact that 'randomness' does not imply 'lawlessness', a result that is often overlooked when talking about randomness.

          I would be happy if you would comment on my essay, where I try to figure out similar links between logic and physics.

          7 days later

          Dear Professor Landsman,

          I am still waiting for your comment.

          Sorry for the comparison with "The BB theory".

          Wilhelmus de Wilde

          13 days later

          Dear Klaas,

          I found your essay truly brilliant, combining in a clear manner quantum mechanics and Gödelian-like results, and using them to discuss the possibility of deterministic theories underlying quantum mechanics.

          Since this is not my area, I must say I was quite blown away by some statements such as "fair classical coins do not exist". I am still recovering, in fact, and will have to look at the references you gave in your essay.

          With regards to that statement, I wanted to make sure I understood what is meant here:

          - Do you mean to say that a coin whose motion is determined by Newton's (or Hamilton's) equations of motion cannot eventually give rise to a fair coin toss (unless true fair randomness is put in the initial conditions)? or

          - Do you mean to say that a fair coin model within classical probability theory is actually not fair?

          I believe this is the former but just want to make sure.

          Finally, given that the argument relies, as far as I understood, on infinite sequences, is there a finite version of it whereby, say, a membership function (for the Kolmogorov random character of a sequence) would be in between 0 and 1 for any finite N but would tend to zero when N tends to infinity?

          Congratulations again on this very nice essay.

          Many thanks.

          Best,

          Fabien

            Dear Fabien,

            Thank you for your kind words. I meant the former; the latter presupposes fairness. The reason is, as I explain in my essay, that a fair coin toss requires a truly random sampling of a probability measure, which classical physics cannot provide (I am not claiming that Nature can provide it! But QM can, in theory).

            Your second question is very good; space limitations prevented me from discussing it. The question is about what I call "Earman's Principle" in my (Open Access) book, Foundations of Quantum Theory, see http://www.springer.com/gp/book/9783319517766, namely: "While idealizations are useful and, perhaps, even essential to progress in physics, a sound principle of interpretation would seem to be that no effect can be counted as a genuine physical effect if it disappears when the idealizations are removed." This is valid for the arguments in my essay because the definition of Kolmogorov randomness of infinite sequences guarantees it, coming, as it does, from a limiting construction.
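            As a crude finite-N illustration of the incompressibility behind all this (a toy sketch of my own, not from the essay; zlib compression is only a computable stand-in for the uncomputable Kolmogorov complexity), one can compare a lawlike string with a simulated coin-flip string at increasing but finite lengths:

# Toy finite-N illustration: zlib as a crude, computable stand-in for
# Kolmogorov complexity (which is uncomputable). Already at finite N the
# lawlike string is highly compressible, the coin-flip string is not.
import random
import zlib

for n in (1_000, 10_000, 100_000):               # number of bits / coin flips
    lawlike = "1" * n
    coinflip = "".join(random.choice("01") for _ in range(n))
    for name, seq in (("lawlike", lawlike), ("coinflip", coinflip)):
        packed = zlib.compress(seq.encode("ascii"), level=9)
        print(f"n={n:6d}  {name:8s} -> {len(packed):6d} bytes compressed")
# Typical result: the lawlike string compresses by orders of magnitude, while
# the coin-flip string never drops below roughly n/8 bytes, i.e. about one
# bit per flip: it is (nearly) incompressible.

            Nothing in this comparison depends on taking N to infinity, in the spirit of Earman's Principle.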

            Best wishes, Klaas

            Hello Professor Landsman,

            Wowww, I must say that your essay is very relevant and general. I liked a lot how you approach this topic about our limitations. I wish you all the best. I shared it on Facebook; it is one of my favorites, along with the essays of Del Santo and Klingman.

            Best regards

            "The famous theorem of Bell (1964) left two loopholes for determinism underneath quantum mechanics..."

            It left a lot more than that. The theorem is also founded upon the dubious, idealistic assumptions that (1) particles are absolutely identical and, worse still, (2) that all the "measurements" of the particle states are absolutely without errors.

            It is easy to demonstrate that when those two assumptions are violated, as they both must be when only a single bit of information is being manifested by an entangled pair, then Bell's correlations can be readily reproduced classically, with detection efficiencies above the supposed quantum limit. Note that the detection efficiency actually measured in the demonstration is the combined, dual detection efficiency, not the usually reported single-detector efficiency. The former cannot even be measured in a conventional Bell test.

            Rob McEachern

            Dear Klaas Landsman,

            If by 'compatible with quantum mechanics' one means that 'qubits' are real, then the argument is over. But there are probably a dozen interpretations of QM with almost as many claimed 'ontologies'.

            Bell demands qubits in his first equation: A, B = +1, -1. And for spins in magnetic domains this is a good statistical model, and reasonable. Unfortunately, for the Stern-Gerlach model upon which Bell based his reasoning, it is not. The SG data shown on the "Bohr postcard" is anything but +1 and -1, and a 3-vector spin model produces almost exactly the SG data.

            A number of authors are concerned with whether 'classical physics' is truly deterministic and, if not, how this is explained.

            If one assumes that the de Broglie-like gravitomagnetic wave circulation is induced by the mass flow density of the particle [momentum density], then the equivalent mass of the field energy induces more circulation. This means that the wave field is self-interacting. For one free particle, a stable soliton-like particle plus wave is essentially deterministic. But for many interacting particles, all of which are also self-interacting, 'determinism' absolutely vanishes, in the sense of calculations or predictions, and the statistical approach becomes necessary.

            This theory clearly supports 'local' entanglement, as the waves interact and self-interact, while rejecting Bell's 'qubit'-based projection A, B = +1, -1, consistent with the Stern-Gerlach data (see the Bohr postcard). For Bell experiments based on 'real' (3-vector) spin vs 'qubit' spin (good for spins in magnetic domains), the physics easily obtains the correlation which Bell claims is impossible; hence 'long-distance' entanglement is not invoked and locality is preserved.

            This is not a matter of math; it is a matter of ontology. I believe ontology is the issue for the number of authors who also seem to support more 'intuition' in physics. My current essay, Deciding on the nature of time and space, treats intuition and ontology in a new analysis of special relativity, and I invite you to read it and comment.

            Edwin Eugene Klingman

              Edwin,

              Thanks for mentioning the Bohr postcard. I had never actually seen the image until your comment provoked me to search for it.

              I would assert that QM is not about ontology at all. It is not describing what exists in "being", but only the statistics of a threshold-based energy-detection of the things in being. So if you draw a vertical threshold line down the middle of the "B-field on" image, you create the two states. But at the top and bottom of the image, those two states blur together and it becomes impossible to correctly distinguish between them. That is the problem with all Bell tests that I noted in my comment above. When you examine a "coin" face-on, it is easy to correctly "call it". But not when you examine it "edge-on". The actual ontology of a coin is that it is what it is, not what you observe. Thus, a polarized coin is in the ontological state of merely being polarized; it is not polarized either "up" or "down". The latter are merely the result of "observing" the polarization, with a detector that measures a different energy in the "polarization" as a function of the angle between the coin's axis and the axis of the detector, and then introducing a threshold to "call it" one state or the other, or "none of the above" in the event that there is not enough energy to ever reliably detect the object at all, as when it is nearly edge-on and thus "too close to call".

              In this context, it is useful to bear in mind that QM got its start when it was first observed that the photoelectric effect behaved just as if an energy threshold exists.

              Rob McEachern

              Dear Dr. Landsman,

              Thank you for your well written essay. I agree with your conclusion that quantum mechanics is intrinsically random, and that hidden variables or initial conditions do not adequately explain the randomness of quantum measurement results. However, I reach a different conclusion on the origin of quantum randomness.

              In comparing Standard QM and Hidden Variables QM in section 4, you conclude that we have a choice between 1) random outcomes of measurements on identical quantum states, and 2) deterministic measurement outcomes on random or randomly sampled HVs.

              You reject the second choice on the basis that HV theories are deterministic only at first sight, and this therefore undermines their rationale. You conclude that the randomness of measurement results reflects randomness of the measurement process itself. This is, in essence, the orthodox (Copenhagen) interpretation. The Copenhagen interpretation is essentially an empirical model describing measurement outcomes in terms of Born probabilities.

              In my essay I outline a conceptual model and interpretation that provides a third option to explain the randomness of measurement results. I suggest that the randomness of measurement outcomes results from deterministic measurements on an ensemble of metastable quantum states, for example, an ensemble of identically prepared radioactive isotopes. Between its initial preparation and subsequent measurement, a metastable particle is subject to random transitions to a quantum state of higher stability. Deterministic measurements subsequent to the ensemble's preparation will therefore reveal random outcomes: no hidden variable required. As described in the essay, the proposed conceptual model is consistent with empirical observations, it is based on empirically sound and conceptually simple assumptions, and it explains the measurement problem and many other quantum "paradoxes." I hope you have a chance to look at it.
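              As a minimal numerical rendering of this scenario (a toy sketch with hypothetical parameters, not the model itself): identically prepared metastable systems decaying at random times make a later, perfectly deterministic 'has it decayed yet?' measurement yield an apparently random 0/1 sequence.

# Toy rendering of the metastable-ensemble scenario: random decay times plus
# a deterministic threshold measurement give a random-looking 0/1 sequence.
# The half-life and measurement time are hypothetical, in arbitrary units.
import math
import random

HALF_LIFE = 1.0
MEASURE_AT = 1.0            # measure each member after one half-life
N = 100_000                 # ensemble size

lam = math.log(2) / HALF_LIFE
decay_times = [random.expovariate(lam) for _ in range(N)]
outcomes = [1 if t <= MEASURE_AT else 0 for t in decay_times]   # 1 = decayed

print(sum(outcomes) / N)                    # close to 0.5 after one half-life
print("".join(map(str, outcomes[:32])))     # an apparently random 0/1 string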

              Best regards,

              Harrison Crecraft