Dear Prof. Landsman,

I am no mathematician, but it strikes me that all attempts to define randomness as an inherent objective property of sequences in math, or of events in nature, are doomed to failure precisely because randomness (or its opposite, determinism) is the assertion of an agent (mathematician or physicist). This fact aligns it with prediction and computation, which are the actions of agents. The concept of determinism as an objective state of affairs is an unfortunate result of an ambiguity in language and thought. It confuses the purported ability of one external thing to fix the state of another (causality) with the ability of an agent to ascertain the state in question (knowledge)--that is, to "determine" what has actually occurred. I hold that determinism is a property of human artifacts, such as equations and machines, because they are products of definition. Physicists have found it convenient to ascribe it to nature in the macroscopic realm, and some would like to extend that to the micro realm. But strictly speaking, neither determinism nor non-determinism can be properties of nature itself.

On another note, as you point out, one completed or pre-existing STRING of binary digits is no more probable than another, as an arbitrary selection from the set of all strings ("an apparently 'random' string like σ = 0011010101110100 is as probable as a 'deterministic' string like σ = 111111111111111"). In contrast, as a product of individual coin flip events, the latter sequence is certainly less probable than the former. I would point out that completed sequences (and the set of all strings) are concepts created by agents, not things existing in nature. The same must apply to the notion of prior probability as a property inhering in individual events (independent of observers).

I suspect that many dilemmas in physics would be viewed differently if the role of the observer or theorist were better taken into account. I hope these comments might be of some interest, and I apologize if they are tangential to your very impressive argument.

Cheers,

Dan Bruiger

    Dear Jochen, Thank you for this kind and detailed post, which summarizes my paper well. I am well aware of the literature you cite (including your own 2018 paper, which I studied in detail with an honours student), but the rules for this essay contest exclude an extensive bibliography - I cite most of these papers in my Randomness? What randomness? paper, ref. [20] in my essay, available Open Access at https://link.springer.com/article/10.1007/s10701-020-00318-8 (although your 2018 paper dropped off the longlist of references included in earlier drafts, which became almost book-length, so even in [20] I ended up citing only papers I directly relied on). I will comment on your paper in a separate post about your own essay in this contest later today.

    You write: "It seems to me that at the heart of this is really the observation that you can write any noncomputable function as a finite algorithm that has access to an infinite reservoir of (algorithmic) randomness." This observation is the Kucera-Gacs Theorem (this is Theorem 8.3.2 in ref. [8] of my essay (Downey & Hirschfeldt), which states that every set is reducible to a random set (acting as an oracle). Phrased in this way, your point on Bohmian mechanics ("Bohmian mechanics can play the role of the algorithmic part, which has to be augmented by an infinite random string in order to replicate the quantum predictions.") is, as you suggest, exactly the point I make in my essay, implying that precisely because of this dependence on a random oracle (which has to come from "nature" itself? or what?) it cannot be a deterministic underpinning of quantum mechanics. And likewise for 't Hooft's or any other serious hidden variable theory.

    Finally, as to your last point, "that in principle the non-signaling nature of quantum mechanics should be considered as a statistical notion, like the second law of thermodynamics.": I proposed this in my Randomness? What randomness? paper but did not include it in my current essay, although all these things are closely related. In any case, as I learnt from my friend Guido Bacciagaluppi, it was Antony Valentini who first made this point, long ago. But it should be much more widely known!

    Keep it up! All the best, Klaas

    Dear professor Landsman (Beste Klaas);

    First: Thank you for the very clear and understandable historical introduction.

    Quote "I conclude that deterministic hidden variable theories compatible with quantum mechanics cannot exist; Bell's theorem leaves two loopholes for determinism (i.e. nonlocality or no choice) because its compatibility condition with quantum mechanics is not stated strongly enough.34" unquote. I agree you're your conclusion only for different reasons:

    1. Our past (as downloaded in our memory) is the only deterministic entity (the seeming cause-and-event line).

    2. The NOW moment is still in the future for an emergent conscious agent (the flow of processing time in the emergent phenomenon reality).

    3. ALL the experienced emergent time-lines are, as I argue, ONE (eternal compared to our emergent reality) dimensionless Point Zero in Total Simultaneity, from which an infinity of Choices (hidden variables) can be made by the partial consciousness of an agent. So the future is NON-Deterministic as we approach Point Zero. Point Zero, however, is unreachable for any emergent phenomenon out of Total Simultaneity.

    Quote: "Now a proof of the above true statement about deterministic hidden variable theories should perhaps not be expected to show that all typical quantum mechanical outcome sequences violate the predictions of the hidden variable theory, but it should identify at least an uncountable number of such typical sequences-for finding a countable number, let alone a mere finite number, would make the contradiction with quantum mechanics happen with probability zero." Unquote. I think you are quite right here (This conclusion makes me think of the super-asymmetry theory of Sheldon Cooper...I hope I am not offending you), everything we are making a LAW of doesn't mean that it will be always so in a future that emerge, the more information we are gathering (in trying to complete our models), the more changes we will encounter.

    I liked your approach and conclusions, but there are different ways to come to the same conclusions, so I hope that you can find some time to read my approach.

    Thank you very much and good luck in the contest.

    Wilhelmus de Wilde

      Dear Dan,

      Thanks for your comments. I would agree that (in)determinism is not a property of "Nature" but of physical theories, which indeed are assertions of agents. However, this does not seem to block the possibility of randomness of sequences or other mathematical structures.

      I do not understand your remark that "In contrast, as a product of individual coin flip events, the latter sequence is certainly less probable than the former." According to standard probability theory they _are_ equally probable.

      Your comments are not at all merely tangential to my topic: they go to the heart of it!

      Best wishes, Klaas

      Dear Klaas,

      thanks for your reply. I seem to have gotten the gist of your argument right. I agree that, from the point of view you suggest, the 'determinism' of Bohmian mechanics (I can't comment too much on 't Hooft's cellular automata) looks sort of question-begging---sure you can shunt all the randomness to the initial conditions, but that doesn't really make it disappear.

      I did have a quick look at your 'Randomness' paper, which will surely make for some interesting reading once I get the time for a more in-depth study - that quick look is why I got confused regarding the source of the 'statistical' nature of the non-signalling constraint. The Valentini paper you mention is 'Signal-Locality in Hidden-Variables Theories'?

      Finally, please tell the poor honors student who had to wade through my bloviations that I'm sorry. ;)

      Cheers

      Jochen

      Hi, again

      I simply mean that a series of coin tosses would look much more like mixed zeros and ones than a sequence of all ones (about half-and-half heads and tails as opposed to all heads). I think what you mean by "standard probability theory" is that ANY sequence, considered as a pre-existing entity arbitrarily drawn from an infinite set of possible such sequences, is equally probable (or rather improbable)? But my point is that such pre-existing sequences don't exist. There is only each event of a coin toss and the records we keep of them. Among a series of such records, more will show mixed ones and zeros than a straight run of ones, for example.
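      For concreteness, here is a small numerical check of this point (a sketch of mine in Python, assuming a fair coin and records of length 16): every specific record has the same tiny probability, but the "half-and-half" records form by far the larger class.

          # Every specific length-n record of fair coin flips has probability 2^-n,
          # yet records with roughly half heads vastly outnumber the single
          # all-heads record, so a mixed-looking record is what one typically sees.
          from math import comb

          n = 16
          p_any_specific_string = 0.5 ** n       # same for 0011010101110100 and 1111111111111111
          n_strings_with_8_heads = comb(n, 8)    # 12870 equally probable records
          p_exactly_8_heads = n_strings_with_8_heads * p_any_specific_string

          print(p_any_specific_string)           # about 1.5e-05 for either example string
          print(p_exactly_8_heads)               # about 0.196: the 'typical' class dominates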

      Thanks for your patience.

      Dan

      6 days later

      Professor Landsman,

      I must admit your approach took me back in time to the ancient Greek insight that cosmos/order/certainty came out of or is grounded on chaos/disorder/uncertainty.

      If randomness (as an apparent lack of patterns and predictability in events) is a measure of uncertainty, and since outcomes over repeated trials of the same event often follow a probability distribution, then the relative frequency over many trials is predictable (so in a way von Mises was right to derive probability theory from randomness, even though he failed in that attempt but helped Kolmogorov succeed). In other words, randomness could be more fundamental than the probability theory that has permeated QM and statistical mechanics since Boltzmann, even though your concept of randomness is in the mathematical sense of Chaitin-Kolmogorov, not in von Mises' sense.

      Coming back to our theme: if Gödel's theorems tell us that the fields of mathematics and physics (according to Hilbert's axiomatic programme) cannot be grounded on logic (in the classical and symbolic sense, but who knows, maybe one day they could be grounded on a different type of logic, say Brouwer-Heyting intuitionism or Moisil-Lukasiewicz many-valued logic), and Bell's theorem tells us that QM cannot be grounded on classical determinism or any underlying hidden variables theory à la de Broglie-Bohm-Vigier, then how do we know that we haven't been using undecidable results to prove our theorems in both mathematics and physics throughout the millennia (like the ones we found in Euclidean geometry, so that Hilbert had to re-axiomatize it)?

      Does this mean that ultimately, randomness and chaos could be the ground for both mathematics and physics, with their logical necessity and deterministic or indeterministic laws, and that ultimately the Greeks were right?...

        Hi Mihai, Thank you for these interesting comments. I agree with your last point: ultimately, all laws derive from randomness! A point made repeatedly by my colleague Cristian Calude is that randomness cannot imply lawlessness, since any (infinite) sequence necessarily possesses some arithmetic regularities (Baudet's Conjecture/van der Waerden's Theorem). It should be stressed that random sequences of the kind studied in Kolmogorov complexity theory are far from lawless in the sense of Brouwer - they are pretty regular in satisfying all kinds of statistical laws that follow from 1-randomness, as I explain in my essay. I am less sure about your observation that the theorems of mathematics we use in physics are grounded on undecidable results; e.g. the derivation of the incompleteness results by Gödel and Chaitin itself is based on decidable propositions only (at least as far as I know). Also, I would not say that Gödel's theorems imply that mathematics cannot be grounded on logic, except when you mean "grounded" in Hilbert's sense, namely a proof of consistency. Without knowing that e.g. ZFC is consistent, it is still a logical language in which we do our mathematics, most of which is decidable in ZFC. Best wishes, Klaas
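        P.S. To illustrate the kind of statistical law meant above, here is a small sketch (illustration only: genuinely 1-random sequences are not computable, so a pseudorandom stream stands in here merely to show the Borel-normality-type regularity that 1-randomness guarantees, namely that every block of length k has limiting frequency 2^-k):

            # Illustration only: pseudorandom bits are NOT 1-random; they merely
            # display the same kind of statistical regularity (Borel normality)
            # that every 1-random sequence provably satisfies.
            import random

            random.seed(0)
            bits = [random.getrandbits(1) for _ in range(100_000)]

            freq_ones = sum(bits) / len(bits)
            blocks = ["".join(map(str, bits[i:i + 2])) for i in range(0, len(bits) - 1, 2)]
            freq_01 = blocks.count("01") / len(blocks)

            print(freq_ones)   # close to 1/2
            print(freq_01)     # close to 1/4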

        Thank you for your reply, Prof. Landsman. Indeed, I meant consistency of ZFC in Hilbert's sense, which, according to Gödel's second incompleteness theorem, cannot be proved within ZFC itself (unless ZFC is actually inconsistent). In other words, if ZFC is taken as a deductive formal axiomatic system of set theory, foundational for most of classical mathematics, then the consistency of ZFC cannot be demonstrated in this system alone. Given that, so far, ZFC has proved immune to the classical set-theory paradoxes of Cantor, Russell or Burali-Forti, this of course does not imply that ZFC is absolutely (categorically) free of any potential inner contradictions that might come up one day - wouldn't you agree?...

        The relativity of set-theoretic concepts, such as the ones used in ZFC, was signalled quite early on by the Löwenheim-Skolem theorem, which subsequently led to Skolem's paradox (1922): if ZFC is consistent, then its axioms must be satisfiable within a countable domain, even though they prove the existence of uncountable sets in Cantor's sense. Nowadays, we know that there are many mathematical statements undecidable in ZFC, and other axioms need to be added in order to prove results in branches of mathematics such as category theory or algebraic geometry, whose theorems are currently being used in some modern theories of physics, and which work with Tarski-Grothendieck set theory, for instance, one of many extensions of ZFC. Best wishes, Mihai

        Dear Klaas Landsman,

        Interesting essay, I like it - especially the fact that 'randomness' does not imply 'lawlessness', a result that is often overlooked when talking about randomness.

        I would be happy if you would comment on my essay, where I try to figure out similar links between logic and physics.

        7 days later

        Dear Professor Landsman,

        I am still waiting for your comment.

        Sorry for the comparison with "The BB theory"

        Wilhelmus de Wilde

        13 days later

        Dear Klaas,

        I found your essay truly brilliant. It combines, in a clear manner, quantum mechanics and Gödelian-like results, and uses them to discuss the possibility of deterministic theories of quantum mechanics.

        Since this is not my area, I must say I was quite blown away by some statements such as " fair classical coins do not exist". I am still recovering in fact and will have to look at the references you gave in your essay.

        With regards to that statement, I wanted to make sure I understood what is meant here:

        - Do you mean to say that a coin whose motion is determined by Newton's (or Hamilton's) equations of motion cannot eventually give rise to a fair coin toss (unless true fair randomness is put in the initial conditions)? or

        - Do you mean to say that a fair coin model within classical probability theory is actually not fair?

        I believe this is the former but just want to make sure.

        Finally, given that the argument relies, as far as I understood, on infinite sequences, is there a finite version of it whereby, say, a membership function (for the Kolmogorov random character of a sequence) would be in between 0 and 1 for any finite N but would tend to zero when N tends to infinity?

        Congratulations again on this very nice essay.

        Many thanks.

        Best,

        Fabien

          Dear Fabien,

          Thank you for your kind words. I meant the former; the latter presupposes fairness. The reason is, as I explain in my essay, that a fair coin toss requires a truly random sampling of a probability measure, which classical physics cannot provide (I am not claiming that Nature can provide it! But QM can, in theory).

          Your second question is very good, space limitations prevented me from discussing it. The question is about what I call "Earman's Principle" in my (Open Access) book, Foundations of Quantum Theory, see http://www.springer.com/gp/book/9783319517766, namely: "While idealizations are useful and, perhaps, even essential to progress in physics, a sound principle of interpretation would seem to be that no effect can be counted as a genuine physical effect if it disappears when the idealizations are removed." This is respected in the arguments in my essay because the definition of Kolmogorov randomness of infinite sequences, coming as it does from a limiting construction, guarantees it.
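          To make the limiting construction slightly more concrete, a sketch on my part (via the Levin-Schnorr characterization in terms of prefix-free Kolmogorov complexity K, which may or may not be the exact formulation intended above): an infinite binary sequence x is 1-random if and only if there is a constant c such that

              K(x|N) >= N - c   for every N,

          where x|N denotes the first N digits of x. So the infinite notion is a uniform condition on finite prefixes, and a finite-N quantity such as the compression ratio K(x|N)/N already carries graded information of the kind Fabien asks about, approaching 1 along the prefixes of a 1-random sequence.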

          Best wishes, Klaas

          Hello Professor Landsman,

          Wowww, I must say that your essay is very relevant and general. I liked a lot how you approach this topic about our limitations. I wish you all the best. I shared it on Facebook; it is one of my favorites, along with the essays of Del Santo and Klingman.

          Best regards,

          "The famous theorem of Bell (1964) left two loopholes for determinism underneath quantum mechanics..."

          It left a lot more than that. The theorem is also founded upon the dubious, idealistic assumptions that (1) particles are absolutely identical and, worse still, (2) all the "measurements" of the particle states are absolutely without errors.

          It is easy to demonstrate that when those two assumptions are violated, as they both must be when only a single bit of information is being manifested by an entangled pair, then Bell's correlations can be readily reproduced classically, with detection efficiencies above the supposed quantum limit. Note that the detection efficiency being actually measured in the demonstration is the combined, dual detection efficiency, not the usually reported single-detector efficiency. The former cannot even be measured in a conventional Bell test.

          Rob McEachern

          Dear Klaas Landsman,

          If by 'compatible with quantum mechanics' one means that 'qubits' are real, then the argument is over. But there are probably a dozen interpretations of QM with almost as many claimed 'ontologies'.

          Bell demands qubits in his first equation: A, B = +1, -1. And for spins in magnetic domains this is a good and reasonable statistical model. Unfortunately, for the Stern-Gerlach model upon which Bell based his reasoning, it is not. The SG data shown on the 'Bohr postcard' is anything but +1 and -1, and a 3-vector spin model produces almost exactly the SG data.

          A number of authors are concerned with whether 'classical physics' is truly deterministic, and, if not, how this is explained.

          If one assumes that the deBroglie-like gravitomagnetic wave circulation is induced by the mass flow density of the particle [momentum-density], then the equivalent mass of the field energy induces more circulation. This means that the wave field is self-interacting. For 'one free particle' a stable soliton-like particle plus wave is essentially deterministic. But for many interacting particles, all of which are also self-interacting, then 'determinism' absolutely vanishes, in the sense of calculations or predictions, and the statistical approach becomes necessary.

          This theory clearly supports 'local' entanglement, as the waves interact and self-interact, while rejecting Bell's 'qubit'-based projection A, B = +1, -1, consistent with the Stern-Gerlach data (see the Bohr postcard). For Bell experiments based on 'real' spin (3-vector) vs 'qubit' spin (good for spins in magnetic domains), the physics easily obtains the correlation which Bell claims is impossible, hence 'long-distance' entanglement is not invoked and locality is preserved.

          This is not a matter of math; it is a matter of ontology. I believe ontology is the issue for the number of authors who also seem to support more 'intuition' in physics. My current essay, Deciding on the nature of time and space, treats intuition and ontology in a new analysis of special relativity, and I invite you to read it and comment.

          Edwin Eugene Klingman

            Edwin,

            Thanks for mentioning the Bohr postcard. I had never actually seen the image until your comment provoked me to search for it.

            I would assert that QM is not about ontology at all. It is not describing what exists in "being", but only the statistics of a threshold-based energy detection of the things in being. So if you draw a vertical threshold line down the middle of the "B-field on" image, you create the two states. But at the top and bottom of the image, those two states blur together and it becomes impossible to correctly distinguish between them. That is the problem with all Bell tests, as I noted in my comment above. When you examine a "coin" face-on, it is easy to correctly "call it". But not when you examine it "edge-on." The actual ontology of a coin is that it is what it is - not what you observe. Thus, a polarized coin is in the ontological state of merely being polarized; it is not polarized either "up" or "down" - the latter are merely the result of "observing" the polarization, with a detector that measures a different energy in the "polarization" as a function of the angle between the coin's axis and the axis of the detector, and then introducing a threshold to "call it" one state or the other - or "none of the above", in the event that there is not enough energy to ever reliably detect the object at all, as when it is nearly edge-on and thus "too close to call."

            In this context, it is useful to bear in mind that QM got its start when it was first observed that the photoelectric effect behaved just as if an energy threshold exists.

            Rob McEachern

            Dear Dr. Landsman,

            Thank you for your well written essay. I agree with your conclusion that quantum mechanics is intrinsically random, and that hidden variables or initial conditions do not adequately explain the randomness of quantum measurement results. However, I reach a different conclusion on the origin of quantum randomness.

            In comparing Standard QM and Hidden Variables QM in section 4, you conclude that we have a choice between 1) random outcomes of measurements on identical quantum states, and 2) deterministic measurement outcomes on random or randomly sampled HVs.

            You reject the second choice on the basis that HV theories are deterministic only at first sight, and this therefore undermines their rationale. You conclude that the randomness of measurement results reflects randomness of the measurement process itself. This is, in essence, the orthodox (Copenhagen) interpretation. The Copenhagen interpretation is essentially an empirical model describing measurement outcomes in terms of Born probabilities.

            In my essay I outline a conceptual model and interpretation that provides a third option to explain the randomness of measurement results. I suggest that the randomness of measurement outcomes results from deterministic measurements on an ensemble of metastable quantum states, for example, an ensemble of identically prepared radioactive isotopes. Between its initial preparation and subsequent measurement, a metastable particle is subject to random transitions to a quantum state of higher stability. Deterministic measurements subsequent to the ensemble's preparation will therefore reveal random outcomes--no hidden variable required. As described in the essay, the proposed conceptual model is consistent with empirical observations, it is based on empirically sound and conceptually simple assumptions, and it explains the measurement problem and many other quantum "paradoxes." I hope you have a chance to look at it.

            Best regards,

            Harrison Crecraft

            Dear Klaas,

            Sounds interesting. I've downloaded it to my reading list. You may have missed my finalist essay from last year, showing that a physical sequence can surprisingly reproduce QM's data set in the way Bell predicted. I touch on it this year.

            In the meantime, could you perhaps answer these questions for me;

            1. Is a physical 'measurement' interaction more likely to be with a spinning sphere, or a 2D spinning coin? If the former, then:

            2. If we approach the sphere from random directions to measure the momentum states "ROTATIONAL" (clockwise or anticlockwise) and "LINEAR" (left or right), will we always likely find 100% certainty for both?

            3. With one at 100% certainty (say, linear at the equator), will the other state not reduce down to 50:50?

            4. Now with 100 interactions in a row, will any statistical uncertainty tend to increase or decrease?

            5. Did you know the rate of change of rotation speed (so momentum) of Earth's surface with latitude, over the 90° between pole and equator, is CosLat?

            Catch back up with you soon I hope.

            Very Best

            Peter

              Dear Klaas,

              I enjoyed very much your essay, from your insightful parallels between Gödel's and Bell's theorems to your no-go theorem, which I think is amazing. I am still trying to grasp its physical implications. I'm also glad to see from your essay that you know Cris Calude. We met again when he came back to Bucharest a few months ago. He made me realize that randomness is not what we commonly think it is in physics. I realized that we use the word "randomness" pretty much randomly :D Your essay shows that this is indeed an important point which, as Cris explained to me in our discussions, is not well understood in physics. Despite his explanations and your eloquent essay, I am still not sure I fully understand the implications. I have a lot to digest, and I also want to find time to go deeper into your ref. [19], a heavy book I have had in my library for some time. So I may come back with some questions, but for the moment I am interested in one. Do you think, based on your analysis of the two representative examples of deterministic models and the implication of your theorem for them, that it is possible to distinguish them empirically from nondeterministic versions of quantum mechanics? My interest comes from trying to find falsifiable predictions for a single-world-unitary-without-collapse model, which seems to fit in the same category as 't Hooft's cellular automata, but I interpret it differently than denying free choice of experimental settings, as I explain in the attached pdf. In the last section I mention two possible experiments, and I am interested to see if testing for genuine randomness can be physically done. I expect some loopholes stronger than in the EPR case, due to the fact that measurements are not sharp in general, and that the measurement device and the environment may not be stable enough to allow a comparison of repeated experiments numerous enough to tell whether the resulting randomness is genuine or not. But I'm interested in whether you think this is possible, at least in principle.

              Cheers,

              Cristi

              Attachment #1: Cristi_Stoica_The_post-determined_block_universe_draft_2020-04-16.pdf