Professor Landsman,

I must admit your approach took me back to the ancient Greek insight that cosmos/order/certainty came out of, or is grounded in, chaos/disorder/uncertainty.

If randomness (an apparent lack of patterns and predictability in events) is a measure of uncertainty, and if outcomes over repeated trials of the same event tend to follow a probability distribution, then the relative frequency over many trials is predictable. So in a way von Mises was right to try to derive probability theory from randomness: even though his attempt failed, it helped Kolmogorov succeed. In other words, randomness could be more fundamental than the probability theory that has permeated quantum mechanics and statistical mechanics since Boltzmann, even though your concept of randomness is the mathematical one of Chaitin and Kolmogorov, not von Mises's.
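To make the frequency point concrete, here is a minimal Python sketch (the seed and trial counts are arbitrary choices of mine): each individual flip is unpredictable, yet the relative frequency of heads stabilizes near 1/2 as the number of trials grows, which is exactly the regularity von Mises tried to take as primitive.

```python
import random

# Single flips are unpredictable, but relative frequencies stabilize
# (law of large numbers) - the regularity underlying frequentism.
random.seed(0)  # arbitrary seed, for reproducibility only

for n in (10, 100, 1_000, 10_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"n = {n:>7}: relative frequency of heads = {heads / n:.4f}")
```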

Coming back to our theme: Gödel's theorems tell us that the fields of mathematics and physics (in the sense of Hilbert's axiomatic programme) cannot be grounded in logic, at least in the classical, symbolic sense (though who knows, maybe one day they could be grounded in a different type of logic, say Brouwer-Heyting intuitionism or Moisil-Lukasiewicz many-valued logic), and Bell's theorem tells us that quantum mechanics cannot be grounded in classical determinism or any underlying hidden-variables theory à la de Broglie-Bohm-Vigier. Given all this, how do we know that we have not been using undecidable results to prove our theorems in both mathematics and physics throughout the millennia (like the ones found in Euclidean geometry, which forced Hilbert to re-axiomatize it)?

Does this mean that ultimately randomness and chaos could be the ground of both mathematics and physics, with their logical necessity and their deterministic or indeterministic laws, and that the Greeks were ultimately right?...

    Hi Mihai, thank you for these interesting comments. I agree with your last point: ultimately, all laws derive from randomness! A point made repeatedly by my colleague Cristian Calude is that randomness does not imply lawlessness, since any (infinite) sequence necessarily possesses some arithmetic regularities (Baudet's Conjecture / van der Waerden's Theorem). It should be stressed that random sequences of the kind studied in Kolmogorov complexity theory are far from lawless in Brouwer's sense: they are quite regular, in that they satisfy all the statistical laws that follow from 1-randomness, as I explain in my essay.

    I am less sure about your observation that the theorems of mathematics we use in physics are grounded on undecidable results; e.g., the derivation of the incompleteness results by Gödel and Chaitin is itself based on decidable propositions only (at least as far as I know). Also, I would not say that Gödel's theorems imply that mathematics cannot be grounded on logic, except if you mean "grounded" in Hilbert's sense, namely a proof of consistency. Without knowing that e.g. ZFC is consistent, it is still a logical language in which we do our mathematics, most of which is decidable in ZFC. Best wishes, Klaas
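    To illustrate Calude's point in code, here is a small Python sketch (the sequence length and progression length are illustrative choices of mine): it searches a pseudorandom binary sequence for a monochromatic arithmetic progression, the kind of arithmetic regularity van der Waerden's theorem guarantees in every sufficiently long binary sequence.

    ```python
    import random

    def find_mono_ap(bits, length):
        """Return indices i, i+d, ..., i+(length-1)*d that all carry the
        same bit, if such an arithmetic progression exists."""
        n = len(bits)
        for i in range(n):
            for d in range(1, (n - 1 - i) // (length - 1) + 1):
                idxs = [i + k * d for k in range(length)]
                if len({bits[j] for j in idxs}) == 1:
                    return idxs
        return None

    # Van der Waerden's theorem guarantees such progressions in every
    # sufficiently long binary sequence; in a random sequence of length
    # 200, a monochromatic progression of length 6 is overwhelmingly likely.
    random.seed(1)  # arbitrary
    seq = [random.randint(0, 1) for _ in range(200)]
    print("monochromatic AP of length 6 at indices:", find_mono_ap(seq, 6))
    ```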

    Thank you for your reply, Prof. Landsman. Indeed, I meant consistency of ZFC in Hilbert's sense, which, according to Gödel's second incompleteness theorem, cannot be proved within ZFC itself (unless ZFC is actually inconsistent). In other words, if ZFC is taken as the deductive formal axiomatic system of set theory foundational for most of classical mathematics, then the consistency of ZFC cannot be demonstrated in this system alone. Granted, ZFC has so far proved immune to the classical set-theoretic paradoxes of Cantor, Russell and Burali-Forti; but this does not imply that ZFC is absolutely (categorically) free of any potential inner contradictions that might come up one day, wouldn't you agree?...

    The relativity of set-theoretic concepts, such as the ones used in ZFC, was signalled quite early on by the Löwenheim-Skolem theorem, which subsequently led to Skolem's paradox (1922): if ZFC is consistent, then its axioms are satisfiable within a countable domain, even though they prove the existence of uncountable sets in Cantor's sense. Nowadays we know that there are many mathematical statements undecidable in ZFC, and other axioms need to be added in order to prove results in branches of mathematics such as category theory or algebraic geometry, whose theorems are currently being used in some modern theories of physics, and which work with, for instance, Tarski-Grothendieck set theory, one of many extensions of ZFC. Best wishes, Mihai

    Dear Klaas Landsman,

    Interesting essay, I like it. I especially liked the fact that 'randomness' does not imply 'lawlessness', a result that is often overlooked in discussions of randomness.

    I would be happy if you would comment on my essay, where I try to trace similar links between logic and physics.

    Dear Professor Landsman,

    I am still waiting for your comment.

    Sorry for the comparison with "The BB theory".

    Wilhelmus de Wilde

    Dear Klaas,

    I found your essay truly brilliant, combining quantum mechanics and Gödelian-type results in a clear manner and using them for a discussion of the possibility of deterministic theories underlying quantum mechanics.

    Since this is not my area, I must say I was quite blown away by some statements, such as "fair classical coins do not exist". I am still recovering, in fact, and will have to look at the references you gave in your essay.

    With regard to that statement, I wanted to make sure I understood what is meant here:

    - Do you mean to say that a coin whose motion is determined by Newton's (or Hamilton's) equations of motion cannot give rise to a fair coin toss (unless truly fair randomness is put into the initial conditions)? Or

    - Do you mean to say that a fair coin model within classical probability theory is actually not fair?

    I believe it is the former, but I just want to make sure.

    Finally, given that the argument relies, as far as I understood, on infinite sequences, is there a finite version of it whereby, say, a membership function (for the Kolmogorov-random character of a sequence) would lie between 0 and 1 for any finite N but would tend to zero as N tends to infinity?

    Congratulations again on this very nice essay.

    Many thanks.

    Best,

    Fabien

      Dear Fabien,

      Thank you for your kind words. I meant the former; the latter presupposes fairness. The reason is, as I explain in my essay, that a fair coin toss requires a truly random sampling of a probability measure, which classical physics cannot provide (I am not claiming that Nature can provide it! But QM can, in theory).
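      To make "the former" concrete, here is a hedged sketch in the spirit of Keller's classical coin analysis (all parameter values are illustrative assumptions of mine, not taken from the essay): the outcome is a deterministic function of the initial spin rate and launch speed, so any apparent fairness must be smuggled in through the way those initial conditions are sampled.

      ```python
      import math

      G = 9.81  # gravitational acceleration, m/s^2

      def coin_outcome(omega, u):
          """Deterministic toss: a coin starting heads-up spins at omega
          (rad/s) about a horizontal axis and is launched upward at u (m/s).
          The outcome is fully fixed by (omega, u); nothing random in flight."""
          t_flight = 2 * u / G                          # time until it returns
          half_turns = math.floor(omega * t_flight / math.pi)
          return "heads" if half_turns % 2 == 0 else "tails"

      # Small changes in the spin rate flip the outcome: deterministic but
      # highly sensitive, so "fairness" can only come from the sampling of
      # the initial conditions.
      for omega in (235.0, 240.0, 245.0, 250.0):
          print(omega, "->", coin_outcome(omega, u=2.0))
      ```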

      Your second question is very good; space limitations prevented me from discussing it. The question is about what I call "Earman's Principle" in my (Open Access) book, Foundations of Quantum Theory, see http://www.springer.com/gp/book/9783319517766, namely: "While idealizations are useful and, perhaps, even essential to progress in physics, a sound principle of interpretation would seem to be that no effect can be counted as a genuine physical effect if it disappears when the idealizations are removed." This principle holds in the arguments in my essay because the definition of Kolmogorov randomness of infinite sequences, coming as it does from a limiting construction, guarantees it.
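      On the finite version: Kolmogorov complexity itself is incomputable, but any real compressor gives an upper bound, so one can watch a crude finite analogue of the limiting definition (the compressor and string lengths below are my own illustrative choices): the compressed-size ratio of a (pseudo)random string stays near 1 as N grows, while that of a patterned string falls toward 0.

      ```python
      import os
      import zlib

      def compression_ratio(data: bytes) -> float:
          """Compressed size over original size - an upper-bound proxy only,
          since the Kolmogorov complexity K(x) itself is incomputable."""
          return len(zlib.compress(data, 9)) / len(data)

      for n in (1_000, 10_000, 100_000):
          random_bytes = os.urandom(n)        # incompressible in practice
          patterned = b"01" * (n // 2)        # highly regular
          print(f"N = {n:>6}: random ~ {compression_ratio(random_bytes):.3f}, "
                f"patterned ~ {compression_ratio(patterned):.3f}")
      ```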

      Best wishes, Klaas

      Hello Professor Landsman,

      Wowww, I must say that your essay is very relevant and general. I liked a lot how you approached this topic of our limitations. I wish you all the best. I shared it on Facebook; it is one of my favorites, along with the essays of Del Santo and Klingman.

      Best regards

      "The famous theorem of Bell (1964) left two loopholes for determinism underneath quantum mechanics..."

      It left a lot more than that. The theorem is also founded upon the dubious, idealistic assumptions that (1) the particles are absolutely identical and, worse still, (2) all the "measurements" of the particle states are absolutely without errors.

      It is easy to demonstrate that when those two assumptions are violated, as they both must be when only a single bit of information is being manifested by an entangled pair, Bell's correlations can be readily reproduced classically, with detection efficiencies above the supposed quantum limit. Note that the detection efficiency actually measured in the demonstration is the combined, dual detection efficiency, not the usually reported single-detector efficiency; the former cannot even be measured in a conventional Bell test.
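      For concreteness, here is a minimal sketch of the kind of shared-randomness-plus-threshold model described above (the noise level, threshold and sample size are illustrative assumptions of mine). Post-selecting on both sides clearing the threshold steepens the classical correlation curve; this trades on the detection/fair-sampling loophole, so it does not contradict Bell's theorem for ideal detection.

      ```python
      import numpy as np

      rng = np.random.default_rng(0)  # arbitrary seed

      def correlation(a, b, n=200_000, noise=0.3, threshold=0.4):
          """Each pair shares a random 'polarization' angle lam; each side
          projects onto its analyzer angle, adds noise, and only counts a
          'detection' when the signal clears the threshold."""
          lam = rng.uniform(0, 2 * np.pi, n)      # shared hidden variable
          sa = np.cos(lam - a) + noise * rng.standard_normal(n)
          sb = -np.cos(lam - b) + noise * rng.standard_normal(n)
          both = (np.abs(sa) > threshold) & (np.abs(sb) > threshold)
          A, B = np.sign(sa[both]), np.sign(sb[both])
          return np.mean(A * B), both.mean()      # correlation, pair efficiency

      for deg in (0, 22.5, 45, 67.5, 90):
          E, eff = correlation(0.0, np.radians(deg))
          print(f"angle {deg:5.1f}: E = {E:+.3f}, dual detection = {eff:.2f}")
      ```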

      Rob McEachern

      Dear Klaas Landsman,

      If by 'compatible with quantum mechanics' one means that 'qubits' are real, then the argument is over. But there are probably a dozen interpretations of QM, with almost as many claimed 'ontologies'.

      Bell demands qubits in his first equation: A, B = +1, -1. For spins in magnetic domains this is a good statistical model, and reasonable. Unfortunately, for the Stern-Gerlach experiment on which Bell based his reasoning, it is not: the SG data shown on the "Bohr postcard" are anything but +1 and -1, and a 3-vector spin model reproduces the SG data almost exactly.

      A number of authors are concerned with whether 'classical physics' is truly deterministic and, if not, how this is to be explained.

      If one assumes that the de Broglie-like gravitomagnetic wave circulation is induced by the mass-flow density of the particle [momentum density], then the equivalent mass of the field energy induces more circulation. This means that the wave field is self-interacting. For one free particle, a stable soliton-like particle-plus-wave is essentially deterministic. But for many interacting particles, all of which are also self-interacting, 'determinism' vanishes absolutely, in the sense of calculations or predictions, and the statistical approach becomes necessary.

      This theory clearly supports 'local' entanglement, as the waves interact and self-interact, while rejecting Bell's 'qubit'-based projection A, B = +1, -1, in keeping with the Stern-Gerlach data (see the Bohr postcard). For Bell experiments based on 'real' (3-vector) spin rather than 'qubit' spin (which is good for spins in magnetic domains), the physics easily obtains the correlation which Bell claims is impossible; hence 'long-distance' entanglement is not invoked and locality is preserved.

      This is not a matter of math; it is a matter of ontology. I believe ontology is the issue for the many authors who also seem to support more 'intuition' in physics. My current essay, Deciding on the nature of time and space, treats intuition and ontology in a new analysis of special relativity, and I invite you to read it and comment.

      Edwin Eugene Klingman

        Edwin,

        Thanks for mentioning the Bohr postcard. I had never actually seen the image until your comment provoked me to search for it.

        I would assert that QM is not about ontology at all. It does not describe what exists in "being", but only the statistics of a threshold-based energy detection of the things in being. So if you draw a vertical threshold line down the middle of the "B-field on" image, you create the two states. But at the top and bottom of the image, those two states blur together, and it becomes impossible to distinguish between them correctly. That is the problem with all Bell tests that I noted in my comment above.

        When you examine a "coin" face-on, it is easy to "call it" correctly; not so when you examine it edge-on. The actual ontology of a coin is that it is what it is, not what you observe. Thus, a polarized coin is in the ontological state of merely being polarized; it is not polarized either "up" or "down". The latter are merely the result of "observing" the polarization with a detector that measures a different energy in the "polarization" as a function of the angle between the coin's axis and the axis of the detector, and then introducing a threshold to "call it" one state or the other, or "none of the above" in the event that there is not enough energy to ever reliably detect the object at all, as when it is nearly edge-on and thus "too close to call".
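        A small sketch of the coin analogy (the noise level and threshold are illustrative assumptions of mine): the detector sees a signal proportional to the cosine of the angle between the coin's axis and its own, plus noise, and thresholds it to "call" a face; near edge-on, calls both become rarer and degrade toward a guess.

        ```python
        import numpy as np

        rng = np.random.default_rng(1)  # arbitrary seed

        noise, threshold, trials = 0.25, 0.2, 100_000
        for deg in (0, 30, 60, 80, 88):
            # signal ~ cos(angle between coin axis and detector axis) + noise
            signal = np.cos(np.radians(deg)) + noise * rng.standard_normal(trials)
            called = np.abs(signal) > threshold       # else "too close to call"
            correct = (signal > 0) & called           # the true face is "up" here
            frac_correct = correct.sum() / max(called.sum(), 1)
            print(f"{deg:2d} deg: called {called.mean():.2f}, "
                  f"correct given called {frac_correct:.2f}")
        ```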

        In this context, it is useful to bear in mind that QM got its start when it was first observed that the photoelectric effect behaved just as if an energy threshold exists.

        Rob McEachern

        Dear Dr. Landsman,

        Thank you for your well written essay. I agree with your conclusion that quantum mechanics is intrinsically random, and that hidden variables or initial conditions do not adequately explain the randomness of quantum measurement results. However, I reach a different conclusion on the origin of quantum randomness.

        In comparing Standard QM and Hidden Variables QM in section 4, you conclude that we have a choice between 1) random outcomes of measurements on identical quantum states, and 2) deterministic measurement outcomes on random or randomly sampled HVs.

        You reject the second choice on the basis that HV theories are deterministic only at first sight, and this therefore undermines their rationale. You conclude that the randomness of measurement results reflects randomness of the measurement process itself. This is, in essence, the orthodox (Copenhagen) interpretation. The Copenhagen interpretation is essentially an empirical model describing measurement outcomes in terms of Born probabilities.

        In my essay I outline a conceptual model and interpretation that provides a third option to explain the randomness of measurement results. I suggest that the randomness of measurement outcomes results from deterministic measurements on an ensemble of metastable quantum states, for example, an ensemble of identically prepared radioactive isotopes. Between its initial preparation and subsequent measurement, a metastable particle is subject to random transitions to a quantum state of higher stability. Deterministic measurements subsequent to the ensemble's preparation will therefore reveal random outcomes--no hidden variable required. As described in the essay, the proposed conceptual model is consistent with empirical observations, it is based on empirically sound and conceptually simple assumptions, and it explains the measurement problem and many other quantum "paradoxes." I hope you have a chance to look at it.
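        If I understand the proposal, a minimal sketch would look like this (the half-life and measurement time are arbitrary illustrative values, not from the essay): each identically prepared metastable state decays at a random, exponentially distributed time, so a perfectly deterministic read-out afterwards still records random outcomes.

        ```python
        import math
        import random

        random.seed(2)  # arbitrary

        HALF_LIFE = 10.0   # seconds - illustrative value only
        T_MEAS = 10.0      # deterministic measurement one half-life later
        N = 100_000

        # Each metastable state decays at a random exponential time after
        # preparation; the measurement itself is deterministic, yet the
        # recorded outcomes are random - no hidden variable in the read-out.
        decayed = sum(random.expovariate(math.log(2) / HALF_LIFE) <= T_MEAS
                      for _ in range(N))
        print(f"fraction decayed at one half-life: {decayed / N:.3f}")  # ~0.500
        ```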

        Best regards,

        Harrison Crecraft

        Dear Klaas,

        Sounds interesting; I've downloaded it to my reading list. You may have missed my last year's finalist essay, showing that a physical sequence can, surprisingly, reproduce QM's data set in the way Bell predicted. I touch on it this year.

        In the meantime, could you perhaps answer these questions for me:

        1. Is a physical 'measurement' interaction more likely to be with a spinning sphere or with a 2D spinning coin? If the former, then:

        2. If we approach the sphere from random directions to measure the momentum states "ROTATIONAL" (clockwise or anticlockwise) and "LINEAR" (left or right), are we always likely to find 100% certainty for both?

        3. With one at 100% certainty (say, linear at the equator), will the other state not reduce down to 50:50?

        4. Now, with 100 interactions in a row, will any statistical uncertainty tend to increase or decrease?

        5. Did you know that the rate of change of the rotation speed (so momentum) of Earth's surface with latitude, over the 90° between pole and equator, goes as cos(latitude)? (See the check below.)
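        On question 5, a quick numerical check of the cos(latitude) profile, using textbook values for Earth's rotation rate and radius:

        ```python
        import math

        OMEGA = 2 * math.pi / 86164   # Earth's sidereal rotation rate, rad/s
        R = 6.371e6                   # mean Earth radius, m

        # Eastward surface speed vs latitude: v(lat) = OMEGA * R * cos(lat),
        # i.e. the cos(latitude) profile in question 5.
        for lat in (0, 30, 45, 60, 90):
            v = OMEGA * R * math.cos(math.radians(lat))
            print(f"latitude {lat:2d} deg: v = {v:6.1f} m/s")
        ```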

        Catch back up with you soon I hope.

        Very Best

        Peter

          Dear Klaas,

          I enjoyed your essay very much, from your insightful parallels between Gödel's and Bell's theorems to your no-go theorem, which I think is amazing. I am still trying to grasp its physical implications. I'm also glad to see from your essay that you know Cris Calude. We met again when he came back to Bucharest a few months ago. He made me realize that randomness is not what we commonly think it is in physics; I realized that we use the word "randomness" pretty much randomly :D Your essay shows that this is indeed an important point, as Cris explained to me in our discussions, one that is not well understood in physics. Despite his explanations and your eloquent essay, I am still not sure I fully understand the implications. I have a lot to digest, and I also want to find time to go deeper into your ref. [19], a heavy book that has been in my library for some time.

          So I may come back with more questions, but for the moment I am interested in one. Do you think, based on your analysis of the two representative examples of deterministic models and the implications of your theorem for them, that it is possible to distinguish them empirically from nondeterministic versions of quantum mechanics? My interest comes from trying to find falsifiable predictions for a single-world, unitary, without-collapse model, which seems to fit in the same category as 't Hooft's cellular automata, but which I interpret differently from denying free choice of experimental settings, as I explain in the attached pdf. In the last section I mention two possible experiments, and I am interested to see whether testing for genuine randomness can be physically done. I expect loopholes stronger than in the EPR case, due to the fact that measurements are not sharp in general, and that the measurement device and the environment may not be stable enough to allow a comparison of repeated experiments numerous enough to tell whether the resulting randomness is genuine. But I am interested in whether you think this is possible, at least in principle.

          Cheers,

          Cristi

          Attachment #1: Cristi_Stoica_The_post-determined_block_universe_draft_2020-04-16.pdf

            Dear Professor Klaas Landsman,

            Thank you for presenting a wonderful essay, written with a very smooth flow.

            You state Gödel's theorem as follows: "Gödel proved that any consistent mathematical theory (formalized as an axiomatic deductive system in which proofs could in principle be carried out mechanically by a computer) that contains enough arithmetic is incomplete (in that arithmetic sentences φ exist for which neither φ nor its negation can be proved)."

            I have a few questions about it. This theorem is applicable to quantum mechanics, but will it be applicable to COSMOLOGY?

            I never encountered any such problem in the Dynamic Universe Model in the last 40 years, even though all the other conditions mentioned in that statement apply.

            I hope you will give my essay a CRITICAL examination: "A properly deciding, Computing and Predicting new theory's Philosophy".

            Best Regards

            =snp

            Dear Klaas,

            You really wrote a great article. Thank you!

            All the best,

            Noson

            Dear Klaas. While reading your essay I got very excited. I am not a physicist or a mathematician; my major expertise is in creativity and its fit in the world, and in some respects its relationships with science and mathematics. You provided a very interesting overview of the "battle" between determinism and indeterminism.

            Several things in your essay, as they relate to my essay, are "very" exciting. In my essay I describe a process that converts chaos to order - the C*s to SSCU transformation (described in the appendix of my essay) - and the scale-up of the SSCU to become our (the visible) universe (described in the body of the essay). It appears to me that the C*s to SSCU transformation is the "...internal processing of atoms" (in my theory, the internal processing of all physicality) "that enforce some particular outcome" expressed by Born (1926). In the Successful Self Creation theory, that "enforcement" was the self-replicating/self-organizing progression that eventually became the universe.

            You also mention that the attempts to undermine the Copenhagen claim of randomness "looked for deterministic theories underneath quantum mechanics", and you concluded that this was impossible. I agree with your conclusion. However, those looking to undermine the Copenhagen claim of randomness had it backward: the SSC theory presents a randomness (chaos) underneath, and the process that converted that randomness (chaos) into a repeating, self-replicating deterministic progression that became the multiverse containing our universe.

            There is much more that we should discuss. If you would read my essay and respond, it could be the beginning of an exciting discussion of your "musings on (in)determinism" in your essay. I am looking forward to hearing from you. John D. Crowell

            Dear Peter,

            These questions are very interesting, but they do not really bear on my essay, as you say yourself, and I find them very hard to answer. The last one I do not even understand. They seem to be more general physics questions than I am able to deal with. Best wishes, Klaas

            Dear Cristi,

            That's an interesting question. My analysis at the end of my essay suggests that the answer is no: deterministic HVB models are empirically indistinguishable from standard QM. This is not just because of the way they are designed; as I try to argue, the reason also lies in the unprovability of randomness, which ought to be the distinguishing feature. I cannot say I have fully grasped this issue, though, and you might benefit from the extremely interesting PhD thesis of G. Senno at the Universidad de Buenos Aires, A Computer-Theoretic Outlook on Foundations of Quantum Information (2017), which is easily found online.
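            To see the obstruction concretely, here is a minimal sketch of a statistical randomness test (a NIST-style monobit frequency test; the seed and sample size are arbitrary): a fully deterministic pseudorandom generator passes it exactly as genuinely random bits would, which is why finite statistical tests cannot certify genuine randomness.

            ```python
            import math
            import random

            def monobit_p_value(bits):
                """Frequency test: p-value for ones and zeros being equally
                likely. Passing says nothing about the source being random."""
                n = len(bits)
                s = abs(sum(1 if b else -1 for b in bits))
                return math.erfc(s / math.sqrt(2 * n))

            random.seed(3)  # a fully deterministic source...
            pseudo = [random.getrandbits(1) for _ in range(100_000)]
            # ...which nevertheless sails through the test (p well above 0.01),
            # just as genuinely random bits would.
            print(f"deterministic PRNG: p = {monobit_p_value(pseudo):.3f}")
            ```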

            The case of dynamical collapse models is similar. I proposed one of these, see my Open Access book Foundations of Quantum Theory, http://www.springer.com/gp/book/9783319517766, and I also worked with a group in Oxford to design an experiment to test my theory, but this failed, perhaps for different reasons: you cannot really monitor the collapse of the wave-function in real time. I will also try to take a look at your PhD thesis, though for completely different reasons. Best wishes, Klaas

            Dear Klaas,

            Thanks for this brilliant essay. Since I too "would personally expect that a valid theory of the Planck scale... would derive quantum mechanics as an emergent theory", have you thought of what seems to be a natural logical extension of the ancient idea of atomism: discreteness not only in space but in time (or rather spacetime) as well? (That is, questioning a fundamental continuity, continuous existence in time, at the heart of quantum physics.)

            Then the probabilistic behavior of a quantum object may be represented as a manifestation of a probabilistic distribution of the quantum object itself in the forever-given spacetime: an electron, for instance, can be thought of as an ensemble of the points of its disintegrated worldline, scattered in the spacetime region where the electron's wavefunction is different from zero. Then, in the ordinary three-dimensional language, an electron would be an ensemble of constituents which appear and disappear ~10^20 times per second (the Compton frequency); and, obviously, such a "single" quantum object can pass simultaneously through all slits at its disposal in the double-slit experiments with single electrons and photons.
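            As a quick check of the ~10^20 figure quoted above, the electron's Compton frequency ν = m_e c²/h can be computed from standard constants:

            ```python
            # Compton frequency of the electron: nu = m_e * c^2 / h
            M_E = 9.1093837e-31    # electron mass, kg
            C = 2.99792458e8       # speed of light, m/s
            H = 6.62607015e-34     # Planck constant, J s

            nu = M_E * C**2 / H
            print(f"electron Compton frequency ~ {nu:.3e} Hz")  # ~1.236e20
            ```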

            Had Minkowski lived longer he might have described such a probabilistic spacetime structure by the mystical expression "predetermined probabilistic phenomena."

            It is true, the above question is more in the spirit of the Einstein-type approach to fundamental physics, whereas now the predominant approach seems to be more Minkowski-type (as we know Minkowski employed such an approach when he discovered the spacetime structure of the world).

            Best wishes,

            Vesselin