Essay Abstract

In [Found. Phys. 48.12 (2018): 1669], the notion of 'epistemic horizon' was introduced as an explanation for many of the puzzling features of quantum mechanics. There, it was shown that Lawvere's theorem, which forms the categorical backdrop to phenomena such as Gödelian incompleteness, Turing undecidability, Russell's paradox and others, applied to a measurement context, yields bounds on the maximum knowledge that can be obtained about a system. We give a brief presentation of the framework, and then proceed to study it in the particular setting of Bell's theorem. Closing the circle in the antihistorical direction, we then proceed to use the obtained insights to shed some light on the supposed incompleteness of quantum mechanics itself, as famously argued by Einstein, Podolsky, and Rosen.

Author Bio

Jochen Szangolies acquired a PhD in quantum information theory at the Heinrich-Heine-University in Düsseldorf. He has worked on quantum contextuality, quantum correlations and their detection, as well as the foundations of quantum mechanics. He is the author of "Testing Quantum Contextuality: The Problem of Compatibility".


Dear Jochen,

What an excellent article; I believe it is easily the best I have read so far.

I liked this conclusion a lot. "This motivates a proposal of relative realism: assign 'elements of reality' only where f(n, k) yields a definite value. In this way, we get as close to the classical ideal of local realism as is possible in a quantum world."

In my article, I wrote of "measurablism": all that is real is that which is measurable. This incorporates Quine's "to be is to be the value of a variable", or Ladyman/Dennett's "to be is to be a pattern". Your conclusion seems to propose a more specific version of that ontology: what is real is that for which f(n, k) yields a definite value. The support for this seems to be found in the mathematics of Gödel, but also in the nature of QM itself. That you point out Gödel's rejection of Wheeler here is brilliant.

I think this is a pretty neat correlative argument, as the lines between epistemology and ontology seem to blur and this goes to the heart of the essay contest question.

Re Gödel, does this mean that complete systems are real, but incomplete systems (or at least the inaccessible truths in them) are not? You might enjoy some of my 'amalgamated sleuths' in this vein in my article, though they are not meant to be taken seriously, but rather as examples of the kinds of blurring we can do between mathematics, physics and philosophy. I think your article could definitely be published, in the vein of Quine, in a metaphysics journal. Congratulations.

    Dear Jack,

    thank you for your kind words! I'm glad you found something of value in my essay.

    The reference to Quine is interesting. I have thought about how to incorporate his take on undecidability and the notion of 'Quining' within my framework, but nothing obvious has come up so far. I wonder if one could develop things along the direction you suggest, thinking about definite/decidable propositions instead of bound variables (although of course Quine came at this from a different angle). I will have to give this some thought.

    As to the question of 'what is real', I purposefully leave a little room for interpretation there. You could take my suggestion literally, amounting to an 'ontic' reading of the arguments I've presented, in which case f(n,k) really tells you 'what there is'. But I want to leave the possibility of an 'epistemic' interpretation open: our theories relate to reality as the map does to the territory, and there may be some inherent limitations to map-making, that is, some parts of the world that can't be modeled.

    I had marked your essay already as one that would need further attention; I'm looking forward to engaging with it.

    "John Wheeler himself proposed the undecidable propositions of mathematical logic as a candidate for a 'quantum principle', from which to derive the phenomenology of quantum mechanics ..." Is any mathematical logic that assumes a potential infinity merely theology, which might or might not be logically consistent? Suppose that I told you that I have a potentially infinite purse with the following property: if the purse can contain n ducats of gold, then the purse can contain n+1 ducats of gold -- would you believe me? Is nature finite and digital? Ask yourself the following question: "Why do theologians so often discuss infinity?"

    "Infinity and the Proofs for the Existence of God" by Glenn F. Chesnut, 2019

      In the conclusion of your very fine essay you write: "It is as if Schrödinger's student does not know the answer to any questions, as such, but knows each answer only relative to that question being asked."

      If true, this may support the conclusion that the answer is in the system at large, and that the student, as part of that system, is tapping the answer from the system.

      In life this is called intuition. If someone relies on it, the person can have a high rate of correct answers. We may call it coincidence when it happens once; psychic, if it happens again; and incredible, if it happens always.

      But it is already in the system itself. The system knows itself.

      If there is a question, then there is an answer too.

      Tautology is simple and true.

      It is what it is.

      The question therefore is one part of the system and the answer another part. Both are what they are, and belong to the system at large.

      The student can only lie by giving a wrong answer; lying is not making contact with the system at large, but with something beside it.

      Bests,

      Jos

        This is exceptionally deep Jochen...

        I am glad you give a little intro to Lawvere's theorem at the end. I am a fan of category theory, but I'm by no means well-versed in the subject, so it may require multiple readings just to grasp the central idea. I will not give up easily, though, because I think my efforts will be rewarded, or rewarding.

        Good luck with this fine essay.

        Regards,

        Jonathan

          Dear Jochen Szangolies, after reading your essay, I found you to be one of the brightest representatives of the modern paradigm in physics, through whom I can inform the scientific world of my discovery. Through new Cartesian reasoning, I came to the conclusion that the probability density of states in an atom depends on the Lorentz contractions of length, time, mass, etc. Isn't this a unifying principle for physics?

          I invite you to discuss my essay, in which I show the successes of the neo-Cartesian generalization of modern physics, based on Descartes' identity of space and matter: "The transformation of uncertainty into certainty. The relationship of the Lorentz factor with the probability density of states. And more from a new Cartesian generalization of modern physics", by Dizhechko Boris Semyonovich.

            Dear David,

            thank you for your interest in my article. Regarding the topic of infinity, I must confess that it does make me somewhat uneasy---but in the end, our most successful current theories of physics (not just quantum field theory and general relativity, but also plain old Maxwellian electrodynamics) depend on the continuum of the real numbers, generally including an uncountable number of degrees of freedom. As these do yield spectacularly accurate predictions in many cases, one might well surmise that they do at least get something right.

            That said, there are intriguing arguments that quantum gravity, due to the Bekenstein bound, only includes a finite number of degrees of freedom within a finite volume. But I'm afraid these considerations are rather beyond the scope of the current topic.

            Dear Joseph,

            thanks for taking the time and commenting on my essay. In some sense, you can indeed view things like entanglement as information that's present in the system as a whole, but not reducible to any of its parts: while a classical system, such as the two differently colored cards in the envelopes I briefly consider, contains all of its information in the states of each individual subsystem, that's no longer the case in quantum mechanics---there, the state of each subsystem (of a maximally entangled system) will be maximally mixed, but the total system is not a combination of maximally mixed states.

            Hence, there is information within the total state that's not reducible to its components---the basis of what's sometimes called 'quantum holism'.
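            To illustrate the point numerically (a minimal NumPy sketch, not from the essay): the total state of a Bell pair is pure, yet either reduced state is maximally mixed, so the correlation information resides only in the whole.

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2)
phi_plus = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Density matrix of the total (pure) state
rho = np.outer(phi_plus, phi_plus.conj())

# Partial trace over the second qubit: reshape to (2,2,2,2) and
# contract the indices belonging to subsystem B
rho_A = np.einsum('ijkj->ik', rho.reshape(2, 2, 2, 2))

print(rho_A)                     # 0.5 * identity: maximally mixed
print(np.trace(rho @ rho))       # purity of the total state: 1.0
print(np.trace(rho_A @ rho_A))   # purity of the subsystem: 0.5
```

The subsystem's purity of 0.5 (versus 1.0 for the whole) quantifies exactly the holistic information discussed above.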

            Whether, of course, that has anything to do with intuition, or other macroscopic phenomena, seems rather doubtful to me. However, you might find this recent paper by the renowned physicist Don Page interesting: https://arxiv.org/abs/2001.11331

            Dear Jonathan,

            thanks for your kind comment. I can't claim to be an expert in category theory myself, and really, the introduction I give to Lawvere's result is a kind of bastardized set-theoretical version, which one can get away with because Lawvere's theorem applies to cartesian closed categories, of which Set (the category with sets as objects and total functions as morphisms) is an example.

            For a much better introduction to the subject, I can highly recommend Noson Yanofsky's paper: https://arxiv.org/abs/math/0305282

            For the core idea, I think the important part is to grasp the connection between the category-theoretic argument and the diagonalization; everything else flows from there. If you've got any questions, I'd be happy to try and address them!
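            To give a flavor of that connection (a minimal sketch in Python, not from the essay): Cantor's diagonal construction, which Lawvere's theorem generalizes, produces for any map f from a set to its powerset a subset that f misses.

```python
def diagonal_set(f, domain):
    """Cantor/Lawvere diagonal construction: given f mapping each
    element of `domain` to a subset of `domain`, build the subset
    D = {x : x not in f(x)}, which cannot equal f(x) for any x."""
    return {x for x in domain if x not in f(x)}

domain = {0, 1, 2}
f = lambda x: {x}            # a candidate 'enumeration' of subsets

D = diagonal_set(f, domain)
print(D)                                  # set(): the empty set here
print(any(f(x) == D for x in domain))     # False: f misses D
```

Whatever f one tries, D disagrees with f(x) at the element x itself; this self-referential twist is the common core of the incompleteness and undecidability results mentioned in the essay.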

            Dear Boris,

            thank you for your comment and for taking an interest in my essay, but you are much too kind.

            I will have a look at your essay, and see whether I have anything useful to say on the subject.

            If an entangled pair consists of an apple and an orange, and instead of measuring their extrinsic properties (like position and momentum, as in the original EPR), one decides (as Bell subsequently did in his theorem) to substitute a measurement of an intrinsic property (like skin-texture or polarization), then there is going to be a big problem: one has thereby unwittingly assumed that a measurement of the skin-texture of the orange can be substituted for (or compared to) a measurement of the skin-texture of the apple, in the same manner in which measurements of their position and momentum can be substituted. Consequently, unlike the original EPR thought experiment, Bell's theorem is only valid for perfectly identical pairs of entangled particles.

            By exploiting this rarely discussed "loophole", it is easy to demonstrate that a peculiar set of non-identical pairs of entangled particles (those manifesting only a single-bit-of-information), will perfectly reproduce the peculiar correlations observed in Bell-tests, in a purely classical manner; due to frequent "false positives", caused by mistaking the normal behavior of entangled "fraternal twins", for an abnormal behavior of entangled "identical twins". From this perspective, quantum theory should be interpreted as merely describing behavior analogous to a poorly-designed "drug test", rather than describing any behavior of the drugs (substances) themselves; tests in which "up" states (the drug is present) are frequently being mistaken for "down" states (the drug is not present) and vice-versa.

            Rob McEachern

              Dear Robert,

              thanks for highlighting your model. I see you can reproduce the singlet correlation with a detection efficiency of 72%; out of curiosity, what is the maximal detection efficiency you can allow and still observe a violation of the CHSH inequality?

              Furthermore, the identical nature of particles is, of course, a central tenet of quantum mechanics---quantum particle statistics are derived assuming that exchanging any two particles does not lead to a new configuration of the system. This has many observable consequences that seem to be hard to explain otherwise. How do you propose to account for this?
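              For reference, the quantum prediction at stake here can be checked in a few lines (a sketch using the textbook singlet correlation E(a,b) = -cos(a - b) and the standard CHSH angle choices; nothing here is specific to your model):

```python
import numpy as np

def E(a, b):
    # Singlet-state correlation for analyzer angles a and b
    return -np.cos(a - b)

# Standard CHSH angle choices
a, a2 = 0.0, np.pi / 2
b, b2 = np.pi / 4, 3 * np.pi / 4

S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)
print(abs(S))        # 2*sqrt(2) ~ 2.828: the Tsirelson bound
print(abs(S) > 2)    # True: violates the classical CHSH bound of 2
```

Any local-realistic model must keep |S| at or below 2; the question above is at what detection efficiency a classical account can still appear to exceed it.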

              Jochen,

              The detection efficiency given in the paper is not the standard conditional (single-detector) efficiency usually reported; it is the product of both detectors' efficiencies. So the corresponding conditional efficiency is sqrt(0.72) ≈ 0.85, which is already above the theoretical limit for a classical process. With optimization of the matched filtering, it ought to be possible to perfectly reproduce the entire correlation curve (not just the few points evaluated in most Bell tests) at even higher efficiencies.

              The "fraternal twins" are statistically identical. So they obey the same probability of detection statistics predicted by quantum theory. But there are two detection distributions that are important, in any detection theory; unfortunately quantum theory only computes one of those two. It only computes the probability that something will be detected, but it never even attempts to predict the probability that that "something" is actually the thing that the system was supposed to detect (Probability of False Alarm). This would not be an issue, if all the things being detected were in fact "identical" particles, as has been assumed. But when the particles are only very similar (statistically identical), rather than perfectly identical, it does matter. This is exactly the problem in a Bell test; the number of detections agrees with the theory, but only because the actual state of the detection is frequently incorrect - enough to change the computed correlation statistics, as the result of non-random, systematic errors in the process. In effect, lopsided, "edge-on", "up" polarized coins are frequently being mistaken for perfect "face-on", "down" polarized coins and vice-versa.

              The point is, there is a very important distinction between "assuming that exchanging any two particles does not lead to a new configuration of the system", and assuming that exchanging any two particles does not lead to a new detectable configuration of the system, when the detection process (matched filtering) is not perfect enough to distinguish between "fraternal twins" and "identical twins", in every instance.

              Rob McEachern


              Regarding horizons...

              I have found another useful analogy for Schwarzschild event horizons to be the virtual ground or amplitude null at the summing junction of an inverting op-amp circuit. This too was suggested by my study of the Misiurewicz point M3,1, the 'edge of chaos' point. The suggestion here is that other types of black hole horizons could be studied with the toolkit of category theory, by finding the correct circuit diagram analogy, and that the Mandelbrot Set provides somewhat of a map.

              I have attached a diagram. Let me know if this makes sense. It would make a black hole horizon like a phase-reversed mirror that appears black or as a black body because whatever strikes it is (energetically or temporally?) inverted.

              All the Best,

              Jonathan

              Attachment #1: MandelAmp.jpg

                Apologies if this is too far off-topic...

                I am following up on a comment you made in my essay forum. Still interested in your thoughts.

                JJD

                I am starting to get a handle on your paper Jochen...

                It appears Lawvere's theorem is a template for quite an array of meaningful conjectures, which makes it very powerful. It is not so easy to grasp for those unfamiliar with the language of category theory, however. I will stay the course and try to grasp what you are saying.

                In fact, I think grasping is a great metaphor here, because the story is about what gets caught before it slips behind the epistemic horizon, and only new information is available. The derivation of the word 'think' is about grasping, coming from the word 'tong', a device that lets one pick things up and examine them.

                The essence of Lawvere would be caught up in the same idea. If the tongs are used to pick up hot items coming from a forge or kiln, one can examine only one item at a time while the rest are either heating or cooling, so one loses information about, or control over, the items not in one's tongs. So the metaphor of grasping and holding vs. what slips away applies to Lawvere's theorem.

                Best,

                Jonathan

                  Yes, Lawvere's theorem is certainly a very deep result, if anything underutilised in my essay---as I noted, the simple set-theoretical framework in which I present it doesn't really do justice to the full category-theoretic treatment.

                  There have been attempts at bringing black-hole-type losses of predictability within the framework of formal incompleteness, such as this one: https://link.springer.com/article/10.1007/s11128-008-0089-2.

                  I am not sure I see how to connect this to the qualitative similarity you point out in regards to the Misiurewicz point; I'm largely ignorant of that topic, I'm afraid. However, you might be interested in some of the works of Louis H. Kauffman (http://homepages.math.uic.edu/~kauffman/), particularly on what he calls 'reentrant forms'---for instance, in 'Knot Logic' (http://homepages.math.uic.edu/~kauffman/KnotLogic.pdf), he considers the Koch snowflake as an example, and I have a hunch something similar could be applied to the Mandelbrot set and its recursive dynamics.

                  Thank you Jochen,

                  The link to the paper by Srikanth and Hebri is appreciated, and it looks very interesting. I don't know enough about how Godelian incompleteness relates to the BH information paradox. I will check that out when I can, and perhaps discuss here further if timely.

                  I think the work of Louis Kauffman is pretty amazing. I have had some communication with him, but not in a while. My research has progressed much further since then, and perhaps it's time for another outreach. I will check out the linked material; I appreciate your citing it.

                  All the Best,

                  Jonathan

                  I finally read through your paper. It is very interesting that you make some connection with Gödel incompleteness. It does appear, though, that there is a need to "wring out" violations of Bell's inequalities --- pun intended. Maybe a form of PR-box or Tsirelson-bound argument with a diagonalization of possible measurements would work. This is important, for, to use the language of my paper, this is where the topological obstruction is manifested.

                  In effect, the CHSH or Bell inequality may be thought of as a sort of metric. As with geometry, non-Euclidean spaces have different metrics, and this is particularly the case for spaces with different topologies. As with Nagel and Newman in their book, I see this issue as similar to the incompleteness of geometry as an axiomatic system with respect to determining the truth of the 5th axiom.

                  I have no clear stance on the matter of counterfactual definiteness. That also seems to be something dependent on the various quantum interpretations. I question whether this is something that is simply undecidable. As Palmer puts it, this means the statistical independence of state preparation and measurement is not something provable or derivable from QM.

                  Cheers LC