You can take it as a statement about conventional wisdom, allowing me to relate my uncomputable model to more conventional computable models.

Thank you. I look forward to reading your essay.

Dear Tim,

Congratulations on an eminently readable and engaging essay on a difficult topic! I will need some time to fully digest your arguments, but I wanted to leave a few preliminary comments---also because our two approaches have some overlap, in particular as regards undecidability and Bell/EPR.

I'll state my biases upfront: I'm skeptical of any sort of 'completion' of quantum mechanics by hidden (or not-so-hidden) variables, that is, viewing quantum mechanics as a statistical theory of some deeper level of reality, and I'm in particular skeptical of superdeterminism.

That said, I'm always happy to see somebody giving an 'alternative' approach a strong outing---and that you certainly do. I do see now that I had dismissed these topics perhaps too quickly, so I've got to thank you for that. But on to some more detailed points.

You note the similarity between the Liouville equation and von Neumann's equation; you probably know this, but that similarity can be made much more explicit by considering phase-space quantization. There, the Moyal equation emerges as a deformation of the Liouville equation (with deformation parameter hbar), and contains the same empirical content as von Neumann's (they are linked by the Wigner-Weyl transformation).
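
To spell this out (the standard textbook statement, nothing specific to either of our essays): the Wigner function W(x, p, t) obeys the Moyal equation

\frac{\partial W}{\partial t} = \{\!\{H, W\}\!\} = \{H, W\}_{\mathrm{PB}} + O(\hbar^2),

where \{\!\{\cdot,\cdot\}\!\} is the Moyal bracket and \{\cdot,\cdot\}_{\mathrm{PB}} the Poisson bracket; letting \hbar \to 0 reduces the Moyal bracket to the Poisson bracket and recovers the classical Liouville equation.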

I'm of two minds about whether this supports your contention or not. On the one hand, you can explicitly link the quantum evolution to that of a stochastic system; on the other, the deformation by hbar essentially means that there's no finer grain to the phase space: you can't localize the state of the system any further.

I'd be interested in how your approach squares with things like the Pusey-Barrett-Rudolph theorem, which is generally thought to exclude viewing quantum mechanics as a stochastic theory of some more fundamental variables---although, as usual, and as you seem to be adept at exploiting, there are various assumptions and caveats to all 'no-go' theorems. I think there's an assumption that successive preparations of a system can be made independently; I'm not sure, but maybe that fails, taking one out of the invariant set.

I'll also have to have a more thorough look at how your model system is supposed to yield Bell inequality violation. Of course, having a Hilbert space formulation is not sufficient---you can formulate classical mechanics in Hilbert space, too (the Koopman-von Neumann formalism).

I've got to go now, I'll add more later when I have some time. In the meantime, congratulations on a very strong and interesting entry into this contest!

    Thanks indeed for these very kind remarks.

    A few comments. I do not really view my approach as a completion of quantum mechanics in the sense of providing extra structure to be added to the quantum-theoretic formalism. As mentioned in the Appendix to the essay, the closed Hilbert space of quantum mechanics only arises in the singular limit where my finite fractal parameter p is set equal to infinity, and this is an unphysical limit! Hence, rather than complete quantum mechanics, my own view is that, guided by quantum theory, we have to go back to basics and take ideas based around non-computability and fractal geometry seriously!

    You are right to be sceptical of superdeterminism. However, the usual reasons for scepticism (e.g. that it would imply statistically inequivalent sub-ensembles of particle pairs in a Bell experiment) simply do not apply to this model. Instead, I focus on a violation of Statistical Independence which only has implications when considering hypothetical counterfactual measurements in a Bell experiment. This interpretation of the violation of SI only makes sense in the type of non-computable model proposed.

    In fact this same point also leads to a negation of the Pusey-Barrett-Rudolph theorem, through a violation of Preparation Independence. However, once again such a violation only occurs when considering counterfactual alternative preparations.
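
    For concreteness, Preparation Independence is the standard PBR assumption that independently prepared systems have a factorizing distribution over ontic states,

    \mu_{\psi_1 \otimes \psi_2}(\lambda_1, \lambda_2) = \mu_{\psi_1}(\lambda_1)\, \mu_{\psi_2}(\lambda_2),

    and it is this factorization that fails when one considers counterfactual alternative preparations.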

    The bottom line here (something I focus on in the essay) is that we have to think very carefully about what we mean by things like free choice and causality in these quantum no-go theorems: counterfactual-based and spacetime-based definitions (cf. Newton clapping his hands in the quad) are inequivalent in the type of non-computable model I am proposing here.

    Physical reality can be hacked. The first method of hacking physical reality is visual hacking: a display of physical objects' motion in real time. Physical objects' motion has nowhere to hide, caught naked for the first time since the beginning of time, on display and in real time.

    1 - Visual hacking of Earth's motion, or a display of Earth's motion in real time = the 27.321-day cycle wrongly assigned to the Moon.

    2 - Hacking the Sun's motion, or a display of the Sun's motion in real time = the 365.256-day cycle wrongly assigned to Earth.

    3 - Physical science's 5000 laws of physics, astronomy, physical chemistry, physical biology, and physical engineering and technology are, in their entirety, based on light sources as a measuring tool, and as used they only measure the physics lab's physical motion, or Earth's motion, in 27.321 days.

    4 - The (27.321 days, 365.256 days) time cycles' distance-equivalence cycles = (R meters, C meters), where R = Earth's theoretical radius = 6371000 meters and C = 299792458 meters, claimed as light speed per second.

    5 - The space-time errors = NASA's space data sheets.

    6 - The inverse space-time errors = CERN's atomic/nuclear data.

    Meaning: physical science's 5000 physics laws can be produced as (27.321 days, 365.256 days, 6371000 meters, 299792458 meters) space-time errors. That is the subject of this contest: the extermination of modern and Nobel Prize winners' physics and physicists, from Copernicus in 1610 to the 2020 Nobel winners, using any level of mathematics, including 5th-grade arithmetic, and starting with physics' most erroneous equation, E = MC2. Are you ready to hack and strip the incontestable truth of physical reality?

    I can produce general relativity's experimental numbers and special relativity's experimental numbers 5000 times, using any of the 5000 physical science laws and any level of mathematics, including 5th-grade arithmetic, and I can produce the entirety of Einstein's relativity theory from Newton's equation, contrary to what mainstream scientists claim; I can produce it 5000 times as visual effects between the (27.321 days, 365.256 days) motions (the subject of my PhD dissertation, 1990, University of Michigan, Nuclear Engineering Department). I introduced "Hacking Physical Reality" and ended "Nobel Physics" decades ago. I know I sound unbelievable, but it is a fact, and a well-established fact.

    I've been thinking about differences and similarities between our respective models. I focus on a function which assigns values for all measurements and all states of a certain system, and show that there must be measurements for which this function is undefined---which yields the backdrop for Bell inequality violations. This also needs a restriction on admissible counterfactuals---otherwise, the EPR argument simply yields value-definiteness for complementary observables. I argue that different post-measurement states support different counterfactuals, and that it is thus not permissible to make inferences about the value that a measurement in a different direction would have yielded while leaving the outcome at the distant particle invariant. The measurement that has actually been performed is part of the antecedent conditions necessary to make inferences about the value at the distant particle; thus, in a situation in which that measurement is changed, one cannot expect to be able to make these inferences in the same way.
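
    (In symbols: I study a function f(n, k) giving the value assigned to measurement k on state n of the system, and the result is that there must be pairs (n, k) for which f(n, k) is undefined; it is precisely at these points that the backdrop for Bell violations arises.)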

    In your model, it seems to me, counterfactuality---or the inhibition thereof---enters one step earlier: it is not even permissible to draw conclusions based on the possibility of having made a different measurement, because the state of the world in which one would have made that measurement does not lie on the invariant set. Your invariant set then plays sort of the same role as my f(n,k) does---only I am considering which observables on the object system can be assigned definite values, while you are essentially considering simultaneously admissible pairs of (measurement direction, outcome).

    If that's right (and do correct me if it's not), then maybe our approaches are not so far from each other. Perhaps my model could be accompanied by a restriction that only those observables for which a definite value can be obtained do, in fact, get measured; on the other hand, you could perhaps hold that only observables for which the pair (measurement direction, outcome) lies on the invariant set have definite values, while a measurement in a different direction yields a random outcome---and perhaps a change in the invariant set, sort of analogous to an updating of the post-measurement state. In this way, one could perhaps trade superdeterminism for what might end up being a sort of nonlocal influence.

    One question, related to whether your approach is a 'completion' of quantum mechanics: do you assign definite values to observables for which the quantum state does not allow a definite prediction? I mean, certainly, the admissible measurements can't simply be the eigenbasis of the state, that is, those where the quantum state yields a definite value. Then, in some sense, it seems to me that your approach does amount to a completion in at least a 'partial' way, in that there exist possible measurements that have a definite outcome on the invariant set, but whose value can't be predicted from the quantum state of the system, which thus yields an incomplete description. Or am I way off-base there?

    Regarding your question, the reason I do not consider my approach to be a completion of quantum mechanics is that (in my approach) there is a class of complex Hilbert states (those with irrational squared amplitudes or irrational phases) that are not probabilistic representations of any underpinning deterministic states. In this sense, large parts of the complex Hilbert space of quantum theory have no underpinning ontic representation at all. It is thus less a matter of completing quantum mechanics than of thinning out the state space of quantum mechanics, to leave only those quantum states that can be given a probabilistic representation in terms of some underlying deterministic ontology. Giving up arithmetic closure at the level of Hilbert states is not a problem, since arithmetic closure can be reinstated at the deeper deterministic level (e.g. through the ring-theoretic properties of the p-adic integers).
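
    To give a minimal single-qubit illustration of what I mean: a state

    |\psi\rangle = \cos(\theta/2)\,|0\rangle + e^{i\phi} \sin(\theta/2)\,|1\rangle

    survives this thinning-out only if the squared amplitude \cos^2(\theta/2) and the phase \phi (as a fraction of 2\pi) are both rational; a state for which either is irrational has no underpinning ontic representation at all.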

    I very much like this idea Tim...

    But I will have to re-read your paper a few times to fully grasp the depth of your reasoning. It seems reminiscent of some of the wild-sounding ideas about Cantorian space from Mohammed El Naschie when he was editing 'Chaos, Solitons & Fractals', but with a different flavor. I think maybe your ideas have a more solid basis, but with El Naschie it is hard to tell, because so many of his references are self-citations of earlier work, hidden behind a paywall.

    I also talk about fractals in my essay, but the context is rather different. For what it is worth, I like the work of Nottale on Scale Relativity, and I admire the breadth of its explanatory power as a model, though I don't think he got every detail right. When the publisher sent me a copy of his book for review, I enthusiastically recommended its publication. And it inspired my departed colleague Ray Munroe, who I think used it in an FQXi essay.

    More later,

    Jonathan

      I think the mathematics in my talk is pretty solid. As to the physics, well, at the end of the day it will come down to experiment. I expect the crucial experiment to test invariant set theory will lie in the field of table-top experiments which probe the accuracy of quantum theory in domains where the self-gravitation of a quantum system is not negligible. For example, based on the idea that gravity represents a clustering of states on the invariant set, the theory predicts that gravity is inherently decoherent and cannot itself encode entanglement.

      I like that answer Tim...

      A paper was recently published describing an experiment that claimed to disprove objective reality, using a system with 6 entangled qubits. I think this is wrong: there are too many collinear points, and the entire apparatus is coplanar. There are also 6 points instead of the 7 required by projective geometry. An experiment designed to correct these flaws could also search for the effects you describe. A ball of osmium placed at one end of the bench could be used to detect gravity-induced decoherence, and so on.

      In other words; I think it could be done.

      All the Best,

      Jonathan

      For what it's worth...

      I had some interaction with Phil Pearle, when he was first developing statevector reduction theory, which later blossomed into CSL. I have followed that evolution somewhat. But I recall a recent paper by Ivan Agullo that also talked about gravity-induced decoherence and broken EM symmetry, which I will try to find.

      I'd love to discuss this further. I will try to read your paper again first.

      Best,

      Jonathan

      Tim,

      On page 6 of your essay, you state that "The principal obstacle in drawing together chaos and quantum theory is therefore not the linearity of the Schrödinger equation, but the Bell Theorem."

      You appear to be unaware of the fact that Bell's theorem only applies to entangled, perfectly identical particles, like identical twins. There is no evidence that such idealized particles actually exist in the real world. Consequently, it is easy to demonstrate that entangled, non-identical, "fraternal twin" particles will reproduce the observed "Bell correlations", with supposedly impossible-to-obtain detection efficiencies, and without any need for hidden variables, non-locality or any other non-classical explanation. This has a direct bearing on your issue of "drawing together chaos and quantum theory", since the underlying cause of the "quantum" behaviors turns out to be one single bit of information removed from chaos (unrepeatable behavior).

      Rob McEachern

      Well, the issue of completion, to me, is whether an approach assigns values to quantities that the usual quantum formalism leaves indeterminate (where quantum mechanics therefore is incomplete), and this I think yours does. After all, if the values of measurements that are present within the invariant set were still left undetermined, and their outcome irreducibly probabilistic, it seems to me not much would be gained.

      I have to say, I still can't shake some uneasiness regarding superdeterminism. I'm not bothered by the lack of free choice/free will, but I am not sure whether such a theory can ever be considered empirically adequate in any sense. Usually, if we perform a measurement, we consider ourselves to be acquiring new information about the system; but it seems to me that in a superdeterministic world, there is formally no information gain at all---the outcome of the measurement does not reduce my uncertainty about the world any more than the fact that I perform that measurement does. So how do we really learn anything about the world at all?
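
      One way to make this worry precise (my own gloss, not tied to any particular superdeterministic model): writing \Lambda for the state of the world, M for my choice of measurement and O for its outcome, the complaint is that superdeterminism seems to force the conditional mutual information to vanish, I(\Lambda ; O \mid M) = 0, or equivalently I(\Lambda ; O, M) = I(\Lambda ; M): the outcome tells me nothing about the world beyond what the mere fact of my performing that measurement already did.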

      Furthermore, it seems that one can cook up some superdeterministic scheme for any metaphysical predilection one is reluctant to give up. Take a good old-fashioned Mach-Zehnder interferometer setup: the fact that one of the detectors always stays dark tells you that there must be interference between photons taking 'both paths', if you will permit me this inexact phrasing.

      Suppose now we could switch either of the two detectors on at any given time, and whenever we do so, we observe a detection event at the 'bright' detector. We could interpret that as evidence for interference---but equally well, for a superdeterministic rule that exactly correlates which detector we switch on with the path the photon took.
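
      To see how cheap such a scheme is, here is a toy simulation (entirely my own sketch; the names and the conspiratorial sampling rule are invented for illustration). Both models produce statistically identical measurement records:

      import random

      def run_interference(n):
          # Standard QM: a balanced interferometer sends every photon to
          # the bright port, so a click occurs exactly when the bright
          # detector is the one we have switched on.
          record = []
          for _ in range(n):
              choice = random.choice(["bright", "dark"])  # detector we switch on
              record.append((choice, choice == "bright"))
          return record

      def run_superdeterministic(n):
          # No interference: each photon takes one definite path, and an
          # unconditioned photon would exit 50/50 at the final beamsplitter.
          # The conspiracy: conditional on a detector being switched on, the
          # hidden variable lam is drawn only from the 'exits bright' half,
          # i.e. rho(lam | measurement) != rho(lam).
          record = []
          for _ in range(n):
              choice = random.choice(["bright", "dark"])
              lam = random.random()
              while lam >= 0.5:        # conspiratorial re-sampling of lam
                  lam = random.random()
              exit_port = "bright" if lam < 0.5 else "dark"  # always "bright" here
              record.append((choice, exit_port == choice))
          return record

      # Both records satisfy P(click | bright on) = 1 and P(click | dark on) = 0,
      # so the data alone cannot distinguish the two stories.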

      There are also worries of empirical underdetermination that seem to me to go beyond the standard Duhem-Quine notion. It's not surprising that I can explain the same data with different theories; but superdeterminism introduces another layer to that trouble. Usually, theories explaining the same evidence have to share some broad notion of (perhaps categorical) equivalence, but superdeterminism decouples the explanatory notions from the empirical evidence---the ultimate layer may thus take on radically different forms, each with some superdeterministic selection rule tuned to yield the observed evidence.

      Another issue is that of empirical justification. We take our faith in a theory to be reasonable based on the empirical data corroborating it; but the data does not corroborate a superdeterministic theory in the same way, because our measurement choices are not independent of what we observe. Hence, are we ever justified in believing a superdeterministic explanation? How could evidence ever justify our belief in a theory that ultimately questions that very evidence?

      Tim Palmer,

      You recently co-wrote an arXiv paper titled "Rethinking Superdeterminism" together with the physicist Sabine Hossenfelder [1].

      I happen to think it is rather strange for an internationally renowned meteorologist to think that the climate, and everything else, is superdetermined. Here is an exchange I had today with your co-author Sabine Hossenfelder about whether the fires and the destruction in Australia were superdetermined [2]:

      Lorraine Ford 1:31 AM, February 05, 2020

      Re "your paper with Dr. H[ossenfelder]" (on superdeterminism): I hope Dr. H[ossenfelder] and Dr. P[almer] are enjoying the smell of burnt koala flesh and fur wafting over from Australia. It was all superdetermined, according to them.

      Sabine Hossenfelder 2:34 AM, February 05, 2020

      Lorraine, You think you are witty. You are wrong.

      Lorraine Ford 3:16 AM, February 05, 2020

      Sabine, I DON'T think I'm witty. I'm Australian, living with smoke-hazy skies, the horror of a billion animal deaths, let alone the people who have died, and more than 10 million acres of land burnt. You are saying that this was all superdetermined.

      Sabine Hossenfelder 4:12 AM, February 05, 2020

      Lorraine, Correct. If you have a point to make, then make it and stop wasting our time.

      1. https://arxiv.org/abs/1912.06462v2

      2. http://backreaction.blogspot.com/2020/02/guest-post-undecidability.html

        Lorraine

        Perhaps the most important thing to say in relation to my essay is that there is a difference between "superdeterminism" and "determinism". The former questions whether, in a hidden-variable model of the Bell experiment, the distribution of hidden variables is independent of the measurement settings. Without bizarre conspiracies, such distributions certainly are independent in classical models. However, in my essay I discuss a non-classical hidden-variable model, with properties of non-computability (in relation to state-space geometry), in which this independence can be violated without conspiracy.
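
        In symbols (this is the standard formulation, not something specific to my model): writing \lambda for the hidden variables and \hat{x}, \hat{y} for the measurement settings, Statistical Independence is the assumption

        \rho(\lambda \mid \hat{x}, \hat{y}) = \rho(\lambda),

        and a model is superdeterministic precisely when this equality fails.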

        This has nothing to do with the Australian bush fires. In discussing the climate system (which is essentially a classical system) the concept of superdeterminism never arises explicitly. However, as a classical system it is implicitly not superdeterministic (we are not aware of any bizarre conspiracies in the climate system).

        However, I think you are a little confused between the issues of determinism and superdeterminism. Somewhat perversely, given the name of the word, it is possible for a superdeterministic model to actually not be deterministic!

        Instead, I think the question you are asking is about determinism, e.g. whether it was "predetermined" ten million, or indeed ten billion, years ago that the 2019/20 Australian bush fires would occur. Put another way, was the information that led to these fires somehow contained on spacelike hypersurfaces in the distant past? I sense that you feel it is somehow ridiculous to think that this is so, and I know colleagues who think like you. However, not everyone does, and logically there is nothing to disprove the notion that the information was indeed contained on these hypersurfaces (albeit in a very inaccessible, highly intertwined form).

        However, this is not a discussion I would wish to have on these pages, not least because it rather deviates from the point of the essay which is that undecidability and non-computability provide a novel means to violate the Statistical Independence assumption in the Bell Theorem, without invoking conspiracy or violating the statistical equivalence of real-world sub-ensembles of particles.

        Hope this helps.

        Perhaps, since you mentioned the bush fires, I could tell you that I have a proposal for the Australian government if they want to reduce the risk of these fires in the future. My idea was published in the Sydney Morning Herald (and other Australian outlets) last week:

        https://www.smh.com.au/environment/climate-change/we-should-be-turning-our-sunshine-into-jet-fuel-20200123-p53u09.html

        I have a problem with the idea that chaos is incompatible with relativistic invariance; I can't give an example now, but a differential equation that is relativistically invariant and chaotic could be possible. I am thinking that in the solution set of the Einstein field equations there could be a solution that covers the space with a non-integer dimension, thus obtaining chaos for the dynamics of the metric tensor. I think that, for example, a black hole merger has an attractor (a fixed point, or almost a limit cycle).

        The Einstein field equations in the weak-field regime are linearizable, so that there is a nearly linear approximation.
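
        (Concretely, the standard weak-field treatment writes g_{\mu\nu} = \eta_{\mu\nu} + h_{\mu\nu} with |h_{\mu\nu}| \ll 1 and keeps only the terms linear in h_{\mu\nu}.)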

        I don't understand: is quantum non-locality an effect of quantum field theory? Gauge bosons mediate the interaction between parts of the system, and transmit quantum information. So is saying that a system must satisfy Bell's theorem not equivalent to asking: must a gauge boson exist?

          Tim Palmer,

          Thanks for your detailed reply. I think I WAS a little confused about the difference between determinism and superdeterminism: thanks for explaining. However, you are still in effect saying that every single koala death by fire was predetermined.

          I will put the determinism issue another way, in terms of the problem of decidability: how we make decisions, and how we symbolically represent decision-making. The issue is: what exactly do the symbols and numbers of physics represent, and what do the yellow blobs (Figure 3) represent, i.e. what is their deeper meaning? I have made various versions of the following case several times on the Backreaction blog:

          According to physics there are no IF...THEN... algorithmic steps in the laws of nature; there are only lawful relationships that are representable by equations. Try to do IF...THEN... with equations. You can't. So according to deterministic physics, you CAN'T make decisions, you CAN'T learn from your mistakes, and you CAN'T be responsible for your actions.

          Where are the models showing how IF...THEN... is done, using nothing but equations? IF...THEN... is about outcomes (the THEN... bit) that arise from logical analysis of situations (the IF... bit), but equations can't represent logical analysis. IF...THEN... is about non-deterministic outcomes, because logical analysis of situations is non-deterministic: there are no laws covering logical analysis. And you can't derive IF...THEN... from equations.

          The point of what I'm saying is this: If physicists need to use IF...THEN... logical analysis to represent the world (e.g. your Figure 1), then they are assuming that there exists a logical aspect of the world that is not representable by deterministic equations, and not derivable from deterministic equations. Your idea is that the world is deterministic, but the fact that you need to use symbolic representations of logical analysis and logical steps to represent the world contradicts the idea that the world is deterministic.

          (Please don't appeal to computer models. As a former computer programmer and analyst, I know that computers are 100% deterministic: they don't do IF...THEN... logical analysis. They deterministically process symbolic representations of IF...THEN... steps, which deterministically process symbolic representations of information.)

          We are going a bit off topic here. However, as I discuss in my essay, one can view free will as an absence of constraints that would otherwise prevent one from doing what one wants to do, a definition that is compatible with determinism. From this one could form a theory of how we make decisions based on maximising some objective function which somehow encodes our desires. This does allow one to learn from previous bad decisions, since such previous experiences would provide us with data that a certain type of decision, if repeated, would lead to a reduction, not an increase, in that objective function.
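
          As a minimal sketch of what I have in mind (entirely illustrative; the options, scores and update rule here are invented for the example, not taken from the essay):

          def choose(options, objective):
              # A deterministic decision: pick the option that maximises the
              # current objective function encoding our desires.
              return max(options, key=objective)

          penalties = {}  # experience: accumulated costs of past bad decisions

          def objective(action):
              base = {"repeat_old_choice": 5.0, "try_alternative": 4.0}[action]
              return base - penalties.get(action, 0.0)

          options = ["repeat_old_choice", "try_alternative"]
          print(choose(options, objective))      # -> repeat_old_choice
          penalties["repeat_old_choice"] = 3.0   # that choice led to a bad outcome
          print(choose(options, objective))      # -> try_alternative: we have learned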

          However, we are veering into an area that has exercised philosophers for thousands of years and I suggest this is not the right place to discuss such matters. Of course, I respect your alternative point of view - there are many eminent philosophers and scientists who would agree with you.
