Regarding your question, the reason I do not consider my approach to be a completion of quantum mechanics is that (in my approach) there is a class of complex Hilbert states (those with irrational squared amplitudes or irrational phases) that are not probabilistic representations of any underpinning deterministic states. In this sense, large parts of the complex Hilbert space of quantum theory have no underpinning ontic representation at all. It is therefore less a matter of completing quantum mechanics than of thinning out the state space of quantum mechanics, leaving only those quantum states that can be given a probabilistic representation in terms of some underlying deterministic ontology. Giving up arithmetic closure at the level of Hilbert states is not a problem, since arithmetic closure can be reinstated at the deeper deterministic level (e.g. through the ring-theoretic properties of the p-adic integers).
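To give a feel for what reinstating closure at the deeper level might mean, here is a toy computation of my own (not taken from the paper, and with illustrative parameters): truncating the p-adic integers to N digits yields the finite ring Z/p^N, which stays closed under addition, subtraction and multiplication, and in which every element not divisible by p is invertible.

```python
# Toy sketch: arithmetic closure in a truncation of the p-adic integers.
# The parameters p and N are illustrative choices, not values from the paper.
import random

p, N = 3, 5
mod = p**N  # truncating Z_p to N digits gives the finite ring Z/p^N

def digits(x):
    """Least-significant-first base-p digit expansion of x in Z/p^N."""
    x %= mod
    return [(x // p**k) % p for k in range(N)]

random.seed(42)
for _ in range(1000):
    a, b = random.randrange(mod), random.randrange(mod)
    # Closure: sums, differences and products never leave Z/p^N.
    assert 0 <= (a + b) % mod < mod
    assert 0 <= (a - b) % mod < mod
    assert 0 <= (a * b) % mod < mod

# Units (elements coprime to p) are invertible within the ring:
u = 7
u_inv = pow(u, -1, mod)  # modular inverse (Python 3.8+)
assert (u * u_inv) % mod == 1
```

Nothing here depends on the choice p = 3; the same closure holds for any prime p and any truncation depth N.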

I very much like this idea Tim...

But I will have to re-read your paper a few times to fully grasp the depth of your reasoning. It seems reminiscent of some of the wild-sounding ideas about Cantorian space from Mohammed El Naschie when he was editing 'Chaos, Solitons & Fractals', but with a different flavor. I think your ideas may have a more solid basis, but with El Naschie it is hard to tell, because so many of his references are self-citations of earlier work hidden behind a paywall.

I also talk about fractals in my essay, but the context is rather different. For what it is worth, I like the work of Nottale on Scale Relativity, and I admire the breadth of its explanatory power as a model, though I don't think he got every detail right. When the publisher sent me a copy of his book for review, I enthusiastically recommended its publication. It also inspired my departed colleague Ray Munroe, who I think used it in an FQXi essay.

More later,

Jonathan

    I think the mathematics in my talk is pretty solid. As to the physics, well, at the end of the day it will come down to experiment. I expect the crucial experiment to test invariant set theory will lie in the field of table-top experiments which probe the accuracy of quantum theory in domains where the self-gravitation of a quantum system is not negligible. For example, based on the idea that gravity represents a clustering of states on the invariant set, the theory predicts that gravity is inherently decoherent and cannot itself encode entanglement.

    I like that answer Tim...

    A paper was recently published describing an experiment that claimed to disprove objective reality, using a system of 6 entangled qubits. I think this is wrong: there are too many collinear points, and the entire apparatus is coplanar. There are also 6 points instead of the 7 required by projective geometry. An experiment designed to correct these flaws could also search for the effects you describe. A ball of osmium placed at one end of the bench could be used to detect gravity-induced decoherence, and so on.

    In other words, I think it could be done.

    All the Best,

    Jonathan

    For what it's worth...

    I had some interaction with Phil Pearle, when he was first developing statevector reduction theory, which later blossomed into CSL. I have followed that evolution somewhat. But I recall a recent paper by Ivan Agullo that also talked about gravity-induced decoherence and broken EM symmetry, which I will try to find.

    I'd love to discuss this further. I will try to read your paper again first.

    Best,

    Jonathan

    Tim,

    On page 6 of your essay, you state that "The principal obstacle in drawing together chaos and quantum theory is therefore not the linearity of the Schrodinger equation, but the Bell Theorem."

    You appear to be unaware of the fact that Bell's theorem only applies to entangled, perfectly identical particles, like identical twins. There is no evidence that such idealized particles actually exist in the real world. Consequently, it is easy to demonstrate that entangled, non-identical, "fraternal twin" particles will reproduce the observed "Bell correlations", with supposedly impossible-to-obtain detection efficiencies, and without any need for hidden variables, non-locality or any other non-classical explanation. This has a direct bearing on your issue of "drawing together chaos and quantum theory", since the underlying cause of the "quantum" behaviors turns out to be one single bit of information removed from chaos (unrepeatable behavior).

    Rob McEachern

    Well, the issue of completion, to me, is whether an approach assigns values to quantities that the usual quantum formalism leaves indeterminate (and with respect to which quantum mechanics is therefore incomplete), and this I think yours does. After all, if the values of measurements present within the invariant set were still left undetermined, and their outcomes irreducibly probabilistic, it seems to me not much would be won.

    I have to say, I still can't shake some uneasiness regarding superdeterminism. I'm not bothered by the lack of free choice/free will, but I am not sure such a theory can ever be considered empirically adequate in any sense. Usually, if we perform a measurement, we consider ourselves to be acquiring new information about the system; but it seems to me that in a superdeterministic world there is formally no information gain at all: the outcome of the measurement does not reduce my uncertainty about the world any more than the fact that I perform that measurement does. So how do we really learn anything about the world at all?

    Furthermore, it seems that one can cook up some superdeterministic scheme for any metaphysical predilection one is reluctant to give up. Take a good old-fashioned Mach-Zehnder interferometer setup: the fact that one of the detectors always stays dark tells you that there must be interference between photons taking 'both paths', if you will permit me this inexact phrasing.

    Suppose now we could switch either of the two detectors on at any given time, and whenever we do so, we observe a detection event at the 'bright' detector. We could interpret that as evidence for interference---but equally well, for a superdeterministic rule that exactly correlates which detector we switch on with the path the photon took.

    There are also worries about empirical underdetermination that seem to me to go beyond the standard Duhem-Quine thesis. It is not surprising that I can explain the same data with different theories; but superdeterminism introduces another layer to that trouble. Usually, theories explaining the same evidence have to stand in some broad relation of (perhaps categorical) equivalence, but superdeterminism decouples the explanatory structure from the empirical evidence: the ultimate layer may thus take on radically different forms, each with some superdeterministic selection rule tuned to yield the observed evidence.

    Another issue is that of empirical justification. We take our faith in a theory to be reasonable based on the empirical data corroborating it; but the data does not corroborate a superdeterministic theory in the same way, because our observations are not independent of the data. Hence, are we ever justified in believing a superdeterministic explanation? How could evidence ever justify our belief in a theory that ultimately questions that very evidence?

    Tim Palmer,

    You recently co-wrote an arXiv paper titled "Rethinking Superdeterminism" together with physicist Sabine Hossenfelder [1].

    I happen to think it is rather strange for an internationally renowned meteorologist to think that the climate and everything else is superdetermined anyway. Here is an exchange I had today with your co-author Sabine Hossenfelder about whether the fires and the destruction in Australia are/were superdetermined [2]:

    Lorraine Ford 1:31 AM, February 05, 2020

    Re "your paper with Dr. H[ossenfelder]" (on superdeterminism): I hope Dr. H[ossenfelder] and Dr. P[almer] are enjoying the smell of burnt koala flesh and fur wafting over from Australia. It was all superdetermined, according to them.

    Sabine Hossenfelder 2:34 AM, February 05, 2020

    Lorraine, You think you are witty. You are wrong.

    Lorraine Ford 3:16 AM, February 05, 2020

    Sabine, I DON'T think I'm witty. I'm Australian, living with smoke-hazy skies, the horror of a billion animal deaths, let alone the people who have died, and more than 10 million acres of land burnt. You are saying that this was all superdetermined.

    Sabine Hossenfelder 4:12 AM, February 05, 2020

    Lorraine, Correct. If you have a point to make, then make it and stop wasting our time.

    1. https://arxiv.org/abs/1912.06462v2

    2. http://backreaction.blogspot.com/2020/02/guest-post-undecidability.html

      Lorraine

      Perhaps the most important thing to say in relation to my essay is that there is a difference between "superdeterminism" and "determinism". The former questions whether, in a hidden-variable model of the Bell experiment, the distribution of hidden variables is independent of the measurement settings. Without bizarre conspiracies, such distributions certainly are independent in classical models. However, in my essay I discuss a non-classical hidden-variable model with properties of non-computability (in relation to state-space geometry) where this independence can be violated without conspiracy.
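      To make the logical point concrete, here is a deliberately crude toy model of my own (in no way resembling the invariant set construction): if the distribution of the hidden variable is allowed to depend on the measurement settings, a locally deterministic model reproduces the singlet correlations and exceeds the CHSH bound of 2.

```python
# Toy sketch: a settings-dependent hidden-variable model. This is an
# illustration of what violating Statistical Independence buys, not a
# model anyone proposes as physics.
import math
import random

def sample_pair(a, b, rng):
    # Hidden variable lambda = (A, B): both outcomes are fixed before
    # measurement, so each trial is local and deterministic. But the
    # distribution of lambda depends on the settings (a, b) -- precisely
    # a violation of the Statistical Independence assumption.
    p_anti = (1.0 + math.cos(a - b)) / 2.0  # P(A = -B)
    A = rng.choice((-1, 1))
    B = -A if rng.random() < p_anti else A
    return A, B

def E(a, b, n=100_000, seed=7):
    """Monte Carlo estimate of the correlation <A*B> for settings a, b."""
    rng = random.Random(seed)
    return sum(A * B for A, B in (sample_pair(a, b, rng) for _ in range(n))) / n

# The model reproduces the singlet correlation E(a, b) = -cos(a - b),
# and the CHSH combination reaches about 2*sqrt(2) ~ 2.83, beyond the
# bound of 2 that holds when Statistical Independence is assumed.
a, ap, b, bp = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) - E(a, bp) + E(ap, b) + E(ap, bp))
```

The point is only that the CHSH bound is derived under Statistical Independence; a toy like this violates that assumption by fiat, whereas the essay's claim is that non-computable state-space geometry can do so without conspiracy.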

      This has nothing to do with the Australian bush fires. In discussing the climate system (which is essentially a classical system) the concept of superdeterminism never arises explicitly. However, as a classical system it is implicitly not superdeterministic (we are not aware of any bizarre conspiracies in the climate system).

      However, I think you are a little confused between the issues of determinism and superdeterminism. Somewhat perversely, given the name of the word, it is possible for a superdeterministic model to actually not be deterministic!

      Instead, I think the question you are asking is about determinism, e.g. whether it was "predetermined", ten million or indeed ten billion years ago, that the 2019/20 Australian bush fires would occur. Put another way, was the information that led to these fires somehow contained on spacelike hypersurfaces in the distant past? I sense that you feel it is somehow ridiculous to think this is so, and I know colleagues who think like you. However, not everyone does, and logically there is nothing to disprove the notion that the information was indeed contained on these hypersurfaces (albeit in a very inaccessible, highly intertwined form).

      However, this is not a discussion I would wish to have on these pages, not least because it rather deviates from the point of the essay which is that undecidability and non-computability provide a novel means to violate the Statistical Independence assumption in the Bell Theorem, without invoking conspiracy or violating the statistical equivalence of real-world sub-ensembles of particles.

      Hope this helps.

      Perhaps, since you mentioned the bush fires, I could tell you that I have a proposal for the Australian government if they want to reduce the risk of these fires in the future. My idea was published in the Sydney Morning Herald (and other Australian outlets) last week:

      https://www.smh.com.au/environment/climate-change/we-should-be-turning-our-sunshine-into-jet-fuel-20200123-p53u09.html

      I have a problem with the idea that chaos is incompatible with relativistic invariance. I can't give an example now, but a differential equation that is relativistically invariant and chaotic should be possible: I am thinking that in the solution set of the Einstein field equations there could be a solution that covers space with a non-integer dimension, thus obtaining chaos in the dynamics of the metric tensor. I think that, for example, a black-hole merger has an attractor (a fixed point, or something close to a limit cycle).

      The Einstein field equations in the weak-field regime are linearizable, so there is a nearly linear approximation.

      I don't understand: is quantum non-locality an effect of quantum field theory? Gauge bosons mediate the interaction between parts of the system, transmitting quantum information. So is saying that a system must satisfy Bell's theorem not equivalent to saying that a gauge boson must exist?

        Tim Palmer,

        Thanks for your detailed reply. I think I WAS a little confused about the difference between determinism and superdeterminism: thanks for explaining. However, you are still in effect saying that every single koala death by fire was pre-determined.

        I will put the determinism issue another way, in terms of the problem of decidability: how we make decisions, and how we symbolically represent decision making. The issue is: what exactly do the symbols and numbers of physics represent, and what do the yellow blobs (Figure 3) represent, i.e. what is their deeper meaning? I have made various versions of the following case several times on the backreaction blogspot:

        According to physics there are no IF...THEN.... algorithmic steps in the laws of nature, there are only lawful relationships that are representable by equations. Try to do IF...THEN... with equations. You can't. So according to deterministic physics, you CAN'T make decisions, you CAN'T learn from your mistakes, and you CAN'T be responsible for your actions.

        Where are the models showing how IF...THEN... is done, using nothing but equations? IF...THEN... is about outcomes (the THEN... bit) that arise from logical analysis of situations (the IF... bit), but equations can't represent logical analysis. IF...THEN... is about non-deterministic outcomes, because logical analysis of situations is non-deterministic: there are no laws covering logical analysis. And you can't derive IF...THEN... from equations.

        The point of what I'm saying is this: If physicists need to use IF...THEN... logical analysis to represent the world (e.g. your Figure 1), then they are assuming that there exists a logical aspect of the world that is not representable by deterministic equations, and not derivable from deterministic equations. Your idea is that the world is deterministic, but the fact that you need to use symbolic representations of logical analysis and logical steps to represent the world contradicts the idea that the world is deterministic.

        (Please don't appeal to computer models. As a former computer programmer and analyst, I know that computers are 100% deterministic: they don't do IF...THEN... logical analysis. They deterministically process symbolic representations of IF...THEN... steps, which deterministically process symbolic representations of information.)

        We are going a bit off topic here. However, as I discuss in my essay, one can view free will as an absence of constraints that would otherwise prevent one from doing what one wants to do, a definition that is compatible with determinism. From this one could form a theory of how we make decisions based on maximising some objective function which somehow encodes our desires. This does allow one to learn from previous bad decisions, since such previous experiences would provide us with data that a certain type of decision, if repeated, would lead to a reduction, not an increase, in that objective function.

        However, we are veering into an area that has exercised philosophers for thousands of years and I suggest this is not the right place to discuss such matters. Of course, I respect your alternative point of view - there are many eminent philosophers and scientists who would agree with you.

        Actually what I say is that chaos is only superficially incompatible with relativistic invariance. The key is to "geometrise" chaos and that can be done by considering the invariant sets of chaos. I then try to show that these invariant sets may in turn help make chaos compatible with quantum theory.

        My own view is that the resolution of the Bell Theorem is not through quantum field theory, since that is an extension of quantum theory. Rather my belief is that there is a deeper deterministic formalism based on non-computable fractal invariant sets which has quantum theory as a singular limit.

        I am currently working on an extension of these invariant set ideas to incorporate the formalism of relativistic quantum field theory.
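        For readers who want a concrete picture of "geometrising" chaos, here is a toy sketch of my own (using the Lorenz system, not the invariant set of the essay): trajectories collapse onto a fractal invariant set, the Lorenz attractor, whose box-counting dimension is non-integer (about 2.06 in the literature; the crude two-scale estimate below should land near 2).

```python
# Toy sketch: the Lorenz attractor as a fractal invariant set.
# Step size, transient length and box sizes are illustrative choices.
import math

def lorenz_points(n=200_000, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Forward-Euler orbit of the Lorenz system, transient discarded."""
    x, y, z = 1.0, 1.0, 1.0
    pts = []
    for i in range(n):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        if i >= 20_000:  # after the transient, points hug the invariant set
            pts.append((x, y, z))
    return pts

def occupied_boxes(pts, eps):
    """Number of cubes of side eps containing at least one orbit point."""
    return len({tuple(math.floor(c / eps) for c in p) for p in pts})

pts = lorenz_points()
n_coarse, n_fine = occupied_boxes(pts, 2.0), occupied_boxes(pts, 1.0)
# Crude two-scale box-counting estimate of the fractal dimension:
dim = math.log(n_fine / n_coarse) / math.log(2.0)
```

A two-scale estimate from a finite orbit is rough, of course; the point is only that the attractor is a geometric object with non-integer dimension, invariant under the flow.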

        Tim,

        What do your diagrams and equations actually represent? Do they represent a world ruled by equations or do they represent a world that is taking logical steps and performing logical analysis? Where does the logical analysis and steps that you are personally taking end? Do the logical steps in Figure 1 represent logical steps you are taking (e.g. to solve a problem or equation) or do the logical steps in Figure 1 represent logical steps the world is taking?

        Your essay is interesting.

        Reading it made me think.

        For example, if there were a chaotic state in general relativity, then is it possible that the Hausdorff measure of a particle trajectory is a relativistic invariant? If it were not so, then there would be an observer for whom the trajectory is non-fractal, but this seems unlikely to me (it would be like a change of topology, changing a chaotic trajectory into a non-chaotic one).

        Also, for the Bell theorem (or the Einstein-Podolsky-Rosen paradox), it is possible to study the Feynman diagrams for the cross section in the scattering of two polarized Dirac particles (I read the results today in Greiner's book) and to obtain the probability of the final state (with helicities). If there are interactions, i.e. gauge bosons, then there are no instantaneous effects; the collapse of Alice's state is communicated to Bob via the gauge-boson interaction, at the speed of light.

        Hi Tim,

        It is quite a revolutionary program you have embarked on, overthrowing the infinitesimal and subverting the continuum. Your standard of rationality includes its mathematical definition: that any rational quantity can be expressed as a ratio of whole numbers. The conviction that the infinite and the infinitesimal have no place in physics goes well with the idea that appropriate mathematics ought to be involved. Nearly all of the infinities have been expelled from physics.

        But there is still the century-old foundational problem with infinity in physics that appropriate mathematics might help resolve. It strikes me as ironic that Hilbert's admonishment about how "the infinite is nowhere to be found in reality" still stands today, considering his name is on the Einstein-Hilbert action which is involved in the unlimited gravitational energy required for inflation. This point is made clear by Paul Steinhardt who, when asked where the energy for inflation comes from, confirmed that it comes from a bottomless supply of gravitational energy. Since inflation requires infinite energy, the theory is inadmissible by Hilbert's standard of rationality, and so is the general theory of relativity which is supposed to deliver that energy.

        On the presumption that it is essentially classical Newtonian gravitational potential energy which is apparently the source of the unlimited energy, it should be of interest that a relativistic version of gravitational potential energy can be constructed from a consideration of the composition of relativistic gravitational redshift due to a sphere.

        For example, given a test particle of mass m, the classical element of potential energy due to a spherical shell of matter is du = -F(r) dr, where F(r) is the force of gravity at radius r. The redshift due to the shell is dz = du / mc^2. The total redshift z from all shells can be composed relativistically as the product, 1 + z = PRODUCT[1 + dz] = exp[INTEGRAL dz], using Wikipedia's Pi notation (here "PRODUCT") for the Volterra product integral. The composite redshift due to a complete sphere of mass M, at radius R, is then z = exp(GM/Rc^2) - 1, not the conventional relativistic (1 - 2GM/Rc^2)^{-1/2} - 1, and not the first-order approximation GM/Rc^2. The corresponding relativistic gravitational potential energy must have a similar exponential form to be consistent with this composition of the relativistic gravitational redshift.

        Unlike Newtonian potential energy, which is negative, relativistic gravitational potential energy is positive, and equal to mc^2 exp(-GM/Rc^2). In the absence of a gravitational field it equals the rest energy; gravitational potential energy is taken from that rest energy, and thus has a finite limit. The Newtonian potential energy, -GMm/R, is a weak-field approximation to mc^2 [exp(-GM/Rc^2) - 1]. Relativistic gravitational potential energy is an exponential map of the classical potential energy.
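        As a numerical sanity check (my own sketch; the solar mass and radius are illustrative values, not part of the derivation above), the multiplicative composition over shells converges to the exponential form, and the exponential potential energy reduces to the Newtonian expression in the weak field:

```python
# Sanity check of the product-integral composition 1 + z = prod(1 + dz)
# against the closed form exp(GM/Rc^2) - 1, using illustrative solar values.
import math

G, c = 6.674e-11, 2.998e8        # SI units
M, R = 1.989e30, 6.957e8         # solar mass and radius (illustrative)
phi = G * M / (R * c**2)         # dimensionless potential GM/(R c^2)

# Compose the redshift multiplicatively over N thin slices of equal dz:
# prod(1 + dz) -> exp(integral dz) as the slices become thin.
N = 100_000
dz = phi / N
z_product = (1.0 + dz)**N - 1.0
z_exp = math.exp(phi) - 1.0      # composite redshift exp(GM/Rc^2) - 1

# Weak-field check, per unit test mass: c^2 [exp(-phi) - 1] ~ -GM/R.
U_rel = c**2 * (math.exp(-phi) - 1.0)
U_newton = -G * M / R
```

Here U_rel differs from the Newtonian -GM/R by a relative amount of about phi/2, which for the Sun is around 10^-6, so the exponential form is indistinguishable from the classical one in weak fields.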

        Relativistic gravitational potential energy gives an escape velocity sensibly limited to the speed of light, as might be expected from a relativistic theory, whereas this condition is violated in both the classical theory and general relativity. The singularity-free metric corresponding to the escape velocity is the same as Brans-Dicke. In that theory, inertial and gravitational mass differ slightly, by a presently undetectable amount. I suspect this discrepancy could arise from failing to account properly for the exponential nature of gravitational energy.

        The original work can be found at the link in the file shells2010dec29.pdf. It has some simple examples to demonstrate the essential concepts. I was not aware of product integrals when it was written in 2010. This derivation of the product integral addresses the issue of normalization, which can be inferred from the physics of the problem. I don't have an essay for this contest, but here is a link to an essay from the last contest that shows some radical consequences of accepting the composite relativistic gravitational redshift.

        It seems to me that there might be a way to incorporate these relativistic compositions for gravity into general relativity via the product integral and arrive at the Brans-Dicke metric. I wonder, what would be your intuition on this possibility?

        Colin Walker

        "... the closed Hilbert Space of quantum mechanics only arises in the singular limit where my finite fractal parameter p is set equal to infinity, and this is an unphysical limit! Hence, rather than complete quantum mechanics, my own view is that, guided by quantum theory, we have to go back to basics ..."

        There you may meet the humble effort of an old engineer:

        Eckard Blumschein

        6 days later

        Hello again Tim,

        After reading Lawrence Crowell's paper, I have a greater appreciation for your work, and even more so for the fact that you are able to write so lucidly about it for lay audiences. I am impressed. I will have more questions now, after all that fuel for thought.

        Would the correctness of your theory imply that the fabric of spacetime is fractal? This is a feature of several quantum gravity theories, in terms of the microstructure. Does that project onto the large scale structure of the cosmos in your view? Would it surprise you if I said it appears some of your starting assumptions would follow naturally, if my own theory pans out?

        Tip of the old iceberg for you.

        More later,

        Jonathan