• Ultimate Reality
  • Classical Spheres, Division Algebras, and the Illusion of Quantum Non-locality:

Thanks, Jonathan.

The results of the latest simulation of my model are getting better and better.

Here is the latest version (the details still to come):

Best,

Joy

Image 1

Thanks, Jonathan, for the journal reference, although my mathematical credentials are not excellent.

You say, "I think Joy's topological approach may absolve us of the need to quantize space, in order to reconcile things,...".

I beg to disagree. While a topological approach may help, Joy's approach MUST NOT shy away from its clear responsibility, having started with statements like "The Point Bell Missed" and "Although lines and planes contain the same number of points...".

Firstly, this statement admits that lines and planes consist of points.

Secondly, to say the points are numbered is to say they are discrete. Only discrete things can be numbered.

Thirdly, a line is claimed to be 1-dimensional and a plane 2-dimensional. How is a 0-dimensional object contained in a 1-dimensional one? Are the points in a line and those in a plane of the same dimension?

Fourthly, the claim that lines and planes contain the SAME number of points needs to be clarified. Does it support what Tom said: "...There are as many points in this line: ___, as there are in the entire universe" and "I am not going to discuss the point-line thing in this forum. It's well understood geometry"? What Tom is saying is that the number of points on the line is infinite and the number in the universe is infinite, so both the line and the universe contain the SAME number.

The question I would have asked Tom, but can't since he says for him the case is closed, is whether his line, ___, and a segment of it also contain the SAME number of points. If so, can points then be said to be countable, so as to support assertions like two geometrical objects having the same number of them, as Joy started with?

Jonathan, henceforth we must demand strict definitions of what anyone, particularly the more mathematically inclined, is asserting. It is from not being so demanding that mathematicians were allowed to introduce 'a line having length but with a breadth of absolute zero' and 'a surface of absolute zero thickness' into our physics.

Akinbo

Jonathan,

I always appreciate your ability to get to the nub of the issue. "I think Joy's topological approach may absolve us of the need to quantize space, in order to reconcile things, so that is the question here."

Indeed, I think so as well. I was disheartened that Vesselin Petkov's entry a couple of essay contests ago ("Can spacetime be quantized?"), didn't get near the attention it deserved.

At the end of the day -- at a foundational level -- we can discard the idea of particles, though we cannot discard the continuum.

The Dedekind cut principle is one of my favorite mathematical results, and I agree with you that it is ill understood, even often misinterpreted. Dedekind says that, e.g., there exist two numbers whose product is sqrt(2), even though one cannot provide a procedure for multiplying the two. By existence, however, because sqrt(2) is an algebraic number, we are assured the computability of corresponding points, by which 2*sqrt(2) sets an upper bound on correlation. The current proliferation of computer simulations of Joy's framework is rich confirmation. That's only the beginning -- aside from mere computability, Joy's research promises to tell us what "quantum" really means.
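As an aside on computability: although no finite procedure exhausts a Dedekind cut, an algebraic number such as sqrt(2) can be approached from both sides of its cut to any precision. A minimal illustrative sketch (my own, not part of Joy's framework), using exact rational arithmetic:

```python
from fractions import Fraction

def sqrt2_cut(iterations=50):
    """Bisect the rationals around sqrt(2): lo stays in the lower set of the
    Dedekind cut (lo^2 < 2), hi stays in the upper set (hi^2 > 2)."""
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(iterations):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_cut()
assert lo * lo < 2 < hi * hi   # the cut property holds exactly in the rationals
print(float(lo), float(hi))    # both sides close in on sqrt(2)
```

Because sqrt(2) is irrational, neither side of the cut ever reaches it, yet the two rational sequences pin it down to any desired precision.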

All best,

Tom

I was not surprised to see Matt Leifer's entry win first place in this year's FQXi essay contest. Yes, it's a good essay, well argued and worthy of a contest prize.

More important to me personally, however, is that it represents what I observe to be a longtime Perimeter Institute and FQXi bias toward probability models based on Bayesian philosophy. For those not familiar with the difference between what are called Bayesian and frequentist models: a Bayesian assumes a definite probability on the interval [0,1], requiring a measure of personal belief in the correct probability outcome for a given problem. A frequentist model is objective, based on the average of a sufficient number of independent Bernoulli trials (throws of the dice), such that one's confidence in the correct probability for a given problem increases with the number of trials.

My reaction to Matt's conclusion that "My main argument is that, on the subjective Bayesian interpretation of probability, 'it from bit' requires a generalized probability theory" alternates between "of course," and "so what?"

The fact is, that a generalized probability theory based on Bayesian principles is oxymoronic. One gets from it what one assumes in the first place, and cannot get otherwise. So to conclude, "A subjective Bayesian analysis of noncontextuality indicates that it can only be derived within a realist approach to physics" is simply saying that realism is in the mind of the observer and nowhere else. In no philosophy except standard quantum theory is "realism" defined this way.

"At present, this type of derivation has only been carried out in the many-worlds interpretation," says Matt. Either he misunderstands what Everett's interpretation is actually saying, or he is trying to co-opt many-worlds to coat his theory in a pseudo-objective patina. The fact is, there is no collapse of the wave function in Everett's interpretation; therefore, no probability can be assigned to an outcome. Leifer concludes:

" ... but I expect it can be made to work in other realist approaches to quantum theory, including those yet to be discovered."

The only viable realist approach to quantum theory that I know of, that both forbids collapse of the wave function and fulfills the predictions of quantum measurement correlations demanded by standard quantum theory, is Joy Christian's measurement framework.

Very early on, I was concerned that Joy had sneaked Bayesian assumptions in by the back door -- which shows the extent to which even I have been indoctrinated -- in which case I would have dismissed the framework. It's clear now that no such probabilistic abracadabra infringes on the results.

The choice function of Bell-Aspect and CHSH type quantum experiments rests subjectively with the experimenter. The choice function in Joy's framework rests with random input to the continuous functions of nature independent of the experimenter. *That's* realism.

Tom

    Joy, I get it. I truly do. Without deconstructing the opposing arguments, though, I don't see any way to let your new ideas shine through. One can eventually stop a fire after letting it burn itself out; however, we usually desire ways to rob it of fuel and oxygen, to save the precious assets in its path.

    With one exception, I don't have an axe to grind with researchers who accept that reality is fundamentally probabilistic. They just haven't made the effort to learn otherwise.

    Best,

    Tom

    The problem, really, is Bayes' Theorem.

    It has no place as a guiding principle in the rationalist enterprise called science. It is just a tool for making inductively open judgments from phenomena, i.e., by the Aristotelian method, rather than by the correspondence of logically closed judgments in scientific theory to the natural world (Tarski, Popper).

    To sacrifice truth to probability is not worthy of foundational science.

    Let believers in Bell's theorem defend their choice with rational argument -- if they can find one.

    Tom

    Hi Everyone,

    As I promised last week, here are the details of the *fourth*, explicit, event-by-event, simulation of my local model for the EPR-Bohm correlation. It is independently produced by Michel Fodje, with code written in Python. The previous simulations were produced independently by different authors, with codes written in Java, Excel Visual Basic, and Mathematica. The theoretical model itself can be found here, or in the attached paper.

    Each simulation has given different statistical and geometrical insights into how my local model works, and indeed how Nature herself works. The original simulation written by Chantal Roth, which is most faithful to the 3-sphere topology, may appeal to the more geometrically inclined, whereas Michel Fodje's simulation, which has its own unique features, may appeal to the more statistically inclined. In the end, however, all of these simulations, together with the original local model, confirm what I have been arguing for the past six years. The full details of my argument, which concerns the origins of quantum correlations, can be found on my blog.

    Enjoy :-)

    Joy Christian

    Image 1

    Attachment #1: 21_whither.pdf

      Joy, this is just beautiful! My quick impression is that it may actually be experimentally replicable with electron input. It already reads like an experimental computation model.

      Still nothing from the cone of silence where the critics are huddled?

      (Great work by Michel Fodje!)

      And I think that because the statistical function of Fodje's simulation verifies the topological structure of Roth's simulation, we can conclude, significantly:

      The simulation of a continuous function is a continuous function.

      Does it get more physically real and local than that?

      Thanks, Tom.

      I forgot to mention that the top part of the figure shows the familiar cosine correlation, whereas the bottom part shows the actual quantum probabilities predicted by my local model.

      Image 1

      I picked up a portion of a comment you made to a correspondent in sci.physics.foundations that I think is relevant:

      "It is also important to realize that an orientation of a manifold (like that of a 3-sphere) is both a local and a global property at the same time. It is local in the sense that a basis is chosen at each point of the 3-sphere. But since this choice must be made consistently throughout the 3-sphere, its orientation is also a global (or a topological) property."

      This local-global relation is an absolute breakthrough accomplishment, for future research directions -- including quantum computing without entanglement.

      Yes, beautiful and impressive work for mathematicians. While not wanting to be a killjoy, it is only mathematicians who will insist that all lines and all planes, no matter their size, contain the same number of points, as my post of Nov. 5, 2013 @ 11:06 GMT discusses.

      Physicists are now belatedly catching up with the idea that this is unlikely to be so. Einstein was already onto this long before, as mentioned in a link. In a likely quantum gravity theory there is likely to be a Planck limit at 10^-35 m. That being so, this line _____ and this _____________________ cannot contain the same number of points, as Tom claims they do, since a Planck length cannot contain more than one point, being indivisible.

      I therefore stand to be corrected: both Bell's theorem and its Disproof, being founded on the assumption that there is no lower limit to distance at the Planck scale, will not be of much use to physicists standing at the gate between quantum and classical physics, waiting and ready to "Tear down this wall"!

      Akinbo

      There is no wall, Akinbo.

      "In a likely quantum gravity theory there is likely to be a Planck limit at 10^-35 m."

      What makes you think that?

      Thank you, Mr. Moldoveanu, for your astute observation. I am sorry to say, however, that you are as mistaken as always (please recall that all your previous claims about my analytical model have been demonstrated to be false).

      To begin with, fair or unfair, no one is doing a "sampling" of any kind in the simulation. The property p, or equivalently the parameter t, is part of the hidden variable set. It is a part of lambda. It is randomly chosen by Nature in each run of the experiment, just like the other hidden variable e. It took me only five seconds to "uncover" this (I also have certain advantages, but I prefer not to mention them).

      Michel's simulation is simply a different implementation of the original simulation by Chantal. What Michel has done is take the C_a and C_b functions from my paper, streamline the notation, and use the sin^2(t) function appearing in the integration over the probability density function discussed in footnote 1 of my paper to generate the correct statistics. In other words, Michel is doing the same statistical computation to arrive at the same correlation, but without using the C_ab part of Chantal's simulation, which has been deliberately misinterpreted by you. Thus the simulations of Chantal and Michel are two complementary simulations of one and the same analytical model.

      I will not dignify the rest of your derogatory comments about Chantal's simulation with a response. I will let the readers judge her beautiful work for themselves.

      Image 1

      Thank you again, Mr. Moldoveanu. This is the first ever constructive contribution you have made as far as I can remember.

      You have just demonstrated that the correct line of code to capture the 3-sphere topology of my analytical model is:

      out = p < abs(C) and numpy.sign(C) or 0.0  # if |C| > p then particle is detected by sign(C) channel

      If this line of code is replaced by any incorrect line of code representing the topology of a real line, then the resulting correlation cannot be as strong as that between the points of a parallelized 3-sphere.

      Let me also remind you once again that, fair or unfair, no one is doing a "sampling" of any kind in Michel's simulation. The property p, or equivalently the parameter t, is part of the hidden variable set. It is a part of lambda. It is randomly chosen by Nature in each run of the experiment, just like the other hidden variable e. The correct line of code from Michel's simulation you have just quoted is thus implementing the correct topological structure of the parallelized 3-sphere.

      I hope I do not have to teach you that any simulation code by itself has no meaning without knowing what it is that is being simulated, or demonstrated. In the case of my local model, what is being simulated is the correlation between the points of a parallelized 3-sphere. Unless one understands the meaning of the words "parallelized 3-sphere" (and you have amply demonstrated that you don't), there is no hope of understanding either my analytical model or any simulation that confirms its validity.

      Attachment #1: 22_whither.pdf

      To spell out what is already so very clearly stated by Michel here: in the measurement results A(x, lambda), the initial or complete state lambda is a set {e, t}, where e in [0, 2pi) and t in [0, pi/2) are two random hidden variables, shared by both Alice and Bob, such that the constraint

      |cos (x - e)| > ½ sin^2(t)

      holds for all freely chosen detector angles x. This constraint arises from the geometry and topology of the 3-sphere. One can understand this clearly from footnote 1 on pages 18 to 19 of the attached paper. The two graphs above only reinforce the crucial role played by the geometry of the 3-sphere in my model. Of course, one is free to endlessly misinterpret everything I say without ever reading---let alone understanding---what is so clearly discussed in so many of my papers on the subject to date.

      Regardless of the interpretation of the hidden variable lambda, Michel's simulation clearly refutes the generally held belief that results A(a, lambda) and B(b, lambda) observed by Alice and Bob, with only freely chosen detector directions a and b and shared randomness lambda, cannot exhibit the strong correlations seen in his simulation.
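      For concreteness, here is a minimal event-by-event sketch built from the constraint stated above. It is my own illustration, not Michel's actual code: the detection rule follows the quoted criterion |cos(x - e)| > ½ sin^2(t) with outcome sign(cos(x - e)), and the opposite sign at Bob's station is an assumption appropriate to anti-correlated EPR-Bohm pairs.

```python
import math
import random

def run(a, b, n=200_000, seed=1):
    """Event-by-event sketch: lambda = {e, t} is shared by both stations; each
    station outputs sign(cos(x - e)) when |cos(x - e)| > (1/2) sin^2(t), and 0
    (no detection) otherwise. Correlation is taken over coincident detections."""
    rng = random.Random(seed)
    sum_ab = coincidences = 0
    for _ in range(n):
        e = rng.uniform(0.0, 2.0 * math.pi)  # shared hidden variable
        t = rng.uniform(0.0, math.pi / 2)    # shared hidden variable
        p = 0.5 * math.sin(t) ** 2           # detection threshold from the constraint
        ca, cb = math.cos(a - e), math.cos(b - e)
        A = math.copysign(1.0, ca) if abs(ca) > p else 0.0
        B = -math.copysign(1.0, cb) if abs(cb) > p else 0.0  # sign flip: assumed singlet pairing
        if A and B:
            sum_ab += A * B
            coincidences += 1
    return sum_ab / coincidences

print(run(0.0, math.pi / 4))  # correlation at 45 degrees relative angle
```

      By construction the coincident outcomes at equal settings are perfectly anti-correlated; whether the coincidence statistics reproduce the full -cos(a - b) curve at other angles is exactly what the simulations discussed in this thread are meant to exhibit.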

      Tom,

      On the Planck length, I throw the question back to you, what makes you think not?

      However, let me not cause a distraction on this thread. Let's all enjoy the beautiful mathematics. We may take this issue up elsewhere, concerning whether space is digital or not.

      Regards,

      Akinbo

      Thanks for showing up, Florin.

      All spaces of a continuous measurement function are make-believe, because a computer is a 3-dimensional instrument. What you, Vongehr, Gill, and others have never understood is that computing in a 4-dimensional continuum introduces a degree of freedom that differentiates the choice function of the experimenter, or programmer, from random input that simulates the choice function of naturally occurring classical randomness.

      The computer doesn't care what space it is computing in. Until you realize that the Python implementation of a Java program implies that the simulation of a continuous function is a continuous function, you are going to be stuck in the outdated belief that 3-dimensional computability = reality. Welcome to the 21st century, where Flatland is only a memory.

      Best,

      Tom