Hi Declan,

What about the experiments where only ONE photon at a time is sent to the two slits, and even then the wave picture appears?

regards

Wilhelmus

Hi Declan,

Your essay gives a very interesting conclusion: both feet on the floor again.

But there are some things that I don't understand; sorry, I have no programming background.

You are concluding that non-detect events are the reason that the classical model can again explain "entanglement". In my own primitive thinking this means that what we don't see is the reason that our perceptions and their explanations are wrong, just because we can generate an output of this "unseen" that proves it. Am I understanding this right, or am I just wrong? If so, help me.

I can agree with the three-dimensional wave perception of particle experience (as in Peter Jackson's essays). What I don't understand is that, if it is so simple to explain "entanglement" in a classical way, this was not found before; there are a lot of scientists (like Joy Christian) who are seriously researching this subject.

And after reading your essay three times I still do not understand entanglement in the classical way, so thank you for making me think again. I hope that you also may have a look at my essay, Foundational Quantum Reality Loops. It may not be your cup of tea, but I hope that it too contains thoughts that in turn make you think.

best regards

Wilhelmus de Wilde

    Dear Wilhelmus,

    Thank you for your comment and questions.

    Yes, essentially the correlation that is interpreted as being due to entanglement can be explained by what is not detected, which depends on the angle difference between the photon's polarization axis and that of the detector. This means that in a specific region of possible orientations there is a higher chance of not getting a detection.

    Prior to the loophole-free experiments, the possibility of what was termed the 'detection loophole' was already acknowledged: detection inefficiency (i.e. not every photon is detected) could bias the result set and falsely cause the correlation to appear to be entanglement.

    The use of a Steering Inequality was supposed to account for that by including Alice's non-detects in the statistical calculation that determines the degree of correlation. However, as I have demonstrated, if the two functions I have shown (for Alice and Bob) are used, and the Steering Inequality is calculated on the result set, there is still violation and the supposed QM 'entanglement' correlation curve is obtained.

    I hope this helps you understand my explanation of entanglement using Classical Physics.

    I will have a look at your essay too...

    Best Regards,

    Declan

    Dear Stephen,

    Thank you, I'm glad you liked it and understood it.

    Best Regards,

    Declan

      Declan

      Have you seen an R language computer simulation of Pearle's (1970) model for the EPR-Bohm correlations by Richard Gill in July 2015 at http://rpubs.com/gill1109/pearle2 ?

      The third graph shows that data are less likely to be gathered at theta = 90 and 270 degrees.

      The model uses a particular function to decide on whether data are excluded and this is the relevant part of the R code:

      U is uniformly distributed on [0, 1]

      s is given by (2 / sqrt(3*U + 1)) - 1

      where Pearle's "r" is the arc cosine of "s", divided by pi/2

      This seems to be similar to your method of excluding data, but I am not sure how close it is. Does your model give a curve which asymptotically approaches the cosine curve as the number of pairs of particles emitted increases? The points plotted in your Figure 2 do not look like an exact match, but could that be caused by a small sample? If your function is different, which gives the better fit to the cosine curve?

      Best wishes

      Austin

      Dear Austin,

      Thank you for bringing that to my attention; I had not seen it before and it is great supporting evidence. The exact shape of the correlation curve can be tweaked by changing the algorithm for non-detects, and there may be different algorithms for different types of detectors. In my last graph the Steering Inequality is modeled, so some of the non-detects (Alice's) are admitted as valid results. This has the effect of changing the correlation curve away from a perfect cosine, primarily because perfect correlation and anti-correlation cannot be achieved at 0 and 180 degrees in such a model (as non-detects are included in the data).

      Regards,

      Declan

      Dear Declan

      The paper that I mentioned by Gill shows the possibility of obtaining an asymptotically exact cosine curve by removing data according to a formula. A few years ago I wrote a Visual Basic program (mimicking Chantal Roth's original Java program) to obtain the cosine curve, but it does remove a lot of data (about 25%, from memory?).

      I am quite suspicious that you have an asymmetry of Alice in favour of Bob with respect to missing data: should there not be symmetry? But I confess that I am unclear as to how that asymmetry arose in the missing data. There is also the issue of how experienced experimentalists can keep on overlooking, if that is what they are still doing, remaining sources of 'missing data' after taking such pains to exclude all possible sources.

      So does nature actually give a cosine curve, or does it really give a sawtooth curve which morphs into a cosine curve when there are missing data? If the latter, I do not know enough to see what damage that would do to QM. I am not sure whether 'entanglement' can be excised from QM without harming the rest of QM.

      There is also a third option (Joy Christian's option) which is that space is not as flat as it may seem. There are 1500 posts at

      http://retractionwatch.com/2016/09/30/physicist-threatens-legal-action-after-journal-mysteriously-removed-study/

      discussing this option. That website closed after exhausting the available time of the chair/referee rather than coming to an agreement. The maths there uses Clifford Algebra.

      There is an interesting brief essay in this contest by Richard Conn Henry whose main point, as I interpret it, is that we do not yet have the correct topology of 4-space. Having the correct topology may be relevant to the third option above?

      It is one thing to remove simulated data from a program after generating it in assumed 'normal' space, but the question arises whether that data was removed illegitimately or whether it never really could exist in a space of the correct topology.

      My own essay in this contest suggests (page 7) that there should be no random outcomes for Alice and Bob, as everything seems pre-determined. That would imply hidden variables and hence enforce a sawtooth curve, which is incompatible with QM assuming space is normal or flat or R3. But with a modified topology of space there is the possibility of rescuing the cosine curve, though I am not sure of the effect on QM, as 'entanglement' would cease to exist. The final paragraph on page 8 of my essay implies that spin may be a vital component of emergent space. And I do not know what effect that quantum spin would have on the topology of space.

      Best wishes

      Austin

      Dear Austin,

      That is how the latest loophole-free experiments using a Steering Inequality are formulated. Bob is considered the trusted partner and Alice the untrustworthy partner whose non-detects form part of the data; see the paper in Ref 2 of my essay for details. When I first encountered the Steering Inequality experiments I was very doubtful about the asymmetry too, but the QM theorists have devised this technique to, supposedly, close the detection loophole by including non-detects in the statistics.

      If all data is rejected past a cutoff threshold angle then the curve becomes a sawtooth. If a random element is introduced then the sawtooth is rounded off to be more like a cosine. The degree of rounding can be controlled by changing the random element/algorithm.
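
      As a rough sketch of that effect (a toy model only, not the exact algorithm from my essay; the hidden angle, the sign rule and both thresholds here are illustrative choices):

      import numpy as np
      rng = np.random.default_rng(1)
      n = 500_000
      lam = rng.uniform(0.0, 2.0 * np.pi, n)   # shared hidden angle per pair
      s_fixed = 0.3                            # hard cutoff threshold
      s_rand = rng.uniform(0.0, 1.0, n)        # randomized threshold
      def E(delta, s):
          # Reject any pair whose projection onto either analyser axis
          # falls below the threshold, then correlate the surviving signs.
          pa = np.cos(lam)                     # Alice's analyser at angle 0
          pb = np.cos(lam - delta)             # Bob's analyser at angle delta
          keep = (np.abs(pa) > s) & (np.abs(pb) > s)
          return np.mean(np.sign(pa[keep]) * np.sign(pb[keep]))
      for deg in (0, 45, 90, 135, 180):
          d = np.radians(deg)
          print(deg, round(E(d, s_fixed), 3), round(E(d, s_rand), 3), round(np.cos(d), 3))

      With the fixed threshold the printed curve stays kinked (sawtooth-like), while the random threshold rounds it towards the cosine. A uniform random threshold only rounds it approximately; a carefully chosen threshold distribution (like Pearle's, which Austin mentioned above) does better.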

      Even with the best experimental techniques, I think the detection efficiency is limited to fairly poor values (I think something like 60-79%, but I could be wrong). There is no way to determine double non-detects that I know of, and no symmetrical Steering Inequality that I know of.

      My essay is aimed at showing that, despite the QM theorists' claims, the detection loophole has not been closed and experimental results can still be explained Classically.

      Regards,

      Declan

      Dear Declan

      Thanks for the pointers. I will need to read more about trusted and untrustworthy partners to overcome my suspicions. It is reassuring that you were also doubtful about the asymmetry at first.

      Best wishes

      Austin

      Dear Declan! Many thanks for supporting my work by rating it up; on Jan 20 I already rated yours with 10. See my post above. I can learn a lot of things at FQXi by reading the key essays, which are connected to my own research on the logic of exponentiality and the fundamentals of science. Best: Stephen

        Dear Stephen,

        Ok, many thanks - I was wondering where that 10 came from...

        Best Regards,

        Declan

        Dear Declan

        I also have been investigating classically produced quantum correlations, in the context of Rob McEachern's one-bit hypothesis. The simplified model I came up with had a similar amount of noise obscuring the cosine, but when the number of trials was increased to one billion, a residual systematic error of about 1 percent was discovered. This was verified with a purely probabilistic treatment instead of random trials (see vixra.org/abs/1705.0377).

        I have a vague interest in simulating quantum computation, and it seems to me that a systematic error would likely limit the number of qubits. Can you increase the number of trials to see if there might be some unwanted correlation? I think the real problem is to find a method that actually converges to the cosine.

        Richard Gill found such a method in the work of Philip Pearle, as pointed out by Austin. I recall following (or trying to follow) the discussion, mainly between Gill and Joy Christian, and then forgot about it until Austin's post (see another of Gill's papers: arxiv.org/abs/1505.04431). Having some experience now with the problem, I see that Gill does a nice job of explaining the procedure, but I find R code can be quite obscure. Here are some basic details.

        The method requires three(!) random numbers R1,R2,R3 generated uniformly over the interval 0-1 for each trial. Gill stores a large set of transformed random numbers, z,x,s, to be reused in trials for any combination of settings by Alice and Bob.

        The first two random numbers are transformed to cover a spherical shell, and then projected onto a plane running through the center, forming a disc. Points on the plane are taken as 2d vectors (z,x), so the distribution of their magnitude is biased towards the edge of the disc, where it is most dense.

        The third random number sets the threshold, s, for detection, with another carefully crafted distribution.

        z = 2 R1 - 1

        x = sqrt(1 - z^2) cos(2 pi R2)

        s = [ 2 / sqrt(3 R3 + 1) ] - 1

        A unit vector (az,ax) in the z-x plane sets Alice's angle, with (bz,bx) for Bob. Projections are calculated as follows

        pa = (z,x) . (az,ax) = z az + x ax

        pb = (z,x) . (bz,bx) = z bz + x bx

        A detection occurs when the absolute value of both pa and pb is greater than s. The correlation for a detected event is given by the product of the signs of their projections

        C = sign(pa) sign(pb)

        The average correlation converges to the cosine expectation as the number of trials is increased. Unfortunately, it is not clear to me how to measure noise for McEachern's hypothesis, but that is a problem for another day.
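
        In case it helps anyone reading along, here is the recipe above spelled out end to end. This is my own sketch in Python/numpy, a translation rather than Gill's actual R code:

        import numpy as np
        rng = np.random.default_rng(0)
        def pearle_batch(alpha, beta, n=1_000_000):
            # Three uniform random numbers per trial, as described above.
            R1, R2, R3 = rng.random(n), rng.random(n), rng.random(n)
            # A uniform point on the sphere, projected onto the z-x plane;
            # the resulting disc is densest at its edge.
            z = 2.0 * R1 - 1.0
            x = np.sqrt(1.0 - z**2) * np.cos(2.0 * np.pi * R2)
            # Pearle's carefully crafted detection threshold.
            s = 2.0 / np.sqrt(3.0 * R3 + 1.0) - 1.0
            # Projections onto Alice's (az,ax) and Bob's (bz,bx) unit vectors.
            pa = z * np.cos(alpha) + x * np.sin(alpha)
            pb = z * np.cos(beta) + x * np.sin(beta)
            # A detection requires both projections to clear the threshold.
            detected = (np.abs(pa) > s) & (np.abs(pb) > s)
            C = np.sign(pa[detected]) * np.sign(pb[detected])
            return C.mean(), detected.mean()
        for deg in (0, 30, 60, 90, 120, 180):
            E, eff = pearle_batch(0.0, np.radians(deg))
            print(f"{deg:3d} deg: E = {E:+.3f}, cos = {np.cos(np.radians(deg)):+.3f}, detected = {eff:.2f}")

        The detected fraction it prints also shows how much data the rejection step discards.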

        Thanks for entering an essay on the technique of quantum steering, which I will have to experiment with.

        Cheers,

        Colin

          Dear Colin,

          Ok thanks for the comment. Austin brought the Pearle R code to my attention a few days ago (see his comment and my reply above). That is good supporting evidence for my essay.

          My model does a very similar thing, although my algorithm for non-detects is based on cosine squared (Born's rule; see past and current essays by Peter Jackson on the origin of the cos-squared dependency).
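
          For concreteness, here is one possible reading of such a rule as a toy model (illustrative only: the hidden angle, the sign rule and the exact placement of the cos-squared weighting are my simplifications here; the precise function is in my essay):

          import numpy as np
          rng = np.random.default_rng(7)
          n = 1_000_000
          lam = rng.uniform(0.0, 2.0 * np.pi, n)  # hidden polarization angle per pair
          def outcome(setting, lam):
              # Deterministic +/-1 result from the hidden angle.
              return np.sign(np.cos(2.0 * (lam - setting)))
          def is_detected(setting, lam):
              # Malus-style cos^2 weighting: pairs near the analyser's
              # 'dead' orientations are the most likely to go undetected.
              return rng.random(lam.size) < np.cos(lam - setting) ** 2
          def corr(a, b):
              keep = is_detected(a, lam) & is_detected(b, lam)
              return np.mean(outcome(a, lam[keep]) * outcome(b, lam[keep]))
          for deg in (0, 22.5, 45, 67.5, 90):
              d = np.radians(deg)
              print(f"{deg:4.1f} deg: E = {corr(0.0, d):+.3f}, QM cos(2d) = {np.cos(2 * d):+.3f}")

          The printout compares against the QM curve cos(2d); how closely a rule of this kind tracks it depends on the details of the weighting, which is the tweaking I mentioned above.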

          It's good to have some like-minded people in the contest. We should support each other's work with a decent rating.

          Regards,

          Declan

          Dear Declan,

          I read your essay, but due to my background in philosophy I have no means to evaluate, nor even to fully comprehend, your physics hypothesis. As far as I understand it is quite revolutionary, if true; but sadly the history of science is full of thunderous silences.

          Could you please sum up your discovery in non-specialist terms? If it's possible, of course.

          Bests,

          Francesco D'Isa

            Dear Francesco,

            Thank you for your kind comment.

            Ok, I will try:

            Most of Physics can be understood in terms of Classical Physics, which makes sense to people and is what is termed "Local and Real": we can 'see' what is going on and can build a model of it.

            Quantum Mechanics, on the other hand, has some strange counterintuitive aspects, with entanglement being the main one: it appears to be "spooky action at a distance", as Einstein famously called it. It asserts that particles can cooperate across vast distances instantaneously, as if by magic.

            Experiments (usually termed EPR experiments, based on the original thought experiment devised by Einstein, Podolsky and Rosen) have been done that appear to confirm that QM entanglement is actually occurring. However, due to the practical difficulties in performing the experiment, there are possible ways that the correlation found in the experiment can be explained Classically; these are called 'loopholes'. Thus entanglement may not be occurring at all.

            In order to resolve this situation, supposed 'loophole-free' experiments have recently been devised to settle the answer once and for all. In these experiments, an attempt is made to close one of the loopholes, termed the detection loophole, by using a Steering Inequality rather than the usual Bell Test or CHSH Inequality that were used to determine whether entanglement is occurring in the original EPR experiments. The Steering Inequality includes non-detects in the statistical calculations, so that detector inefficiency cannot be used as a means to explain the experimental result Classically.

            My work shows that this loophole has not been closed, and that even when the Steering Inequality is used, the experimental results can be explained Classically. As a result, the main problem with modeling the world in a 'Local and Real' way is overcome, and entanglement is shown not to be occurring. This may then allow the other aspects of QM to be unified with Classical Physics and Relativity.

            I hope this answers your question?

            Best Regards,

            Declan

            Dear Declan,

            thank you very much. It seems a very important discovery! I will rate it high to raise the curiosity of the specialists.

            Best regards and wish you luck!

            Francesco D'Isa

            Dear Francesco,

            Thank you very much; it would be good if it could get looked at by the establishment.

            Do you have an essay too?

            Best Regards,

            Declan

            Dear Declan,

            you are welcome; it seems like an important discovery which needs some more attention and review!

            Yes, I have one about absolute relativism; maybe you would be interested in it. It's a more philosophical point of view on the matter.

            bests,

            Francesco