I have never actually seen the Mermin device before, but I am very familiar with the Stern-Gerlach apparatus that Mermin's device models. The Mermin device illustrates quantum superposition for pairs of entangled particles just as the Stern-Gerlach device does for single particles, and it also illustrates the quantum measurement "problem" quite well.

The challenge seems to be to explain quantum superposition and entanglement in simple language by explaining how the Mermin device works. The simple explanation is that Case B of the Mermin device reveals the nature of quantum phase incoherence.

Our classical reality mostly involves quantum phase coherence, and all gravity-relativity outcomes involve only quantum phase coherence. All of reality, however, actually does involve quantum phase: all matter bonds involve coherent quantum phases, and coherent quantum phase therefore makes up most of our classical reality.

However, even bonded matter particles can exist with incoherent quantum spin phases above absolute zero, especially in vaporized matter. For example, each silver atom in the vapor of the Stern-Gerlach experiment exists as an incoherent phase superposition of the silver atom's two quantum spin states, ±1/2.

Generally, classical reality can ignore quantum phase incoherence because of statistical averaging over large numbers of particles. However, certain quantum measurements of single atoms or particles do show the effects of quantum phase incoherence, which of course, has no classical explanation at all.

The Mermin device (see attachment) measures each of two silver atoms with entangled but still incoherent spin phases. When the two measurements have the same quantum phase or orientation, Case A, the two measured entangled spins always agree.

When the two measurements differ in quantum phase by an orientation of 120 degrees from each other (Case B), the two entangled silver-atom spin phases agree only 25% of the time. This means that each of the two silver atoms with incoherent but entangled quantum spin phases has a precursor quantum spin phase that is knowable only with some well-defined incoherence.
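As a check on these numbers, the standard quantum prediction for a pair of entangled spins of this kind is that the two outcomes agree with probability cos²(θ/2), where θ is the angle between the detector orientations: certainty at 0 degrees (Case A) and 25% at 120 degrees (Case B). A minimal illustrative sketch of that arithmetic (the code itself is my illustration, not part of the Mermin paper):

```python
import math

def p_agree(theta_deg):
    """Probability that the two entangled-spin outcomes agree when
    the detector orientations differ by theta_deg degrees
    (standard quantum prediction, cos^2(theta/2))."""
    return math.cos(math.radians(theta_deg) / 2.0) ** 2

print(p_agree(0))    # Case A: same orientation -> outcomes always agree
print(p_agree(120))  # Case B: 120 degrees apart -> agree 25% of the time
```

Running this gives 1.0 for Case A and 0.25 for Case B, matching the percentages quoted above.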

Thus, quantum phase incoherence results in an intrinsic uncertainty for all quantum outcomes. Classical reality is a result of only quantum phase coherence and so it is actually quantum phase incoherence that reveals the true quantum nature of the world.

Correspondingly, the quantum measurement "problem" is only a problem because even many very smart people ignore the reality of quantum phase incoherence.

Attachment #1: MerminExperiment.jpg

    Thank you for that Doc, that gives a good start for browsing up a reading list. jrc

    Dr. Agnew,

    The reading I've started referencing SG does make better sense of the operational meaning of superposition than I have been assuming from the common application to photonic probability distribution. Thanks much again. Now I'm as obliged QM-wise, as I am to Tom Ray for peeling the scales from my eyes in GR. Not that I'll abandon a classical model, in fact I was struck with a similar comparison to the 'particle edge boundary' we noted in an earlier post.

    It would take a lengthy paradigm-shifting rundown to lay out the mathematical rationale for the matter phase static unitary particle model, but suffice it to conclude thusly: It becomes apparent that to account for a quantity of energy (aether) condensing to a free rest mass such that the quantity is determinate of the volume in a co-ordinate-free geometry, and the density increases exponentially from lower to upper bound, a simple same quantity, or same density, cannot be incrementalized to concentric spheres (as might be attempted in accord with the inverse square law). An exponentiation on a single pole will suffice, because any pole in a manifold of radial poles would naturally be the root of all poles incremental to exponential variance of energy in density per volume. So while it is invalid to employ the exponential rate unit in a math function as the exponent in linear algebra, the pole-root rationale is non-linear. Given a radius length equivalent to c(c^1/e), which would be commensurate with an exponential acceleration from rest to light velocity, that can be taken as the radial difference factor from an empirically derived base radius for a constant upper-density core. So in a condensate where the proportional density difference is c^3, the proportional difference of energy quantity required by density in the upper-density core volume and that at the lower density bound in the full field volume is an order of (c^3)^1/e, and the change of volume between lower and upper density boundaries is an order of c^3[(c^3)^1/e].

    Those distinctly different isomorphic proportions co-exist, in a superposition of physical properties, that hold for all energy quantities in a self-limiting rationale that matches the observed limits of the EM spectrum through the mass accumulation in elemental isotopes, with a terminal rationale at 263.11 a.m.u.

    Thank-you very much again for your patience, Doc. I've learned something of great value. (Mom always said I was slow but good with my hands. :-) jrc

    Hi John, you are welcome. I am not a specialist in all this, but I understand the general principle, and I can also understand what you say about the specifications and technical protocols. It is interesting in any case, like the paper.

    ...oh...you did mention phase once..but not quantum. Ok...you love a determinate materialist classical reality so you are not alone...but somehow you are here in a quantum muck...

    I love classical physics because classical differentials model reality really well. I also love quantum physics because quantum phase is how the universe really works and quantum field theory works very well.

    What surprises me as a working scientist all these years is that the Science community cannot get its collective act in order and arrive at a theory of quantum gravity. Science still cannot describe a common basis for quantum charge and gravity relativity. Even very smart people like Steven Weinberg, Sean Carroll, Lee Smolin, and Sabine Hossenfelder disagree vehemently about the nature of physical reality. And yet, none of them has a way to derive the universe from a few simple principles, much less a way to derive charge and gravity from the same simple principle...

      Doc,

      well... yeh, these days if you shoehorn in a lower case 'h' it's quantum. But that lower case 'h' is derivative of Planck's classical distribution theorem and it (reasonably) assumes an equal partition of probability and demonstrates that a path of least resistance provides an escape from the ultra-violet catastrophe. So at some point in the spectrum we might also expect an equipartition of 'h', and we functionally assume that in the reduced Planck constant. In conjugal application, then, we can safely assume that 'h' is the averaged least observable value of action, so while that action by e=hf obtains that value for any observed wavelength, it matters little if a photon is a single phase outcome or a measure of aggregate phase actions. A partition of 'h' proportional to wavelength, into a rest matter phase particle small enough to be accelerated to light velocity, would require the remainder of 'h' as the accelerating charge. That combined partitioning would be conserved in the outcome. Again in the interest of theoretical modeling, it matters naught if the reality is a single photon or the work function of many that quantizes the spectral lines. That least observable action per wavelength provides a means for a realistic phase cyclic model to assign requisite densities associated with primary force effects, and evolve a static state matter phase free rest mass. What's not quantum in a continuous function if the result is some observable quantized outcome?
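      Whatever the interpretation, the arithmetic of e=hf itself is simple. A quick illustrative check for a 500 nm (green) photon, using the SI-exact values of h and c (the wavelength is just an example choice):

```python
h = 6.62607015e-34  # Planck constant, J*s (exact by SI definition)
c = 2.99792458e8    # speed of light, m/s (exact)

wavelength = 500e-9  # 500 nm, green light (illustrative choice)
f = c / wavelength   # frequency, Hz
E = h * f            # energy of one quantum of action at that frequency
print(f, E)          # ~6e14 Hz, ~4e-19 J per photon
```

So a single visible-light photon carries on the order of 10^-19 joules, which is why 'h' only shows up when outcomes are counted one quantum at a time.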

      Hi To both of you,

      Steve, I agree that the classical model of reality is essential and that this universe is simple; we can logically explain quantum mechanics with several tools, and the quantum phases are important. They exist like universal partitions toward our main codes, but as I said, unfortunately we do not know these fundamental objects, nor the real philosophical origin of this universe, so what we analyse at this moment are just effects, and of course we are very limited due to problems of knowledge and technology. I also believe strongly that these phases and fields are essential, but we probably have a deeper logic to superimpose in order to reach our unknowns. I really doubt, and it is just my opinion, that we have just these photons and relativity as the primordial essence. I do not say that relativity is not correct; of course it is relevant. I just say that we cannot be sure it is the only piece of the puzzle. Even Einstein said it; he considered a probable deeper logic behind all this. The gravitational quantum fields for me are not electromagnetic or emergent from this electromagnetism; I really think there is another logic of encoding in our nuclei; this force is different. Regards

      Steve and Doc,

      I thought I'd posted a response but it is probably just as well that it didn't take. Too wordy. The gist was that the Quantum is a measure of action and a multiple of Quanta. And that Quanta is incomprehensibly small to our human experience. It is mentally meaningless to imagine that the fundamental unit of work is roughly equivalent to a decimal point followed by 33 zeroes and a 7 watt incandescent Christmas Tree light bulb.
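      That smallness is easy to make concrete: written out in fixed-point decimal, h has 33 zeroes between the decimal point and its first significant digit. A quick check with exact decimal arithmetic (illustrative only):

```python
from decimal import Decimal

h = Decimal("6.62607015e-34")  # Planck constant, J*s (exact SI value)
fixed = f"{h:f}"               # fixed-point form, no exponent notation
frac = fixed.split(".")[1]     # digits after the decimal point
zeros = len(frac) - len(frac.lstrip("0"))  # leading zeroes before 6.62...
print(fixed)
print(zeros)  # 33
```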

      But given that it IS that small, the prediction from the limitation on degrees of freedom in SR, that at light velocity any inertially bound, mass-equivalent quantity of energy would become infinite, is definitely a mathematical consequence, not the physical reality. The phase action is a function of velocity, and given the postulate that energy density varies in inverse proportion to velocity, then at progressively higher velocities and correspondingly lower densities, the induction reactance of a charge field to a field intensity propelling acceleration would also become correspondingly lower.

      So we can treat physical phenomena in simple terms of the action of change between a material phase and an energy phase. What distinguishes a photon from a subluminal, gravitational mass may well be that the proportional upper density bound of a photon in its matter phase is lower than an empirical density which exhibits inelastic response... hence it is a particle of charge, not of kinetic ballistic impact. It is still a mass, but gravitational response measured as mass might require an inelastic density characteristic that would be proportional to a greater-mass-quantity matter phase. :-) jrc


        I am sorry...this is a word salad.

        You need to begin your universe with a few very simple principles and show how those principles explain everything.


        That paper states that "When the SG magnets are oriented the same way (case (a)), the outcomes are always the same due to conservation of spin angular momentum between the pair of particles."

        But that statement assumes that all the "outcomes" of every detection event are in fact correct and thus indicative of the "true" state of the entity being measured. But it has been demonstrated that such perfectly correct outcomes are not even a logical possibility, in purely classical systems, whenever the system has been constructed such that it manifests only one, single bit of information. In other words, even when the system is constructed with perfectly anti-parallel "entangled particles", the actually detected "outcomes" cannot possibly be anti-parallel in every case ("bit-errors" are inevitable in some detections), and the probability of detections failing to be anti-parallel, is a function of the misalignment between the polarization axis of the entity being measured and the axis of the measuring device.
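        The misalignment-dependent bit-error mechanism described here can be sketched with a toy threshold model (a hypothetical construction for illustration only, not the procedure from the paper): each object carries a polarization axis, and the "detector" returns the sign of a noisy projection of that axis onto its own setting, so errors grow as the setting moves away from the object's axis.

```python
import math
import random

random.seed(0)

def detect(axis, setting, sigma=0.3):
    """Toy +/-1 'measurement': sign of the projection of the object's
    axis onto the detector setting, plus additive Gaussian noise.
    (Illustrative model only; sigma is an assumed noise level.)"""
    s = math.cos(axis - setting) + random.gauss(0.0, sigma)
    return 1 if s >= 0.0 else -1

N = 20000
rates = {}
for mis_deg in (0, 30, 60, 90):
    mis = math.radians(mis_deg)
    # object axis fixed at 0; a perfectly aligned detector should read +1
    errors = sum(detect(0.0, mis) != 1 for _ in range(N))
    rates[mis_deg] = errors / N
    print(mis_deg, rates[mis_deg])
```

At zero misalignment the error rate is negligible; at 90 degrees the projection vanishes and the "outcome" is a pure coin flip, which is the qualitative behavior the argument relies on.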

        The paper then says "Instead, what happens in case (b) trials is that Bob's outcomes corresponding to Alice's outcomes for a particular setting correctly average to what one would expect for conservation of spin angular momentum (Figure 4)."

        That statement is correct - precisely because the average is based upon coincidence-detection, that systematically fails to detect the entangled pairs most prone to producing the bit-errors; because, the mechanism that causes the bit-errors is the same as that which causes coincidence-detection to fail to detect every particle pair.

        Rob McEachern

        I love your stubborn "Shannon bit noise explains quantum phase..." argument.

        The basic issue with quantum measurements is that once an observer measures two entangled spin states from a single electron outcome, that observer cannot then know what the precursor spin state was for that electron. This means that before observation, quantum spin states exist as a superposition of both spin states.

        Classical electron spin states represent revealed knowledge: when a classical observer measures a classical electron, the classical measurement reveals the spin state that existed before the measurement, albeit within the classical Shannon noise level of the measurement. In the absence of perturbations, a classical spin-state outcome coming from a precursor necessarily means that the spin state existed prior to the measurement, and the measurement simply revealed that hidden knowledge.

        The revealed knowledge of classical Shannon noise has no bit limit, since a higher-resolution classical measurement bit is always possible. A single electron represents a limit for a Shannon bit, but a single electron is a qubit since it has quantum phase.

        An electron simply cannot exist as a classical bit, so this classical explanation for quantum phase simply invents a new particle of matter called a classical bit. However, there is no way to measure a classical bit of noise without quantum phase.

        By carefully fitting bit errors as a function of quantum phase angle, a classical bit can fit quantum phase correlations like Bell's. There is of course a classical noise fit to the Stern-Gerlach quantum phase as well. These classical noise fits have no useful predictions for any other measurements. In fact, a classical noise "hidden" function that fits a quantum phase property is a proof of the validity of a hidden quantum phase, not proof of classical noise...here is the figure from 2019jul...

        Attachment #1: 1_mceachernCorrelate.jpg

        Hi, I liked it also, but a question intrigues me: what really is an electron? I have my idea in my model with these spheres, but they are intriguing in fact.

        Steve Agnew, do you know the Dirac electrons well? And the fact that they are massless in their behavior? It seems relevant for electronics and the phases. Do you know the method to obtain this behavior, is it with crystals and changes in temperature, and if yes, why exactly?

        Graphene seems to be the answer, with the changes from amorphous to crystalline due to heat, and the materials of course are important. But what is an electron, and why?

        Steve Agnew:

        Everyone will either have to learn to love my argument, or at least live with it, since the argument is correct.

        The entire point of Shannon's Information Theory, is to avoid ever making any "measurements" at all; because measurements always involve making errors, in the real, non-ideal world. To completely eliminate every error, it is necessary to substitute error-free "decisions", for error-prone "measurements".

        "The basic issue with quantum measurements is that once an observer measures two entangled spin states from a single electron outcome..." then that observer has totally skewed up! An observer must never even attempt to make any such "measurements", because any such attempt is doomed to fail - many of the "measurements" will inevitably produce the exact opposite result, from the "true" result.

        To put it bluntly (I never have been "politically correct"), it is an exercise in stupidity, to ever even perform a Bell test, unless the observer first figures out how to absolutely guarantee that the axis of the measuring device, will always be perfectly aligned with the axis of the entity to be measured; because that is the only configuration in which the errors can be eliminated, even in the classical case, that the Bell test statistics are being compared with.

        Shannon's "decision" theory of information, is founded upon exploiting "exclusion" zones; think of the Pauli exclusion principle or the Demilitarized Zone separating North and South Korea. Everything of any consequence, is being systematically excluded from such a zone, meaning that nothing of consequence, is "allowed" to even exist within such a zone. Hence, anything "measured" within such a zone, must be something that should not be there. (For example, there are no "allowed" integers, between the values of 1 and 2 and there are no "allowed" letters between "a" and "b" in the English alphabet).

        That is what "quantization" is all about - "quantizing" information (not space and/or time, as virtually all physicists have supposed) by enforcing "exclusion zones" between every valid value; all "valid" values of all physical states are necessarily discrete rather than continuous, because only discrete (hence potentially error-free) values can enable perfectly repeatable (deterministic) behaviors to ever exist (emerge from chaos). In other words, it is the existence of discrete, "valid" states, that enables "Cause and Effect" itself to exist as a physical phenomenon. Continuous states can and do exist; but they cannot enable perfect "cause and effect" because there is no "perfectly measurable" cause to trigger a correspondingly perfectly reproducible effect. Determinism exists precisely where discrete states are being exploited to ensure perfect (error-free) information recovery.

        In the case of the polarized coin measurements, there is an "exclusion zone" being constructed between 0 degrees and 180 degrees; hence, nothing "valid" can ever exist between 0 and 180 degrees. Hence, if you think you "measured" something within the exclusion zone between 0 and 180 degrees, then you measured something that should not be there - something invalid! The polarized objects being measured, in the classical case, were constructed to make that true! There is nothing "valid" to ever be measured at angles other than 0 or 180 degrees, since the exclusion zone was constructed such that only noise, distortion and inter-symbol-interference can exist between 0 and 180 degrees - nothing "valid" (AKA capable of being correctly "measured") exists there. It is an observed fact, that Quantum entities behave in exactly the same manner - that is where the name "Quantum" came from in the first place!
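        The "decision instead of measurement" idea can be illustrated with a two-symbol decision rule (an illustrative sketch, assuming valid symbols at 0 and 180 degrees): any reading that falls inside the exclusion zone is snapped to the nearest valid symbol rather than reported as a measured value in its own right.

```python
def decide(measured_deg):
    """Hard decision between the two valid symbols, 0 and 180 degrees.
    Everything strictly between them is an exclusion zone: a reading
    there is treated as a corrupted version of the nearest valid
    symbol, never as a valid value in its own right."""
    d = measured_deg % 360.0
    # angular distance to symbol 0 (wrapping past 360 back toward 0)
    dist_to_0 = min(d, 360.0 - d)
    return 0 if dist_to_0 <= 90.0 else 180

print(decide(10))   # noisy reading near 0   -> decided as 0
print(decide(170))  # noisy reading near 180 -> decided as 180
print(decide(350))  # wraps around           -> decided as 0
```

The output of such a rule is always one of the two valid symbols; noise can cause a wrong decision, but it can never produce an "in-between" value, which is the sense in which nothing is being "measured".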

        "Until Shannon, it was simply conventional wisdom that noise had to be endured. Shannon's promise of perfect accuracy was something radically new."

        Rob McEachern

        It is not clear what you mean by your argument being correct...your arguments now are a random laundry list of Stern-Gerlach, double-slit diffraction, and Shannon noise theory. There is absolutely nothing wrong with Shannon noise...it is just the classical noise of chaos. What is wrong is the approximation that Shannon noise is always completely independent of either sender or receiver, as the attached Wiki diagram shows.

        This is a very good approximation for a coin flip, but it is invalid for the quantum phase of Stern-Gerlach or 2-slit diffraction. Note that even with a coin flip, there are in-between coin outcomes because the coin is 3-D and the coin edge can affect the outcome. So a coin can end up on its edge as the attached pic shows, or you could catch and manipulate the coin to influence the outcome, which is a common magic trick after all.

        Since Shannon never addressed the role of quantum phase in his information theory, Shannon theory lacks the quantum phase of qubits. But Shannon did know about von Neumann's work on quantum phase, and that is quantum information theory.

        I agree with you that Shannon theory injects classical "random" bit noise independent of either precursor or outcome. However, Shannon noise injection has no lower classical limit: if the Shannon noise RMS voltage is an order of magnitude below the bit voltage, Shannon noise will affect very few results; two orders of magnitude even fewer; and so on without any limit until eventually reaching the limit of the wavelength of the universe.
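        The falloff with signal-to-noise ratio is steep. For a binary decision corrupted by zero-mean Gaussian noise with a threshold midway between levels, the flip probability is 0.5·erfc(SNR/√2), a standard communications-theory formula shown here only to illustrate the scaling:

```python
import math

def bit_error_prob(signal, noise_rms):
    """Probability that zero-mean Gaussian noise of RMS noise_rms
    flips a binary decision with threshold midway between levels:
    P = 0.5 * erfc(SNR / sqrt(2))."""
    return 0.5 * math.erfc((signal / noise_rms) / math.sqrt(2.0))

for snr in (1, 3, 10):
    print(snr, bit_error_prob(snr, 1.0))
# SNR 1  -> ~0.16  (noise matters)
# SNR 10 -> ~8e-24 (errors essentially never happen)
```

So one order of magnitude between bit voltage and noise RMS already makes classical bit errors astronomically rare, which is the point about very few results being affected.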

        It is possible to tune a classical Shannon noise function to mimic outcomes of quantum phase noise, but without quantum superposition and entanglement, such tuning does not result in any useful predictions. In particular, classical and quantum noise behave differently with temperature, pressure, or other environmental effects.

        Even one electron represents a measurable quantum event way above one bit of Shannon noise with many common measurements. One electron can be in a superposition state, and one electron can interfere with itself and result in two possible paths or spin-state outcomes. Those outcomes are necessarily subject to quantum phase uncertainty but not to the chaos of Shannon noise...

        Attachment #1: ShannonNoise.JPG
        Attachment #2: headEdgeTails.jpg

        "A question intrigues me, what is really an electron, I have my idea in my model and these spheres but they are intriguing in fact."...Steve Defourny.

        An electron definition is really quite tricky since, as a fundamental particle, it is like asking why the universe exists or why matter exists at all. Electrons exist because they exist, which is an identity and hardly helpful. Science says you must simply believe in electrons and then gives their mass, charge, spin, and so on.

        You may want to know what is a matter-action electron. Since the only fundamental matter-action particle is the dust of quantum aether, it is quantum aether dust that makes up electrons and all fundamental particles, including photons. At the CMB, only a small fraction (~1e-7) of aether was frozen into quarks, and then quarks froze into electrons and other particles.

        Before the CMB, forces were too weak and there was aether equilibrium. Since the CMB cooling, the reverse reaction has been kinetically limited except for very high energies like stars and black holes...and particle accelerators. Antimatter is then what made up the precursor to our matter universe.

        The only matter that survived CMB cooling was neutral hydrogen and small amounts of other neutral atoms according to the Schrödinger equation. The Rydberg energy of hydrogen ionization stabilizes our universe by the Schrödinger equation and all else follows...

        So a proton and neutron are both made up of three quarks as pairs. Quarks are then part of the same Rydberg energy stabilization as is hydrogen and all matter. An electron is then made up of three quarks as pairs as well. Electron mass is a result of the same Rydberg stabilization energy that forms quarks into protons and then electrons into hydrogen.

        Although the freezeout of CMB matter involved many possible outcomes including antimatter outcomes, only those outcomes that met the Rydberg energy formed stable hydrogen and thus only about 1e-7 of aether makes up observable matter. Unlike a proton or neutron which have both mass and charge radii, an electron has only a charge and not a mass radius. This is because of the very different quark quantum phases for an electron versus a proton or neutron.

        The matter-action outcomes are, of course, not accepted at all by Science, but have not been falsified either...

          You seem to have no idea at all, in regards to what Shannon's Theory was originally all about: To ensure error-free information recovery (no uncertainty! - none!), rather than transmitting any physically-meaningful "data", that can be "measured", an emitter needs to transmit a sequence of intrinsically meaningless random noise. That entire sequence of transmitted noise is "detected" in one, single, irreducible, integration "event", (incorrectly interpreted, in quantum theory, as a collapse of a wavefunction) via a correlation process (a matched filter, which has a priori knowledge of the sequences to be transmitted). Each entire sequence is thus either successfully "detected", or not - all or nothing - there is nothing being "measured". Whatever "meaning" or significance is attached to each such detection event, is just that; "attached" extrinsically. Shannon's theory is all about the design of such correlation-based detection; how do the duration and bandwidth of the "noise-like" sequence affect the maximum correlation "processing gain" and thus the probability of never mistaking the detection of the desired sequences, for the detection of any other noise sequence. Longer sequences with larger bandwidths result in greater processing gain, and thus, less likelihood of ever mistaking a desired sequence for any other sequence.
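          The processing-gain claim is easy to demonstrate numerically (an illustrative sketch, not code from the paper): correlate a received signal against an a-priori-known random ±1 sequence buried in channel noise far stronger than any single sample.

```python
import random

random.seed(1)

N = 256      # sequence length ~ time-bandwidth product
sigma = 2.0  # channel-noise RMS: per-sample SNR is only 0.5

# a priori known, intrinsically meaningless random +/-1 sequence
code = [random.choice((-1.0, 1.0)) for _ in range(N)]

# received = transmitted sequence + strong channel noise
received = [c + random.gauss(0.0, sigma) for c in code]

# matched filter: one correlation over the whole sequence, one "event"
corr = sum(r * c for r, c in zip(received, code)) / N

print(corr)  # near 1.0: correlation averages the noise down by ~1/sqrt(N)
```

Even though each individual sample is dominated by noise, the single correlation over all N samples recovers the sequence reliably; doubling N improves the correlation-output SNR accordingly, which is the processing gain described above.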

          In other words, rather than directly transmitting any physically-meaningful "data", that might actually be desired (and subjected to an attempt at "measurement") by some intended receiver, an emitter must instead, deliberately transmit the only type of "signal" (random "noise-like" sequences with suitably high processing gain) that can ever be unmistakably detected by any receiver. The actually desired "data" can subsequently be deduced (not measured!), via an a priori agreed upon "encoding" of which data (such as a bit-value = 1, or "spin up") corresponds to which a priori known, random noise sequence.

          The issue of concern in quantum physics, is what happens when the sequence duration and its bandwidth, are simultaneously reduced to the minimum-possible time-bandwidth product, that would still enable the sequence to be just barely detectable (in the presence of "channel noise"), even under the most optimal conditions possible (the receiver has complete a priori knowledge of the optimal matched filter etc.) That is what the Heisenberg Uncertainty Principle is ultimately all about. And that is what is being demonstrated in my paper - and even though the matched filter being used, in the paper, is not actually the optimal filter, it nevertheless is already "good enough" to reproduce the entire quantum correlation curve (not just the few points produced by typical "Bell tests"), with detection efficiencies greater than that which has been supposedly "proven" to be the maximum possible, for any conceivable, classical system (AKA "local realism").

          Rob McEachern

          Hi Steve, thanks for developing this, it is well explained. I like the works of Dirac; the matter-action of course seems relevant. We need to know more because, as I said, we unfortunately have limitations in knowledge about the main origin of our reality, and we also do not really know these fundamental mathematical and physical objects. Your explanations were a pleasure to read; I am studying at the same time. Thanks again. Regards