I'm opening this thread shamefully late. It concerns recent FQXi-funded research by Gerardo Adesso and colleagues, whose work was covered on Phys.org -- as spotted at the time by John Merryman.

We will, of course, be covering Adesso's work in more detail on the site as we profile his grant in depth. But in the meantime, here's a space to discuss the team's exciting paper in Physical Review Letters.

From Lisa Zyga's article on Phys.Org:

"Quantum coherence and quantum entanglement are two landmark features of quantum physics, and now physicists have demonstrated that the two phenomena are "operationally equivalent"--that is, equivalent for all practical purposes, though still conceptually distinct."

"The physicists arrived at this result by showing that, in general, any nonzero amount of coherence in a system can be converted into an equal amount of entanglement between that system and another initially incoherent one. This discovery of the conversion between coherence and entanglement has several important implications. For one, it means that quantum coherence can be measured through entanglement. Consequently, all of the comprehensive knowledge that researchers have obtained about entanglement can now be directly applied to coherence, which in general is not nearly as well-researched (outside of the area of quantum optics)."

Read more here.
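
For a toy feel for the conversion described above, here is a minimal single-qubit sketch (Python with NumPy). The CNOT-on-an-incoherent-ancilla setup is an illustrative special case rather than the paper's general protocol: a maximally coherent qubit ends up exactly as entangled with the ancilla as it was coherent.

    import numpy as np

    # System qubit with maximal coherence in the computational basis: |+> = (|0>+|1>)/sqrt(2)
    plus = np.array([1.0, 1.0]) / np.sqrt(2)
    zero = np.array([1.0, 0.0])            # incoherent ancilla |0>

    # CNOT (system controls ancilla) is an incoherent operation
    cnot = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    joint = cnot @ np.kron(plus, zero)     # Bell state (|00> + |11>)/sqrt(2)

    # Entanglement: trace out the ancilla, take the von Neumann entropy
    rho = np.outer(joint, joint).reshape(2, 2, 2, 2)
    rho_sys = np.trace(rho, axis1=1, axis2=3)
    p = np.linalg.eigvalsh(rho_sys)
    p = p[p > 1e-12]
    entanglement = -np.sum(p * np.log2(p))

    # Relative entropy of coherence of |+>: entropy of its diagonal (the state is pure)
    d = np.abs(plus) ** 2
    coherence = -np.sum(d * np.log2(d))

    print(coherence, entanglement)         # both ~1.0: one bit of coherence -> one ebit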

It seems to me that saying two things are " ... equivalent for all practical purposes, though still conceptually distinct" is a restatement of the measurement problem.

Here, a miracle occurs.

    Tom,

    As I have pointed out before, ab+ac and a(b+c) are "equivalent for all practical purposes, though still conceptually distinct." Unfortunately, conjuring up "interpretations" of the mathematical equations of quantum theory is not a "practical purpose", and such interpretations remain "conceptually distinct" from the measurement problem. One cannot measure or observe any differences between the results produced by mathematical identities, even when the physical entities modeled by those identities are indeed "conceptually distinct".

    Rob McEachern

    Rob,

    Can you show me that the case of mathematical identities differs from the measurement problem? It is otherwise superfluous to say that two things are "conceptually distinct" while being operationally equivalent.

    If indeed mathematical identities are all that is required, then the model is complete -- things that are not differentiable are identical. But since the measurement problem remains, the model is incomplete. This looks like a kludge.

    I guess I agree with you. Except to say I do not see that the measurement problem is "conceptually distinct" from attempts to show that quantum coherence depends on entanglement. Entanglement is not an observable.

    Choose coherence or choose entanglement. Miracles are superfluous, like phlogiston.

    Tom,

    "Entanglement is not an observable." Causes are generally not observable. Only their effects are observable.

    Suppose one measured some data produced by a black box, and found that it "obeyed" the equation 2sin(a)cos(b) = sin(a+b) + sin(a-b).

    Which is the correct, physical cause for this effect?

    1) inside the black box is a circuit that sums two sinusoids, to produce a superposition.

    2) inside the black box is a circuit that modulates one sinusoid with another.

    The measurement can decide which mathematical identities completely describe the observable data. But those mathematical identities cannot uniquely determine which type of device/circuit created the data.

    These two things ALWAYS produce measurably equivalent results, but they are not "conceptually equal".
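
    To make that concrete, here is a minimal numerical sketch (Python with NumPy; the particular frequencies are arbitrary choices) of the two circuits producing indistinguishable outputs:

        import numpy as np

        t = np.linspace(0.0, 1.0, 1000)
        a = 2 * np.pi * 50 * t          # phase of a 50 Hz sinusoid (arbitrary)
        b = 2 * np.pi * 5 * t           # phase of a 5 Hz sinusoid (arbitrary)

        # Circuit 1: sum two sinusoids (a superposition)
        superposition = np.sin(a + b) + np.sin(a - b)

        # Circuit 2: modulate one sinusoid with another
        modulation = 2 * np.sin(a) * np.cos(b)

        # Identical to machine precision, for any a and b
        print(np.max(np.abs(superposition - modulation)))   # ~1e-15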

    "things that are not differentiable are identical." The two things above are highly differentiable. If one opens the black box, the circuits will be observed to be obviously different. They just cannot be differentiated by observing only the outputs they produce.

    Why do such things matter? The Fourier transform describing a Histogrammer is mathematically identical to the Fourier transform used to describe superpositions in QM. The former provides a "conceptually distinct" explanation for why QM measurements correspond to probabilities. The latter does not, even though the two are mathematical identities.

    Rob McEachern

    "The Fourier Transform describing a Histogrammer, is mathematically identical to the Fourier Transform, used to describe superpositions in QM. The former provides a "conceptually distinct" explanation, for why QM measurements correspond to probabilities. The latter does not, even though the two are mathematical identities."

    Which is exactly why one model is superfluous. If one assumes a probabilistic outcome, one engineers a probabilistic model. One can as easily devise a deterministic model. "Conceptually equal" just isn't a scientific concept.

    The unanswered questions are metaphysical. I don't see how one answers them with another metaphysical idea.

    There are no certain answers to metaphysical questions. Nor can causes be certainly identified for physical effects. All that can be done, with certainty, is to show that a theory either agrees or disagrees with the observations. Even when it does agree, it may not agree for the reasons one supposes that it does. That is the problem with metaphysical questions.

    "If one assumes a probabilistic outcome." One does not need to assume anything. One merely has to take note of the perfect correlation between some computed results, and some observed behaviors. All the various assumptions come into play, only when one attempts to "interpret" the cause of such observed effects.

    The QM probability models make predictions that can be compared to observations.

    "One can as easily devise a deterministic model." Perhaps. But, what verified predictions have they made, that differ from those made by the probabilistic models?

    Rob McEachern

      "'One can as easily devise a deterministic model.' Perhaps. But, what verified predictions have they made, that differ from those made by the probabilistic models?"

      What verified predictions have been made by probabilistic models?

      The prediction comes post observation.

      One of the more famous QM predictions was Dirac's prediction of the existence of the positron, before it was observed.

      "The prediction comes post observation." All theories are preceded by at least some observations - those that inspired the theory. But subsequent use of the theory, has enabled many predictions of future observations. Virtually all sophisticated, communications systems use probability models to predict error (both random and non-random) performance; including predictions of what types of techniques will enable the detection and mitigation of those errors.

      Statistical Mechanics has also developed many "laws" that resulted in theoretical descriptions of yet-to-be-observed phenomena.

      Rob McEachern

      Dirac's prediction is from special relativity. It is already implied in E^2 = m^2c^4 + (pc)^2. It's just that he took seriously the negative-energy solutions (for zero momentum, E = ±mc^2) that relativists had dismissed as nonphysical. Symmetry demands them, however.

      Not all theories are based on prior observations. Newton's dictum, hypotheses non fingo, favors correspondence of mathematical theory to physical result, and drove the whole of rational science for 300 years until scientists got into the bad habit of imposing metaphysical ideas on events they can't explain rationally.

      Predicting error is useful for control systems with arbitrary boundaries. For nature at its foundation, unboundedness is a primary property.

      "It is already implied in E^2 = m^2c^4 (pc)^2." That does not even imply the existence of an electron.

      I read Newton rather differently; hypotheses non fingo simply means he did not care to speculate about the cause of gravity; "why" it was an inverse square law. His theory was based on previous observations, both celestial and terrestrial.

      Nature, as a whole, may be unbounded, but nothing within it appears to be.

      Rob McEachern

      You could be right. Maybe "electron" is only the name we give to a particle of fundamental negative charge. The equation does, however, imply symmetry, no matter what we call the particle.

      You may also be right about Newton. If the cause of gravity is metaphysical, it will never have an explanation, so better not to deal with it as a scientific issue.

      But your most interesting statement: "Nature, as a whole, may be unbounded, but nothing within it appears to be," is the crux of our dilemma. A physically real spacetime, vice a mathematical artifact we call spacetime, would solve the problem. I find myself in agreement with Petkov's essay.

      Petkov stated that: "Determining which mathematical entities have counterparts in the external world is indeed quite challenging partly because it is a bit atypical task for physicists since it does not involve calculations."

      I would argue that it is not "because it is a bit atypical", but because it is neither falsifiable nor verifiable; the problem is not that "it does not involve calculations", rather, it is that it does not involve observation - of the supposed cause for an observed effect.

      Rob McEachern

      It's verifiable, although not falsifiable.

      This is the point of my essay's section on the correspondence principle and Popper falsifiability.

      Continuous one-to-one correspondence without a sign change cannot be falsified. Continuous functions in Minkowski spacetime, however, are reversible. The challenge is to develop the mathematics that shows where the function reverses, because the chirality is externally identical. I think the key to observation is in the quantum jump phenomenon.

      Tom,

      Your essay concludes with:

      "If reality is objective and not observer-dependent and if theory is objectively correspondent to experimental results,..."

      I would say that theory can only correspond subjectively to experimental results, since some experimental results (near the Shannon limit) can only be obtained subjectively; the subject/observer must know, a priori, what and how to observe, in order to observe anything of any significance at all.

      Rob McEachern

      While that may be true, Rob, we are continually finding more sophisticated ways to observe and learn. Better mathematical models guide us, and we don't know where or if they limit out.

      If we can keep extending domain and range by closed form, we can make exact -- not probabilistic -- measurements.

      Rob,

      Tom argued: "If reality is objective and not observer-dependent and if theory is objectively correspondent to experimental results,..."

      I consider reality not observer-dependent but, by definition, the conjecture of something objective, with the future nonetheless open to an incalculable plurality of what you are calling initial conditions. While I agree with most of your points, I would like you to comment on MP3 vs. the Fourier transform.

      Eckard

      Tom,

      I'm not sure what you mean by an "exact measurement", except in the sense that I said - that the observer knows exactly what to measure, and what to ignore. But even then, probability can come into play when you consider sequences of measurements, determining/estimating the entire sequence "at once" rather than as individual measurements - like recognizing a whole word rather than the individual letters (much less fonts) making it up.

      I think that we can indeed "keep extending domain and range by closed form", but the question remains, is the extended knowledge useful for understanding the physical world - is it about math, or physics?

      Another issue is with "closed form". Wiki gives the following definition: "In mathematics, a closed-form expression is a mathematical expression that can be evaluated in a finite number of operations."

      While Physics may have chosen its domain to be so limited that it can be fully characterized in a finite number of operations, I'm not so sure that the rest of reality can be. In particular, I do not think that an observer with "free-will" can be characterized in that way. That is the ultimate measurement problem. It reminds me of what an author once said about writing a book - a book is never finished, it is merely abandoned. An observer can choose to abandon additional measurement attempts, but I'm not so sure that there is any other way to render measurements able to be completed in a finite number of operations.

      Rob McEachern

      Eckard,

      MP3 is a lossy data-compression technique used to encode sequences of discrete audio samples, such that when the audio is reconstructed from the MP3 coding, the human auditory system is not likely to take much notice of the losses.

      Fourier transforms are not generally "lossy", nor are they "tuned" to the representation of audio data.

      There are Fourier transforms (continuous time/continuous frequency), Fourier series (continuous time/discrete frequency) and discrete Fourier transforms (discrete time/discrete frequency).

      Both techniques are "after the fact", in that the data (at least a block of it) must already be "in hand" in order to compute the representation.
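
      A minimal sketch of that lossless/lossy distinction (Python with NumPy; the coefficient-discarding step is a crude stand-in for perceptual coding, not actual MP3):

          import numpy as np

          rng = np.random.default_rng(0)
          x = rng.standard_normal(1024)       # a block of "audio" samples

          # The discrete Fourier transform is invertible, hence lossless
          X = np.fft.fft(x)
          print(np.max(np.abs(x - np.fft.ifft(X).real)))    # ~1e-14: machine precision

          # A lossy step: zero out the smallest 75% of the coefficients
          keep = np.abs(X) > np.percentile(np.abs(X), 75)
          x_lossy = np.fft.ifft(np.where(keep, X, 0)).real
          print(np.max(np.abs(x - x_lossy)))                # O(1): information is gone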

      Rob McEachern

      Robert,

      Your posts to Tom and Eckard at 18:26 and 18:40 were helpful to me in getting a more general idea of your focus. I'll read you at times in various discussions but am ill-equipped to follow many technicalities as you present them. Glad to have you state, more simply, what is obvious to you.

      So can we synopsize by saying that, on the other side of the coin, even if we were to finitely determine a true objective reality, we could not know with any certainty that we had done so in all totality? That our free will and curiosity can both make up things that are not, and disable us from recognizing the tangible? Not to interject, just looking on, :-) jrc