• [deleted]

Dear colleagues,

I found the essay quite stimulating. Congratulations.

Regarding the issue at hand here, I view very favorably the idea that Quantum Gravity could be behind the modifications that look effectively like CSL.

In that case I think the issue of energy conservation should be considered in a rather different light. In General Relativity there are no generic laws of energy conservation (energy becomes a well-defined concept only in rather special situations, such as space-times with time-like Killing fields). The only thing that is relevant in a general context is the conservation of the energy-momentum tensor (i.e. it should have zero divergence). But that, again, relies on physics taking place in a well-defined space-time metric. What form whatever is left of such notions will take in a situation where the metric is ill defined, or fuzzy, or fluctuating is, I believe, anybody's guess.

5 days later
  • [deleted]

Interesting essay. Nice job.

    Dear authors

    I really like your essay and the way it tackles a crucial issue for present-day physics that so many choose to sweep under the carpet. You suggest "1. Given a system of n distinguishable particles, each particle experiences a sudden spontaneous localization (i.e. collapse of position state) with a mean rate lambda, to a spatial region of extent r_C." I agree that this needs investigation. But my own view would be that this is very likely to depend on the local context, in much the same way that state vector preparation does (see here for a discussion). Thus the rate lambda would be environmentally dependent. Penrose's idea is one way that this dependence might occur; but it could be that it is a far more local effect than that (i.e. on the scale of the measuring apparatus).

    George Ellis

      Dear Angelo, Tejinder and Hendrik,

      I think the criterion for this departure from linear QM may come with horizon scales. The de Broglie relation tells us the wavelength of a particle with momentum p is λ = h/p. If we use the momentum p = mc (thinking in a relativistic sense of p = E/c) we may estimate the wavelength of a particle with Planck momentum p = m_p c = 6.5x10^5 g cm/s, for m_p the Planck mass. The wavelength of such a particle is then 1.0x10^{-32} cm, which is close to the Planck length scale L_p = sqrt{Għ/c^3} = 1.6x10^{-33} cm.
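
      As a quick numerical check of this estimate (a minimal Python sketch using standard CGS constants; the factor of 2π between the two lengths is just h versus ħ, and the numbers are order-of-magnitude only):

          # Rough order-of-magnitude check of the numbers quoted above (CGS units).
          import math

          G    = 6.674e-8      # cm^3 g^-1 s^-2
          hbar = 1.055e-27     # erg s
          h    = 6.626e-27     # erg s
          c    = 2.998e10      # cm/s

          m_p = math.sqrt(hbar * c / G)       # Planck mass, ~2.2e-5 g
          p   = m_p * c                       # Planck momentum, ~6.5e5 g cm/s
          lam = h / p                         # de Broglie wavelength, ~1.0e-32 cm
          L_p = math.sqrt(G * hbar / c**3)    # Planck length, ~1.6e-33 cm

          print(p, lam, L_p, lam / L_p)       # lam / L_p = 2*pi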

      A quantum system is measured by a reservoir of states. The superposition of states in that system is replaced by entanglements with the reservoir of states. The standard measuring apparatus is of the order of a mole, or many moles, of atoms or quantum states. This pushes the effective wavelength of the measurement, or maybe more importantly the time scale for the reduction of the measured quantum states, to an interval shorter than the Planck time T_p = L_p/c. This might mean that the measurement of quantum systems, and with it the stability of classical states (the table not being in two places at once, Schrödinger's cat), involves this limit associated with quantum gravity.

      John Wheeler discussed how there may be different levels of collapse. With gravity there is the collapse of a star into a black hole. He then said this may be related to the "collapse" (to use an overplayed buzzword) of a wave function. He said the dynamics of black hole formation and the problems with quantum measurement might well be related, or might be two aspects of the same thing. It might also be pointed out that there are theoretical connections between QCD and gravitation, where data from RHIC, and some hints from the heavy-ion work at the LHC, suggest that gluon chains or glueballs have black-hole-like amplitudes similar to Hawking radiation.

      In an ideal situation it might then be possible to maintain a system with around 10^{20} atoms in a quantum state. Departures from such an idealization may reduce this to a lower number. This does leave open the question of how the physics of superfluidity, superconductivity and related collective overcomplete or coherent states fits into this picture.

      My essay is not related to this topic directly, though my discussions on replacing unitarity with modular forms and meromorphic functions could have some connection.

      Good luck with your essay.

      Cheers LC

        Dear George,

        Thank you for liking our essay, and for your interesting viewpoint.

        What you suggest might very well be the case for a consistent theory of spontaneous wave function collapse. However, as you know, it is not what is assumed to happen in collapse models such as CSL. There, the collapse rate lambda is a uniquely fixed constant, which does not depend on anything. If collapse models were a fundamental theory, lambda would play the role of a new constant of nature. The equations of motion then tell you that, when you have a system of particles, the collapse rate of the center of mass scales with the size of the system. This scaling seems to be something like the contextuality feature you propose. But the value of lambda itself always remains the same.
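
        To make the scaling explicit, here is a rough sketch (an illustration only, not the full CSL master-equation calculation; it assumes the standard quadratic amplification for constituents localized within r_C, quotes the historical GRW value of lambda purely for orientation, and the function name is just illustrative):

            # Toy illustration of the amplification mechanism: lambda is fixed once
            # and for all, but the centre-of-mass collapse rate of N particles bunched
            # within r_C grows roughly as N^2 * lambda.
            GRW_LAMBDA = 1e-16  # s^-1, historical GRW value for a single nucleon

            def com_collapse_rate(n_particles, lam=GRW_LAMBDA):
                """Order-of-magnitude centre-of-mass collapse rate for n particles within r_C."""
                return n_particles**2 * lam

            print(com_collapse_rate(1))     # single nucleon: ~1e-16 s^-1, unobservably slow
            print(com_collapse_rate(1e18))  # ~microgram grain: ~1e20 s^-1, effectively classical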

        We are currently reading the detailed paper on quantum measurement that you mention above. Your essay here on top-down causation is fascinating. Do you have a picture of how the corresponding mathematical models can be built, specifically in the context of quantum measurement?

        Regards,

        Authors

        Thank you Daniel. We broadly agree with your viewpoint.

        Authors

        Dear Lawrence,

        1. You seem to subscribe to the idea that decoherence solves the measurement problem, if we interpret correctly what you write. We strongly object to the claim that decoherence alone provides a solution to the measurement problem. See [Adler's paper against decoherence] for a thorough criticism, which we think is convincing enough.

        2. About John Wheeler's ideas: they are certainly very appealing, and there could be much truth in them. However, they have not so far been translated into consistent mathematical models. In our essay we stick, on purpose, only to ideas which find expression in well-defined mathematical models, like collapse models and trace dynamics. Moreover, collapse models make precise predictions, which can be tested experimentally. In this way one has what we think is a perfect match between speculation, mathematical modeling and experimental analysis.

        3. Regarding superfluidity, superconductivity and related collective overcomplete or coherent states: they can be described very well within collapse models, and the answer is that they behave as we see them behaving. In other words, collapse models do not predict a (too) different behavior for such collective phenomena with respect to standard quantum mechanics. The reason is that these phenomena do not involve the *superposition of a large number of particles in two appreciably different positions in space*, the only type of superposition which is strongly suppressed by collapse models.

        Regards,

        Authors

        As noted at the beginning of your article:

        "The principle of linear superposition...Along with the uncertainty principle, it provides the basis for the mathematical formulation of quantum theory." You then suggest that it might only hold as an approximation.

        I view the problem rather differently. Fourier Analysis is the actual mathematical basis for quantum theory. Superposition and the uncertainty principle are merely properties of Fourier Analysis. In other words, they are not properties of the physical world at all, but merely properties of the mathematical language being used to describe that world. Even the well-known double-slit "interference pattern" is just the magnitude of the Fourier Transform of the slit geometry. In other words, the pattern exists, and is related to the structure of the slits, as a mathematical identity, independent of the existence of waves, particles, physics or physicists.
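
        A minimal numerical illustration of that point (my own sketch, in the Fraunhofer limit, with arbitrary units and illustrative slit dimensions; the only input is the slit geometry):

            # The far-field double-slit fringe pattern follows the magnitude of the
            # Fourier transform of the aperture function alone -- no waves or particles
            # appear anywhere in the calculation.
            import numpy as np

            x = np.linspace(-1.0, 1.0, 4096)          # aperture coordinate (arbitrary units)
            slit_width, separation = 0.02, 0.2
            aperture = ((np.abs(x - separation/2) < slit_width/2) |
                        (np.abs(x + separation/2) < slit_width/2)).astype(float)

            pattern = np.abs(np.fft.fftshift(np.fft.fft(aperture)))
            pattern /= pattern.max()                  # fringes under a sinc envelope

            peaks = np.where((pattern > np.roll(pattern, 1)) &
                             (pattern > np.roll(pattern, -1)) &
                             (pattern > 0.2))[0]
            print(np.diff(peaks))                     # fringe spacing set by the slit separation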

        For the better part of a century, physicists have been mistaking the attributes of the language they have chosen to describe nature for attributes of nature itself. But they are not the same thing.

        Fourier Analysis, by design, is an extremely powerful technique, in that it can be made to "fit" any observable data. Hence it comes as no surprise that a theory based on it "fits" the observations.

        But it is not unique in this regard. And it is also not the best model, in that it assumes "no a priori information" about what is being observed. Consequently, it is a good model for simple objects, which possess very little a priori information. On the other hand, it is, for that very reason, a very poor model for human observers; assuming that it is a good one is the source of all of the "weirdness" in the interpretations of quantum theory.

        Putting Fourier Analysis into the hands of physicists has turned out to be a bit like putting machine guns into the hands of children - they have been rather careless about where they have aimed it. Aiming it at inanimate objects is acceptable. Aiming at human observers is not.

          • [deleted]

          Dear authors,

          When reading your excellent essay, a (perhaps silly) question comes to my mind. You write: "Suppose one has prepared in a controlled manner a beam of very large identical molecules...". What I wonder is: mustn't there be an upper limit beyond which the very large (hence complex) molecules can no longer be assumed to be positively identical? Might the lack of controlled identity be the limit where linear superposition no longer holds? Might it be a question of molecular complexity, rather than size/weight?

          Best regards!

          Inger

            Thanks for that. I don't yet have a mathematical model in the case of measurement: I am thinking about it. The first step is to look at state vector preparation, which is an analogous non-unitary process, involving a projection operator depending on the local macro context. With that in place, the steps to a contextual measurement model - maybe with a new universal constant, as you say - may become clearer. But the essential comment is that the local measurement context may be the "hidden variable" (it is non-local as far as the micro system is concerned, so the non-locality criterion is satisfied). It's hidden simply because we don't usually take it into account.
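
            In symbols, the kind of non-unitary step I have in mind is just "project onto whatever subspace the local context selects, then renormalize" (a toy two-level sketch only; the particular projector here simply stands in for the contextual part that still has to be modeled):

                # Preparation as projection plus renormalization: no unitary map can do this.
                import numpy as np

                psi = np.array([3.0, 4.0j]) / 5.0          # normalized two-level state
                P_context = np.array([[1.0, 0.0],
                                      [0.0, 0.0]])         # projector picked by the local macro context

                out = P_context @ psi
                out = out / np.linalg.norm(out)            # the non-linear, non-unitary step

                print(out)                                 # [1, 0]: the state prepared as "up"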

            George

            • [deleted]

            Dear Authors,

            You are tackling one of the elephants in the physics room, and as George Ellis commented above this one is hard to sweep under the rug.

            Your solution of Continuous Spontaneous Localization [CSL] seems like a good idea to me.

            My own work points to the Planck mass as the upper limit of all quantum phenomena including superposition. See my essay for two methods that show this. My essay is "An Elephant in the Room". This is a different elephant than yours (there are plenty of elephants to go around).

            Here is a vague outline of an experiment that I believe can be performed that would correlate with your theory:

            1. Choose a crystal such as diamond to investigate. This is done because a diamond is considered to be a single molecule, independent of the number of carbon atoms.

            2. Create bins of diamonds with increasing numbers of carbon atoms, up to the Planck mass (a rough count of the atoms involved is sketched below).

            3. Test these diamond bins, for example via the University of Vienna group, for the property of interference.

            4. I suspect that interference phenomena will gradually decrease with mass and will disappear at the Planck mass. This experiment (if it can be performed) should provide confirmation of your theory.
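
            For orientation, here is my back-of-envelope count of the carbon atoms in a Planck-mass diamond (SI values, order of magnitude only):

                # How many carbon-12 atoms does a Planck-mass diamond contain?
                import math

                G, hbar, c = 6.674e-11, 1.055e-34, 2.998e8   # SI units
                amu = 1.6605e-27                             # kg
                m_planck = math.sqrt(hbar * c / G)           # ~2.2e-8 kg, about 22 micrograms
                n_carbon = m_planck / (12 * amu)             # atoms per Planck-mass diamond

                print(m_planck, n_carbon)                    # roughly 1e18 atoms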

            Good to see you in this contest.

            Don Limuti

              Inger,

              Your question is not silly at all. It is very near the heart of the issue. One need only go a little bit deeper to arrive at "the issue."

              What is the significance of the particles all being "identical" in the first place? If they remain forever identical, then they cannot change with the passage of time. If they cannot change with the passage of time, then they cannot store any information whatsoever within their internal structure.

              But a larger entity, constructed from a number of such identical particles, can store information in the relationships (like distances) between them. Entities that store information can behave towards other entities in a "symbolic" manner, and not just a "physical" manner. Even a tiny virus particle has genetic information stored within it that enables it to exhibit such "symbolic" behavior.

              What is the significant difference between "symbolic" and "physical" behavior? It is this: in physical behavior, observed data measurements are treated as "real numbers"; in symbolic behavior, they are treated like "serial numbers." Real numbers have most significant and least significant digits. Serial numbers, like credit-card numbers, do not: change one digit anywhere, and it symbolizes someone else's account number; introduce one genetic mutation, and it may code for a different protein.

              All the "interpretations" of mathematical models in physics have assumed that entities only interact "physically." That is true for entities devoid of any information storage capacity, like subatomic particles. But it is not true of macroscopic entities, especially human observers. Physical behaviors can be viewed as encoded into the equations. But symbolic behaviors are coded into the initial conditions. By ignoring the exact individual digits of the initial conditions of the information stored within complex entities, physicists have thrown the baby out with the bath-water.

              All the supposed "weirdness" in the "interpretations" of quantum theory, derives from the fact that physicists have failed to take into account that human observers interact "symbolically" with their experiments, as well as "physically."

              • [deleted]

              Dear Robert,

              Thank you very much for your enlightening reply! You gave me more than a hint of the role of information theory in physics, which I would like to follow up further. I entered this essay contest in order to have the opportunity to ask some silly questions of people who know more than I do - and who are kind enough to answer. See, if you like, my essay "Every Why Hath a Wherefore".

              You saved my day!

              Inger

              • [deleted]

              Dr. Singh and Colleagues:

              You ask an important fundamental question about quantum linear superposition. But implicit in that question is the assumption that linear superposition should be universal. Instead, I would suggest that linear superposition applies ONLY to primary quantum fields such as electrons and photons. Please see my essay "The Rise and Fall of Wave-Particle Duality", http://fqxi.org/community/forum/topic/1296. In this picture, Quantum Mechanics is not a universal theory of all matter, but rather a mechanism for generating localized particle properties from primary continuous fields, where these localized (but not point) particles then follow classical trajectories (as derived from the quantum equations). Composites of fundamental fields such as nucleons and atoms are localized composite objects WITHOUT wave properties of their own, and hence completely without linear superposition. Beams of neutrons or atoms do not require de Broglie waves for quantum diffraction from a crystal lattice, which instead reflects quantized momentum transfer between the beam particle and the crystal. Remarkably, this re-envisioned quantum picture is logically consistent and avoids quantum paradoxes. Even more remarkably, this interpretation seems to be virtually new in the history of quantum theory, although it could have been proposed right at the beginning. The FQXi contest would seem to be an ideal venue to explore such concepts, but this has drawn relatively little attention.

              Thank you.

              Alan M. Kadin, Ph.D.

                I don't think decoherence solves the measurement problem per se. It does indicate how superpositions of a quantum system are taken up by a reservoir of states through entanglement. This reduces the density matrix of the system to a diagonal matrix whose entries correspond to probabilities. Decoherence does not tell us which outcome actually happens.
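
                In the simplest possible terms (a toy numpy sketch of that standard statement, nothing more, with an illustrative overlap value): a qubit superposition entangled with two nearly orthogonal environment states gives a nearly diagonal reduced density matrix, yet the diagonal still lists two probabilities and nothing selects the outcome.

                    # Partial trace over the environment: off-diagonals shrink with the
                    # overlap of the environment states, but no outcome is selected.
                    import numpy as np

                    eps = 0.01                                   # overlap <E0|E1>
                    E0 = np.array([1.0, 0.0])
                    E1 = np.array([eps, np.sqrt(1 - eps**2)])

                    # |Psi> = (|0>|E0> + |1>|E1>)/sqrt(2), as a (system x environment) coefficient matrix
                    Psi = np.vstack([E0, E1]) / np.sqrt(2)

                    rho_sys = Psi @ Psi.conj().T                 # reduced density matrix of the qubit
                    print(rho_sys)                               # diagonals 0.5, off-diagonals ~eps/2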

                I framed this within the decoherence perspective. It seems as if the criterion for this sort of nonlinear quantum physics would be that the state reduction occurs on a time scale comparable to the Planck time. This can happen for a system of approximately 10^{18} amu or proton masses. This might be the maximal size at which a system can have quantum properties.

                Cheers LC