• [deleted]

This is only a chat on a blog, not a complete theory: I read your article and started thinking about your interesting essay.

I think that, because of the third law of thermodynamics, a single quantum wave function can exist for a macroscopic object: a superconducting apple (a superconducting Newton's apple) is a quantum object that produces tunneling, double-slit interference and other quantum effects.

The problem with a real apple is the unattainability of absolute zero, because of phonon oscillations (thermal absorption); is it possible to build a material transparent to phonons, like diamond, or glasses, or an irregular-lattice material (only apple-shaped)?

Do there exist in the Universe macroscopic objects near absolute zero?

I think that a black dwarf is a macroscopic object with a single quantum wave function that emits like a single neutron in a gravitational field (hypothetical observable quantum jumps of the black dwarf).

Regards

Domenico

    Dear Dr. Singh,

    I certainly agree that 'ugly as sin' is not a scientific characterization, and that all phenomenological models (MOND, etc) share this feature to some degree. I further agree with your reasons for taking GRW seriously, so it really is just a case of "one may not like the explanation". I do like your explanation for 'restoring' energy conservation.

    I also agree that "The violation of Bell inequalities is an experimental fact". It is the assumptions underlying Bell's inequality that I believe to be wrong. Because this topic has been discussed in great detail on other threads, I will not clog your space here. Thank you for agreeing to read my essay, I look forward to your comments.

    Best,

    Edwin Eugene Klingman

    • [deleted]

    Tejinder:

    Way over my head, but interesting. Does nature recognize fancy mathematics? My essay is perhaps overly simplified, but it addresses the real problem of physics: wherein lies "consciousness"? Very murky, but emergentism (growth) and panpsychism (memory) are suggested properties that align them with the probabilities of a 1-D, 2-D, and 3-D geometric world. See:

    To Seek Unknown Shores

    http://fqxi.org/community/forum/topic/1409

      • [deleted]

      Dear authors,

      there is little to read about decoherence theory as an explanation of why macroscopic superpositions are so difficult to achieve. What do you think about this theory and its recent experimental tests that confirm this framework?

      I would be happy to get some answers.

      Best regards,

      Michael Lee

        • [deleted]

        A clearly written essay which, unlike many others proposed here, concerns physics and not science fiction.

        One may like these models or not, but one has to acknowledge that they give a logical and fully consistent explanation of the failure of the superposition principle for macroscopic objects.

        B. V. Oman

          Hello Michael,

          You may kindly want to have a look at the article by Stephen Adler at

          http://arxiv.org/abs/quant-ph/0112095

          and Section I of our review article

          http://arXiv.org/abs/arXiv:1204.4325

          • [deleted]

          Dear colleagues,

          I found the essay quite stimulating. Congratulations.

          Regarding the issue at hand here, I view very favorably the idea that Quantum Gravity could be behind the modifications that look effectively like CSL.

          In that case I think the issue of energy conservation should be considered in a rather different light. In General Relativity there are no generic laws of energy conservation (energy becomes a well-defined concept only in rather special situations, such as space-times with time-like Killing fields). The only thing that is relevant in a general context is the conservation of the energy-momentum tensor (i.e. it should have zero divergence). But that, again, relies on physics taking place in a well-defined space-time metric. What will be the form of whatever is left of such notions in a situation where the metric is ill-defined, or fuzzy, or fluctuating, is, I believe, anybody's guess.

          • [deleted]

          Interesting essay. Nice job.

            Dear authors

            I really like your essay and the way that it tackles a crucial issue for present-day physics that so many choose to sweep under the carpet. You suggest "1. Given a system of n distinguishable particles, each particle experiences a sudden spontaneous localization (i.e. collapse of position state) with a mean rate lambda, to a spatial region of extent r_C." I agree that this needs investigation. But my own view would be that this would be very likely to depend on the local context, in much the same way that state vector preparation does (see here for a discussion). Thus the rate lambda would be environmentally dependent. Penrose's idea is one way that this dependence might occur; but it could be that it is a far more local effect than that (i.e. on the scale of the measuring apparatus).

            George Ellis

              Dear Angelo, Tejinder and Hendrik,

              I think the criterion for this departure from linear QM may come with horizon scales. The de Broglie relation tells us the wavelength of a particle with momentum p is λ = h/p. If we use the momentum p = mc (thinking in a relativistic sense of p = E/c) we may estimate the wavelength for a Planck-momentum particle, p = m_p c = 6.5x10^5 g cm/s, for m_p the Planck mass. The wavelength for such a particle is then 1.0x10^{-32} cm, which is close to the Planck length scale L_p = sqrt(Għ/c^3) = 1.6x10^{-33} cm.

              A quantum system is measured by a reservoir of states. The superposition of states in that system is replaced by entanglements with the reservoir of states. The standard measuring apparatus is on the order of a mole or many moles of atoms or quantum states. This then pushes the effective wavelength of this measurement, or maybe more importantly the time scale for the reduction of the measured quantum states, to an interval shorter than the Planck time T_p = L_p/c. This might mean that measurement of quantum systems, and associated with that the stability of classical states (the table not being in two places at once, and Schrödinger's cat), involves this limit that is associated with quantum gravity.
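
              As a rough numerical cross-check of the estimates above, here is a short Python sketch (CGS units; the constants are the standard values, and the variable names are just for illustration):

              # Rough numerical check of the estimates above (CGS units).
              import math

              h = 6.626e-27        # Planck constant, erg s
              hbar = h / (2 * math.pi)
              G = 6.674e-8         # Newton's constant, cm^3 g^-1 s^-2
              c = 2.998e10         # speed of light, cm/s

              m_p = math.sqrt(hbar * c / G)     # Planck mass, ~2.2e-5 g
              p = m_p * c                       # Planck momentum, ~6.5e5 g cm/s
              lam = h / p                       # de Broglie wavelength, ~1.0e-32 cm
              L_p = math.sqrt(G * hbar / c**3)  # Planck length, ~1.6e-33 cm
              T_p = L_p / c                     # Planck time, ~5.4e-44 s

              print(f"p = {p:.2e} g cm/s, lambda = {lam:.2e} cm")
              print(f"L_p = {L_p:.2e} cm, lambda/L_p = {lam/L_p:.2f}, T_p = {T_p:.2e} s")

              (The ratio lambda/L_p comes out to exactly 2*pi, since h/(m_p c) = 2*pi*hbar/(m_p c) = 2*pi*L_p.)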

              John Wheeler discussed how there may be different levels of collapse. With gravity there is the collapse of a star into a black hole. He then said this may be related to the "collapse" (to use an overplayed buzzword) of a wave function. He said the dynamics of black hole generation and the problems with quantum measurement might well be related, or are two aspects of the same thing. It might also be pointed out that there are theoretical connections between QCD and gravitation, where data from RHIC, and some hints from the heavy-ion work at the LHC, suggest that gluon chains or glueballs have black-hole-like amplitudes similar to Hawking radiation.

              In an ideal situation it might then be possible to maintain a system with around 10^{20} atoms in a quantum state. Departing from such an ideal situation may reduce this to a lower number. This does leave open the question of how the physics of superfluidity, superconductivity and related collective overcomplete or coherent states fits into this picture.

              My essay is not related to this topic directly, though my discussions on replacing unitarity with modular forms and meromorphic functions could have some connection.

              Good luck with your essay.

              Cheers LC

                Dear George,

                Thanks for liking our essay, and for your interesting viewpoint.

                What you suggest might very well be the case for a consistent theory of spontaneous wave function collapse. However, as you know, it is not what is assumed to happen in collapse models such as CSL. There, the collapse rate lambda is a uniquely fixed constant, which does not depend on anything. If collapse models were a fundamental theory, it would play the role of a new constant of nature. The equations of motion then tell you that, when you have a system of particles, the collapse rate of the center of mass scales with the size of the system. This scaling seems to be something like the contextuality feature proposed by you. But the value of lambda always remains the same.
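
                Just to illustrate this scaling, here is a toy Python sketch assuming the simplest GRW-type linear amplification (effective rate = N x lambda, with the conventional value lambda ~ 10^-16 s^-1); the actual CSL amplification depends on how the mass is distributed relative to r_C, so the numbers below are indicative only:

                LAMBDA = 1e-16   # conventional GRW collapse rate per nucleon, s^-1

                def effective_rate(n_nucleons):
                    # Effective collapse rate of the centre of mass, assuming rate = N * lambda.
                    return n_nucleons * LAMBDA

                for label, n in [("single nucleon", 1.0),
                                 ("large molecule (~1e5 amu)", 1e5),
                                 ("dust grain (~1e18 amu)", 1e18),
                                 ("Avogadro-scale object", 6e23)]:
                    rate = effective_rate(n)
                    print(f"{label:26s} rate = {rate:.1e} / s   (one localization every ~{1 / rate:.1e} s)")

                The point is that lambda itself never changes; only the effective rate for the centre of mass grows with N, from one localization every ~10^16 s (hundreds of millions of years) for a single nucleon to many millions per second for an Avogadro-scale object.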

                We are currently reading the detailed paper on quantum measurement that you mention above. Your essay here on top-down causation is fascinating. Do you have a picture of how the corresponding mathematical models can be built, specifically in the context of quantum measurement?

                Regards,

                Authors

                Thank you Daniel. We broadly agree with your viewpoint.

                Authors

                Dear Lawrence,

                1. You seem to subscribe to the idea that decoherence solves the measurement problem, if we interpret correctly what you write. We strongly object to the claim that decoherence alone provides a solution to the measurement problem. See Adler's paper against decoherence, linked above, for a thorough criticism, which we think is convincing enough.

                2. Regarding John Wheeler's ideas: they are certainly very appealing, and there could be much truth in them. However, they have not so far been translated into consistent mathematical models. In our essay we stick, on purpose, only to ideas which find application in well-defined mathematical models, like collapse models and trace dynamics. Moreover, collapse models make precise predictions, which can be tested experimentally. In this way, one has what we think is a perfect match between speculation, mathematical modeling and experimental analysis.

                3. Regarding superfluidity, superconductivity and related collective overcomplete or coherent states: they can be very well described within collapse models, and the answer is that they behave as we see them behaving. In other words, collapse models do not predict a (too) different behavior for such collective phenomena with respect to standard quantum mechanics. The reason is that these phenomena do not involve the *superposition of a large number of particles in two appreciably different positions in space*, the only type of superposition which is strongly suppressed by collapse models.

                Regards,

                Authors

                As noted at the beginning of your article:

                "The principle of linear superposition...Along with the uncertainty principle, it provides the basis for the mathematical formulation of quantum theory." You then suggest that it might only hold as an approximation.

                I view the problem rather differently. Fourier Analysis is the actual mathematical basis for quantum theory. Superposition and the uncertainty principle are merely properties of Fourier Analysis. In other words, they are not properties of the physical world at all, but merely properties of the mathematical language being used to describe that world. Even the well-known double-slit "interference pattern" is just the magnitude of the Fourier Transform of the slit geometry. In other words, the pattern exists, and is related to the structure of the slits, as a mathematical identity, independent of the existence of waves, particles, physics or physicists.
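
                To make this claim concrete, here is a minimal numerical sketch (the slit width, separation and sampling below are illustrative choices, not taken from the essay): the far-field Fraunhofer pattern is computed as the squared magnitude of the Fourier transform of a double-slit aperture.

                # Minimal sketch: far-field double-slit pattern as |FFT of the aperture|^2.
                import numpy as np

                N = 4096                        # samples across the aperture plane
                x = np.linspace(-1.0, 1.0, N)   # transverse coordinate, arbitrary units
                dx = x[1] - x[0]

                slit_width = 0.02               # illustrative values
                separation = 0.20               # centre-to-centre slit distance

                # Double-slit transmission function: 1 inside either slit, 0 elsewhere.
                aperture = ((np.abs(x - separation / 2) < slit_width / 2) |
                            (np.abs(x + separation / 2) < slit_width / 2)).astype(float)

                # Far-field amplitude is the Fourier transform of the aperture;
                # the observed pattern is its squared magnitude.
                freqs = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
                intensity = np.abs(np.fft.fftshift(np.fft.fft(aperture))) ** 2
                intensity /= intensity.max()

                # cos^2 fringes spaced by 1/separation, under an envelope set by the slit width.
                print("expected fringe spacing (spatial frequency):", 1.0 / separation)
                print("central maximum:", intensity[np.argmin(np.abs(freqs))])

                Nothing in the computation refers to particles or detectors; the fringe spacing is fixed entirely by the slit geometry.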

                For the better part of a century, physicists have been mistaking the attributes of the language they have chosen to describe nature for attributes of nature itself. But they are not the same thing.

                Fourier Analysis, by design, is an extremely powerful technique, in that it can be made to "fit" any observable data. Hence it comes as no surprise that a theory based on it "fits" the observations.

                But it is not unique in this regard. And it is also not the best model, in that it assumes "no a priori information" about what is being observed. Consequently, it is a good model for simple objects, which possess very little a priori information. On the other hand, it is, in that regard, a very poor model for human observers; assuming that it is a good model for them is the source of all of the "weirdness" in the interpretations of quantum theory.

                Putting Fourier Analysis into the hands of physicists has turned out to be a bit like putting machine guns into the hands of children - they have been rather careless about where they have aimed it. Aiming it at inanimate objects is acceptable. Aiming at human observers is not.

                  • [deleted]

                  Dear authors,

                  When reading your excellent essay, a (perhaps silly) question comes to my mind. You write: "Suppose one has prepared in a controlled manner a beam of very large identical molecules...". What I wonder is: mustn't there be an upper limit where the very large (hence complex) molecules can no longer be assumed to be positively identical? Might the lack of controlled identity be the limit where linear superposition no longer holds? Might it be a question of molecular complexity, rather than size/weight?

                  Best regards!

                  Inger