Essay Abstract

Tony Leggett has suggested that quantum theory cannot be applied to complex macroscopic objects. This essay supports that idea by giving two specific examples of complex systems where this is true, because of the essential nature of the quantum measurement process, which cannot be described by standard quantum theory (none of the alternatives proposed in the end get round this limitation, in practical terms). I then place this result in the larger context of the ubiquitous occurrence of top-down causality in complex systems, which is the key process whereby genuine complexity emerges from the underlying physics. The implication is that the ability of physics to comprehend the dynamics of complex systems, such as life, is strictly limited: physics underlies and strongly constrains what happens, but in the end does not determine the unique outcome that actually occurs. This is determined by autonomous emergent higher level dynamics.

Author Bio

George Ellis is a relativist and cosmologist living in Cape Town. Recently he has been considering the way in which physics underlies the functioning of complex systems, including human life.


  • [deleted]

Dear George Ellis,

For teleportation of a macroscopic body we must transform it into a quantum object. In fact, quantum theory must be applied to complex macroscopic objects for teleportation. For this purpose we must cut off all interactions between the body and its environment by creating absolute isolation. I propose to envelop the body with a closed hole surface.

Can quantum theory be applied to complex macroscopic objects using this method?

Sincerely, Leshan

  • [deleted]

I thought your article was insightful; it points to a serious problem with the foundations of physics. I think this is germane to problems of Ising models and lattice gauge systems in particular. I will just mention what I have written in my essay:

http://www.fqxi.org/community/forum/topic/494

where I look at how the cosmological constant is set according to a quantum phase transition. The argument stems in part from black hole complementarity, but this also leads to the matter of quantum phase transitions. This is a situation where quantum fluctuations are strong enough in a low-temperature system that the Euclideanized version of time acts as the temperature in determining the order of the system, t ~ ħ/kT. The ordering has to do with the F_4 fluxes on D7-branes, which determine the cosmological constant. The flux of the 4-form on the brane is a combinatorial structure, which gives this "network" a computational (quantum computational) nature; the associated problem turns out to be NP-complete. Take a look at Abhijnan Rej's essay

http://www.fqxi.org/community/forum/topic/505

for more details. So this leads to an Ising spin-like structure or statistics of states which settles on a quantum critical point, or a renormalized value that deviates from that.

This seems to my mind to lead to a twist on the issue of complex networks. It seems to me that the occurrence of the classical domain is connected to the occurrence of feedback structures. It is at this point that the quantum system enters into decoherence. Quantum computers only run bounded-error quantum polynomial time (BQP) algorithms, which encompass polynomial-time algorithms. This then suggests there is some level of algorithmic complexity where quantum systems fail to function properly and the system becomes increasingly unstable to decoherence. This is, I think, tied to the mass gap, for massless particles such as photons are remarkably stable against decoherence. It is massive particles and systems which are the most vulnerable to decoherence.

Thanks for your thoughts here,

Cheers LC

  • [deleted]

Dear George Ellis,

I recall your essay on time as rather reasonable. My essay quotes a book by Schulman in a context which suggests that his strange frontier of physics is simply based on a mathematical flaw.

Regards,

Eckard

  • [deleted]

Dear Dr. Ellis,

I have read your superb essay "On the applicability of quantum physics". It is an excellent summary of the current state of the art of quantum mechanics and its inability to connect with the macro world. I also respect the efforts that led to your Templeton Prize. My lifetime motivation to study physics, engineering, and molecular biology has been mainly spiritual, but with no hocus-pocus.

Clearly, your work shows something fundamental is missing. Whatever is missing is either at the "top" or the "bottom." The problem today is that there are many issues and many mechanisms, some top-down and some bottom-up. I think you would agree that one missing mechanism would be better than many. It would be best if it were 1) already in the empirical record, 2) able to bridge the spiritual gap without hocus-pocus, and 3) consistent with quantum computation, quantum mechanics, entanglement, relativity, measurement, QED, embryogenesis, etc.

I contend that quantum computation, as nature uses it, is the fundamental "missing variable." This natural quantum computation is a network of all atomic computational systems. This natural network occurs in "hidden time." Kurakin resolves the single- and double-slit experiments via hidden time. As Kurakin says, "The whole Universe makes the choice" of which atomic detector "measures", not the photon per se, without violating Bell's theorem (Kurakin, 2004).

In section 6 of my FQXi essay, I summarize my proposed solution to the dilemmas your essay describes. It presents one relatively simple mechanism that is both top-down and bottom-up. It is top-down in that nature's use of quantum computation is all-encompassing and in that sense is spiritual. It is bottom-up because it operates at the quantum level of proteins, where free will resides. It is 100% scientific.

I hope you find my essay interesting and would be honored by any comments and suggestions.

Sincerely,

George Schoenfelder

Reference

Kurakin, Pavel V. (2004). Hidden variables and hidden time in quantum theory. Keldysh Institute of Applied Mathematics, Russian Academy of Sciences.

Dear Prof Ellis,

First, I must admit to being a 'fan' of yours since reading 'Large Scale Structure . . .'

Second, as you might see from my essay, I like the idea of the hierarchical network structure you propose.

It seems to me that networks provide a natural framework for understanding how deterministic systems (modelled by Turing machines) can become non-deterministic when connected into a network, since one machine interrupting another can send it off on an entirely different trajectory (cf. Turing's 'oracle machine').

One can then see superposition as an algebraic representation of a network in which many different states are stored in the memories of different machines. When one connects to the network, one may 'observe' these different machines. So we may look at the internet as a superposition of files ('state vectors').

Further, we may model the onset of determinism in a dynamic system by the halting of a Turing machine in a particular state which is not visible to the observer while that machine is running. This is our normal experience when using a computer and waiting for it to write some output.

The behaviour of a layered network is constrained both by the users of the network and by the software available within the network, thus accommodating the top-down and bottom-up approaches.

Finally, it seems to me that quantum mechanics becomes rather intuitive when we look at it in terms of the conversations in the bar rather than the hard-sphere dynamics of the pool table.

All the best,

Jeffrey Nicholls

George

I agree with your conclusion that complex entities cannot currently be addressed by established physics because of "Top Down" issues. A couple of points:

1. Physics has 2 sides: the empirical experimental side and the theoretical side expressed in some formalism (logic & maths). By "physics" you effectively mean modern "mathematical" theoretical physics. Your conclusion applies to the inadequacy of existing logic and maths, the formalisms side of physics.

2. The top-level entity need not be biological etc. It can be an ordinary physical object with "emergent" "Top Down" properties. Two Nobel physicists (Anderson, "More is Different", and Laughlin, "A Different Universe") have written extensively on this. "Top Down" and "Emergent" are effectively synonyms.

3. The fundamental issue is the failure of our best formalisms to provide the tools we need to handle "Emergence". Our Logic and Maths are simply not good enough. Although they are very good, they limit science to Reductionism.

4. The problem is not confined to the maths. There are major deficiencies in the tools of logic currently available.

This issue is one of the 10 points in my essay which substantiate a similar conclusion to yours. Only I emphasise that the limit on physics is a consequence of deficient formalisms.

  • [deleted]

Dear George,

yours is an excellent attempt to work out the scope of present-day physics, where one may attempt to explain complex processes outside physics in a fundamental way, using quantum and classical physics together. Nice job done, as it may enable someone to work out a novel way out of our present 'confinement'. Let me introduce an obscure term, 'consciousness', that controls the way the human mind actually works. Sometimes the concepts and precepts we use limit the way physics grows! Let me start with space and time and the ideas we have built around them, as being linear and homogeneous in nature. These are concepts that help us develop physics bit by bit. We have no experimental proofs for them, except through the testing of predictions of the physics that we develop using these concepts. Thus, I see a huge importance for the 'development' of the human mind. To me it can do wonders if only the right concepts get developed. It is human nature generally to try the paths that are being followed / are in fashion. Thus path breaking is just not done in the bold way required.

In a certain way, the essay of Tejinder Singh provides an approach he calls the mesoscopic region, in between quantum and classical physics. I would like to go further and say that we need to harmonise, in a linear continuous way, all the existing theories in order to end up with one for everything. Only then do we come to understand nature. The latter is simple, but our minds are complex, and the very nature of mind is basically represented by 'wandering'. The truth does not wander, and there lies the dilemma. We have definitely succeeded in physics in explaining the truth in a better and better 'relative' manner. But, just like the exponential, we are never able to reach finality.

Einstein was philosophically very right in suggesting that he does not like randomness and disorder to prevail as per QM. So this fight between the determinacy of classical physics and the indeterminacy of QM needs to be harmonised. Experiments have their limitations due to technology, though there too we are constantly improving the sensitivity and accuracy of our measurements. But then all such things follow a set routine we call scientific methodology, which has been built up in a 'historic' manner.

We need to break this stalemate by broadening our horizons through a paradigm shift! What that shift is, or will be, should take our maximum attention and effort in physics and the sciences.

In my essay on this forum, at the end, I happen to suggest a marriage between physicists and life scientists, not only through providing the latter with advanced analysing instruments but through actual active collaboration between them in working teams. It may have happened sporadically; it needs to happen vigorously. Then the limitations of present-day physics in providing breakthroughs in the life sciences may actually disappear, and a new area of non-specialised science may appear. Specialities have advantages, but we forget their fundamental disadvantage, which lies in their limited area of operation. The infiniteness of the universe needs to be tackled by the 'infiniteness' of the human mind, so that it has maximum overlap with the cosmic mind that has originated us all!

Dear Professor Ellis,

Thank you for these original thoughts about how macroscopic behavior emerges from complex assembled systems. Focusing on nonlinearity, rather than on decoherence, makes sense, although both provide valid explanations. The thermostat and adaptive selection are good examples. Do you have other examples that are nearer to the quantum world? I was thinking of nonlinear optics. If the field intensity (i.e. photon intensity) exceeds a given threshold, media respond nonlinearly, as if above a certain degree of complexity one-to-one interactions no longer govern the physical behavior of the system. The simple interactions are 'buffered' by delayed interactions, which open up possibilities for feedback. These delayed interactions must be taken into account in the dynamical description.
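To make this concrete (a standard relation from nonlinear optics, added here purely for illustration): the induced polarization of a medium is conventionally expanded in powers of the field,

$$ P \;=\; \varepsilon_0\big(\chi^{(1)}E + \chi^{(2)}E^{2} + \chi^{(3)}E^{3} + \cdots\big), $$

so below threshold the response is effectively linear, while at high intensity the higher-order terms, and with them feedback-like effects, come into play.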

I also appreciated your appendix, which is a very good condensed summary of the principles of quantum measurement. Just after equation (4), personally, I would moderate the formulation in order to be closer to experimental physics: "Immediately after a measurement the state of the system is known to be a specific eigenstate, and any immediate further measurements will give _eigenstates and eigenvalues very close to the first result_." The effective results are indeed always subject to experimental uncertainty. By assumption, we consider these results to be the same as immediately before, but I don't know of any experiment that validates this assumption.

By the way, I promote the FQXi contest on my twitter profile and my blog. Would you mind if I post quotes of yours, linking to your essay?

Regards,

Arjen

  • [deleted]

Dear Arjen,

Hello. I wanted to add a few remarks to the interesting observations you make above. Decoherence by itself cannot explain measurement, and must be accompanied by the many-worlds interpretation. This is because while decoherence destroys interference amongst alternatives, it preserves the superposition of them, since it works within the framework of standard linear quantum theory. Many-worlds branching is needed so that we perceive only one branch, and hence it appears to us as if the superposition has broken down.
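To spell this out in symbols (a standard sketch of the argument, with notation not taken from the essay): if the system entangles with its environment,

$$ |\psi\rangle \;=\; \sum_i c_i\,|i\rangle\otimes|E_i\rangle, \qquad \langle E_i|E_j\rangle \approx \delta_{ij}, $$

then the reduced density matrix of the system alone becomes effectively diagonal,

$$ \rho_{\rm sys} \;=\; \mathrm{Tr}_E\,|\psi\rangle\langle\psi| \;\approx\; \sum_i |c_i|^{2}\,|i\rangle\langle i| , $$

so the interference terms are suppressed; but the total state is still a superposition of all the branches, and no single outcome has been selected.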

On the other hand, nonlinearity can in principle explain wave function collapse by itself, without having to invoke many worlds.

Regards,

Tejinder

  • [deleted]

I might not get it, but:

1. Isn't it the case that quantum feedback (which goes under the name of "quantum control") is quite an active area of research, and, so far as I can see, there is nothing to prevent feedback such as that in a thermostat from being accommodated within Q.M.? So far as I know, experiments have been done on this (in quantum optics) and it works nicely, again according to the usual rules of Q.M.

2. Nonlinearity in Q.M.: there are plenty of nonlinear Schrödinger equations in Q.M. They appear when there is interaction between the components of the system. The resulting macroscopic wavefunction (e.g. Gross-Pitaevskii) obeys the same rules when it comes to measurement, for example. And this is why, just by adding nonlinearity, you cannot get rid of the conceptual problems of Q.M.

Sorry, I just don't understand why nonlinearity is a problem for standard Q.M.: once interaction is properly taken into account (such as in second quantization) it emerges naturally.

Thank you all for the many posts. I regret that due to pressure of work I can't reply in detail to all of them, except to say: yes, please refer to my essay in a blog if you wish. However I must respond to the interesting queries by Darwin. I will do so in two parts, so that my reply is not too long.

Quantum control

In any quantum control process, the result of a measurement is used to determine the future evolution of the system. Such devices can be constructed and function beautifully; but just as in the Copenhagen interpretation of any ordinary measurement, their operation can only be understood through a mixture of classical and quantum theory, because the measurement part of the operation can't be described by the Schroedinger equation. This is true even when continuous measurement takes place, because (see arXiv:quant-ph/0611067) such measurements involve first a unitary interaction between the system and an auxiliary system, and then a von Neumann measurement of the auxiliary system. It is the latter event that is not described by standard quantum theory. The theory of quantum feedback control shows how the effect of the measurement changes the master equation for the system state (equations (4) and (5) in arXiv:quant-ph/9912107), but not how the measurement takes place. There is, to be sure, a phenomenological equation for the measurement process (equation (7) in arXiv:quant-ph/9912107), but that equation is not linear: hence it does not satisfy the superposition principle, and is not derivable from the Schroedinger equation. This is where the non-quantum aspect of the process is implicitly introduced.
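For readers who want to see where the nonlinearity enters, a representative conditional (stochastic) master equation from continuous-measurement theory (a generic textbook form, not necessarily identical to the equation cited above) is

$$ d\rho \;=\; -\frac{i}{\hbar}[H,\rho]\,dt \;+\; \Big(c\rho c^{\dagger} - \tfrac{1}{2}\{c^{\dagger}c,\rho\}\Big)\,dt \;+\; \sqrt{\eta}\,\Big(c\rho + \rho c^{\dagger} - \mathrm{Tr}\big[(c+c^{\dagger})\rho\big]\,\rho\Big)\,dW, $$

where c is the measured operator, η the detection efficiency and dW a Wiener increment. The last term contains Tr[(c+c†)ρ] ρ, which is quadratic in ρ: the conditioning on the measurement record is exactly what makes the equation nonlinear.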

continuing

The non-linear Schroedinger equation:

From Wikipedia: "In theoretical physics, the nonlinear Schrödinger equation (NLS) is a nonlinear version of Schrödinger's equation. It is a classical field equation with applications to optics and water waves. Unlike the Schrödinger equation, it never describes the time evolution of a quantum state. It is an example of an integrable model." The reason it does not describe the time evolution of a quantum state is that it does not obey the superposition principle, which is central to standard quantum theory; see http://en.wikipedia.org/wiki/Quantum_superposition ("Quantum superposition is the fundamental law of quantum mechanics. It defines the collection of all possible states that an object can have."). Hence, for example, it does not describe either interference or entanglement.
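To see this explicitly, consider the cubic nonlinear Schrödinger equation (a standard example, not taken from the essay),

$$ i\,\partial_t \psi \;=\; -\tfrac{1}{2}\,\partial_x^{2}\psi \;+\; \kappa\,|\psi|^{2}\psi . $$

If ψ_1 and ψ_2 are both solutions, their sum generally is not, because |ψ_1+ψ_2|²(ψ_1+ψ_2) differs from |ψ_1|²ψ_1 + |ψ_2|²ψ_2; the cubic term spoils linearity, and with it the superposition principle.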

In any case, when proposing any equation as a description of a physical system, you have to show it actually correctly predicts the dynamics of the system. Neither the linear nor the non-linear Schroedinger equation describes either a feedback control system or adaptive selection, neither of which obeys the superposition principle. Taking the former case, suppose we could describe the temperature by a quantum state |Temp>. The dynamics of a feedback control loop are such that for an initial state |Temp_1>, we find |Temp_1> --> |Temp_goal>, where |Temp_goal> is the desired output temperature. For a different initial state |Temp_2>, we again find |Temp_2> --> |Temp_goal>. Superposition requires that if input A_1 produces response X_1 and input A_2 produces response X_2, then input (A_1 + A_2) produces response (X_1 + X_2) (see http://en.wikipedia.org/wiki/Superposition_principle). This simply does not happen when we consider a feedback control system. You can't describe what is happening by writing down a Schroedinger equation for |Temp>, so standard quantum theory does not apply to such feedback systems.
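The failure of superposition here is easy to check numerically. The following is a minimal sketch, assuming a simple proportional-control thermostat (an illustrative model, not one from the essay):

    def thermostat(T, T_goal=20.0, k=0.5, steps=50):
        """Drive temperature T towards T_goal by proportional feedback."""
        for _ in range(steps):
            T = T + k * (T_goal - T)   # correction proportional to the error
        return T

    T1, T2 = 5.0, 35.0
    response_to_sum = thermostat(T1 + T2)               # input (A_1 + A_2)
    sum_of_responses = thermostat(T1) + thermostat(T2)  # response (X_1 + X_2)
    print(response_to_sum, sum_of_responses)            # ~20.0 vs ~40.0

Each individual input is driven to the goal temperature, so the sum of the responses is twice the goal, while the response to the summed input is again just the goal: the feedback map is not linear, and superposition fails.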

  • [deleted]

Thanks for the answer. Let me start by saying that I appreciate you taking the time to answer this. I think the problem you touched upon is really important. One (unimportant) terminology issue: you refer to "standard" Q.M. as if it excluded the measurement postulate. The "standard" Q.M. we learn (or some of us teach) at universities usually includes the measurement postulate.

If I understand correctly, you give two examples of classical systems in which a certain function (feedback, adaptation) cannot be achieved, say, via a standard quantization procedure. But why is this surprising? Isn't it the case that, if I define my function or goal as measuring both momentum and position with high precision, I can achieve this in classical physics but cannot if the system is quantum? Or, if my goal is to copy some property (state) of a system into another, I can do it if that property is a classical one and I cannot do it if it is a quantum one (no-cloning theorem)?

The point that Leggett makes in this context (and which is correct, I believe) is rather of a different nature: he is speculating that, given zero decoherence, it might be the case that if the system is macroscopic/complex, Q.M. might fail to give an accurate description. For example, Q.M. (and this includes the measurement postulate) might predict interference and in the experiment we don't measure any. This would mean that Q.M. is really not a universal theory, and it would probably be the greatest discovery in physics in decades! This, to me, is very different from the examples you give, which do not show that Q.M. is unable to describe a thermostat, but rather that a certain functionality, formulated in classical terms, is not reproduced as such by a simple quantization procedure. (By Q.M. I mean here Q.M.-as-usual, that is, including the measurement postulate.) In reality, both examples that you give are open systems: the superposition principle then cannot be applied as such, and decoherence has to be introduced in a careful way. But I don't see any reason why such systems cannot be described by Q.M. - again, including at a certain point the usual separation between the classical and quantum, and so on. I stress here that it is indeed annoying that we cannot write a description without these assumptions - but we already know that this is the unfortunate situation so far in physics!

Suppose for example that I have an ensemble of spins, from which I can extract spins one by one, pass them through a Stern-Gerlach apparatus, and determine the up or down value of each of them. This is a von Neumann measurement. By extracting enough spins (but not all), I can calculate the temp. of the spin bath. Then I use this information to regulate the temp. of the bath in which the spins exist (they can be the spins in a solid, in which case I change the phonon temp., just by heating the solid). This in turn will regulate the temp. of the ensemble.

In the system above, there is no contradiction between Q.M. and the idea of a thermostat. You might object that, yes, but I have not given a complete Q.M. description of the computer used to calculate the temp. and so on. Indeed, but this has been the situation with Q.M. since Bohr's times. Such a unified description of the quantum and classical is lacking in every experiment done in quantum physics in the last 100 years, and there is nothing special about thermostats from this point of view.
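A rough numerical sketch of the scheme just described (an illustration only, under the simplifying assumption of non-interacting spin-1/2 particles with splitting DELTA_E in a bath at temperature T, with k_B = 1):

    import numpy as np

    rng = np.random.default_rng(0)
    DELTA_E = 1.0     # Zeeman splitting between down and up (arbitrary units)
    T_GOAL = 2.0      # desired bath temperature

    def p_up(T):
        """Boltzmann probability of finding a sampled spin in the up (excited) state."""
        return 1.0 / (1.0 + np.exp(DELTA_E / T))

    def estimate_T(n_samples, T_true):
        """Stern-Gerlach-style measurement of n_samples spins -> temperature estimate."""
        ups = rng.random(n_samples) < p_up(T_true)
        f = np.clip(ups.mean(), 1e-3, 0.5 - 1e-3)    # guard against f = 0 or 1/2 exactly
        return DELTA_E / np.log(1.0 / f - 1.0)        # invert the Boltzmann factor

    T_bath = 5.0
    for _ in range(20):
        T_est = estimate_T(2000, T_bath)
        T_bath += 0.5 * (T_GOAL - T_est)   # classical feedback on the phonon bath
    print(round(T_bath, 2))                # settles near T_GOAL

The measurements themselves are quantum (projective, spin by spin); it is the feedback step that is handled entirely classically, which is precisely the division of labour under discussion.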

  • [deleted]

George Ellis,

Thanks for an extraordinarily clear and well written exposition on the weaknesses of quantum mechanics and the strengths of the alternative--complex systems science.

That laterally-distributed control (Y. Bar-Yam, New England Complex Systems Institute)--vs. the conventional hierarchical view--explicitly defines varieties of change in a time-dependent complex network makes for a powerful explanation of the world's apparent nonlinear order. We are informed that order is self-organization with feedback.

Excellent.

Tom

  • [deleted]

Darwin, the points you make are very relevant, and to try to do them justice I'll give a three-part response.

There is a profound hiatus at the core of quantum theory: the process of measurement, the projection of the state vector to an eigenstate, cannot be described by the standard quantum theory process of unitary evolution of the state vector, e.g. via the Schroedinger equation. There are broadly speaking four attempts at resolution: (1) the Copenhagen interpretation, namely that macro objects are not governed by unitary evolution and nothing more need be said; (2) one adds to the theory a second kind of evolutionary process, characterized as collapse of the wave function, and then tries to determine rules for when and how this happens; (3) the Everett route of assuming a continually branching wave function, and associated "many worlds"; (4) one claims decoherence solves the problem.

I won't repeat here my arguments as regards the last two approaches; please see my essay in that regard (I just repeat here that decoherence gets rid of entanglement but does not give a unique physical measurement outcome). The point I want to make here is that if this issue is mentioned at all in treatises on quantum theory, the measurement process is added on late as an optional extra postulate; it is not presented as a key part of the standard theory (see e.g. Isham's book). But most books on quantum theory don't even mention that there is a problem. Here's an exercise for you: take a random selection of textbooks on quantum theory (not popular books: that's a different story) and see if the measurement process is mentioned in the index. I've tried it; fewer than 30% of quantum textbooks even mention the issue, and it is not mentioned at all in almost any book on quantum field theory (Wald's book is the only one I have seen that does mention it). Now ask an average graduate student in the field what she or he can tell you about the problem. Most of them have not even heard of it. So for those who are properly steeped in the subject, your statement is largely true; but for most of them, it is not. It's not a part of the subject as taught to most physics students.

  • [deleted]

Continuing:

So is the problem any worse in the case of networks of interactions than it is in the standard measurement problem? Yes, I think it is. See arXiv:0904.4483 by Chiribella et al. for a nice presentation of the growing field of quantum networks, obtained by assembling a number of elementary quantum circuits into a network and resulting in quantum channels. Does that not counter my essay? No it does not, because that theory only applies to networks with no cycles (see p. 5 of that paper). These can be represented by directed acyclic graphs, and so do not include the feedback loops I consider in my essay. The point is that in the case of those loops, one feeds the result of the measurement process back into the system in order to determine its future evolution. You can't do that in terms of a theory involving unitary evolution alone, because that can't handle the measurement issue. In the standard measurement problem per se, this issue does not arise: the measurement result resides in the macro world and does not get fed back into the micro world.

You can handle this conceptually in terms of a combined classical and quantum description, broadly in the spirit of the Copenhagen interpretation, i.e. you add into the system some elements not covered by standard unitary quantum evolution rules. So yes, you can make the relevant models, but only by adding non-quantum elements into the theory. So we are back at the issue of how that can possibly make sense, if macro objects are based in quantum physics at the micro level.

  • [deleted]

Finally:

Does this have any consequences? Not for most quantum theory and quantum field theory applications, which calculate energies of states and statistics of outcomes so successfully. But there is an area where it does matter: namely when people start talking of the wave function of the universe, and develop consequences of that idea, claiming inter alia that time is an illusion (see The End of Time by Julian Barbour, for example).

These theories assume that quantum mechanical ideas can be applied to the universe as a whole, using some form of unitary evolution and without any process of collapse of the wave function being included in the description. My essay says that won't work, because feedback control loops exist in the real universe (e.g. in the human brain). To my considerable surprise this ends up supporting the Copenhagen view to some degree: it can be legitimate to suppose there are macro objects not described by standard quantum theory as exemplified by the Schroedinger equation.

If true, then a significant task is to determine all the cases where such arguments are applicable, and I have suggested that Darwinian selection is another such case (rather analogous to quantum field theory, which needs separate consideration). What others might there be? Additionally a key task for the future is to extend quantum theory to give some "quantum_plus" proposal that does give a satisfactory description of the measurement process. Maybe the issue will look different in the light of such future theories.

  • [deleted]

Dear George and Darwin,

I have been following the interesting discussion you are having here. May I draw your kind attention to my essay in this forum: there I argue that quantum theory by itself predicts a quantum_plus theory [which addresses the measurement process] when one notes that the dependence on an external classical time is an unsatisfactory feature of quantum theory. I predict that mesoscopic objects obey a mechanics which is neither classical nor quantum, and hopefully experiments in the coming decades will test this.

Thanks,

Tejinder

  • [deleted]

What intrigues me is the possible role of gravity in this matter. I won't write maths here, but only give a description of something. Assume there is a wave function with some domain in space as its "support." This region of support is defined as the region where the wave function takes appreciable values; outside it the wave function drops off to exponentially small values. We now consider this region of support as attached to a frame which is falling towards a radially symmetric gravity field, such as a black hole. Of course the Weyl tensor kicks in and distends this region into a more highly elliptical shape as the frame falls inwards. The wave function will then distend, and it is not hard to do a Wigner quasi-probability calculation to show that the wave function becomes parametrically amplified or squeezed.

This seems to suggest a possible role for gravitation in this problem. I will as a caveat say I have trouble with Penrose's idea of there being Planck-scale fluctuations in some R process during a measurement. Yet for quantum gravity and cosmology this apparent role of gravity in the squeezing of quantum states seems to be a possibility. The early measurements of the CMB and other aspects of cosmology indicate that the universe started in a uniquely low-entropy configuration. The small anisotropies of the universe would suggest that the averaged Weyl tensor in any local region was very small, at least after inflation.

With decoherence, even though the density matrix is reduced to its diagonal form, with the off-diagonal overlap elements going to zero, there is no dynamical prediction for an outcome. I would then argue that this reduction simply defines a macro-state for the entropy of the decoherence or measurement. Any possible real observation is equivalent, from an information perspective, to all the others. What we lack is the information required to determine how an "actual" outcome obtains. We are faced with a sort of reduction of quantum probabilities to classical-like probabilities, and then in the real world with some sort of classical collapse. So the picture has two steps, or what might be thought of as a seam that is not elegant.

So we are faced with the prospect of there being some sort of "top-down" process which is involved with quantum outcomes, or for that matter with the emergence of the macroscopic or classical world. Zurek has advanced the decoherence hypothesis to einselection to address this question. Your essay is another attempt to frame this question as well.

Cheers LC