There has always been an aura of mystical speculation surrounding quantum mechanics and the role of consciousness. The interpretation of wave function collapse (usually referred to as "the measurement problem") seems to lie at the heart of it. So, I'd like to consider the following:

The role of the observer making an observation in quantum mechanics is the same as that of a classical observer making an observation of a system that may inhabit one of a number of possibilities. You look at it, and then you know what happened -- until you look at it, you don't know and can only assign probabilities to what may have happened. The weirdness of quantum mechanics is that the possibilities interfere with one another, whereas classically they don't.

I'm curious whether others here would agree or disagree with this description, and how it might need to be amended or changed.
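
As an illustrative aside (not part of the original post): the claim that quantum possibilities interfere while classical ones merely add can be made concrete with a small numerical sketch. Everything in it -- the two-path setup, the equal amplitudes, the scanned relative phase -- is my own illustrative assumption.

```python
import numpy as np

# Relative phase between two alternative paths to the same detector.
phases = np.linspace(0.0, np.pi, 5)

a1 = 1 / np.sqrt(2)                    # amplitude for path 1 (constant)
a2 = np.exp(1j * phases) / np.sqrt(2)  # amplitude for path 2 (phase-shifted)

# Classical reasoning: the alternatives are exclusive, so probabilities add;
# the relative phase drops out entirely.
p_classical = np.abs(a1) ** 2 + np.abs(a2) ** 2

# Quantum reasoning: amplitudes add first and are then squared (Born rule),
# which brings in the interference (cross) term 2*Re(a1*conj(a2)).
p_quantum = np.abs(a1 + a2) ** 2

# These are relative detection rates at one screen position, not normalized
# probabilities over all positions.
for phi, pc, pq in zip(phases, p_classical, p_quantum):
    print(f"phase {phi:.2f}: classical {pc:.2f}, quantum {pq:.2f}")
```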

    • [deleted]

    What you say comes close to what is often claimed in the Copenhagen interpretation. In that interpretation (better called a "non-interpretation"), the collapse of the wave function is NOT a dynamical process.

    However, in classical physics you can and do assume that only one of the possibilities is real (that is why you call them possibilities). It is your knowledge that was incomplete before the observation. Mere possibilities cannot interfere with one another to produce effects in reality. In particular, if you used the dynamical laws to trace the improved information about the real state back in time, you would also obtain improved knowledge about the past. This is different in quantum theory (for pure states): in order to obtain the correct state in the past (one that may have been recorded in a previous measurement), you need all apparent "possibilities" (all components of the wave function, including the non-observed ones). So they must all have been equally real.

    Hence, either a collapse has occurred - or the world has branched into many quasi-classical "worlds". (The difference is that they could in principle recombine.) Heisenberg's original hope that the quantum system was disturbed during the measurement is not tenable. Instead, various systems (the observed one, the apparatus, the observer, and the environment) get entangled.
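
    A toy numerical illustration of the retrodiction point above (my own example, not the poster's): only by keeping all components of the wave function and back-evolving them unitarily do we recover the state recorded in the earlier measurement.

```python
import numpy as np

# A sample unitary evolution (Hadamard) on a single qubit.
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)

past = np.array([1.0, 0.0])      # state recorded in an earlier measurement: |0>
now = H @ past                   # present state: (|0> + |1>)/sqrt(2)

observed = np.array([1.0, 0.0])  # the single component we actually observe, say |0>

# Back-evolving the full superposition recovers the recorded past exactly;
# back-evolving only the observed component does not.
print(np.round(H.conj().T @ now, 3))       # -> [1. 0.]        (the true past, |0>)
print(np.round(H.conj().T @ observed, 3))  # -> [0.707 0.707]  (wrong retrodiction)
```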

      8 days later
      • [deleted]

      Can anyone explain the role of the various amplitudes in the wave function in the Many Worlds Interpretation of QM?

      I mean, they can't represent probabilities, because all possible outcomes happen as a result of the branching -- so what do those coefficients tell us?

      • [deleted]

      The role of the amplitudes in the wave function is to give the *personal* expectation of finding oneself in front of such or such eigenvalue after a measurement.

      If we are "memory machines", as in Everett's work, we are in principle duplicable. In a classical self-duplication, from an outsider (third person) perspective, there are no probabilities and the process of duplication can be considered purely deterministic. But, even in this classical case, from the personal point of view of each duplicated person, it seems as if they have *personally* lived an indeterminacy. The probabilities are purely first personal, and the amplitudes can be related to the expected proportion of possible outcomes taken from a continuum of possibilities (in non-relativistic QM).
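
      A minimal sketch (my own framing, not Bruno's formalism) of how the amplitudes feed that first-person expectation: the relative frequency one should expect to find, over many such episodes, is the normalized squared amplitude, i.e. the Born rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-branch state with unequal (complex) amplitudes.
amplitudes = np.array([np.sqrt(0.8), 1j * np.sqrt(0.2)])

# The "personal expectation" of finding oneself facing outcome i is the
# normalized |c_i|^2 -- the Born rule.
weights = np.abs(amplitudes) ** 2
weights /= weights.sum()

# Simulate many independent "which outcome did I see?" episodes.
outcomes = rng.choice(len(amplitudes), size=100_000, p=weights)
print("expected proportions:", weights)
print("sampled  proportions:", np.bincount(outcomes) / outcomes.size)
```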

      As I have recently expressed here, while the MWI is nice in several ways, it seems to me to bury the probabilistic nature of QM, as well as the 'Born rule' for the probabilities, in the very hardest of places: how our conscious experience evolves through time. I might be willing to believe a probability interpretation if it could be shown that the number of 'copies' of me that experienced outcome A versus B was given by the relative squared amplitudes of the wavefunction; but I don't see how that emerges, at all, from the many-worlds view applied to a single experiment with two outcomes of highly different probabilities.
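
      To restate the worry with concrete (made-up) numbers: a single two-outcome experiment produces, on the naive reading, exactly two "copies" of the observer regardless of the amplitudes, so counting copies cannot by itself reproduce the Born weights.

```python
import numpy as np

# Hypothetical single experiment with two outcomes of very different probability.
amplitudes = np.array([np.sqrt(0.99), np.sqrt(0.01)])

born_weights = np.abs(amplitudes) ** 2    # [0.99, 0.01] -- what experiments confirm
naive_copy_count = np.array([0.5, 0.5])   # one copy per branch -> 50/50

print("Born rule     :", born_weights)
print("copy counting :", naive_copy_count)
# The gap between these two lines is exactly what a many-worlds account
# of probability has to close.
```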

      • [deleted]

      I think that it is misleading to say that in the MWI the universe is constantly branching. The branching only appears to happen from the perspective of local observers. The entire multiverse may well be in an exact eigenstate of the Hamiltonian and thus evolve only in a trivial way.

      Suppose then that a static wavefunction is the real wavefunction of the multiverse. The time evolution experienced by some observer is then just an illusion. In reality the observer and his time-evolved counterpart are different components of the same wavefunction, related by a unitary transformation.

      Now, if a unitary transformation, not necessarily the time evolution operator, maps one region of Hilbert space into another, then those two regions are pretty much the same (compare this with rotations in ordinary space). So, if there exists an intrinsic measure or probability distribution, it should be invariant under unitary transformations. This, surely, is the basis for the Born rule.
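
      A quick numerical check (my own sketch) of the invariance being appealed to: the squared overlap of two states is unchanged when the same unitary acts on both. This only illustrates the invariance, of course; getting the Born rule itself out of it needs something like Gleason's theorem.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(n):
    # A random normalized complex vector in an n-dimensional Hilbert space.
    v = rng.normal(size=n) + 1j * rng.normal(size=n)
    return v / np.linalg.norm(v)

def random_unitary(n):
    # QR factorization of a random complex matrix gives a unitary Q.
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

psi, phi = random_state(4), random_state(4)
U = random_unitary(4)

overlap_before = abs(np.vdot(phi, psi)) ** 2
overlap_after = abs(np.vdot(U @ phi, U @ psi)) ** 2
print(overlap_before, overlap_after)  # identical up to rounding: the measure is unitary-invariant
```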

      Hmm, many good points (including several from Dieter Zeh, an inventor of the idea of quantum decoherence -- a good sign for the FQXi forums!).

      I think the main difficulty in accepting the probabilistic (or, as Matthew Leifer just described, epistemic) interpretation of quantum mechanics is the frequentist view of probability. The frequentist view requires us to think of "many copies" of reality, which, as Anthony points out, clashes with our experience. In contrast, the Bayesian view of probability as a likelihood of what we will experience is a much better fit with quantum mechanics.

      A frequentist insists that an experiment be repeated many times, or in many locations, and the probability determined by the number of occurrences of one thing or another. For a frequentist, each experiment is very much a separate, concrete instance with a unique outcome. But a Bayesian thinks of a probability as a likelihood of how one occurrence might turn out -- she is used to holding multiple possible "realities" in her head at once. These are "many worlds," but they are not the concrete worlds of the frequentist but possible worlds, with likelihoods. When she makes a measurement, she gets new information and adjusts these likelihoods accordingly (collapse). A Bayesian's view of the world really is as a collection of possibilities with likelihoods. It is but a small step from this view of probability to quantum mechanics, in which the probabilities interfere.
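
      For readers who have not met the Bayesian picture, here is a minimal (entirely classical, made-up) updating sketch: multiple hypotheses are held at once with likelihoods, and an observation only reweights them. Quantum mechanics then differs in that the quantities carried around are amplitudes that can interfere, not bare likelihoods.

```python
# Two hypotheses about a coin and a prior over them.
likelihood_heads = {"fair": 0.5, "biased_to_heads": 0.9}
beliefs = {"fair": 0.5, "biased_to_heads": 0.5}

def update(beliefs, saw_heads):
    # Bayes' rule: posterior is proportional to prior times likelihood of the data.
    posterior = {
        h: p * (likelihood_heads[h] if saw_heads else 1.0 - likelihood_heads[h])
        for h, p in beliefs.items()
    }
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

for saw_heads in [True, True, False, True]:  # a made-up sequence of observations
    beliefs = update(beliefs, saw_heads)
    print(beliefs)
```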

      • [deleted]

      Bruno,

      That would be a satisfactory explanation if the wave equation were meaningless to the third person. Is it? Because if it is not, there should be a bias based on the wave function, and no probabilistic interpretation for that bias.

      And if that is the case... then what is the third person point of view? What does he see? Can we expect the third person point of view to be less informative than the first person view?

      Anthony,

      I just read your post. I guess the name is 'Prestige' ;)

      Count,

      I guess in contrast to what you said, local observers would not be aware of branching.

      Garret,

      And about the frequentist and Bayesian viewpoints: it seems that as long as we're dealing with a quantum phenomenon, any probabilistic interpretation would be based on frequencies, regardless of a further interpretation which might be Bayesian. Maybe I'm missing your point.

      • [deleted]

      Siamak,

      You seem to say that pure classical indeterminacy would provide a satisfactory explanation of the quantum amplitude in case the wave function is meaningless from a third person point of view. Actually I do agree with you, although I am not sure that our agreement shares the same basis. Perhaps you could elaborate a little bit. The problem, once, like Everett or some other many-worlders, we accept the computationalist or mechanist hypothesis in cognitive science, is that in such a case we have to accept an a priori strong form of computationalist first person indeterminacy: we don't know and cannot know which computational history supports us. This does not mean that the Universal Wave Function is really meaningless, but it shows that, as far as the Universal Wave is meaningful, we should recover it without postulating it. From this I tend to believe that in fine we have to justify physics from computer science or number theory or, more generally, mathematics. This would lead to a form of platonic mathematicalism. Well, this is related to my own work in the field. You could look at this text where I explain a little bit more in a hopefully not too technical way. But OK, I mainly agree with you, I think.

      • [deleted]

      Thanks for your link. I've already read some parts of it and I will finish it soon. But frankly I didn't get your point in the previous post: 'we don't know and cannot know which computational history supports us.'

      Here is the problem:

      - I asked what the meaning of the amplitudes in the wave function is in the MWI of QM.

      - Your answer is: it shows the indeterminacy of the first person in finding himself in one of the probable situations.

      - I say: what about the third person? My belief is that he should have a more comprehensive picture of the event of the branching (at least the same amount!). How is the third person going to perceive branching? What is the role of the amplitude in the third person point of view? We know that it won't have a probabilistic effect. Are you claiming that the wave equation just describes the first person view? (I guess you're not... so?)

      • [deleted]

      Hmmm... I see the third person view as a description of what perhaps *is*, and it has to include, in some way, what I believe in in some indubitable manner, i.e. elementary arithmetic. I have done, on a very elementary presentation of arithmetic, what Everett has done for pure (= without collapse) quantum mechanics. Technically this is akin to a reconstruction of the Lucas-Penrose (erroneous) use of Gödel. Gödel shows only that if we are machines then we cannot know which machine we are, nor can we know which computations would support us. This gives the arithmetical basis for a different notion of indeterminacy.

      You were right above: the ontic third person view is poorer than the epistemological first person view of the observer, but the observables belong to the inside (modal) view of that elementary arithmetic.

      Neither the third person nor the first person really perceives (arithmetical) branching, except that both can infer it, somewhat as Mendeleev inferred the existence of unobserved elements by completing a theoretical+empirical table.

      All I say, mainly, is that, about the nature of "matter", some "theological" points can be made precise enough to be tested empirically. I am thinking of sending an attempt to a mailing list to explain the whole thing, but "applied mathematical logic" is not something simple to explain without describing at least some "machines" or formal systems. I will give you the link.

      Accepting Everett (on "matter"), I would say the wave equation belongs to the third person view; but accepting the computationalist hypothesis ("I am a machine", not "the universe is a machine"), then, well, I think it quite plausible that the wave equation belongs to a non-trivial first person plural notion, but this is still an open problem!

      Still, preliminary results are going in the direction that, in the "many-worlds", the worlds are perhaps more subjective constructs than strict Everettians would appreciate; I don't know. All points of view are represented by modal variants of the Gödel-Löb arithmetical provability logics. Those are made necessary as a consequence of incompleteness, but sound machines can discover them by their self-referential provability and inferability powers.

      • [deleted]

      Oh! I was too brief. What Everett did for QM was the obvious (obligatory with any monism) idea of embedding the "physicist" in reality. Note that this was taken for granted by Newton: the physicist obeys the gravitational law! Otto Rossler, and according to him Boscovitch, but also Hans Primas, Finkelstein and probably many others, realized that this embedding leads by itself to non-trivial invariance principles. Rossler concludes that physics is the science of the interface between us and the "rest". This is quite coherent with the generalized embedding of the "mathematician" in "arithmetic/number-theory/computer-science".

      Actually the arithmetical embedding leads to a purely arithmetical, but empirically falsifiable, interpretation of Plotinus' (300 AD) theory of Matter. I present this here (CIE 2007).

      • [deleted]

      Thank you Bruno, it was descriptive.

      I believe Everett's view leaves no room for the subjectivity of physics, which is becoming more prevalent as a result of recent experiments. In the end, we still have the problem of mind, and a subjective description of the world seems more attractive.

      • [deleted]

      Hmmm ... I would say yes, ... and no.

      Yes, because Everett can be said to have rendered the quantum facts coherent with a "theory of mind" which has a venerable tradition (mechanism). Indeed, Everett substituted a mechanist assumption for the unintelligible wave collapse, allowing the observer to obey the linear wave equation. He then singled out the necessity of distinguishing third person descriptions (like the quantum wave describing an observer-observed couple) from first person descriptions (like the subjectivity of the observer describing its own *relative* memory), and thus anticipated the rather successful decoherence theory (imo).

      No, because Everett was not aware (well, few people seem to be aware) that once we assume mechanism or computationalism, machines are, in principle, confronted with a bigger (a priori) form of indeterminacy, which makes the quantum indeterminacy a priori still stranger.

      If we take seriously the mechanist hypothesis, or its computational or digital version (hereafter named comp), we have to justify why the quantum (consistent) histories win the "measure battle", which, with comp, involves many types of possibly non-quantum-like histories.

      But Everett no doubt underestimated the subtleties of what a self-observing machine can be. It is here that computer science and mathematical logic can offer hope of justifying why the quantum laws seem (correctly) to stabilize statistically.

      If you want, with comp, matter, and in particular its destructive interference features, seems a priori even *more* weird. We cannot, for the comp reason described above, directly appeal to Gleason's theorem or to the decoherence theorems; we have to justify that move directly from a more thorough study of what a digital memory machine can prove and guess correctly about herself and her possible histories. It is still possible that such a line would refute the mechanist thesis, but the preliminary results I have obtained seem, on the contrary, to consolidate the marriage between the quantum and the digital.

      Note that such an approach could demystify not only the collapse but also the relation between consciousness (quasi-definable in this setting by unconscious or automatic guesses in a reality, cf. Helmholtz) and the quantum reality (capable of stabilizing those guesses through some "entangling" of consistent computational histories).

      a month later
      • [deleted]

      I somehow missed how and by whom the wave function collapse was demystified. As some of you know, my way of demystifying it is to demystify wave function evolution, which appears to be taken for granted here. I suppose we all agree that the wave function serves to assign probabilities to the possible outcomes of measurements. We disagree on the meaning of "probability" (a problem much older than QM), on what constitutes a measurement, and on why measurements are given special treatment by the general framework of contemporary physics. Leaving these questions alone, I propose (have done so, will do so) that the time dependence of the wave function is a dependence on the time of the measurement to whose possible outcomes it serves to assign probabilities. It is not the continuous time dependence of a physical state of affairs of any kind. In other words, the wave function has neither two modes of evolution (unitary and collapse) nor one (unitary) but none. It is an algorithm, not the kind of thing that evolves.

      Then what about the time between two successive measurements? The only way to make sense of this notion is to think of it as the time of an unperformed measurement -- a measurement that could have been but was not performed in the meantime. We can of course continue to regale ourselves with stories purporting to describe what happens between successive measurements, since they are not even wrong. If you want to check such a story, you must make a measurement, yet by making a measurement you learn nothing about what happens between measurements.
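
      One way to picture this "algorithm, not a thing that evolves" view (my own toy rendering, not the poster's formalism): a function that takes the time of a measurement as input and returns the Born-rule probabilities of its possible outcomes. The two-outcome system and its frequency are made-up illustrative details.

```python
import numpy as np

omega = 1.0  # hypothetical transition frequency of a two-outcome system

def outcome_probabilities(t_measurement):
    """Return Born-rule probabilities for a measurement made at t_measurement."""
    amp_0 = np.cos(omega * t_measurement / 2)        # amplitude for outcome 0
    amp_1 = -1j * np.sin(omega * t_measurement / 2)  # amplitude for outcome 1
    return {"outcome 0": abs(amp_0) ** 2, "outcome 1": abs(amp_1) ** 2}

# t enters only as the time of an (actual or merely possible) measurement;
# nothing here "evolves" between the two calls.
print(outcome_probabilities(0.0))
print(outcome_probabilities(np.pi / omega))
```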

      What bugs me about all these discussions (collapse or no collapse while evolution is taken for granted) is that they drown out the real questions concerning the ontological implications of the testable ways quantum mechanics assigns probabilities in actual measurement contexts.

      For further thoughts in this direction please visit This Quantum World or check out my papers.

      5 months later
      • [deleted]

      I'm an outsider, so forgive me if my comments seem naive. It is clear you all know details that I do not know, yet great thinkers often ignore details because details can lead you astray (e.g., "Mathematical Discovery: Hadamard Resurrected"). For example, Einstein didn't know more details than his peers, but he did question their assumptions.

      Did relativity and quantum theory evolve from classical physics? That would imply that modern physics is a special case of classical physics, starting with the same axioms but more refined. This is obviously false. It is clear that classical physics is actually a special case of modern physics. Put another way, for any of you who have any interest in evolutionary theory, classical physics was a thought "dinosaur". And just like real dinosaurs that were unlikely to evolve into the current ecosystem of Earth, modern physics needed to evolve from new axioms, not directly from classical physics. So I question the approach of trying to extend relativity and quantum theory to explain the as yet unexplained. In other words, I'm not sure dinosaurs will ever evolve into humans.

      So while I can't point to an answer, I can point to some assumptions that might be wrong. First of all, the problem of consciousness is obviously one we've inherited from culture, and we can't solve it because it isn't really a problem. Many philosophers have gotten rich calling it a problem, but there is no evidence of any problem beyond the fact that unless you and I are the same consciousness, we won't ever know that the other is conscious. It is reckless to mix the notion of consciousness into a forum discussing quantum physics, for one will never help the other, just as you can't distinguish two entangled particles until they are unentangled.

      Most importantly, there is no evidence of this thing called time. Until we can all admit this simple fact, we will continue to be trapped by our culture and human form into concepts like "evolve", "measure", etc. Comparing one state to another requires memory, which requires observing an observation of an observation. As Bruno nicely pointed out, "we don't know and cannot know which computational history supports us." Making a measurement requires change. Change itself cannot be observed. Therefore when I say something changed, I mean that what I most recently observed is different from a memory of what that thing was like. We have no way of knowing (with our current understanding) whether in fact all possible things happen from one instant to the next, or whether our Universe was just created by a computer simulation in its current state, including my own memories of the "past". That what we observe as time is actually real is a major assumption, and you may not want to question it. Many did not want to question absolute time when Einstein proposed relativity. It complicated the Universe a whole lot. But the Universe made more sense for it. So is anyone willing to question this assumption and see if any predictable features of our observed Universe can be derived?

      • [deleted]

      So - if the MWI holds true - does that mean which lottery numbers one might select is irrelevant - and what is relevant is the moment one decides to check them..? Presumably any combination of numbers is a winning combination in some region of the multiverse - pick the right moment for checking them and I 'collapse' into a winning region for the number set I've chosen..?

      Although I like this idea as it removes any blame on my part for picking the 'wrong numbers' - it still doesn't help me win the lottery...

      25 days later
      • [deleted]

      I posted this in the FAQ by accident, so I'm posting it again here.

      I'm a complete non-scientist, but in trying to understand your theory, I was reminded of Carl Jung, who believed that mankind often created symbols in an attempt to reconnect with the unconscious "archetypes" that he believed formed the essence of the universe.

      Jung was convinced that these archetypes were psychoid, that is, "they shape matter (nature) as well as mind (psyche)" -- that archetypes are elemental forces which play a vital role in the creation of the world and of the human mind itself.

      So when I saw your YouTube video of E8 -- it looks so much like so many primal man-made symbols that Jung would flip over -- suddenly your theory had a real power for me, even if I couldn't understand the details of the science. So for that -- thanks, dude. You're my official new hero.

      I don't know if the theory is "factually" correct, but I'm sure it's instinctually correct.

      As a soul surfer, my bet is you've had some of these same thoughts. Care to comment?

      9 days later
      • [deleted]

      Trying to preserve the original question posed, how does the measurement problem relate to the existential fact that until the observer makes an observation, they don't know and can only assign probabilities to what may become reality upon observation?

      Time or consciousness as experienced by the observer is in some way linear -- I am not sure how either concept could be sustained otherwise -- but it is also paradoxically true that the observed cannot likewise be linear. The observed world (the source of information) exists adjacent or lateral to the linear stream of consciousness (time), a simple analogy being the observer sitting and watching a movie film. The observed film is a sequence of frames (quantum) and only appears to be linear (classical), but something has to bind the frames together (the observer). If objects in the film, or the classical objects of our observations, instead followed continuous linear paths from point A to B, we all know they would instantly become trapped by the infinite measure of steps within any distance.

      So the picture we have then is of a linear observer (time) passing through a sequence of static frames. The observer -- let's call him/her "the pilot" -- is the only required linear element in this model. The world relative to each pilot is quantum mechanical, even other observers, meaning the lateral world is a construction, a series of discrete states or moments. However, the duration of time spent in any given frame, or in any collection of frames, is always zero, and cannot be otherwise, since this would also lead to insurmountable infinities.

      Time in this framework is not a duration in which something classical exists; time is rather a direction in space that travels from one unique frame to another, distinct from the directions within each frame. Such directions in space are fourth dimensional; each direction is foundationally dependent on the primary adjacent frames, but they have no length in any given frame. In the same way that movie frames create a sense of time, this fourth dimensional space is dependent upon change; it is dependent on objects having unique positions relative to other objects (adjacent to the observer). The result is a unique volume different in character from the frames (expansion, collapse, curvature), as well as a unique sense of time (change) radically distinct from the actual existence of the frames. The result is that both the position and the momentum of a classical object cannot be determined, since there is no momentum in a given frame, and no duration or position in between the unique configurations of any two frames.

      This I think is the foundation of why the observer's observations are probabilistic, but since I am not a member of the club I will remain quiet unless spoken to.

      • [deleted]

      Garrett, I guess this is why I have become so disappointed in scientists, because they never want to discuss anything off the beaten path. They are always so afraid of looking unprofessional or unscientific or unintelligent. It is as if we live in a brutal dictatorship and everyone is afraid to discuss freedom. Consequently you can't discuss anything unique with scientists, no matter how "clever". Here we are at a website founded on fundamental questions in cosmology. You yourself asked a question looking to get feedback on an unexplored issue but the talk immediately reverts to the safety of the familiar. There needs to be a revolution from within.

      Lisi wrote:

      "a classical observer making an observation of a system that may inhabit one of a number of possibilities."

      Isn't a classical system deterministic, so that there is no such thing as a classical system that may inhabit one of a number of possibilities? The classical observer only assigns probabilities to events he/she has limited information about. Knowing every property of an actual coin flip, the classical observer knows the precise outcome; in principle the information is available. Contrastingly, the quantum observer cannot know all the properties of a quantum event. The information isn't localized. So the roles are not the same. They are only similar if the classical observer lacks information, and then the similarity is that they both lack information. In lacking information they both can calculate the chance of, say, winning the lottery, but in doing so the classical observer assumes a measure of uncertainty that doesn't exist in his/her classical world.

      If we had complete information about a classical universe, would we still question whether other universes were possible, since complete information would predict only the one universe? Perhaps there is a fundamental flaw in the notion that complete information can predict a single outcome (Gödel?).

      However, I do realize you are pointing to something. I have wondered if quantum mechanics is somehow tied to the quantum observer's lack of information, a mistaken theory resulting from our role in reality as observers. Of course we can't say where the particle is if we don't measure or observe it. But beyond this surface thought I sense you've brought up something deeper, something basic concerning predictability and uncertainty.

      The classical observer's ability to calculate probabilities based on limited information is due to how limited information predicts more than one possible outcome. Any limited information about a system naturally predicts a set of possible outcomes, many of which are equally possible (coin flip). Likewise in a quantum event, the lack of local information (the measurement problem) predicts a set of possible outcomes, many of which are equally possible. I have had this insight before, but never clearly enough to record it; I think this is the fundamental reason why uncertainty exists in nature, or in time: because there are equally probable outcomes, in the atomic world as well as for the observer (viewing Schrodinger's cat), and equally probable outcomes interfere with one another (heads and tails), in human calculations and in quantum events.

      Imagine walking down a staircase, but instead of one step below there are many. Why? Because there are many different steps that could proceed from the step you are standing on. In the range of possibilities, there are many next steps that are equally probable or possible. This is how nature works, and uncertainty exists in time, because a past state cannot lead to one single future state. Time cannot be linear because the information in past states is by nature unable to determine a single specific future state. The present configuration of the universe cannot ignore all the equally possible futures and dictate a single future, because the equally possible futures are equally real (and our universe is a sampling of that reality).