Dear Professor Ellis,

Thank you very much for your comments on my presentation.

We can disagree about unitarity. At this point I find both the unitary and the discontinuous-collapse explanations incomplete. I will take this opportunity to explain why it is not so clear that there is a discontinuous collapse. Take, for example, the delayed-choice experiment with a Mach-Zehnder interferometer. Our choice concerning the second beam splitter seems to affect what happened to the photon at the first beam splitter. Assuming there is a discontinuous collapse, it should happen somewhere between the first and the second beam splitter. But this means that a photon which was split and travels along both arms of the interferometer suddenly collapses onto one arm. This would be strange, and conservation laws would be violated. If instead we assume that the collapse happened before the photon entered the first beam splitter, then why not say as well that it never happened, or that it happened at the Big Bang?

The case of preparation-measurement, which is generally considered the irrefutable proof of collapse, was explained in the slides you quote: "The measurement of O1 in fact refines both the initial conditions of the system \psi, and those of the apparatus \eta". Those slides present a possible unitary explanation of the collapse, using the entanglement with the preparation device. It is very similar to the Mach-Zehnder interferometer experiment with delayed choice: what we choose to measure determines the way the system interacted with the preparation device.

Another argument I find convincing is that of conservation laws. They normally follow from unitary evolution - from commutation with the Hamiltonian. Is there a method to obtain the conservation laws that holds even when there is a discontinuous collapse?
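The point about conservation and commutation can be checked in a small numerical sketch (an editorial illustration with made-up operators and numbers, not part of the original argument): an observable commuting with the Hamiltonian has a conserved expectation value under unitary evolution, while a projective "collapse" onto an eigenstate of a non-commuting observable generally does not preserve it.

```python
import numpy as np

# Toy check: S_z commutes with H = 2*S_z, so <S_z> is conserved under
# unitary evolution; a projective collapse onto an S_x eigenstate breaks it.
# (All operators and numbers here are illustrative.)

sz = np.diag([0.5, -0.5])                      # spin-z operator (hbar = 1)
sx = np.array([[0.0, 0.5], [0.5, 0.0]])        # spin-x operator
H = 2.0 * sz                                   # Hamiltonian with [H, S_z] = 0

psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)  # arbitrary state

def expect(op, state):
    return float(np.real(np.conj(state) @ op @ state))

# H is diagonal, so U = exp(-iHt) is just a diagonal matrix of phases
t = 5.0
U = np.diag(np.exp(-1j * np.diag(H) * t))
psi_t = U @ psi
print(np.isclose(expect(sz, psi), expect(sz, psi_t)))   # True: conserved

# Projective "collapse" onto an eigenstate of S_x
evals, evecs = np.linalg.eigh(sx)
psi_c = evecs[:, 0].astype(complex)            # one possible collapse outcome
print(np.isclose(expect(sz, psi), expect(sz, psi_c)))   # False: not conserved
```

This is just the usual statement that a discontinuous projection, unlike unitary evolution, does not automatically respect the conservation law associated with the Hamiltonian's symmetry.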

I admit, though, that besides these arguments and others I put in those slides, one can't find irrefutable experimental evidence for or against discontinuous collapse. If there is discontinuous collapse, it will always hide, no matter how we rearrange the experiment. If collapse takes place unitarily, it locally appears like serendipity, as if a disturbance distributed between the preparation device and the measurement device "accidentally" puts the system in the observed state, so that it doesn't need to collapse. But even if we allow discontinuous collapse, this kind of "accident" happens. This is in fact what makes QM contextual. So if we have to accept strange contextual, nonlocal, backward-in-time behavior anyway, then the worst feature of unitary collapse is already accepted when we accept discontinuous collapse.

Best wishes,

Cristi Stoica


The system tells me I'm logged in but this box does not know it ...

Cristi, I'll have to look at this further, but just a brief response:

"The case of preparation-measurement, which is generally considered the irrefutable proof of collapse, was explained at the slides you quote,.....Those slides present a possible unitary explanation of the collapse, by using the entanglement with the preparation device."

Well I know many people claim entanglement solves the measurement problem; but there are many more (including me) who disagree. The key issue is that you have to be able to deal with individual measurements, not just ensembles; this is because you don't have an ensemble if you don't have individual events. Diagonalising the density matrix does not reduce all its terms except one to zero, so decoherence does not do the job of showing how individual events happen in the real world.
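George's point that decoherence does not single out one outcome can be illustrated numerically (a minimal toy example added for concreteness, not anyone's specific model): tracing out one qubit of a Bell pair gives a density matrix that is already diagonal, yet it still describes a 50/50 mixture rather than an individual event.

```python
import numpy as np

# For the Bell state (|00> + |11>)/sqrt(2), the reduced density matrix of
# qubit A is diag(1/2, 1/2): diagonal (decohered), but NOT a single outcome.

bell = np.zeros(4, dtype=complex)
bell[0] = bell[3] = 1 / np.sqrt(2)             # amplitudes of |00> and |11>

rho = np.outer(bell, bell.conj())              # full (pure) density matrix
rho_4 = rho.reshape(2, 2, 2, 2)                # index order: (a, b, a', b')
rho_A = np.einsum('abcb->ac', rho_4)           # partial trace over qubit B

print(np.allclose(rho_A, np.eye(2) / 2))       # True: maximally mixed
```

The diagonal form says the interference terms are gone, but both diagonal entries survive, which is exactly the gap between decoherence and the occurrence of individual events that the post describes.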

There are essentially two ways state vector preparation takes place: (i) dissipative, as in wire polarizers, and (ii) by separation followed by selection, as in Nicol prisms and the Stern Gerlach experiment. The first is non-unitary all the way; the second is unitary until selection takes place, which is effectively a projection. Neither can be described in a unitary way.

"It is very similar to the Mach-Zehnder interferometer experiment with delayed choice: what we choose to measure determines the way the system interacted with the preparation device. " yes indeed: fully in line with my proposals of the significance of top-down/contextual effects.

Best wishes

George

Dear Professor Ellis,

"Well I know many people claim entanglement solves the measurement problem; but there are many more (including me) who disagree. The key issue is that you have to be able to deal with individual measurements, not just ensembles; this is because you don't have an ensemble if you don't have individual events. Diagonalising the density matrix does not reduce all its terms except one to zero, so decoherence does not do the job of showing how individual events happen in the real world."

I fully agree. The decoherence program interprets the same density matrix at will, sometimes as describing a statistical ensemble and sometimes as the partial trace of an entangled state. By doing this, they mask the fact that their interpretation acts back in time. My proposal is not based on this. I just claim that there are unitary solutions which satisfy apparently incompatible measurements taking place at different times, and I give some examples. Given that such unitary solutions exist, no matter how unbelievable they may seem to us, it follows that we can't know for sure whether the collapse was discontinuous or was unitary all the time. My point is that a temporal description appears to select the initial conditions back in time, but this happens anyway even if we admit discontinuous collapse, so I don't think one should reject unitary collapse alone on this count. And I claim that global consistency has priority over initial conditions, and the global space+time perspective makes this more acceptable. About the examples you provide: I think I discussed the Stern-Gerlach experiment and showed how it can happen without projection. And as for dissipation in the polarizer, can one prove it is non-unitary even if we consider the full system?

Best wishes,

Cristi Stoica


Correction: "found in vol.1" should read "found in vol. 2".

An addendum concerning "quantum theory maybe having influences into the past":

If quantum theory is a model that assumes unitarity, then even I think so. I just see a crucial difference between model and reality, and therefore I maintain the criticism I express in my Fig. 3.

A profane top-down example: fans of a political system who experienced its breakdown may still consider it the future and remember how they already experienced that future.

Eckard


About that 'geometric algebra rather than octonions': one can view complex octonions as geometric. After all, they contain the complex quaternions, which can act as a Lorentz operator as well as being isomorphic to the Pauli algebra - thus we get to Hestenes.

One might notice a weird fact which could be relevant to the Arrow of Time. For (-+++) the (-) is the 'volume unit' of the octonion division subalgebra, yet for (+---) the (+) is the 'volume unit' of the split-octonion subalgebra ... which is not a division algebra. One usually considers it a free choice of convention whether to use +--- or -+++, but in this case it has further consequences. One suspects that the Arrow of Time should not be divorced from the Arrow of Matter, with the universe made of Hydrogen rather than anti-Hydrogen, or the Arrow of Weak Isospin, acting only on Left Handed fermions. Perhaps this is relevant to Time Reversal in the micro-world, but not in the macro-world. The Lorentz signature is not just relevant to spacetime, but to anti-matter, sort of like using the Pauli algebra geometrically as well as for Spin and Isospin. The Pauli algebra also has a connection to a Mass-Decay matrix, which I saw in Peter Renton's book on ElectroWeak interactions, on page 456.

Anyway, I am a bit surprised that Relativists seem to ignore complex octonions, since they automatically lead to both +--- and -+++. One might even say that Time has something to do with Color Neutrality - that 'volume unit' of the octonion subalgebra being color neutral. That connection you mention about Neutrons interestingly involves isospin. One needs a notation that distinguishes the different roles the Pauli subalgebra can play - as it relates to geometry or spin or isospin. Physicists seem to just use the same Pauli matrices, which makes things confusing.

Some time ago Hector Zenil [Sep. 23, 2012 @ 21:17 GMT] posted as follows, and I missed it. Here is a response:

"How robust your hierarchy depicted in Table 2 for a digital computer system is in the light of Turing's universality? From Turing universality we know that for a computation S with input i we can always write another computer program S with empty input computing the same function than S for i. Also one can always decompose a computation S into S and i, so data and software are not of essential (ontological?) different nature."

Well, it's the standard decomposition in current computers. Turing addressed universality, but not how to do interesting things. That requires this hierarchical structure - else we'd all have to be writing machine code if we wanted to use computers, so very few people would be using them.

Yes indeed, there is that ambivalence of how one implements things: I emphasize that in my essay. It's the key feature of lower level equivalence classes underlying higher level function.

"I also wonder if it isn't statistical mechanics the acknowledge that the view you are arguing against is not the general assumption in the practice of science."

I am arguing that statistical physics is crucially limited in what it can do. It can describe unstructured systems very well. Most of the systems around us are structured in one way or another, and their essential causal properties are not statistical. Computers are an example.

George

Joel,

"One suspects that the Arrow of Time should not be divorced from the Arrow of Matter, with the universe made of Hydrogen rather than anti-Hydrogen, or the Arrow of Weak Isospin, acting only on Left Handed fermions." I like the sound of that. You've studied this more than I have; I may get round to it eventually, then I'll get some advice from you.

Just one thing: you say "One usually considers it a free choice of convention whether to use +--- or -+++ but in this case it has further consequences". I can't see that this can be a *physical* difference: it's purely a representational convention which one chooses here, and that arbitrariness will follow through to any other linked formalism one chooses. They are all arbitrary by +/-.

Anyhow this is all off topic. I'm not going to pursue it further here (if I did I'd want to know if there is any relation to twistors .... no don't answer that!)

George

Following up my post of Sept 27, 2012 @ 15:51 GMT, here is a second issue that has arisen through these discussions.

Issue 2: Lagrangian formulation, holonomy, and non-local physics

In a response to "The Universe Is Not a Computer" by Ken Wharton, I said

"You stated 'As regards the LSU formalism, this non-local approach is very interesting. You state "Instead of initial inputs (say, position and angle), Fermat's principle requires logical inputs that are both initial and final (the positions of X and Y). The initial angle is no longer an input, it's a logical output.' Yes indeed. What this Lagrangian approach does is very interesting: it puts dynamics into a framework that resembles the process of adaptive selection (the dynamics is offered a variety of choices, and selects one that it finds optimal according to some selection criterion, rejecting the others). This kind of process occurs in various contexts in physics, even though this is not widely recognised; for example it underlies both Maxwell's demon and state vector preparation (as discussed here ). I believe there may be a deep link to the dynamics you describe. This may be worth pursuing."

This Lagrangian kind of approach leads to selection of paths between the initial and final point, and hence of velocities at the starting point, in contrast to the initial value approach where that initial velocity is given ab initio. This theme has come up again in the presentation posted by Cristinel Stoica [Sept 28, 2012 @ 06:27 GMT], and is actually implicit in the Feynman path integral approach to quantum theory, as so nicely explained in Feynman's book "QED". What happens in determining the classical limit is selection of the 'best' path, after trying all paths (Feynman and Hibbs: pages 29-31). This determination is obviously non-local, and so is influenced by the topology of the path space, as shown so clearly in the crucial Aharonov-Bohm experiment.

This of course supports the general view that what really underlies physics is parallel transport and holonomy; Yang-Mills theories fit into such a broad picture.

One of the deepest questions underlying physics is "Why variational principles?" If the dynamics is viewed as resulting from such a process of selection of a particular path from the set of all paths, there is a glimmer of hope for an explanation of this foundational feature, based in adaptive selection. This is one of the key forms of top-down action from the context to the local system, because selection takes place on the basis of some specific predetermined selection criterion, which is therefore (because it determined the outcome) at a higher causal level than that of the system behaviour being selected for.
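The path-selection picture can be made concrete in a toy discretised form of Fermat's principle (all numbers are made up for illustration): among candidate crossing points on an interface between two media, the "selected" path is the one minimizing travel time, and the winner satisfies Snell's law - selection by a predetermined criterion, with the other paths rejected.

```python
import numpy as np

# Light goes from A above an interface to B below it, with speeds v1, v2.
# Among all paths through a crossing point x on the interface, select the
# one minimizing travel time (Fermat), then check Snell's law at the minimum.

A = (0.0, 1.0)      # source, 1 unit above the interface (y = 0)
B = (2.0, -1.0)     # receiver, 1 unit below
v1, v2 = 1.0, 0.5   # propagation speeds above and below

xs = np.linspace(-1.0, 3.0, 40001)             # candidate crossing points
t = (np.hypot(xs - A[0], A[1]) / v1            # travel time above
     + np.hypot(B[0] - xs, -B[1]) / v2)        # travel time below
x_star = xs[np.argmin(t)]                       # the "selected" path

# Snell's law check: sin(theta1)/v1 == sin(theta2)/v2 at the selected point
sin1 = (x_star - A[0]) / np.hypot(x_star - A[0], A[1])
sin2 = (B[0] - x_star) / np.hypot(B[0] - x_star, -B[1])
print(np.isclose(sin1 / v1, sin2 / v2, atol=1e-3))   # True
```

The selection criterion (minimal time) is fixed in advance and determines which of the many offered paths survives - the same logical structure as the adaptive selection described above.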

At least that's an idea to consider. It needs developing to make it firm - if it works.

George

Addendum: Conditional branching in computer programs

A further case of essentially the same logic occurs in the operation of digital computer programs. Complex programs are built out of simple operations by

(1) chunking bits of code as a named unit (a module such as a subroutine, or an object in Object Oriented Languages) with local (internal) variables, so that this code can be called by reference to this name; this can be done hierarchically; and

(2) allowing conditional branching or looping, characterised for example by "if ... then ... else ..." or "while A do X". The program continues on one course if a condition T is true, and on another if it is false.

This is in essence another case of adaptive selection (see here and here): there are several options, and one is selected in preference to the others if a selection criterion (the truth or falsity of T) is fulfilled. The roads not taken are discarded. The effect is to change the sequence of the underlying operations according to this selection condition.

If the relevant variable is a global rather than local variable, then top-down causation takes place from the global context to the local module (what happens locally is determined by a global variable). This logical difference then goes on to cause different flows of electrons at the gate level (cf. the discussion of computers in my essay).
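A minimal sketch of this logic (all names here are invented for illustration): a global context variable determines which branch a local module takes, so the same call produces different sequences of low-level operations depending on the context.

```python
# MODE is a global (contextual) variable; the local module route_power()
# branches on it, so the global context selects which low-level operations
# actually run - a toy instance of top-down causation in a program.

MODE = "economy"                     # global context, set at the top level

def route_power(load):
    """Local module: its branch is selected by the global variable MODE."""
    if MODE == "economy":            # selection criterion (top-down input)
        return load * 0.5            # branch taken in this context
    else:
        return load * 1.0            # branch discarded ("road not taken")

print(route_power(10))               # -> 5.0 while MODE == "economy"
```

Changing `MODE` at the top level changes what the unchanged local module does, which then cascades down to different gate-level electron flows, as the post describes.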

The underlying physics of course allows this logic to operate, indeed it enables it to happen. Essentially the same process of decision choice between branching possibilities happens all over the place in molecular biology (see Gilbert and Epel: Ecological Developmental Biology for details).

George


    Dear George Ellis,

    yesterday evening I watched your FQXi talk about existence on YouTube (Copenhagen meeting). I am impressed by your deep and tough-minded manner of analyzing and arguing against some approaches that claim to have solved the problem of time and the problem of causality in QM.

    I have now read your latest paper "Space time and the passage of time", and it confirms my impression of your working style and the results you have achieved.

    Because my own interests concerning the fundamentals of physics lie in the problem of causality in QM, QM's different interpretations, and the role time could play within those frameworks, allow me to make some remarks on the issue from my point of view:

    For me, the interpretation of the time-dependent (but also the time-independent) Schrödinger equation as a real QM-physical wave dynamics is a tricky illusion. From my point of view, this wave-function can only be interpreted as an indicator of the existence of a new principle: namely, the principle that QM always renders the past consistent with what takes place in the present (by "present" I mean every measurement/interaction that fixes the former future possibilities into a definite single result). In this sense, QM, in my opinion, is able to change past facts ("physical retrodiction"), for example measurement devices (like double-slit apertures and so on), to be consistent with, for example, delayed choices in delayed-choice experiments. So all physical hints/marks are rendered consistent with the formerly unknown future that was given by the uncertainty of QM, as well as by the uncertainty of the possible future events that may take place in the macro-realm.

    For me, this all means that the Schrödinger wave-function, if at all existent in some physical sense, does collapse, because it cannot be continued in the new context (by "context" I mean the change of past facts, enabled at the time of an interaction via entanglement of a physical device with a certain other device that was yet unknown at the time a "particle" was entangled with the first device). The Schrödinger wave-function cannot be continued in the new context because, at the moment the system gets decoupled from entanglement, the causal context has changed, and the wave-function, interpreted as deterministic, makes no more sense to continue. So we now need a new wave-function to further describe the system's possible evolution until a new interaction/measurement occurs ... and so on.

    What my proposal/interpretation really does is show that QM could "mimic" causality via intermediate steps of instantaneous information transfers between the "deterministic evolution" of the "wave-function" (surely with the help of a yet unknown "principle" of *which choice* is best to make and due to what criteria - surely criteria that have something to do with achieving macrocosmical causal consistency).

    In my framework, this mechanism of "mimicking a strict determinism" has a cause - and therefore is causal - even though in a different sense than we usually use the word "causal". Usually we think of this word as a whole chain of strictly deterministic events, and at the end we arrive at paradoxical results, like for example time-symmetry in QM, or the future partially influencing the present, or our brain's single steps and our thoughts being fundamentally pre-determined, and so on.

    I argue that none of those paradoxes is necessary.

    For example, in my approach the time-symmetry of QM is just an illusion, because we interpret the wave-function *without* taking entanglement into account. Surely, the many-worlds interpretation (Everett-style) does take entanglement into account, but the price of this interpretation is that it leads itself ad absurdum by assuming a strict determinism that at the same time can (must?) serve as a way out of the non-locality trap. By assuming a strict determinism, the door to some adventuresome conspiracy theory is opened. Surely, the MW interpretation does not necessarily incorporate a conspiracy in reference to - for example - the EPR results and Bell's theorem (because the MW interpretation does not deny entanglement). But another adventuresome conspiracy occurs: namely, why can we humans come to consistent insights about nature's behaviour if there is overall determinism in nature? To consistently explain that we are nonetheless able to realize that logic and the behaviour of nature have some huge intersection and are consistent/coherent in many regards, one really has to assume very, very special initial conditions at the beginning of our universe. But that would indicate that QM probabilities must in fact be interpreted as physical (or meta-physical) ingredients enabling the permanent consistency of the whole universe/multiverse.

    So, what I have done with my interpretation (in my humble opinion) is to eliminate the multiverse in favour of multiple "particle" interactions that allow a non-deterministic view of the future, which is open; explain the need and purpose of entanglement for this task; explain why the "wave-function" *must* "collapse" (if it should be at all physically existent and not merely a falsely interpreted mathematical statement); and show why time really does flow and why time and space are emergent features/results of QM activities.

    I would be very happy if you could take a look at my own essay, and at the same time I want to apologize for predominantly writing about my own work here. I read your essay very carefully, and also your arXiv paper mentioned above, and after having watched your talk I was simply excited about your lines of reasoning, which seem - at least for me at this point in time - to be very similar to my own.

    Best wishes,

    Stefan Weckbach

      Dear Stefan

      I think your approach is a sensible approach that takes the problematic issues seriously and tries to make sense of how quantum mechanics works, and in particular how, according to delayed choice experiments, it "reaches back" into the past in an apparently acausal way.

      "QM could "mimic" causality via intermediate steps of instantaneous information transfers between the "deterministic evolution" of the "wave-function" (surely with the help of a yet unknown "principle" *what choice* is to make is best and due to what criteria - surely a criteria that has something to do with achieving the macrocosmical causal consistence)." This mechanism you sketch out is, I believe, more or less in accordance with what I have written about in my post above: Oct. 2, 2012 @ 07:24 GMT: yes it is about choice between competing possibilities. Please see that post.

      I'll try to get time to comment on your thread.

      George Ellis

      Dear Professor Ellis,

      First, I'd like to note that the idea that both bottom up and top down causation (in the sense you defined it, which seems adequate to me) are necessary to tell the whole story about causal interactions seems so obvious to me that I am rather surprised that anyone would seriously debate it.

      What I would be interested to know is whether you have considered that dimensionality may itself play an as yet unrecognized role in top down causation. More specifically I mean this: What if the dimensions that characterize our reality are emergent, with some coarse (and as yet to be precisely defined) correlation with scale? Then it would seem to me that possibly the largest scale phenomena for which we have difficulty finding an adequate explanation may also be unrecognized top-down effects, in effect manifestations of events that must be properly explained within a higher dimensional analog.

      I realize that this sounds rather vague, and that is partly because I don't yet know enough about cosmology, as I have focused my work in the opposite direction. It turns out that when one attempts to represent an object in a higher-dimensional space, it exhibits two phenomena which one finds in (and which to some extent define) quantum mechanics: superposition and collapse.

      The "superposition" in this case is due to the fact that any property of the lower-dimensional object which requires for its expression the same number of dimensions as the space in which it is represented must take into account all of the possible values for the dimension that the object lacks. The "collapse" in this case is due to the fact that if one "attributes" to the object extent along the dimensions it lacks, then it is no longer necessary to take into account all of the values, the superposition reduces to the representation of the property which has just the value due to the attribute extent.

      (I apologize for this awkward wording, it comes about because I am trying to express this in the greatest generality. My submission to this contest contains a simple and I believe very accessible illustration of this idea.)

      In any event, if this is what lies behind quantum mechanics (i.e. that we are "observing" prior to a "measurement" objects that exist in a lower-dimensional analog of spacetime) is it not possible that at the other extreme of scale a similar process might lead to phenomena we cannot adequately describe using our theories which hold strictly for spacetime events? If so, then dark energy and possibly dark matter may be the top-down phenomena par excellence.

      I enjoyed your carefully crafted essay and am glad that you chose to participate.

      Sincerely.

      Armin


        Dear George Ellis,

        thank you so much.

        I think your approach of taking adaptive selection seriously is as promising as it is important. Selection mechanisms cover all parts of science and biology, and - meaningfully! - also human cognition (assumptions taken for granted, the working of our senses, etc.), statistics, and so on.

        Your elaboration on that helped remind me how important the principle of selection is, and how easy, on the other hand, it is to understand/contemplate it - but also to forget it!

        You stated in your talk, mentioned by me above, that in our dreaming states we incorporate what is important for us to know/incorporate. What's important is not conscious, but unconscious. This is very interesting in my opinion, because everyday logic seems to say that what is important has to be present in our consciousness all the time.

        Well, in my opinion, what really has to be present in our consciousness all the time should be the fact that human minds permanently select some assumptions in favour of others and build their world around them. This happens in society as well as in science.

        The principle of adaptive selection, as you explained it, is, in my opinion, surely more than an assumption. It reflects the coherence of the external and internal worlds of human beings. The fact that adaptive selection plays a key role in so many branches of science is meaningful in its own right - in my opinion.

        Again, congratulations on your current essay as well as on your arXiv papers, which I enjoyed very much!

        Best wishes,

        Stefan


        I see on your thread that other people believe you caused a major decline in their rankings by giving them a low score.

        Did you do the same to me?

        George Ellis


        Dear Armin

        Thanks for the nice comments.

        You say "possibly the largest scale phenomena for which we have difficulty finding an adequate explanation may also be unrecognized top-down effects, in effect manifestations of events that must be properly explained within a higher dimensional analog." I probably agree with that if you have a fibre bundle over spacetime in mind; otherwise nit so sure about it. The problem is that dimensions are discrete: it's a big notch going down a dimension.

        You state the following: "The "collapse" in this case is due to the fact that if one "attributes" to the object extent along the dimensions it lacks, then it is no longer necessary to take into account all of the values, the superposition reduces to the representation of the property which has just the value due to the attribute extent." Sounds just like the idea of adaptive selection (see the post above at Oct. 3, 2012 @ 04:44 GMT); it seems concordant with this.

        Best wishes

        George Ellis

        This is probably my last post unless something comes up that deserves a reply.

        The negative side has been the hate postings from people who resent that I am a mainstream scientist, and postings by one individual that had no serious scientific content; they were just trashing exercises. I just delete these. There have also been numerous postings that have had nothing to do with this essay; their only message has been either "look at my own essay", or "special relativity is wrong", or both. My post of Sep. 28, 2012 @ 07:41 GMT is as much as I am prepared to do about responding to the special relativity deniers.

        Interesting but disturbing was the hostile attack by a physicist who did indeed raise some potentially significant points. This has been useful to me in terms of alerting me to issues I have to deal with. I answered him in full, summarising my various responses in my posting of Sep. 23, 2012 @ 08:05 GMT; he has never replied. This is I suppose not surprising, because I have shown his main contention to be wrong. The disturbing part was the completely closed mind these postings displayed. This author was a classic example of the "shut up and calculate" school: don't think about the bigger picture, and attack with hostility those who do. This demonstrates a serious failing of present day physics education: this narrow minded kind of approach will never produce great science.

        Then on the positive side I have had really nice discussions with many respondents, particularly T H Ray, Georgina Parry, J C N Smith, nmann, Robert H McEachern, Ian Durham, and Frederico Pfrimer.

        I have been amazed at how high my essay has risen in the ratings, as it is not in the mainline of what is usually discussed on the FQXI website: quantum theory, quantum gravity, the nature of time, the start of the universe, etc. And this raises the final issue, which is indeed a big question:

        * Are questions to do with complexity to be regarded as foundational questions?

        My essay has assumed they are. If it ranks high, it is because many others here at FQXI agree with me. Thank you for your support!

        George

        Hi George:

        Your description of science's true goal is very limited and underestimated. It is not merely to understand the mechanisms whereby physical things work (the natural sciences) and how living beings exist and function (the life sciences), but also to understand how the universe (beyond matter) works and how the human mind and consciousness work. Once these are understood, a wholesome science will be able to naturally provide meaning and purpose to the universe and life in it.

        Because of this underestimation of science's true capability, the mainstream has imprisoned itself in a vicious circle of material-only theoretical and testing pursuits, leading to inconsistencies, singularities, and irresolvable paradoxes (QM & GR) that leave 96% of the universe unexplained, with a virtual dead end. The ultimate test of any theory is its universal, and not just worldly, prediction. Experience with QM and GR shows that even countless worldly experiments or observations are no guarantee of their universal validation.

        Further, an operational framework for the top-down approach will remain undefinable until the "Top" is describable in true scientific terms. The "Top" here means the ultimate universal reality, or Cosmic Consciousness, or Free Will, through which everything emerges or into which everything merges. We cannot a priori underestimate science's capability of describing the holistic top-down approach that would naturally reveal purpose and meaning to the universe and life in it.

        However, if this is not understood as the true goal of science, we would promote demeaning of science by default.

        Best Regards

        Avtar Singh

        Dear Mr. George Ellis,

        I found your essay very late, so may I please ask you to give your opinion on my essay, although it is the end of the contest.

        You wrote that the cosmological time arrow is a top-down effect. This is also my idea, written in my article, section 6.

        I agree also that the Mach principle exists, and thus that Newton's bucket can be explained.

        I also suggest conscious decisions; they too are top-down causation.

        Similar ideas of top-down causation were written also by Mrs. Walker on this forum.

        Best regards, Janko Kokosar


        Thanks Tom for this very positive comment, much appreciated.

        George

        Dear Janko Kokosar

        Thank you for that. We agree on the top down nature of the cosmological effect in determining a consistent local arrow of time, and the top down nature of conscious decisions. So I am happy our essays are in concordance.

        Mach's principle is not so clear to me. There are rotating solutions of the Einstein equations where Newton's bucket result does not hold, so again, as with the globally consistent arrow of time, a selection of solutions is required for this weak version to be true. That is top-down action on the local scene from distant matter: but it does not matter much for local physics on Earth that the distant stars are at rest in your local non-rotating rest frame.

        Of course it does matter very much that inertia *exists*, and nobody has a viable strong version of Mach's principle that derives this from cosmological conditions - which was Mach's original hope.

        Yes the paper by Mrs Walker is very nice.

        George Ellis