Hi Dr. Brown,

Personally, I am uncomfortable commenting on MOND as I'm not an astrophysicist. I would like to know why, according to Gisin, "covariant nonlocal variables" might be irrelevant near the Planck scale. I will send you an email within the next day or so.

Thank you,

Dale C. Gillman

Dear Antoine,

While reading your essay I made the following remarks:

"In Bohmian mechanics, the reality is made of particles which flow along a natural probability current built from the wave function" quite right, I can almost agree with that. But is it still Bohmian thinking when you decide to name the particles "emergent phenomena" and declaring that the probability current is the source of these emergent phenomena and then dividing strictly the source and its emergent phenomena?

Your paragraph "making collapse objective" is a way of trying to explain the material existence of our reality without the intervention of a conscious agent. This is a quite different approach as mine...

"Entanglement becomes a force". Both Alice and Bob are a conscious agent that are communicating...so they are in space-time, which is in my perception an emergent phenomenon. The probabilities of entanglement are branches in the time and space-less source of these emergent phenomena, one branch leading to the experienced reality where Alice and Bob are experiencing entanglement and another (or more) that return inside its source (as a hidden variable).

"We started with a model with jumps, non-linearly collapsing the wave-function. In such a universe, randomness and discontinuous behavior undoubtedly lie at the deepest level of reality." This deepest level of reality is this the source of ALL emergent phenomena? Are "jumps" needed to create an emergent phenomenon? Of course these are all questions that I also try to answer. Your answers lie (partly) in the "continuous GRW", and I think that my answers will be difficult for you to accept because they are more philo and less math.

Your treatment of carving and painting is very interesting; I enjoyed it. When you are painting, the painted surface contains an infinity of carving possibilities. "Sure, the collapse model paints the branch chosen, which is certainly ontologically useful, and makes the story - the way we connect mathematics with reality - far simpler than in the Many-Worlds approach" Like me, you are searching for a new approach to the essence of reality (and you bring forward very good arguments). I have come to the same conclusion that these material macroscopic superpositions don't exist at all.

All in all, for me it was a very interesting and useful read that I rated highly.

I hope you will take some time to read MY ESSAY too and leave a comment. I am aware (because of the ratings, from a 10 to a 2) that people either "love" it or "hate" it.

Thank you

Wilhelmus

Dear Prof. Antoine Tilloy,

Thank you for an excellent essay.

I did not do much work on wave function collapse; mostly my work was in cosmology and astrophysics.

Undecidability, uncomputability, and unpredictability are very much undesirable properties and outcomes of any theory. That theory might have been developed by a very reputed person or by a group of well-educated and knowledgeable persons. There is no point in pouring resources, money, and highly educated manpower into a theory when it fails on the above three points.

I just elaborated on what freedom should be available to an author when "real open thinking" is supported. Please have a look at my essay:

"A properly deciding, Computing and Predicting new theory's Philosophy"

=snp.gupta

12 days later

Dear Antoine

Thanks for your nice essay, which was very illuminating for me. I see that despite many years of research, many people are still concerned about the measurement problem. I also notice that nobody has taken into consideration in their models how much the scale of the system under study matters for the measurement. In my experience, macroscopic systems inherit quantum effects although they are not influenced by the measuring process, and thus behave as classical systems.

I just have a question. Are you aware of stochastic electrodynamics theory?

Best Regards

    4 days later

    Dear Israel,

    Thanks for your comment. I think that the scale of the system under study, as you say, is rather crucial to understanding the concept of measurement in any reasonable reconstruction of quantum mechanics. However, I think this is a rather standard belief in the foundations community, and not something nobody takes into consideration. All the models that I know use decoherence (hence a large-system effect) to explain measurement at some point. In these accounts, decoherence itself is not sufficient, but it is necessary to make things work properly. As I argued in this essay, even for collapse models, decoherence plays an important role in the emergence of classicality.

    As for SED, I am aware of it, and have worked on relativistic collapse models that bear similarities with it. But I never got into the details of the original formulations, and at least when it is explained at the layman level (SED is just ED with a stochastic EM field), it seems unclear how genuine quantum non-locality could emerge.

    Best regards,

    Antoine

    Sounds great, thanks for your reply. I appreciate it. Good luck in the contest!

    Israel

    22 days later

    Dear Antoine,

    I have had a most enjoyable morning thinking about your essay. What made me really interested is that I believe I may have discovered one of those alternative interpretations of the maths of Quantum Theory that you describe. But, reading your essay, I realized it is different from the others in that it is neither a model with wave function collapse nor a model without - it is a strange mutant somewhere in between.

    So, it describes particles that make lots of little jumps, which is the hallmark of wavefunction collapse, but achieves this by the constant creation of new wave functions without having to collapse the old ones.

    These jumps are probabilistic, and this jumping behaviour is a key part of determining the behaviour of the particle, so the jumping behaviour makes lots of predictions that can be tested.

    This is all done using the conventional maths of Quantum Theory but instead of starting by thinking about the behaviour of the electron (a rather complicated particle), it instead starts with the much simpler photon.

    There are more details in my essay.

    I would be interested to know what you think.

    All the best,

    David

    7 days later

    Dear Antoine Tilloy!

    Your article addresses real fundamental issues. It's great! We rated your essay at the maximum (10 points); we liked everything! The images you found are very expressive. We think that the basic metaphysics of this issue is the opposition of the discrete and the continuum. Collapse is a gap. We think that here we can talk about an ATEMPORAL EVENT (Hegel's 'Wesen ist gewesen'). And at the macro level, we can say that the acceleration of a macrobody occurs spasmodically. It is possible to begin classical kinematics with this axiom.

    Truly yours,

    Pavel Poluian and Dmitry Lichargin,

    Siberian Federal University.

    Dear Antoine,

    happily, I stumbled across your essay before the close of the contest. You provide an intriguing discussion of the GRW model that was, in this form, completely unknown to me. Granted, I have never studied collapse models in detail---essentially, to me, they're an option that I don't expect will be borne out, but with which I would be completely happy if it were (it would mean my own ideas are wrong, but learning that would mean progress on a very tricky question).

    Basically, my naive understanding hitherto was that a collapse model yields something like the dynamics of an open quantum system as fundamental, which produces the loss of coherence needed to observe a definite outcome. Thus, collapse theories, I thought, have at least one distinct advantage over other 'interpretations' in that they typically lead to different predictions---a 'large enough' (in whatever sense) interference experiment would no longer show interference fringes at some point, if collapse theories were right.

    Your essay seems to put a bit of a damper on this view: if I understand you correctly, you essentially consider the purified state of the system under consideration, and show that dependent on what we consider happening to the ancilla, we obtain different metaphysics---different pictures of how stuff works, deep down. While empirically, we can't distinguish between these options---hence, introducing empirical undecidability---they do paint a different picture of 'what's really going on below', as Leonard Cohen put it. So, the ontological ambiguity one might've hoped removed by the collapse theory seems to make its comeback, after all!

    However, I'm not quite clear whether I understand why this is necessary. Wouldn't it be simpler to just *not* have an ancilla, at all? Or, wait, perhaps I get it now: the ancilla is really just a guide for the eye, and you can write different metaphysics even for the pure reduced dynamics---right?
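    (To fix ideas, here is a minimal sketch of the dilation picture I have in mind; the notation is mine, not yours, and I may well be misreading your construction. Any completely positive, trace-preserving evolution of the system S up to a time t can be written as

    \rho_S(t) = \mathrm{Tr}_A\!\left[\, U_t \,\big(\rho_S(0)\otimes|0\rangle\langle 0|_A\big)\, U_t^{\dagger} \right],

    for some ancilla A and some global unitary U_t. The same reduced dynamics of S is compatible with many choices of A and U_t, and with many stories about what 'really happens' to A, whether it is measured, discarded, or merely fictitious, which seems to be exactly the ambiguity at stake.)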

    Anyway, as you can see, I'm still in the process of coming to terms with your argument---in part, perhaps, due to my limited familiarity with collapse models, but also because you've presented a highly original perspective on these issues, and presented it very well.

    You've chosen to focus on undecidability of a different kind than most of the entries here. I think it's a good thing to have a little variety, but I'm also reminded of an argument---perhaps just an analogy---by William Seager that might suggest this empirical undecidability and the mathematical variant are closer than one might think---essentially, because one can re-cast the phenomenon of Gödelian undecidability as the unavoidable existence of multiple inequivalent models (in the sense of mathematical logic) of the same theory. As Seager puts it in his Theories of Consciousness, Chap. 15:

    "[T]here must be a model of the axioms of arithmetic that make [the Gödel sentence] G true. And so there is. But, equally, there must also be a model of the axioms that makes G come out false. Else, by the completeness of first order logic, G would be provable from the axioms.

    [...]

    The analogy [...] is that science cannot specify more than the bare structure of the world, rather in the way the axioms of arithmetic cannot, obviously, specify more than the structure which is compatible with all its models."

    I have, on occasion, pondered whether this might be more than just an analogy---or perhaps, what this analogy tells us about the relation between our knowledge of a domain and the domain itself. If such speculation amuses you, you might find something of value in my essay, or the paper introducing the framework.

    Best of luck in the contest!

    Cheers

    Jochen

    It is unfortunate that the equation describing the probability of an event was called a "wave function." It is actually a description of probability, and it defines the CURVE expressing various probabilities. A probability curve is not a physical thing that "collapses", with or without a pop. When a probability becomes an actuality, the curve function is simply resolved.

    You mentioned Schrodinger's Cat. Instead of a cat, place a clock in a glass vacuum vessel inside the box. The clock can only run in a vacuum. If a particle decays and breaks the glass, the clock dies. Open the box and see for yourself: if the clock stopped, you can see by the indicated time that it had nothing to do with your wave-function collapse, and that it was always either running or not. "Superposition" is a superposition of probabilities. Let's get real.

    Quantum theory (not quantum mechanics) has been just another expensive deviation in the history of science.

    Dear Antoine,

    I very much enjoyed your essay (as I have previously enjoyed your papers and our discussions).

    So, let me take the opportunity to ask for your reaction to my challenge to one particular aspect of your position. It concerns an issue which we have previously discussed to some extent, but I think your essay gives us the opportunity to get a bit deeper into it.

    I am referring to the robustness of the linearity requirement. In fact, I would like to challenge the generality of the following statement: "The price is stiff: non-linearity is allowed at the wave-function level, but it has to vanish exactly at the density matrix level".
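    (To make sure we are talking about the same structure, here is the minimal sketch I have in mind, for a single Hermitian collapse operator L with collapse strength \lambda; the notation is mine, not taken from your essay. The wave function obeys the non-linear stochastic equation

    d|\psi_t\rangle = \Big[ -iH\,dt + \sqrt{\lambda}\,(L - \langle L\rangle_t)\,dW_t - \tfrac{\lambda}{2}\,(L - \langle L\rangle_t)^2\,dt \Big]\,|\psi_t\rangle, \qquad \langle L\rangle_t = \langle\psi_t|L|\psi_t\rangle,

    which is manifestly non-linear in |\psi_t\rangle through \langle L\rangle_t, and yet the ensemble average \rho_t = \mathbb{E}\big[\,|\psi_t\rangle\langle\psi_t|\,\big] obeys the perfectly linear master equation

    \frac{d\rho_t}{dt} = -i[H,\rho_t] - \frac{\lambda}{2}\,[L,[L,\rho_t]].

    It is this exact disappearance of the non-linearity upon averaging whose generality I want to question in what follows.)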

    I know you have acknowledged explicitly that N. Gisin's no-go theorem (like all no-go theorems) is only as strong as its assumptions (which include, as far as I understand, no consideration of possible limitations on what can in fact be measured, other than those imposed by causality itself, as well as other possibilities that you raised yourself, like deviations from Born's rule).

    What I want to consider next are some concrete reasons to doubt the strict validity of that linearity as a characteristic of viable quantum theories.

    Let's start by assuming that some relativistic version of quantum theory involving spontaneous collapse is the adequate description of nature, to a very good approximation, at least in regimes where spacetime can be described by general relativity and the objects we are describing in quantum terms are so small and light that their gravitational effects can be ignored, i.e. they are just test objects as far as gravity is concerned. If we depart substantially from the last condition there are of course good reasons to think that we would eventually enter a regime where a full quantum theory of gravity (QG) will be needed, and that quite possibly the standard notions of space-time would be lost. At that point, of course, our collapse theory would be meaningless as well, as it is formulated on the assumption that we have available a suitable notion of time (the nonrelativistic versions of spontaneous collapse theories are Schrödinger-like equations prescribing the time evolution of quantum states) or a notion of spacetime (as used, say, in the relativistic versions such as Bedingham's, Tumulka's or Pearle's).

    So let me consider, instead of jumping right into that QG regime, an idealized one-parameter set of situations connecting the regime where gravity might be ignored, passing through one where it might be suitably approximated by its Newtonian description (as you have considered yourself), to one requiring that full QG regime where the very notion of spacetime itself might be gone. Along that one-parameter set of situations I expect we would encounter situations where spacetime retains its usual meaning but where, still, truly general relativistic effects become relevant. As you know very well, general relativity involves fundamental nonlinearities. Moreover, GR implies that the state of matter, by affecting the spacetime geometry itself, does also, in general, affect its causal structure. Thus, it seems that there should be, along that one-parameter set of situations, points where we have both: a notion of spacetime (so we might still be in the realm where one could sensibly use some version of spontaneous collapse theories) and, at the same time, some nonlinearities arising from the GR aspects of the problem.

    That would, it seems to me, imply that at some point the linearity must be broken, that the superposition principle will have to give way to something else. The superposition principle would then survive as a good approximation valid in situations where gravity could be ignored, or at least where it could be treated within some linear approximation, such as that provided by Newton's theory, or even linearized GR. In those situations the theory would indeed reduce to one satisfying linearity at the density matrix level. In more general situations it would not.

    Now, how could something like this avoid being ruled out by Gisin's no-go theorem? One possibility is that the experimental arrangements envisioned in the theorem (as they would, in the pertinent case, certainly involve important gravitational effects) would be impossible to realize as a result of the modified theory itself. That is, the arrangements devised so that Alice could send a faster-than-light signal to Bob might involve, for instance, setting up some sort of superposition of energy-momentum distributions, corresponding to spacetime superpositions, which according to the theory would simply be impossible to achieve. One might imagine, for instance, that according to the theory a collapse would have to take place, with probability 1, before Alice and Bob are able to complete the assembly of the experimental setup. In fact there is precedent for the impossibility of certain types of measurements (not involving gravitation) that at first sight seemed quite feasible [see for instance Y. Aharonov, D. Albert, "Can we make sense out of the measurement process in relativistic quantum mechanics?", PRD 24, 359 (1981) & R. Sorkin, "Impossible measurements on quantum fields", in Directions in General Relativity: Proceedings of the 1993 International Symposium, Maryland, Vol. 2, 293 (1993)]. Another possibility, taking us a bit outside what I have been considering, is that the attempt to create the set-up would involve, in a sense, creating something like a "spacetime causal structure which is not well defined", so that in the end, whether or not a signal was sent faster than light between Alice and Bob would remain undecidable. Actually, things might turn out differently and the causal structure might end up being emergent, and defined depending on the "outcome" of the experimental development itself. Something of this nature is exemplified in ["Large fluctuations in the horizon and what they teach us about Quantum Gravity and Entropy", R. Sorkin and D. S., CQG 16, 3835 (1999); "When is S = 1/4 A?", A. Corichi & D. S., MPLA 17, 1431 (2002); and "A Schrodinger Black Hole and its Entropy", D. S., MPLA 17, 1047 (2002)]. Another interesting option is that in the context at hand (and as a result of the fact that in the semi-classical description one would be ignoring the quantum nature of the gravitational degrees of freedom), Born's rule would end up being effectively modified, as you suggested yourself (but in what I took you to consider a highly improbable development; please correct me if I read it wrongly). In other words, to expect the unexpected does not seem out of place when dealing with the interface of gravitation and quantum theory.

    In fact, it seems to me that several of the steps used in the derivations presented in the essay, in particular those that involve taking averages over ensembles, ought to be revised in the kind of gravitational context I am describing, for various reasons. To start with, we do not even know how to make sense of the sum of two spacetimes, and much less of "the average of various spacetimes", and even if we did, it seems quite likely that the intrinsic nonlinearities of GR would invalidate some of the usual averaging procedures.

    It is of course a tall order to actually propose such a theory, but accepting that we might have to consider breaking with linearity (even at the level of the density matrix equation) might be the first necessary step.

    I clearly do not expect you (or anybody at this time) to have any clear and definite answers to the questions raised by the above considerations (I might be wrong, of course), but I am just curious to see what your first reaction would be.

    Again, congratulations on a very nice essay (which I will acknowledge with a very high mark).

      Antoine,

      Fascinating approach, beautifully presented, but perhaps you might give me your view on whether this 'DFM' mechanistic sequence, recently published, may have any effect on your last-line conclusion:

      Pairs retain a common polar axis, so are anti-parallel. A's polariser electrons are interacted with at some random tangent-point latitude on a Poincaré sphere. We know rotational momentum varies by cos(latitude). We propose that 'curl' changes inversely: (-)1 at the poles, 0 at the equator. Rotation on all 3 axes is of course possible! Complex paired vector additions then give a new polarisation and an effective 'elliptical major axis orientation' of the re-emission. The value changes as cos(theta_latitude).

      A SECOND interaction with the (2-channel) photomultiplier electrons then adds a further cos(theta) value to the first, so we have cos^2(theta) subject to each detector angle setting. The 'click' rate depends on which channel aligns with the major axis (intensity amplitude).

      If you know your subject you should find that this reproduces QM's data set and the Dirac equation with no A-B communication required, indeed exactly as John Bell anticipated. Let me know.

      My essay this year touches on that, referring to the fuller derivation last year, but also on other consequences (a paper is in peer review now).

      Do ask any questions or give your advice.

      Many thanks

      Peter

      Dear Daniel,

      Thank you very much for your thoughtful comment on my essay, it means a lot.

      I have been thinking quite a lot about this problem of linearity, so before trying to answer your points on it, let me try to explain how my view has evolved on this question. As you have perhaps noted, my discussion of this in the essay is at the same time more careful and shorter than in previous articles.

      At first, I encountered this need for linearity at the density matrix level when we were constructing semiclassical theories of gravity with Lajos Diosi. There, the main interest of our approach was precisely to remove non-linearity. And so to me, the achievement felt so amazing that I was convinced all non-linear approaches were doomed. I still think it is great to be able to remove non-linearity, and that having a theory that is linear at the master equation level is much more convenient, everything else being equal.

      But at the same time I realized, and this is the core of the present essay, that asking for linearity, although it makes everything simpler, does not allow us to go beyond orthodox quantum theory, at least at the empirical level (namely, all the experimental results are reproducible by a quantum theory, only not necessarily by the Standard Model). And so, by asking for a simplification, linearity, we remove at the same time anything genuinely new that could have happened empirically. Naturally, the metaphysical interest remains (the measurement problem is solved), but the proposal becomes much less radical than one might have hoped.

      And so, while I think the issues of non-linearity should not be underestimated, I think it is important to see also how requiring linearity removes the empirical novelty of these collapse like approaches.

      Now, let me explain why I remain convinced that the price to pay for fundamental non-linearity is much higher than people think. Nicolas Gisin's formulation of the no-go in terms of faster-than-light signaling is probably the most impressive, but in the end I do not think faster-than-light signaling is the main issue. In essence, I believe the problem is more one of predictability, and of the ability to separate systems into subsystems for all practical purposes. With non-linearity, the statistics of a subsystem is influenced by what happens arbitrarily far away, so effectively we have a force with no limit on its range. Further, while non-linearity can start very small, there is no reason to expect that it typically remains small macroscopically. Just think of the collapse process in GRW: very small effects for microscopic systems, but massive modifications for macroscopic bodies (because of linearity, however, these massive modifications have no empirical signature). In general, the weirdness coming from non-linearity has no reason to be confined to microscopic degrees of freedom, unless there are precisely the right cancellations. If non-linearity comes from gravity, you can expect the macroscopic non-linear corrections to be of the same order of magnitude as gravity itself, hence dozens of orders of magnitude larger than quantum mechanical effects for macroscopic bodies.
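      (A toy version of the mechanism, with notation and assumptions of my own: suppose Alice and Bob share the singlet state, so that Bob's local density matrix is \rho_B = I/2. If Alice measures in the Z basis, Bob holds the ensemble \{|0\rangle, |1\rangle\} with probability 1/2 each; if she measures in the X basis, he holds \{|+\rangle, |-\rangle\}. For any linear evolution \mathcal{E}, Bob's averaged state is the same in both cases,

      \tfrac{1}{2}\mathcal{E}(|0\rangle\langle 0|) + \tfrac{1}{2}\mathcal{E}(|1\rangle\langle 1|) = \mathcal{E}(I/2) = \tfrac{1}{2}\mathcal{E}(|+\rangle\langle +|) + \tfrac{1}{2}\mathcal{E}(|-\rangle\langle -|),

      but for a non-linear map N acting on individual wave functions there is no reason for the two averages to coincide, so Bob's local statistics would generically depend on Alice's distant choice.)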

      How would we see these brutal modifications, or convince ourselves that they don't exist? It's very hard: with non-linear dynamics, the Born rule is no longer valid (a priori not even approximately valid). So you can't trust the wavefunction; you have to go back to the local beable level. Non-linearity also forbids the separation into subsystems. So you would have to prove that some sort of approximate Born rule can be derived for appropriate local beables, and that, although systems cannot be exactly separated into subsystems even if they are far away (decoherence is not enough), you can still do it in most cases (because of a subtler non-linear decoherence). Usually, in approaches based on local beables like Bohmian mechanics, you have a rather straightforward argument (e.g. equivariance) to interpret the wavefunction probabilistically, but in a non-linear theory you have to do it from scratch and the best you can hope for is that it holds approximately. Frank Laloë has tried to do this in a recent paper, where he tries to see whether the equivariant distribution is stable with respect to a small non-linear perturbation, at least for large bodies in the model he introduced. At my current level of understanding, I don't think that it works, but it is certainly an attempt in this direction.
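      (By equivariance I mean the standard property, sketched here for a single non-relativistic particle with my own notation: the configuration follows dX_t/dt = v(X_t,t) with v = (\hbar/m)\,\mathrm{Im}(\nabla\psi/\psi), and the Schrödinger equation yields the continuity equation \partial_t|\psi|^2 + \nabla\cdot(|\psi|^2 v) = 0, so an ensemble of positions distributed as |\psi_0|^2 at t=0 stays distributed as |\psi_t|^2 at all times. A non-linear modification of the dynamics generically spoils this exact property, which is why the probabilistic reading has to be rebuilt from scratch.)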

      So this was to insist that non-linearity makes things difficult, because all the tools we use to make predictions break down (and it's not clear they break down only approximately, because the violations may be huge for a measurement apparatus). I would find fundamental non-linearity interesting, because it would mean we could falsify quantum theory itself and not just its specific quantum field theory instance. But is there a reason that non-linearity has to exist, because of quantum gravity for example? There, I am less convinced than you.

      Quantum gravity is hard to discuss because it is not precisely defined yet. But if quantum gravity can ultimately be defined as something that looks like a slightly weirder quantum field theory (as String Theory aims to do, for example), then the non-linearity of the field equations has no reason to be translated into non-linear dynamics on Hilbert space. The (quantum) self-interacting scalar field has a non-linear field equation, yet its dynamics is purely unitary and can be rigorously constructed (at least in 1+1 and 2+1 dimensions). I don't understand why the non-linearity of gravity is different, for example, from the non-linearity of non-Abelian Yang-Mills.
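      (To spell the distinction out on a standard example, with my own notation: for a self-interacting scalar field with Hamiltonian H = \int d^{d}x\, \big[\tfrac{1}{2}\pi^2 + \tfrac{1}{2}(\nabla\phi)^2 + \tfrac{1}{2}m^2\phi^2 + \tfrac{g}{4!}\phi^4\big], the Heisenberg field operator obeys the non-linear equation (\Box + m^2)\phi + \tfrac{g}{3!}\phi^3 = 0, while the state still evolves linearly, |\Psi_t\rangle = e^{-iHt}|\Psi_0\rangle. Non-linear field equations and linear dynamics on Hilbert space are perfectly compatible.)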

      Another option would be that quantum gravity brings weird causal superpositions, and situations where there is some form of faster-than-light signaling, or something hard to interpret. But this would not be related to non-linearity in the sense we discussed, and in this context I don't really worry about faster-than-light signaling (because indeed, it probably does not yield anything that can be exploited, maybe the observables can't be measured, and the effect does not grow into unacceptable macroscopic corrections because of decoherence). Again, it's not so much an argument for non-linearity as an argument to say that some of the mildest consequences of non-linearity are acceptable, since they could appear somewhere else.

      Now in the essay, I merely want to state why we came to accept linearity, I think for good reasons (even though not necessarily with watertight arguments), but explain what the less appreciated empirical consequences are. Non-linearity makes things too difficult, linearity makes things simple but almost unique.

      Again, many thanks for your comments, they had me thinking quite a lot. I hope we get a chance to discuss more in the future.

      All the best,

      Antoine

      Dear Antoine,

      Thank you very much for your kind and thoughtful reply.

      It would indeed be nice to have more opportunities to discuss this and other issues.

      Let me just react here to one of your statements:

      " I remain convinced that the price to pay to have fundamental non-linearity is much higher than people think. Nicolas Gisin's formulation of the no-go in terms of faster than light signaling is probably the most impressive, but in the end I do not think faster than light signaling is the main issue. In essence, I believe the problem is more one of predictability, and ability to separate systems into subsystems for all practical purposes. "

      I agree with you that linearity makes things manageable as far as our ability to analyze things is concerned. In particular, as you note, the 'for all practical purposes' (FAPP) separability is extremely convenient. But why should physics be that way? We already know (as shown for instance by Bell's theorem and related results) that locality, a premise that seems well tied to separability, is not a fundamental feature of nature. It took us humans a long time to come to terms with that. Why should it hold at the practical level in general? In fact, it seems to me that if things in nature were so, we would have to consider some kind of fundamental conspiracy: the world is nonlocal, there is no separability, but nonetheless the laws of nature are such as to hide those facts from us to a dramatic and universal extent.

      Does it not seem more natural to think that we happen to live in a region of the universe where that separability works, FAPP, to a very large extent, simply because it is only under such conditions that life might evolve? We would thus be deceived by a clearly understandable but contingent condition of our immediate environment, rather than facing the fundamental realities of nature. No conspiracies there.

      Looking forward to seeing you somewhere and to continuing our exchange.

      In the meanwhile all the best,

      Daniel
