Essay Abstract

It is time to pause and reflect on the general foundations of physics, re-examining the solidity of the most basic principles, such as the relativity and equivalence principles, which are currently under dispute for violations at the Planck scale. A constructive criticism engages us in seeking new general principles that reduce to the old ones as approximations holding in the physical domain already explored. At the very basis of physics are epistemological and operational rules for the very formulability of the physical law and for the computability of its theoretical predictions, rules that give rise to new solid principles. These rules lead us to a quantum-information-theoretic formulation, hinging on a logical identification of the experimental protocol with the quantum algorithm.

Author Bio

I am professor at the University of Pavia, where I teach "Physical Theory of Information" and "Foundations of Quantum Mechanics", and enjoy research with a marvelous group of much younger collaborators.

Giacomo

Do you agree with my abstract?

http://fqxi.org/community/forum/topic/1413

    Dear Yuri

    I do agree on some of your points, but not with all of them. In particular, I think that time is discrete. You can always interpolate with a continuous time, but at the price of losing the locality of interactions, too big a price to pay. As for the fundamental constants, these are just the three universal constants of Dirac automata, namely the Planck time, length, and mass. The Planck constant is derived from them, as you can read in my essay.

    Thank you

    Mauro

    Dear Giacomo

    Thank you for your attention.

    I hope we can discuss further once you have read the whole essay.

    9 days later

    Dear Giacomo,

    I really appreciate your essay; it has given me some new perspectives on topics I have thought about a great deal. I have a couple of questions and comments:

    1. The topic of covariance comes up repeatedly in your essay, and I want to make sure I understand your precise view on how this principle should be regarded. You mention "violation of relativistic covariance" by the quantum automaton on page 1, and again in the section about relativity on page 3. You mention the "digital version of Lorentz transformations" on page 9, and compare homogeneous causal networks to causal set theory in this regard. Now, my view is that the principle of covariance ought to be re-interpreted in terms of order theory, with emergence of the Poincare group symmetries at large scales in flat spacetime. This may seem purely a semantic difference, but I think the emphasis is necessary because the representation theory of the Poincare group determines many of the properties of particle states in quantum field theory, and one of the roadblocks to progress on theories involving spacetime microstructure is the prejudice that manifold structure at the group level (resulting in Lie group symmetry) should somehow be maintained even when spacetime manifold structure gives way to something more primitive. In fact, I think that the "representation theory" of something more primitive than groups should be accepted as necessary and aggressively developed. I explain this more in my essay here: On the Foundational Assumptions of Modern Physics.

    2. I agree with your remark about absolute frames on page 2. More generally, I think that different frames at the fundamental scale (which I interpret as different refinements of the causal order) are not all created equal, although you can describe physics equally well using any frame.

    3. Your mention of metric emergence from pure topology by event counting on page 6 sounds similar to Sorkin's "order plus number equals geometry." This is one possible manifestation of what I call the "causal metric hypothesis" in my essay.

    4. On page 5, you mention the Hilbert space/operator algebra approach to quantum theory. I would be interested to know your view on Feynman's sum-over-histories approach in this context.

    5. I am grateful to you for invoking the Deutsch-Church-Turing principle; I was implicitly appealing to a similar principle in the last section of my essay without knowing its origin.

    Thanks for the great read! Take care,

    Ben Dribus

      Dear Ben

      thank you very much for your careful reading of my paper and your appreciation. By the way, your post also drew my attention to your essay, which I have thus read carefully and on which I'm going to report in your FQXi thread. As you have seen (and will see better after my reply), there are some strong common points between our two views, but also some relevant differences, about which I'll try to change your mind, since I'm very convinced about them. But on this, see your thread.

      Before coming to your questions, I want to re-emphasize that my proposed new principles are already at an advanced stage, since I already know the exact answer to many questions, and for many others I can foresee the route to answering them. What I can tell you is that the Dirac quantum automaton theory is almost finished in any dimension, and I'm going to put a set of two long papers on the web (one in collaboration with two of my young collaborators, Alessandro Bisio and Alessandro Tosini, and one with my senior collaborator Paolo Perinotti, who has just developed the theory for d=2,3). I have explored all scales, from the ultra-relativistic to the Planckian mass, and correctly recovered the usual Dirac theory in the "thermodynamical" limit (not the continuum limit!). We already have the leading terms for the Planck scale. For this I needed to develop a new dedicated asymptotic approach with Bisio and Tosini. One of the main things that I have well understood is that the quantum nature of the causal network is crucial in recovering Lorentz covariance. I like to epitomize this with the words: "a quantum digital universe".

      But now, let's come to your questions:

      Your POINT 1. I definitely agree with you that the principle of covariance, and all symmetries, are only approximate, while being perfectly valid at scales above the Fermi scale. What matters here is Lorentz covariance more than the more general Poincaré covariance, in the sense that homogeneity is inherent in the automaton description, and translation covariance/invariance is trivially recovered in the thermodynamic limit. Clearly, since none of the continuous symmetries hold exactly at the Planck scale, all conservation laws must also be rewritten and a digital form of Noether's theorem should be given. But this is just a pleasant exercise in my case, and I hope I will find the time to do it in the next month. What I still don't know is the most general structure that is going to replace the Lie group of symmetry transformations, something more general than a discrete semigroup, and, I agree with you, this is a very relevant and fascinating problem. The quantum automaton gives a lot of phenomenology at all scales. All violations of symmetries can be seen already at the simplest level of dispersion relations, where it is a simple exercise to recover the Lorentz-covariant limit. Tosini is currently working on explicitly deriving the mechanism by which the Tomonaga rule (the change of the quantum field state between infinitesimally close leaves of a foliation) is recovered. But this is just a clever (and beautiful) exercise, as we already know that covariance is recovered, since we have already recovered the Dirac quantum field theory.
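
      As an illustration of the dispersion-relation exercise (with coefficients chosen here just for definiteness, not necessarily the exact ones of the forthcoming papers), a two-chirality automaton with adimensional mass \(m\) has, in Planck units,

\[
\cos\omega(k) = \sqrt{1-m^{2}}\,\cos k \;\;\Longrightarrow\;\; \omega(k) \simeq \sqrt{k^{2}+m^{2}} \quad\text{for } k, m \ll 1,
\]

      so the Lorentz-covariant relation reappears at small wavevectors, while the maximal group velocity \(\max_{k}\, d\omega/dk = \sqrt{1-m^{2}} < 1\) already shows the mass slowing down the information flow.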

      Coming now to the case of homogeneous classical causal networks, to see what happens you should take a look at my manuscript with Tosini [my reference [26], G. M. D'Ariano and A. Tosini, arXiv:1109.0118 (2011)]. With classical causal networks it is easy to understand what the digital version of a foliation is, and also to recover a digital version of the Lorentz transformations. However, the "digital" nature of the clock at the Planck scale leads to a "coarse-graining" of events when boosting the frame, making the Lorentz transformations a semigroup, or else explicitly requiring knowledge of the rest frame. But there is a stronger and easier-to-understand reason for not considering the classical causal network: simply the fact that, as explained in my essay, you cannot recover the usual Minkowski metric space as emergent from event counting over the classical network, due to the anisotropy of the maximal speed of information flow: as proven by Tobias Fritz, the set of points that can be reached in a given maximum number of steps on a homogeneous network is always a polytope that does not approach a sphere in the limit of infinitely many steps (a version of the Weyl "tiling issue"). Finally, coming to the Sorkin approach, essentially the random version of my homogeneous network, I don't like it, for the following three reasons: 1) Homogeneity in my case is the universality of the physical law. A random law needs another higher-level law that is non-random and regulates the randomness (in my case it is the quantum nature of the circuit). 2) Randomness where? We don't want a background where the random parameter is interpreted in terms of the metric ... 3) Where is quantum field theory? At least, in my case, I can have the Dirac field precisely emergent. But maybe there is a logical link between the two approaches that I'm missing.
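
      As a toy illustration of the polytope point (an undirected nearest-neighbor square lattice, not the directed networks analyzed by Fritz; the script and its names are just illustrative):

```python
# Toy check of the "tiling" anisotropy on a homogeneous network: on the
# square lattice with nearest-neighbor links, the set of sites reachable
# in at most N steps is the diamond |x| + |y| <= N.  Its Euclidean reach
# along a lattice axis exceeds the reach along the diagonal by a constant
# factor sqrt(2) at every N, so the counting "ball" never rounds into a disc.
from math import hypot, sqrt

def reachable(n_steps):
    """Sites of Z^2 reachable in at most n_steps nearest-neighbor hops."""
    frontier = {(0, 0)}
    seen = {(0, 0)}
    for _ in range(n_steps):
        new_frontier = set()
        for (x, y) in frontier:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                p = (x + dx, y + dy)
                if p not in seen:
                    seen.add(p)
                    new_frontier.add(p)
        frontier = new_frontier
    return seen

for n in (8, 32, 128):
    ball = reachable(n)
    axis_reach = max(abs(x) for (x, y) in ball if y == 0)
    diag_reach = max(hypot(x, y) for (x, y) in ball if x == y)
    print(n, axis_reach / diag_reach, sqrt(2))  # the ratio stays at sqrt(2)
```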

      Your POINT 2. In my case there is only one frame (modulo translations): the one corresponding to the theory, e.g. Dirac's. You should understand that your viewpoint on causality and mine are quite different. My network is not the usual causal network like that of Sorkin: it is a quantum automaton. Causality is a postulate of quantum theory, as established in my recent axiomatic work with Paolo Perinotti and Giulio Chiribella [[2] G. Chiribella, G. M. D'Ariano, P. Perinotti, Phys. Rev. A 84, 012311 (2011)]. Causality means independence of the probability distribution of the state preparation from the choice of the following measurement (this is the same sense as in Lucien Hardy's framework). In short, this means that the causal links are the "wires" of the quantum circuit, whence they are fixed and they themselves establish the dynamical theory. I don't need to introduce a meta-dynamics for moving the wires. The theory is perfectly causal! I want to keep quantum theory, I don't want to modify it. Gravity must emerge as a thermodynamical effect à la Jacobson-Verlinde.
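
      Schematically (the precise statement is in Ref. [2]; the notation here is simplified): for a preparation test \(\{\rho_i\}\) followed by an observation test \(\{a_j\}\), causality requires the marginal probability of the preparation outcome,

\[
p(i) := \sum_{j} p(\rho_i, a_j),
\]

      to be independent of the choice of the observation test \(\{a_j\}\).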

      Your POINT 3. I agree with Sorkin on "order plus number equals geometry". But I disagree about the topological randomness (see the end of POINT 1). From what I understand of your paper, this is a strong point of common ground between us.

      Your POINT 4. My operational approach to QT is that of the above Ref. [2]. The problem now is recovering quantum field theory and the "mechanics" of the quantum, and this comes as emergent from the automaton. The automaton is perfect for the Feynman path integral: you have just a converging and well-defined path-sum over the quantum network of the automaton! But, as you will see in my comment on your essay, the global topology of the network is flat, e.g. it cannot be a torus: causal connection as a partial ordering IS transitive in my case, since I don't want to change quantum theory. Changing quantum theory in order to have time loops, even over very long ranges, is a VERY difficult task, and we must be quite sure that we need it before embarking on a new dead end!

      Your POINT 5. I strongly believe in the Deutsch-Church-Turing principle. Richard Feynman declared many times that he believed it. There are many other motivations besides those that I expressed in my essay. It is not only that there are no divergences and everything is computable, that the path integral is well defined, and that we have a perfect match between experimental and theoretical protocols. There is also the fact that computability is an unstable notion in infinite dimensions (as proved by Arrighi). And more. In a scientifically testable theory, "infinite" is a "potential" notion, not an "actual" one. Infinite dimension should be needed only in the "thermodynamical" limit.

      Thank you very much for your stimulating questions. You are a "natural scientist" without prejudices, and I hope we can discuss in person soon: it would make things much easier to explain!

      Mauro

      Dear Mauro,

      Thanks for the response, and I sincerely appreciate your detailed analysis and critique of my ideas on my thread. This is what I was hoping for when I entered the contest. In our discussion, I ask you to bear in mind two points. First, my formal education is mostly mathematical and there may be some initial terminological confusion. Second, besides my own reading, I have worked on this in almost complete isolation. Hence, I would not be at all surprised to find out that I have gotten some things spectacularly wrong.

      Now to the material. I don't feel I have a precise enough understanding of your model(s) to adequately respond to your remarks, so let me ask a few clarifying questions.

      I have seen several different definitions of quantum automatons, and I am not sure exactly what you are assuming. I read your arXiv paper about the classical case (G. M. D'Ariano and A. Tosini, arXiv:1109.0118 (2011), which is mostly familiar ground), and also looked briefly at Schumacher's paper (your reference [19]). In both cases there is an infinite lattice, and at the very end of your arXiv paper you hint at moving to the quantum picture by allowing superpositions of directed paths on the lattice. However, it seems there are only causal links in the 1+1-dimensional setting of your arXiv paper (no independent spatial structure), while Schumacher's paper seems to assume a spatial lattice with an independent discrete time step. So let me ask you the following:

      1. Are you assuming an infinite lattice? (I think the answer is yes.)

      2. Are you assuming only causal structure, or is there independent spatial structure? (I think the answer is both.)

      3. What exactly do the cells, or edges, or nodes represent? Unitary transformations? Causal links?

      4. Can you convert any classical causal network to a quantum one by means of superposition, in the sense that you are using the words?

      The answers to these questions should help get us on the same wavelength. Take care,

      Ben

        Dear Ben

        your essay and your ideas go far beyond what one could expect from a purely mathematical training. My best compliments! Getting some things wrong is part of the process at the beginning: better to make some early mistakes than to work flawlessly on a useless program. It is part of the adventure!

        Thank you also very much for your appreciation, which I know is sincere, free of academic competition and twisted routes. Personally I have a strong belief in the value of my quantum automata program: two and a half years after I started it, it is really keeping all its promises, and you will soon see the next papers in Physical Review D. It was the same with my previous axiomatization program: it took eight years to develop completely, but it is now closed with the Physical Review A paper published with Giulio Chiribella and Paolo Perinotti, which got a Viewpoint and considerable recognition around. If I have now decided to involve three of my collaborators full-time, it means that I'm working on it seriously (and we are doing it currently with no funding, using some remaining overheads).

        Now, coming to your questions, which also help me clarify my ideas when talking to others.

        The quantum automaton that I mean is a unitary evolution that is translationally invariant and local in the sense of Werner and Schumacher. To be mathematically more precise, the evolution is an isomorphism of a von Neumann algebra, but here we really don't need such a precise definition, since in the spirit of the Deutsch-Church-Turing thesis we consider only states that have finite support over a locally invariant vacuum, whence we need to evaluate only evolutions within the causal cone, which for a finite number of steps is finite (many people call these evolutions "quantum random walks", corresponding to finite numbers of particles in quantum field theory). Thus the lattice is obviously infinite, due to translation invariance. Translation invariance must not be regarded in terms of a metric space, but more precisely as topological homogeneity. Locality means that the finite-dimensional algebra of a single system goes to the linear combination of the algebras of a finite number of nearest-neighbor systems. This is a causal structure of topological nature only, with no metric: there is just a simple rule that connects one system to two (or a few) other systems, and so on, making a network. For a mathematician: an Alexandrov topology. The causality of quantum theory (first axiom!) gives the order relation. But there is something more than pure abstract causality: the quantum nature of the causal relation. So, in terms of cells, edges, and nodes: the cells are finite sets of finite-dimensional quantum systems, e.g. two qubits in the case of the Dirac automaton. The edges are the causal connections, and the nodes are unitary interactions between two (or a few) quantum systems. Causal connections and quantum systems are the same thing: the cell is just a finite set of them. As an example, take a simple homogeneous quantum network, i.e. a quantum computer where infinitely many qubits are connected only through bipartite gates in a brick-wall fashion; see e.g. the figures in my previous FQXi essay. In my case the systems are described by a complex operator in an infinite-dimensional Hilbert space (but the algebra is locally finite-dimensional!), corresponding to the field labelled by lattice points. The most synthetic and elementary mathematical definition of the automaton is then just a finite matrix (4x4 in the Dirac case) whose elements are multiplications of the field in a cell by a scalar and operators shifting to nearest-neighbor cells. All coefficients are constant, corresponding to the homogeneity of the automaton. The matrix is "unitary", in the sense that it preserves the scalar product between any two states with finite support. That's all!
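
        To make this concrete, here is a minimal numerical sketch of such a homogeneous two-component automaton in 1D (the mass coupling and its coefficients below are an illustrative stand-in, not necessarily the exact Dirac automaton matrix of the forthcoming papers):

```python
# Minimal 1D sketch of a homogeneous two-component quantum automaton:
# each cell carries two field components (two "chiralities"); one step
# shifts them in opposite directions and couples them locally with an
# amplitude proportional to the adimensional mass m (n**2 + m**2 = 1
# makes the step unitary).  The ring is large enough that a localized,
# finite-support state never reaches the boundary over 20 steps, which
# mimics the infinite homogeneous lattice.
import numpy as np

m = 0.2                    # adimensional mass parameter, 0 <= m < 1
n = np.sqrt(1.0 - m**2)    # "hopping" amplitude

def step(psi):
    """One automaton step on the two-component field psi of shape (N, 2)."""
    psi_r, psi_l = psi[:, 0], psi[:, 1]
    out = np.empty_like(psi)
    out[:, 0] = n * np.roll(psi_r, +1) + 1j * m * psi_l   # right-mover
    out[:, 1] = 1j * m * psi_r + n * np.roll(psi_l, -1)   # left-mover
    return out

N = 64
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2, 0] = 1.0       # localized state with finite support

for _ in range(20):
    psi = step(psi)

print("norm after 20 steps:", np.sum(np.abs(psi)**2))     # stays 1 (unitarity)
```

        The only ingredients are exactly those above: constant coefficients (homogeneity), shifts to nearest-neighbor cells (locality), and preservation of the scalar product on finite-support states (unitarity).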

        Regarding your last question, I'm not sure I understand it, but I think the answer is positive, in the sense that if I take an abstract causal network (call it "classical"), namely an (unbounded) loop-free graph describing a partial ordering, I can associate a unitary interaction to each node and a quantum system to each edge.

        Thank you again for your questions. It is very helpful for me to answer questions that are genuinely motivated by the wish to understand, as yours are. It helps me a lot in making my ideas clearer, and it will affect my next scientific writings.

        With my best wishes for a career as a natural scientist, not a technician of any technique ...

        Mauro

        Dear Hai

        in my approach inertial mass naturally gets a kinematical definition, closing the definitional loophole of mechanics (if you are not considering Machian theories): it is just the slowing-down of information. It is a parameter of the automaton, and coincides with the rest mass of the particle. Gravitation is still work in embryo, and the idea is that it is a thermodynamical effect of purely quantum nature. The Higgs mechanism is not quantum, and is far from the current status of the theory (just Dirac). In a few years, if I get sufficient funding, I will be able to tell you whether the Higgs mechanism emerges as a semiclassical one from the interacting quantum automaton, not the free one.

        Thank you for your interest.

        I'll take a look at your essay

        Cheers

        Mauro

        5 days later

        Dear Giacomo,

        Good essay. If I am not mistaken, I think your Simplifying principle is actually Ockham's razor. There are some serious researchers who work on providing alternatives to Bayesian approaches, such as Kevin Kelly from Carnegie Mellon, whom you may find interesting. I completely agree that there are untouchable dogmas, not only in physics.

        Your informational principles are appealing, and I am tempted to try to understand them in further detail. Your informational view quite challenges my own ideas on an algorithmic world in ways I didn't expect, because you seem to suggest that some of these informational principles are not mechanical, which is not completely clear from your essay, certainly in part because of the lack of space to explain it further. I am also delighted by your tidy illustrations, apparently made with Mathematica =)

        It looks to me that your proposal is related to, if not itself, a theory of quantum gravity; I would have liked this to be made explicit in either direction.

        Dear Hai

        Honestly I have difficulties understanding your essay: you have a completely different methodology and language. Sorry! My understanding is not sufficient to express an honest judgement.

        Best wishes

        Mauro

        Dear Hector,

        Thank you very much for your appreciation. Personally I have a strong belief in the future of the quantum-automata extension of quantum field theory proposed in the essay.

        Yes, my simplifying principle is Ockham's razor, since I reduce the whole theory to just the quantum theory of interacting systems, plus the Deutsch-Church-Turing principle (information density is bounded from above) and homogeneity. The Dirac equation is then just the free flow of quantum information, with inertial mass defined kinematically as the slowing down of the flow via the coupling between the two chiralities of propagation of information at the maximal speed, i.e. the causal speed (see my essay of last year for d=1). If you want to understand more, I will be happy to explain: what about a Skype meeting?

        I would also be very happy if you could provide me with a good reference to the work of Kevin Kelly as an alternative to my approach, which indeed has a natural Bayesian interpretation, but not necessarily so. In a way, my universe is also algorithmic, but of a quantum kind: the algorithm is very small, it is just the automaton, namely a small number of quantum cells (made of a few low-dimensional quantum systems) causally connected by a small set of unitary interactions, representing the physical law, or equivalently, the field theory. Can you provide me with a reference to your work on the algorithmic universe?

        Yes, the plots are made with Mathematica, with quite sophisticated parallel graphics! I will soon have the d=3 automaton: the graphics are astonishingly beautiful, with 3D pixels in space, whose size and transparency represent the quantum amplitude of the superposition, whereas the colors encode some phase and the relative weights of the Dirac double spinor!

        The proposal is indeed related to the idea of deriving an alternative theory of quantum gravity, essentially via the Jacobson thermodynamic approach. At this point I have some ideas in mind, which I will write up in a forthcoming manuscript once it is clear that they go in the right direction. For the moment I cannot say more, and you will have to wait until December, when I will have finished teaching my semester course on Quantum Mechanics. At that time I will also post on the arXiv two long technical manuscripts, one with my postdocs Alessandro Tosini and Alessandro Bisio on a powerful asymptotic analytical evaluation of the automaton dynamics in the thermodynamic limit for smooth states (what I call the field limit), and one with Paolo Perinotti on the d=2 and d=3 Dirac automata from first principles. After that we'll move to QED, but I think we will not need this for gravity. We are four people currently working on this project, with no funding! I hope that we will get some funding soon!

        Thank you again,

        Mauro

          Dear Hector

          here is a beautiful image of a digital 3D version of two particles ...

          My best

          Mauro

          Dear Mauro,

          I couldn't see the 3D version of the two particles.

          Best,

          Hector

          Dear Mauro,

          Re Kevin Kelly: http://www.hss.cmu.edu/philosophy/faculty-kelly.php

          Re my own algorithmic nature research: http://www.algorithmicnature.org

          Best wishes,

          -- Hector

          I uploaded the file, but I need to put some instructions in the text, and the link on how to do it is broken!

          Giacomo wrote:

          "Universal automata constants. The three quantities lP ; tP ;mP are the

          irreducible universal constants of the automata theory, and the adimensional

          mass is the only parameter characterizing the Dirac automaton. The Planck

          constant can be now rewritten in terms of the automata universal constants ..."

          Dear Giacomo

          Be careful with the Planck length and read Wilczek's doubts about it.

          Wilczek:"we must extract roots",

          "can be taken outside the square roots",

          "In the strong system of units no square roots

          at all appear in [M], [L], [T ]."

          Read Wilczek http://arxiv.org/abs/0708.4361

            Dear Yuri,

            I agree with Wilczek: my system of universal constants is strong: it indeed defines the [M], [L], [T] units, and I don't need any square roots, even for defining G!
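
            Explicitly, inverting the usual definitions of the Planck units (a standard dimensional exercise, spelled out here just for the record):

\[
c = \frac{l_P}{t_P}, \qquad \hbar = \frac{m_P\, l_P^{2}}{t_P}, \qquad G = \frac{l_P^{3}}{m_P\, t_P^{2}},
\]

            with no square roots appearing once \(l_P\), \(t_P\), \(m_P\) are taken as the primitive [L], [T], [M] units.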

            Thank you for pointing out this positive aspect of my system!

            My best

            Mauro

            Giacomo

            Thank you for an excellent initial summary, but I can't claim to have kept harmonic resonance to the end. Nonetheless, I think I saw some astonishing analogies ("evolutions for finite number of steps", etc.) with some more astonishing findings of mine. I still need to comprehend your 'quantum automaton', but I hope you'll read my essay, which considers your "localized states and measurements, for whose description quantum field theory is largely inadequate" in a slightly different way.

            I agree it is not "blasphemy to regard the non-existence of an absolute reference frame as a dogma," and I find a consistent alternative to the illogical 'fixed stars' frame, which is so problematic for astronomy.

            I expose some wrong assumptions, using epistemological elements in an ontological model that proves to reproduce a logical (TPL) construction of hierarchical compound propositions (read 'frames'). The SR postulates (Local CSL) seem to emerge directly from a Quantum Mechanism, looking like unification via Raman scattering and dynamic logic, which I think may be very important. It certainly looks wrong and too simple at first, thus meeting all the requirements of the answer we seek, but it quickly becomes intuitive.

            I do hope you can read it, visualise the kinetic evolution, and comment and advise.

            Very many thanks, and best wishes.

            Peter