Essay Abstract

Reality is ultimately digital, and all the complexity we observe in the physical universe, from subatomic particles to the biosphere, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale. Emergence in computation is an immensely creative force, whose relevance for theoretical physics is still largely underestimated. However, if the universe is to be at all scientifically comprehensible, as suggested by a famous Einsteinian quote, we have to additionally postulate that this computation sits at the bottom of a multi-level hierarchy of emergent phenomena satisfying appropriate requirements. In particular, we expect 'interesting things' to emerge at all levels, including the lowest ones. The digital/computational universe hypothesis gives us a great opportunity to achieve a concise, background-independent theory, if the 'background' -- a lively spacetime substratum -- is equated with a finite causal set.
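For readers unfamiliar with the structure named at the end of the abstract: a causal set is a locally finite, partially ordered set of events, which can be handled computationally as a directed acyclic graph whose reachability relation is the causal order. The small Python sketch below is only an illustrative representation of that definition, not code from the essay.

# Minimal sketch of a finite causal set as a directed acyclic graph (DAG):
# events are nodes, edges point from causes to effects, and the causal
# order 'x precedes y' is graph reachability.  Illustrative only.

from collections import defaultdict

class CausalSet:
    def __init__(self):
        self.successors = defaultdict(set)  # edge x -> y means "x causes y"
        self.events = set()

    def add_event(self, e, causes=()):
        """Add event e, caused by the (already present) events in 'causes'."""
        self.events.add(e)
        for c in causes:
            self.successors[c].add(e)

    def precedes(self, x, y):
        """True if x is in the causal past of y (reachability in the DAG)."""
        stack, seen = [x], set()
        while stack:
            n = stack.pop()
            for m in self.successors[n]:
                if m == y:
                    return True
                if m not in seen:
                    seen.add(m)
                    stack.append(m)
        return False

# A toy 'diamond' causet: one initial event with two incomparable
# successors that are later reunited by a fourth event.
c = CausalSet()
c.add_event("a")
c.add_event("b", causes=["a"])
c.add_event("c", causes=["a"])
c.add_event("d", causes=["b", "c"])
print(c.precedes("a", "d"), c.precedes("b", "c"))  # True False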

Author Bio

Tommaso Bolognesi (Laurea in Physics, Univ. of Pavia, 1976; M.Sc. in Computer Science, Univ. of Illinois at U-C, 1982) is a senior researcher at ISTI, the Institute for Information Science and Technologies of the Italian National Research Council at Pisa. His research areas have included stochastic processes in computer music composition (1977-1982), models of concurrency, process algebra and formal methods for software development (1982-2005), and emergence in computational big-bangs (since 2005). He has published several papers in all three areas in international scientific journals.


Dear Tommaso,

I am glad that you are taking part in this contest, and my first impression is that your essay is very interesting. I wish you success.

All the best, Felix.

    Dear Tommaso,

    our essays were published almost simultaneously, and a funny thing is that I intended to name mine "GR 2.0 - Debugging a singularity". A remnant of that version is an endnote in which I compare the information loss in a black hole with memory leaks. I'll come back after I read your essay.

    Best wishes,

    Cristi

      Dear Cristinel,

      so you have removed the 'debugging' concept from your title. At first sight I liked the title 'GR 2.0 - Debugging a singularity' for its doubly software-oriented flavor; but after reading (quickly) your paper I think your final choice was more appropriate, since your approach does not seem to relate at all to the 'digital/computational' finite universe conjecture. You are definitely on the 'analog' side!

      Thank you, Felix, for your welcome message. I am really curious about the actual interest that the causal-set-based, digital/computational approach I have described might attract in this context. I expected a few more contributions along those lines, but so far I have not seen any, and I wonder whether I should be happy or worried about it.

      Dear Tommaso,

      I enjoyed reading your essay, which is well written and reveals a deep understanding. Discrete approaches like the one you explore can add much to our understanding of reality. I personally believe that there may be more in causal sets than just the conformal structure, and I strongly encourage their study. And trying to obtain the laws we consider fundamental as emergent phenomena of simpler laws is what science is about.

      Am I on the digital or analog side? It's complicated, I just added something about this here and here.

      Best wishes,

      Cristi Stoica

      Dear Tommaso

      I have read your interesting essay, on which I would like to make a comment. In your essay you say the following:

      Furthermore, sometimes we identify new, unifying laws that allow us to jump one level down: laws that appeared as primitive (e.g. Newton's law of gravitation) are shown to derive from deeper laws (e.g. General Relativity).

      I wish this were entirely true, but there is evidence that points in another direction. I will mention the following example about the speed of light and special relativity (SR):

      The value of the speed of light in vacuum was conventionally defined by the Bureau International des Poids et Mesures (BIPM) as V_r = 299 792 458 m/s. But this value was taken as a convention; this does not imply that the actual (or measured) speed of light possesses that exact value, only that the actual value lies around V_r, within some uncertainty interval, so it could be some value V_n < V_r and/or V_i > V_r. Now we ask: are we violating the second postulate of relativity? Is the parameter c really equal to V_r? Why was c not taken to be equal to V_n or V_i? Recall that for SR to make physical sense, the parameter c must be higher than the speed v of the inertial frame, so that the Lorentz transformations do not yield complex numbers. In this sense the selection c = V_r > v is partially justified, but we could have conventionally defined c = 299 792 460 m/s and the physics would not be affected at all, since the theory by itself only demands a constant with units of speed, of any value, provided it differs from v. SR does not give us any clue even of the order of magnitude of the parameter c [Ellis, G.F.R., Uzan, J.P.: Am. J. Phys. 73, 240 (2005)]. But why did SR borrow (not borrow, steal) the value from another theory (electrodynamics)? Why is SR not capable of determining the value of its own constants? The theory, then, with no relation to a measurement, cannot determine the value of c by itself. These arguments also apply to any other theory; see for instance the case in general relativity [Narlikar, J.V., Padmanabhan, T.: The Schwarzschild Solution: Some Conceptual Difficulties. Found. Phys. 18, 659 (1988)].
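      (For reference, the constraint invoked above is just the standard one coming from the Lorentz factor; restating it makes the argument explicit:

      \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad x' = \gamma\,(x - v t), \qquad t' = \gamma\left(t - \frac{v x}{c^2}\right),

      so \gamma, and with it x' and t', becomes imaginary as soon as v > c, whichever conventional value is assigned to c.)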

      The conclusion here is that if the theory of general relativity is stealing the values of its constants (e.g. G, the gravitational constant) from other theories (e.g. Newtonian mechanics or Maxwell electrodynamics) and is incapable of determining their values on its own, then this suggests that Newtonian mechanics is not really derived from general relativity.

      Please feel free to make any comment

      Good luck in the contest

      Israel

        • [deleted]

        Hello dear Tommaso Bolognesi,

        A very beautiful essay, full of rationality. Congratulations.

        Here is my humble point of view, in bad English, sorry; I am writing literally and too quickly, a bad habit.

        We see the encodings in the pure finite series... these codes compute our reality. It is a little as if our particles, entangled spheres for me, knew what they must become, in a real evolutive topology.

        It is a little as if we said that they possess different codes of becoming, like an activation: a time code, a space code, rotation codes, polarity codes, and this and that... in fact these codes permit the transformation of light into mass by a kind of fusion between these two essentials: hv with its linearity, and m with its gravitational stability.

        Now let's assume that the entanglement number for m and hv is the same... then this number does not change during the fusion, but the rotations do, and thus the mass also; only a different sense of rotation between mass and light can explain this difference. The volumes do not change... the codes seem to be in the mass, but they are the same in their pure BEC. The space also, thus we see a universal contact between all entangled spheres: mass, space and light. The rotations in time imply the difference.

        The topological and spherical system of rotations of spheres, quantum and cosmological, is necessary... thus a center is essential, and a sphere also.

        These sets are finite for the uniqueness, in both senses.

        Good luck for this contest. A team of winners, I think: Lev, Moulay, Bolognesi, Stoica, Klingman, ...

        Best Regards

        Steve

          • [deleted]

          I too found the essay fascinating. It's great to see some truly foundational takes on reality in this essay contest, and your perspective is most enlightening.

          Hi Israel. Thanks for the comments.

          Suppose one 'borrows' some constants (for example, Planck's h, or the universal gravitation constant G) from existing theories, and uses them in a new theory such that:

          (1) the predictions of the 'old' theories are confirmed by the new theory, yielding even better agreement with experimental results, and

          (2) more phenomena, falling even outside the scope of application of the 'old' theories, can be explained and predicted with high accuracy by the new one.

          What's wrong with that? The idea is that the new theory 'absorbs' the old theories as special cases -- of more limited applicability and lower accuracy. I do not see the inheritance of physical constants from theory to theory as a problem, but as a nice feature of scientific progress.
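          (The textbook illustration of this 'absorption', added only to make it concrete: in the weak-field, slow-motion limit of General Relativity one has

          g_{00} \simeq -\left(1 + \frac{2\Phi}{c^2}\right), \qquad \frac{d^2 x^i}{dt^2} \simeq -\partial_i \Phi, \qquad \Phi = -\frac{GM}{r},

          so the geodesic equation reduces to Newton's law of gravitation, with the very same constant G inherited by both descriptions.)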

          But perhaps you are addressing the problem of whether a theory is autonomously capable of justifying/determining the value of its constants?

          I am indeed fascinated by this problem, although it is a bit out of the scope of this contest. In my opinion, the most ambitious form of ToE (if it exists) should be able to do without any physical constant: all of them should be derivable -- they should emerge from the rules of the game. It is nice to think that those values did not have to be chosen and fine-tuned by Someone before switching on the Universe... And I believe that theories fundamentally based on a discrete substratum, on computation, and on emergence -- the type I discuss in my essay -- have a much higher chance of achieving this goal, almost by definition.

          Dear Tommaso

          Thank you for your reply. I have reread my own post and it seems that some sentences are missing from the argument about special relativity. I am rewriting it so that you can better understand what I mean.

          The value of the speed of light in vacuum was conventionally defined by the Bureau International des Poids et Mesures (BIPM) as V_r = 299 792 458 m/s. But this value was taken as a convention; this does not imply that the actual (or measured) speed of light possesses that exact value, only that the actual value lies around V_r, within some uncertainty interval, so it could be some value V_n < V_r and/or V_i > V_r. Now we ask: are we violating the second postulate of relativity? Is the parameter c really equal to V_r? Why was c not taken to be equal to V_n or V_i? Recall that for SR to make physical sense, the parameter c must be higher than the speed v of the inertial frame, so that the Lorentz transformations do not yield complex numbers. In this sense the selection c = V_r > v is partially justified, but we could have conventionally defined c = 299 792 460 m/s and the physics would not be affected at all, since the theory by itself only demands a constant with units of speed, of any value, provided it differs from v [Ellis, G.F.R., Uzan, J.P.: Am. J. Phys. 73, 240 (2005)]. But why did SR borrow (not borrow, steal) the value from another theory (electrodynamics)? Why is SR not capable of determining the value of its own constants? The theory, then, with no relation to a measurement, cannot determine the value of c by itself. These arguments also apply to any other theory; see for instance the case in general relativity [Narlikar, J.V., Padmanabhan, T.: The Schwarzschild Solution: Some Conceptual Difficulties. Found. Phys. 18, 659 (1988)].

          This being said, I totally agree with the two points of your last post. And indeed I am addressing exactly this:

          You: But perhaps you are addressing the problem of whether a theory is autonomously capable of justifying/determining the value of its constants?

          Certainly, as you say, the ToE should be able to do without any constants. But I believe that if this were the case, that is, if a theory were able to determine the values of its constants and parameters, the theory would likely become independent of experience (measurements), as Max Tegmark argues [Tegmark, M.: Found. Phys. 38, 101 (2008); Tegmark, M.: Annals Phys. 270, 1-51 (1998)].

          Israel

          Dear Israel,

          the Tegmark paper that you mentioned -- 'The Mathematical Universe' -- is very interesting; thank you for pointing it out to me. I agree with his remarks on p. 12 about physical constants. He mentions that in traditional quantum field theory the Lagrangian contains *real* parameters, whose specification would require an infinite amount of bits. Under his Computable Universe Hypothesis, however, this is not allowed, and two possibilities are left:

          (A) either the parameters are 'finite' (effectively computable from a finite amount of information), or

          (B) there exists an uncountable infinity of universes, in each of which each parameter takes one value from a corresponding, finitely computable range.

          (Case (B) sounds 'maximally offensive to human vanity', borrowing his words; my vanity is actually doing fine, in this respect, but I admit that my preference would go to plan (A)...)

          The problem of HOW the ultimate mathematical theory of physics could determine the value of these parameters is not directly addressed in that paper.

          But I guess there is not much to say: we have to guess the *right* values. In a computational theory of the type I describe in my essay, rather than multi-digit numeric parameters, one has to figure out the correct algorithm and the correct initial condition (e.g., a 2-node, 3-connected, 3-valent graph -- as you can see, the numbers involved are pretty small!). How can we know that the values are right, and that the mathematical structure/theory is well tuned? By testing whether the emergent reality corresponds to ours, that is, by computing the 'inside view' (or 'frog view') from the 'outside view' (or 'bird view'). In Tegmark's words, this is one of the most important questions facing theoretical physics. And one of the most exciting, I would add!
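          (To make the 'small numbers' remark concrete: one natural reading of that initial condition is a multigraph with two nodes joined by three parallel edges, so that every node is 3-valent. The toy Python sketch below only checks this reading; it is not the essay's actual graph-rewriting system.)

# Toy reading of the initial condition mentioned above: a multigraph with
# two nodes and three parallel edges, hence 3-valent (every node has degree 3).
# Illustrative sketch only; the graph-rewriting dynamics of the essay are not
# reproduced here.

initial_edges = [("A", "B"), ("A", "B"), ("A", "B")]  # three parallel edges

def degrees(edges):
    """Count how many edge endpoints touch each node."""
    d = {}
    for u, v in edges:
        d[u] = d.get(u, 0) + 1
        d[v] = d.get(v, 0) + 1
    return d

print(degrees(initial_edges))       # {'A': 3, 'B': 3}
print(len(degrees(initial_edges)))  # 2 nodes -- the whole initial state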

          • [deleted]

          Dear Tommaso,

          Very interesting essay.

          You mention in Section 3 first sentence: "No experimental evidence is available today for validating the digital/computational universe conjecture." Let me point you and your readers to one of my papers entitled "On the Algorithmic Nature of the World" (http://arxiv.org/abs/0906.3554) where we compare the kind of distributions one finds in real-world datasets to distributions of simulated--purely algorithmic/digital--worlds.

          The results may be marginal, but there is always some correlation (from weak to strong), with varying degrees of confidence, with at least one natural model of computation, and we think the investigation provides a legitimate statistical test and real experimental evidence indicating the compatibility of the digital hypothesis with the distributions found in empirical data. This is based on the concept of algorithmic probability.

          This claim concerning empirical data requires great care, of course, since one would first need to show that there is a general joint distribution behind all sorts of empirical data, something that we also tested, with results reported in the same paper. The proof that most empirical data carries an algorithmic signal is that most data is comprehensible to some greater or lesser degree.

          People may wonder whether the compressibility of data is an indication at all of the discreteness of the world. The relationship is actually strong: the chances of finding incompressible data in an analog world would be much greater (as has been argued by some researchers who think the world is mostly random).
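          (A crude, hands-on illustration of the compressibility point, using a general-purpose compressor as a rough stand-in for algorithmic information content -- an approximation of convenience, not the algorithmic-probability method of the paper:)

# Crude illustration of the compressibility point above: data produced by a
# simple deterministic rule compresses far better than (pseudo)random data.
# zlib is used only as a rough, computable stand-in for algorithmic
# information content -- not the method of the paper being discussed.

import os
import zlib

def ratio(data: bytes) -> float:
    """Compressed size / original size (lower = more algorithmic structure)."""
    return len(zlib.compress(data, 9)) / len(data)

rule_like = bytes((i * i) % 251 for i in range(100_000))   # simple rule output
random_like = os.urandom(100_000)                           # high-entropy data

print("rule-generated :", round(ratio(rule_like), 3))   # well below 1.0
print("random         :", round(ratio(random_like), 3)) # close to 1.0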

          If I have a chance I will further elaborate on all this in a later essay to be submitted to this contest, together with a precise definition of what we mean by an algorithmic world (basically, a world of rules that can be carried out by a digital computer).

          Great work. Sincerely.

            Hi Hector,

            great to see you are here too. I hope indeed that you will bring more water (or, rather, bits) to the mill of the algorithmic universe conjecture! And thanks for the comments.

            If I understand correctly, your work provides some estimate of how likely it is that the world we experience (probed through the statistical analysis of real data sets) is the output of some computation. You write that most empirical data seem to carry an algorithmic signal.

            As you may guess, I put the highest expectations on deterministic (algorithmic) chaos, and I believe that whenever a natural phenomenon can be explained in terms of it, then recourse to pure randomness (where every bit has to be paid) should be avoided, for reasons of 'economy'.

            But if deterministic chaos could completely replace genuine randomness, in ALL cases and for ALL purposes, including the support of our universe, then we would have a problem: how could we tell the difference between a deterministic and a 'genuinely' random universe?
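            (A toy illustration of why this question is hard, using a logistic-map bit generator of my own choosing rather than anything from the essay: the deterministic stream below is fixed entirely by one seed, yet its simple statistics look much like those of an 'honestly' random stream.)

# Toy illustration of the question above: bits from a deterministic chaotic
# map (logistic map, r = 4) versus bits from the OS entropy source.  Simple
# frequency statistics look alike, although one stream costs a single seed
# and the other 'pays' for every bit.  Illustrative choice only.

import os

def chaotic_bits(seed: float, n: int):
    """n bits from the logistic map x -> 4 x (1 - x), thresholded at 0.5."""
    x, out = seed, []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)
        out.append(1 if x > 0.5 else 0)
    return out

def random_bits(n: int):
    """n bits drawn from the operating system's entropy source."""
    return [(b >> i) & 1 for b in os.urandom(n // 8 + 1) for i in range(8)][:n]

n = 100_000
det = chaotic_bits(0.123456789, n)
rnd = random_bits(n)
print("fraction of 1s (chaotic):", sum(det) / n)
print("fraction of 1s (random) :", sum(rnd) / n)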

            If I understand correctly (I have not read your papers yet, but the stack here grows thicker every day!), you do find some way to differentiate between the two cases. In fact, in the last figure of my essay I show that deterministic chaos has something more to offer than genuine randomness: it induces the emergence of a phenomenon that I call 'causet compartmentation', which appears to be a fundamental prerequisite for the occurrence of anything interesting at all in discrete spacetime -- and this cannot happen in a truly random spacetime.

            So, perhaps we agree on the fact that deterministic white noise is whiter, or, at least, more 'brilliant' than pure white noise!

            Maybe you have a quick comment on this?

            Dear Tommaso,

            This is a very good essay. I recommend that the informed reader also move on to the bibliography.

            I do have a question/suggestion: besides the whole universe, there are other, smaller, man-made universes where this type of computational approach could explain something, like the emergence of the pattern of use of space in a city. I am no architect, but a mathematician. Recently I became aware of a host of research (in architecture) concerning SPACE. Here are some relevant references:

            I first learned about the work of Christopher Alexander from this 'secret life of space' link, which I am sure you will enjoy.

            Then I learned from Bill Hillier ("Space is the machine") about the existence of "axial maps" (Turner, A., Hillier, B. & Penn, A. (2005): An algorithmic definition of the axial map. Environment & Planning B 32-3, 425-444), which still escape a rigorous mathematical definition but seem to be highly significant for understanding emergent social behaviour (see Space Syntax).

            So I wonder if such a computational approach could be of any help in such a more concrete but mathematically elusive subject.

            Best,

            Marius

              Dear Marius,

              thank you for the pointer to the 'secret life of space' by blogger Leithaus.

              Having been involved in process algebras (even older than the Pi calculus) for quite some time, I cannot but agree that one of the attractive features of those formalisms is their peculiar way of simultaneously handling 'structure' and 'behaviour'. But I also fully share the concern expressed in that blog about the usefulness of modeling the geometry of spacetime in the Pi calculus:

              "...will it be of any use to encode these notions in the model, or will it just be another formal representation -- potentially with more baggage to push around."?

              Who knows! But the idea that formal analogies between Pi calculus specifications of some spatial geometry, on the one hand, and of biological processes, on the other, might suggest that 'space itself is alive' does not sound convincing to me, to say the least (although we all know that space is indeed alive!...). One reason is that two specifications with very different structure (syntax) may well share the same semantics/behavior, indicating that the formal structure of a specification is not so important.

              One should rather concentrate on the semantics of the specification; and the semantics can be given in several ways, including by a mapping from syntax to ... causal sets -- the structure that I discuss in my essay. It would be interesting to see whether relatively simple process-algebraic specifications could yield causal sets exhibiting the variety of emergent properties that I observe in causets grown by other models of computation.
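              (As a tiny, hypothetical example of such a mapping from syntax to causal sets -- my own toy construction, not the semantics of any particular process calculus: two sequential processes synchronizing on a shared event already determine a small causal set, namely the partial order generated by per-process sequencing plus the synchronization.)

# Toy mapping from a 'specification' to a causal set: two sequential
# processes P = a; s; b and Q = c; s; d that synchronize on the shared
# event s.  The causal order is generated by sequencing within each
# process plus the synchronization.  Illustrative construction only.

P = ["a", "s", "b"]
Q = ["c", "s", "d"]

def causal_pairs(*processes):
    """Immediate-precedence pairs induced by sequencing in each process."""
    pairs = set()
    for proc in processes:
        pairs.update(zip(proc, proc[1:]))
    return pairs

print(sorted(causal_pairs(P, Q)))
# [('a', 's'), ('c', 's'), ('s', 'b'), ('s', 'd')]
# The transitive closure of these pairs is the causal set: a and c are
# incomparable (concurrent), while b and d both lie in the future of
# a, c and s.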

              • [deleted]

              Hi Tommaso,

              In my research the nature of randomness is secondary, so at the lowest level there might be (or not be) 'true' randomness, and it would be pretty much irrelevant (from the algorithmic perspective). A consequence of assuming my algorithmic hypothesis is, however, that randomness is the result of deterministic processes and therefore is deterministic itself (which I think is compatible with your model). If randomness looks like randomness, it is only in appearance. What I further say is that if randomness ever had any place in the world, it may no longer have one. Whether you start a computer with a set of random programs, starting from randomness or from emptiness, there is no difference in the long term. By contrast, if the universe somehow 'injects' randomness at some scale, influencing the world (and our physical reality), empirical datasets should diverge from the algorithmic distribution, which is something we have been measuring (to compare the two, one also has to build the algorithmic distribution, hence to simulate a purely algorithmic world).

              In my algorithmic world randomness is, as you say, also the fabric of information, by way of symmetry breaking. You can start either from nothing or from true randomness, but you will end up with an organized, structured world with a very specific distribution (Levin's universal distribution). What I do is measure how far from, or close to, this purely algorithmic distribution the data in the real world is.
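              (For readers unfamiliar with that distribution: Levin's universal distribution assigns to an output x of a prefix universal machine U the algorithmic probability

              m(x) = \sum_{p \,:\, U(p) = x} 2^{-\ell(p)},

              so outputs producible by short programs dominate; by the coding theorem, m(x) agrees with 2^{-K(x)} up to a multiplicative constant, where K is prefix Kolmogorov complexity.)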

              Sincerely.

              ..."One reason is that two specifications with very different structure (syntax) may well share the same semantics/behavior, indicating that the formal structure of a specification is not so important."

              Right. But this, I think, is already taken care of by Leithaus (Greg Meredith), with Snyder, in this paper: Knots as processes: a new kind of invariant.

              To my understanding, this seems somehow related to a paper by Louis Kauffman, who was among those who started topological quantum computing (along with Freedman, Kitaev and Larsen), which is essentially a form of computation with braids.

              • [deleted]

              In fact, like many, you confuse the computing a little with the reality, but that is fine. Very good knowledge of mathematical computing; we thank you for that. Indeed, computing is not always a subject known to all. After all, it is an application of physics.

              An ultimate mathematical theory of physics, you say... I say: the physics first, the maths after. The algorithms invented by humans shall always be far from the ultimate universal algorithm... of course the Universe, this sphere, God if you prefer, does not play dice, to my knowledge.

              An important point is this one: can you create mass, lives and consciousness? Evidently never; the logic never says that.

              But within your line of reasoning, your tools for a computation of evolution are interesting. The mass, however, must be analyzed rationally.

              The rotations of the entanglement are proportional to the mass and its rules of evolution and complexification... quantum spheres (finite number, decreasing volume)... H, C, N, O... CH4, NH3, H2O, HCN, COOH... amino acids... DNA, RNA... evolution... lives... planets... stars... BH... universal sphere.

              For a concrete realism, the mass must be well inserted in the algorithm and its series... all can be calculated from an evolutive point of view... when the mass polarises light...

              Regards

              Steve