Hi Don,

1) Thank you.

2) You refer to the JOUAL 2009 conference. Four of the six regular papers presented at the event are now published in Complex Systems. I could add that one of those authors - Alex Lamb - has just submitted an interesting essay to this contest, which, again, matches the JOUAL focus very well! Go take a look.

3) That's what I call the 'Teilhard conjecture' (after T. de Chardin). We are still very far from its experimental validation, but if it turned out to be false, all the experiments I am running on computational, causet-based big bangs would be, mostly, wasted CPU time...

4) I fully agree.


Dear Tommaso Bolognesi,

"There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate (Figure 1), made of indivisible atoms, or seeds. This view is reflected in models such as Penrose's spin networks and foams, and is adopted in theories such as Loop Quantum Gravity [14] and in the so-called Causal Set Programme [6, 13]...."

Thank you for pointing out that this is a view.

"At that level, a universal computation keeps running. We do not know yet the program code, but, in accordance with a fundamental principle of minimality ('Occam's razor'), we like to believe that it is small, at least initially."

And it grows by what means?

"Perhaps it is a self-modifying program: code and manipulated data might coincide. ..."

Self? In other words by mechanical magic?

"...This does not mean that we have to postulate the existence of a divine digital Computer that sits in some outer space and executes that code, for the same reason that, under a continuous mathematics viewpoint, we do not need a transcendental analog Computer that runs the Einstein field equations for animating spacetime. Computations may exist even without computers (and, incidentally, the concept of computation is much older than computer technology). 1"

Of course computations may exist without computers. Your remark about "outer space" could be applied equally well to extra dimensions and other universes.

I have printed off your essay and am going to read it; but I must admit that your beginning appears to be ideological rather than scientific. If I am incorrect, I will learn that by reading your essay and will return to apologize.

James

    Dear James Putnam,

    your post triggers a number of reactions, probably (and unfortunately) more than I can fit in a post here.

    The first general point I need to clarify is that, in an attempt to be concise in writing the essay, I adopted a style, especially in the opening that you quote, which may indeed sound more 'assertive' than that of a standard scientific paper. On the other hand, I believe that a bit of 'assertiveness' may be helpful for stimulating discussion, in a context such as the FQXi forum.

    Later in the essay, in particular in Section 3 ('Experimental validation'), I do put my views in the right perspective.

    I did not feel like using more space just to support the validity of working with discrete models such as causal sets, in light of the abundance of solid papers that share that view as, to say the least, a very reasonable working hypothesis.

    You ask: By which means does a discrete spacetime grow?

    Rideout and Sorkin [Phys. Rev. D 61, 024002, 2000] discuss classical dynamics for causal set growth, using *probabilistic* techniques.
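    A minimal sketch of the simplest dynamics in that family, transitive percolation (each new element links to every earlier one independently with a fixed probability p; the function name and parameter values here are mine, not from the paper):

```python
import random

def transitive_percolation(n, p, seed=0):
    """Grow an n-element causal set, one element at a time: each new element
    is linked to every earlier element independently with probability p,
    and the relation is kept transitively closed."""
    rng = random.Random(seed)
    past = [set() for _ in range(n)]     # past[e] = elements preceding e
    for e in range(n):
        for d in range(e):
            if rng.random() < p:
                past[e].add(d)
                past[e] |= past[d]       # inherit d's past (transitivity)
    return past

causet = transitive_percolation(50, 0.1)
# 'number' = how many events; 'order' = the precedence relation
print(len(causet), sum(len(s) for s in causet))
```

    The growth is probabilistic but order-invariant: each new event simply acquires a random, transitively closed past.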

    Out of the several ways I have explored (by simulations) for obtaining causal sets by *deterministic algorithms*, the one that I personally consider most attractive consists in letting a 'stateless ant' walk on a trivalent graph G while applying, step by step, one out of two possible graph rewrite rules (known also in Loop Quantum Gravity, and used by Smolin, Markopoulou & friends). G can be equated to a growing space, while spacetime corresponds to the causal set of the computation being performed on it (the relevant references are [1-5], [8], [14] and [17]).

    One of the two graph rewrite rules introduces a new node in G, so this rule would be responsible for the growth of space. But spacetime (the causal set) grows at *every* rewrite rule application, since:

    1 rewrite rule application = 1 computation step = 1 node (event) added to the causal set (spacetime).
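    As a sketch (not the exact trivalent-graph rewrite system of my simulations), here is a standard way to read a causal set off a deterministic computation, using a toy Turing machine: each step becomes an event, caused by the previous step (the machine state is carried forward) and by the event that last wrote the cell now being read.

```python
def tm_causal_set(rules, steps):
    """Run a toy Turing machine and record the causal set of the computation:
    one event per step, with causal links for state and data dependencies."""
    tape, head, state = {}, 0, 'A'
    last_writer = {}                  # cell -> event that last wrote it
    causes = []                       # causes[e] = set of events causing e
    for e in range(steps):
        c = set()
        if e > 0:
            c.add(e - 1)              # state dependency on the previous step
        if head in last_writer:
            c.add(last_writer[head])  # data dependency on the last write here
        causes.append(c)
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        last_writer[head] = e
        head += move
    return causes

# An illustrative 2-state machine (the classic 2-state 'busy beaver' table):
rules = {('A', 0): (1, +1, 'B'), ('A', 1): (1, -1, 'B'),
         ('B', 0): (1, -1, 'A'), ('B', 1): (1, +1, 'H')}
print(tm_causal_set(rules, 6))
```

    Every step adds exactly one event to the causal set, which is the equation above in executable form.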

    In this respect, my approach is indeed in contrast with the following statement from your essay:

    "The universe evolves, but not because simplicity can generate complexity. Complexity can come only from pre-existing complexity. The greatest possible effects of complexity in the universe exist right from its start in a potential state."

    I tend to disagree on that, based on the phenomenon of deterministic chaos, or algorithmic randomness. The best example of this is perhaps Wolfram's elementary cellular automaton no. 30: in spite of the elementary initial condition and the very simple rule (coded by a boolean function of three variables), the dynamics of the computation exhibit a surprising pseudo-random character, as if information were continuously injected into the process from outside.
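    Rule 30 is easy to reproduce; a few lines suffice to watch the pseudo-random center column emerge from a single black cell (row width and step count below are arbitrary):

```python
def rule30(cells):
    """One synchronous step of Wolfram's rule 30 on a cyclic row of 0/1 cells:
    new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

row = [0] * 31
row[15] = 1                  # elementary initial condition: one black cell
center = []
for _ in range(20):
    center.append(row[15])
    row = rule30(row)
print(''.join(map(str, center)))   # the center column shows no evident period
```

    The few bits of the rule table and the single black cell are all the 'input' there is; the apparent randomness of the center column is generated, not injected.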

    I like to believe (ideology again!) that our universe started from nothing, or almost nothing, and that space, spacetime, and complexity did increase: they were not 'already there' at the beginning, unless you mean that all the complexity of the mentioned rule 30 computation is 'potentially' present in the few bits that code its initial condition and boolean function.

    I remember a workshop at the Physics Department of the University of Padova (Jan 15, 2008) with lively discussions on String Theory vs. Loop Quantum Gravity. During the panel, Gabriele Veneziano and Carlo Rovelli agreed on one thing: neither was able to point out a crucial validation/falsification experiment for his theory. I believe I am in good company in claiming that, in physics, both imaginative hypotheses and accurate experimental verification are necessary. If you call the former 'ideological', then I am afraid my essay has a lot of ideology in it. And, although I also provide simulation results (more in the references) that suggest interesting analogies with physical phenomena (e.g. the emergence of interacting objects!), my research on causal sets (as well as that of many others) is still very far from precise numerical predictions.

    Since this is getting long, I will answer the remark on the self-modifying program in the next post. Looking forward to your comments after you have read the essay. Thanks for your stimulating remarks!

    Tommaso

    Dear James,

    you quote my essay:

    "Perhaps it is a self-modifying program: code and manipulated data might coincide. ..."

    and ask:

    Self? In other words by mechanical magic?

    The funny thing is that I had removed this line from the quasi-final version of the essay, because I did not have enough space for expanding on it. But then I put it back, being ready to accept questions or criticism. Thank you for giving me an opportunity to explain.

    The idea of a self-modifying program is, again, an attempt to satisfy the requirement of minimality. While I support the computational universe idea, what I find a bit annoying is the separation between (i) a (fixed) program P and (ii) the data D that it manipulates. Under this view, D represents our Reality, while P would be the rule that governs it, without itself enjoying the status of a Real entity. They are two things, and one of them is even 'unreal'. Two is bigger than one. If the program operates on itself, we would have only one thing: P = D = Reality. That would be more elegant. I believe that the Mathematical Universe idea by Max Tegmark also achieves this unity: there's only one thing, namely a mathematical structure.
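    A toy illustration (entirely mine, not a model from the essay) of code and data coinciding: a single memory whose cells are either read as data or executed as instructions, so the program can rewrite the very instructions it is about to execute. The two-instruction 'language' below is invented for the example.

```python
# One memory: each cell may be read as data or executed as an instruction.
# Illustrative instruction set: ('set', a, v) stores value v into cell a
# (possibly overwriting an instruction); ('halt',) stops the run.
def run(mem, max_steps=100):
    pc = 0
    for _ in range(max_steps):
        op = mem[pc]
        if op[0] == 'halt':
            break
        if op[0] == 'set':
            mem[op[1]] = op[2]
        pc += 1
    return mem

mem = [('set', 1, ('set', 3, 'modified')),  # rewrite the instruction in cell 1
       ('halt',),                           # ...which would otherwise halt here
       ('halt',),
       'original']
run(mem)
print(mem[3])   # -> modified: the program executed an instruction it wrote itself
```

    There is no separate 'program tape': P and D are one and the same list.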

    By the way, the concept of a self-modifying program is quite familiar in computer science, e.g. in logic programming (in the Prolog language, etc.). Furthermore, self-reference is a recurrent concept when dealing with formal systems (Gödel's theorem) and computation (universal Turing machines), not to mention consciousness. I would not be surprised at all if it played a crucial role in an ultimate, computational theory of physics.

    If you claim that there is something magic in a program that modifies itself (= Reality), then I'd expect you to claim the same for a program that modifies 'external' data (= Reality). In my opinion, there is no more magic in a program that runs our Reality, than in a set of differential equations that does essentially the same thing.

    Cheers. Tommaso

    PS

    So far I've done only a few experiments on self-modifying Turing machines, without exciting results. In all the experiments mentioned in the essay, data and program are separate, and the latter is fixed.


    Yes of course, and a micro BH also, because the singularity says so.

    Me too, I am a musician and poet; my father was a bus driver and now he has died. And what's more, at the age of 20 I was in a coma. No, but frankly, hihihi, several papers and this and that... a big joke, yes, and big publicity.

    Hop, under review. Christi, hihihi, you see I play everywhere like a child now. I love this platform and the vanities of scientists.

    Computer vs. rationality of our universe. Big Bangs with an S? No, but frankly: what do you simulate, a universe or your universe? A big joke, all that. It's just computing, not physics. On that, good bye.

    Don't be offended, I am just a little crazy. Hop, I am going to take my meds. Until soon.

    Best

    Steve

    Dear Tommaso,

    Perhaps it would help to bring up the contribution of Alan Turing and his concept of universality, which unifies data and programs. While one can think of a machine's input as data and of a machine as a program, each as a separate entity, Turing proved that there is a general class of machines of the same type (defined in the same terms) capable of accepting descriptions of any other machine and simulating their evolution for any input, hence taking program+data as data, and unifying both.

    This is why one can investigate the 'computational universe' today either by following an enumeration of Turing machines, or by using a single (universal Turing) machine running an enumeration of programs as data inputs: both approaches are exactly the same, thanks to Turing's universality.
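    In code, the unification is just the observation that a machine description is an ordinary piece of data that one fixed interpreter can run (the encoding below is mine, chosen for brevity):

```python
def simulate(machine, tape, steps):
    """A stand-in for Turing's universal machine: 'machine' is an ordinary
    piece of data (a dict of transition rules) that this fixed interpreter
    runs on 'tape'."""
    state, head, cells = 'start', 0, dict(enumerate(tape))
    for _ in range(steps):
        if state == 'halt':
            break
        write, move, state = machine[(state, cells.get(head, 0))]
        cells[head] = write
        head += move
    return [cells[i] for i in sorted(cells)]

# A machine that flips every bit it passes over, encoded purely as data:
flipper = {('start', 0): (1, 1, 'start'),
           ('start', 1): (0, 1, 'start')}
print(simulate(flipper, [1, 0, 1, 1], 4))   # -> [0, 1, 0, 0]
```

    Enumerating `machine` dictionaries and enumerating inputs to `simulate` explore exactly the same computational universe.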

    Best.

    - Hector Zenil


    Dear Tommaso Bolognesi,

    I have printed your response. I am impressed. Your response is not in agreement with me, but that is a minor point. Your response was directed at my questions and even referred to my own essay. I appreciate the time and effort you put into it. I will follow the leads you referenced, and will respond once I have put something together worth your time.

    James

    5 days later

    Tommaso,

    Thanks for a fascinating and extremely well constructed essay. Since Wolfram is scheduled to speak at ICCS in Boston this summer, I think it might be interesting to see how your multi-level hierarchy compares to Bar-Yam's multiscale variety -- hierarchies of emergence vs. lateral distribution of information.

    Interesting conceptual equation, "spacetime geometry = order plus number." Suppose one were to make another equation: "order = organization feedback." Then one would get -- substituting terms in my equation for yours -- the theme of my ICCS 2006 paper ("Self-organization in real and complex analysis"), which posits self-organization of the field of complex numbers, z, in the closed algebra of C.

    One more comment (though I could go on; your paper is rich in quotable points), concerning global and local (4.1) time-dependent relations among point particles. Research in communication network dynamics (e.g., Braha & Bar-Yam 2006, Complexity vol. 12) often shows radical shifts in hub-to-node connectivity on short time intervals, while in the aggregate the system changes very little over time. Taking point particles as network nodes, perhaps something the same or similar is happening.

    Good luck in the contest. (I also have an entry.) I expect that you will rank deservedly high.

    All best,

    Tom

      Dear Tom,

      thanks for the positive comments. Following your links I reached Robert Laughlin's 2005 book 'A Different Universe: Reinventing Physics from the Bottom Down', in which emergence is given an important role in theoretical physics. Good to hear; another book on the pile!

      In the equation 'spacetime geometry = order plus number', introduced by people in the Causal Set programme, 'number' simply refers to counting the number of events in a region of the causal set, which is then equated to the volume of that region. And 'order' is the partial order among events. You mention self-organization in the context of the field of complex numbers, and this does not seem much related to 'number' in the above sense (if this is what you meant to suggest). But of course I am curious about everything that has to do with self-organization.

      Usually a self-organizing system is conceived as a multitude of simple active entities. Does this happen in your ICCS 2006 paper?

      One peculiarity of the 'ant-based' (or Turing-machine-like) approach discussed in my essay is that you actually have only ONE active entity -- the 'ant' -- and expect everything else to emerge, including the multitude of interacting particles or entities that one normally places at the bottom of the hierarchy of emergence.

      Ciao. Tommaso


      Dear Tommaso,

      Thank you for your essay. You write a lot about emergence and computation, chaos, self-organization and automata -- all elements I have touched on in my essay, because they are closely connected to the evolution of the spacetime concept.

      E.g. you write: "Computations may exist even without computers". It seems to have something in common with Computational LQG by Paola Zizzi. In my essay I have even quoted Paola.

      My own view is that the universe is a dissipative coupled system that exhibits self-organized criticality. Such criticality is a property of complex systems in which small events may trigger larger ones. This is a kind of chaos where the general behavior of the system can be modeled on one scale, while smaller- and larger-scale behaviors remain unpredictable. A simple example of the phenomenon is a pile of sand.
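      The sandpile example is easy to make concrete. In the classic Bak-Tang-Wiesenfeld toy model, one added grain can trigger an avalanche of any size (the grid size and initial configuration below are arbitrary choices of mine):

```python
def topple(grid):
    """Abelian sandpile on a small square grid: any cell holding 4 or more
    grains topples, sending one grain to each neighbour (grains fall off the
    edge).  Returns the avalanche size (number of topplings)."""
    n = len(grid)
    avalanche = 0
    unstable = True
    while unstable:
        unstable = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    avalanche += 1
                    unstable = True
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        if 0 <= i + di < n and 0 <= j + dj < n:
                            grid[i + di][j + dj] += 1
    return avalanche

grid = [[3] * 5 for _ in range(5)]   # every cell on the edge of stability
grid[2][2] += 1                      # one extra grain...
print(topple(grid))                  # ...triggers a system-wide avalanche
```

      The same local rule produces avalanches of all sizes, which is the signature of self-organized criticality.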

      While QM and GR are computable and deterministic, the universe's evolution (a naturally evolving self-organized critical system) is non-computable and non-deterministic. This does not mean that computability and determinism are related: Roger Penrose argues that computability and determinism are different things.

      Let me try to summarize: the actual universe is computable at the Lyapunov time, so it is digital; but its evolution is non-computable, so it remains at the same time analog (the Lyapunov time is the time it takes a dynamical system to become chaotic).
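      For a concrete instance of the Lyapunov time: the exponent lambda of the chaotic logistic map can be estimated along an orbit, and 1/lambda sets the time scale beyond which prediction fails (the parameters and starting point below are illustrative):

```python
import math

def lyapunov_logistic(r, x0=0.3, n=10000, burn=100):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1-x)
    by averaging log|f'(x)| = log|r*(1 - 2x)| along the orbit.  For a
    chaotic map (lambda > 0), the Lyapunov time is roughly 1/lambda."""
    x = x0
    for _ in range(burn):            # discard the transient
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += math.log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n

lam = lyapunov_logistic(4.0)
print(lam, 1 / lam)   # lambda close to ln 2; Lyapunov time ~ 1/lambda steps
```

      At r = 4 the exact value is ln 2, so initial uncertainty roughly doubles each step.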

      Your work seems to be an attempt to develop a computable model of the universe at the Lyapunov time. Good luck!

      Jacek


      Ciao Tommaso,

      Yes, I do mean to suggest that the unordered set z (the universal set of complex numbers) is organized to allow -- not a partial order of events -- but a well-ordered sequence in the specified domain of topology and scale, with analytic continuation over n-dimensional manifolds. It is nontrivial that this is accomplished without appeal to Zorn's lemma (the axiom of choice). And time is given a specifically physical definition. I followed up at ICCS 2007 with a nonmathematical paper ("Time, change and self-organization") that incorporated and expanded on some of these results.

      You pick up right away on the difference between the hierarchical distribution of information and multiscale variety. I am thinking that your "multitude of entities" may be dual to the "ant" analogy, because with activities occurring at varying rates at different scales, new hierarchies may form and feed back into the system dynamics.

      You know, Boston is very beautiful in the summer. :-)

      All best,

      Tom

      Hi again Tommaso,

      Just to let you know I dropped you a comment on Feb. 18, 2011 @ 21:07 GMT concerning the data vs. code question, in case you hadn't seen it.

      Best.

      5 days later

      Dear Tommaso,

      Welcome to the essay contest. This essay contradicts quantum mechanics: how does your digital/computational universe conjecture handle Heisenberg uncertainty? For this purpose your digital computer must have definite, absolute information about the position and momentum of every particle. Moreover, this ''computer'' must know all quantum information with absolute precision before events occur - this is forbidden by quantum mechanics. Also, to perform such processing, the exchange of information and the work of the computer must be faster than light. How does your digital computation theory explain the EPR paradox and nonlocality?

      I can prove a theory false simply by finding one example in which the theory does not hold. Let us analyze your statement: ''all the complexity we observe in the physical universe, from subatomic particles to the biosphere, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale''.

      I can show you a place where the digital/computational universe conjecture is wrong: 1) at the center of a black hole, as described by general relativity, lies a gravitational singularity, a region where the spacetime curvature becomes infinite. Thus, at the center of a black hole a digital computation is not possible, because spacetime curvature becomes infinite. You see, there are places and phenomena which exist without need of digital computation. Since I have found at least one place where digital computation cannot exist, this is proof that the theory is wrong.

      Besides, to process an event the digital computation needs an exchange of information. Inside a black hole (within the event horizon) all paths bring a particle closer to the center; it is no longer possible for the particle to escape. Since a signal can neither escape from a black hole nor move inside it, the exchange of information is not possible. And since the exchange of information near the event horizon is not possible, digital computation is not possible either. Note that digital computation is impossible even outside the black hole, near the event horizon, because the exchange of information is forbidden.

      The essay is inconsistent; I found propositions which contradict each other. For example: ''There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate, made of indivisible atoms, or seeds'' versus ''all the complexity we observe in the physical universe is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale''.

      Suppose that at the level of indivisible atoms a universal computation keeps running and manages the external physical processes. A question appears: who or what manages this ''digital computer'' and the work of the indivisible atoms? It implies the existence of a deeper background structure which processes and manages the activity of the ''indivisible'' atoms, this ''digital computer''. If the universal computation sits at the bottom of a multi-level hierarchy of emergence, then where does the computation that manages ''the universal computation'' sit? Thus, the digital/computational universe conjecture contradicts the idea of indivisible atoms.

      In conclusion, I agree with you that Reality is ultimately digital, but the digital/computational universe conjecture is wrong and inconsistent.

      Sincerely,

      Constantin


      The previous post is mine, by Constantin Leshan; my login did not hold.

      Sincerely,

      Constantin Leshan


        Hi Tommaso

        My rating is done and you got a good grade. A very well written essay. I agree with the essence of it, as I hope you can verify in my essay, even though the style of my writing is quite different.

        Now having said that, I would say:

        I agree that our universe is made from some simple basic cellular automata and that most things are emergent phenomena.

        I don't agree with identifying those automata with space-time and seeing particles and everything else emerge from there.

        My position is quite the opposite: I identify the basic automata with particles and see space and time derived from their interaction. Unfortunately I haven't done concrete definitions and experimentation with my approach.

        I feel my approach may have the problem of requiring more complex automata, but it might be easier to codify relativity there.

        Could you comment?

        Regards

        Juan Enrique Ramos Beraud


          Hi Juan Enrique,

          Tommaso Bolognesi's essay contradicts quantum mechanics: how does the digital/computational universe conjecture handle the motion of particles and Heisenberg uncertainty? For this purpose the digital computer must have definite, absolute information about the position and momentum of every particle. Moreover, this ''computer'' must know all quantum information with absolute precision before events occur - this is forbidden by quantum mechanics. Also, to perform such processing, the exchange of information and the work of the computer must be faster than light. How does the digital computation theory explain the EPR paradox and nonlocality?

          I can prove a theory false simply by finding one example in which the theory does not hold. Let us analyze his statement: ''all the complexity we observe in the physical universe, from subatomic particles to the biosphere, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale''.

          I can show you a place where the digital/computational universe conjecture is wrong: 1) at the center of a black hole, as described by general relativity, lies a gravitational singularity, a region where the spacetime curvature becomes infinite. Thus, at the center of a black hole a digital computation is not possible, because spacetime curvature becomes infinite. You see, there are places and phenomena which exist without need of digital computation. Since I have found at least one place where digital computation cannot exist, this is proof that the theory is wrong.

          Besides, to process an event the digital computation needs an exchange of information. Inside a black hole (within the event horizon) all paths bring a particle closer to the center; it is no longer possible for the particle to escape. Since a signal can neither escape from a black hole nor move inside it, the exchange of information is not possible. And since the exchange of information near the event horizon is not possible, digital computation is not possible either. Note that digital computation is impossible even outside the black hole, near the event horizon, because the exchange of information is forbidden.

          The essay is inconsistent; I found propositions which contradict each other. For example: ''There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate, made of indivisible atoms, or seeds'' versus ''all the complexity we observe in the physical universe is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale''.

          Suppose that at the level of indivisible atoms a universal computation keeps running and manages the external physical processes. A question appears: who or what manages this ''digital computer'' and the work of the indivisible atoms? It implies the existence of a deeper background structure which processes and manages the activity of the ''indivisible'' atoms, this ''digital computer''. If the universal computation sits at the bottom of a multi-level hierarchy of emergence, then where does the computation that manages ''the universal computation'' sit? Thus, the digital/computational universe conjecture contradicts the idea of indivisible atoms.

          In conclusion, I agree with you that Reality is ultimately digital, but the digital/computational universe conjecture is wrong and inconsistent.

          Sincerely,

          Constantin

          Dear Constantin,

          would it be wise to say that Quantum Field Theory is wrong because it does not predict the existence of unicellular organisms?

          This is not meant to be provocative, but only to express what I believe is the 'delicate' status of any conjectured theory of everything (QFT not even pretending to be one). Any such conjecture should maximize the number of explained physical phenomena while minimizing the machinery of the explanations, for example by getting rid of universal constants such as c and G, which should be derived, not assumed.

          I believe that the digital/computational reality conjecture (I wrote 'conjecture', not 'theory') could hardly be beaten in terms of simplicity -- any kid can understand and reproduce the steps of, say, a deterministic Turing machine moving on a binary tape, or on a graph -- and this is already a great incentive for investigating it. But it is equally clear that the number of 'proof obligations' assigned to the conjecture is explosive, offering much room for criticism until they are discharged.

          Then, looking at the long list of TODOs, let me first summarize the good news, and tick off the phenomena occurring in spacetime that the conjecture can comfortably explain, via emergence in computation (see also the essay and references):

          - random-like behaviors;

          - periodicity, and co-existence of regular-periodic and random-like structures;

          - self-replication;

          - localized periodic structures that interact with one another, similar to particle scattering diagrams.

          All these can be observed in cellular automata as well as in algorithmic causal sets.

          What I find almost miraculous is that we get these features for free, that is, without coding anything of physical flavor into those simple models of computation. And I do hope that you agree in considering the above as fundamental PHYSICAL phenomena, in the broad sense that they qualitatively characterize our universe as we perceive it.

          I would not endorse any ToE proposal that does not perform VERY well at these tasks -- first qualitatively, and then, of course, also quantitatively. In my essay I additionally suggest that these properties, in duly varied forms, should manifest very early in the history of the universe.

          I realize that this conjecture represents a radical shift of perspective, in open violation of a principle supported by many scientists (e.g. Carlo Rovelli), also expressed somewhere in these blogs, that science always progresses incrementally, by smooth improvements of the best existing theories. To say that this has always been, and will always be, the case is quite a strong statement (proposal for the next FQXi Contest: 'Is the History of Physics Discrete or Continuous?'). But, if the 'continuous' solution is preferred, it would certainly be wise to widen the domain of theories to be considered for improvement or integration, including not only SR, GR and QM, but also Complex Systems, Self-Organization, and, of course, Darwin.

          I have taken a larger tour than you probably expected. You raise specific points and I do want to answer them, point by point, as precisely as possible. Take this as a preamble. I'll be back.

          Bye for now

          Tommaso

          Constantin (and Juan Enrique), I have given a first answer to your objections up in the blog where you raised them first. The rest of my replies will hopefully come tomorrow; look for them by scrolling up to that same place. Thanks. Tommaso.

          SECOND PART of my answer.

          (The ultimate bottom)

          You find a contradiction between placing indivisible atoms of spacetime at the bottom of reality, and the need for a digital computer that runs the evolution of this collection of atoms. You seem annoyed by the fact that such a digital computer would represent a 'deeper background structure' beneath the level of these indivisible spacetime atoms, requiring perhaps even an operator (you ask: 'who/what manages this digital computer?'): discrete spacetime would no longer be the very 'bottom' of the universe. The answer is simple: I do NOT postulate the existence of such a computer, in the basement or elsewhere, as clearly written at the bottom of page 1 of my essay. An algorithm, like a differential equation, is simply a formal way to describe dynamics. No need for hardware.

          (Computation and curvature)

          You write that 'at the center of a black hole, a digital computation is not possible because spacetime curvature becomes infinite'. I completely agree that your PC (or even my Mac!) would start having computing problems a while after crossing a black hole horizon. But, again, we are not talking about hardware; we are talking abstractly about computation. Better: computations on graphs. The variety of structures and phenomena that one can obtain out of algorithmically evolving graphs is formidable. A cheap proof of this, if you wish, is that when you describe phenomena such as those involving black holes, you tend to visualize things precisely in terms of points and arrows, which is what directed graphs are made of...

          By the way, a very good 1999 paper by Margenstern and Morita proves that, in the context of cellular automata, (negative) spatial curvature offers a great computational advantage over flat space (M. Margenstern, K. Morita, 'A Polynomial Solution for 3-SAT in the Space of Cellular Automata in the Hyperbolic Plane', Journal of Universal Computer Science, vol. 5, no. 9, Springer, 1999, pp. 563-573). Amazing.

          However, to me, the appropriate question is not whether a computation is possible inside a black hole, but rather what a black hole IS -- how it looks, or manifests, in a graph-based, computational spacetime. I don't know. But simple concepts are available that may help, such as sink nodes (nodes with only incoming arcs) or strongly connected components. Curvature can also be defined for (planar) graphs, where it is called 'combinatorial curvature'. It is always finite, but possibly unbounded, since the number of faces sharing the inspected node can grow, as indeed happens in some of my algorithmic causal sets.
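          For concreteness, the combinatorial curvature at a vertex v of a planar graph is usually defined as kappa(v) = 1 - deg(v)/2 + the sum of 1/|f| over the faces f meeting v. A few lines check it on a cube, whose vertex curvatures sum to the Euler characteristic of the sphere:

```python
from fractions import Fraction

def comb_curvature(degree, face_sizes):
    """Combinatorial curvature at a vertex of a planar graph:
    kappa(v) = 1 - deg(v)/2 + sum of 1/|f| over the faces f meeting v."""
    return 1 - Fraction(degree, 2) + sum(Fraction(1, s) for s in face_sizes)

# A cube vertex: degree 3, with three square faces meeting there.
k = comb_curvature(3, [4, 4, 4])
print(k, 8 * k)   # -> 1/4 2 : the 8 vertex curvatures sum to the Euler
                  # characteristic of the sphere, in Gauss-Bonnet style
```

          A vertex shared by more and more faces of bounded size has curvature that grows without bound, which is the sense in which the quantity is finite but unbounded.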

          (Quantum effects)

          If I had good answers to these problems, they would have appeared very early in the essay. Stephen Wolfram has some potentially useful suggestions for entanglement (NKS book, ch. 9). I have long discussed the issue of quantum effects with Alex Lamb, who is also participating in this Contest, and he has half-managed to convince me that a form of nonlocality could be achieved if we imagine the algorithmic graph rewriting taking place directly on the causal set, rather than on an underlying spatial support, as I have done so far.

          But the more general question is: are we going to eventually apply the standard QM techniques and compose instances of discrete spacetime in a gravitational path integral -- a sum over histories? Perhaps... but later. In doing so, we could follow, for example, the work of Renate Loll and collaborators, who take sums over causal dynamical triangulations (CDT) of spacetime and investigate consequences such as emergent spacetime dimension. But I am reluctant to do this, at least before having fully explored the potential of a classical approach to emergence in discrete, computational spacetime. Exciting phenomena such as those illustrated in Figures 3-5 of my essay would probably be obscured by a QM treatment.

          Finally, while I have no problem conceiving of 'computations' without 'computers', I am more skeptical about defining 'observations' without 'observers', and QM effects do depend on rather intrusive observers, engaging in interactions that affect both the observed and the observing subsystem. In a tiny, discrete, newborn universe that has just reached a size of, say, 33 elements -- counting edges, nodes, faces, or simplices -- is there enough room for this interaction? Is there room for an observer (an INTERNAL observer, that is)? Maybe quantum effects, and perhaps even relativistic ones, unfold only at a later stage, when entities emerge that can play the role of (proto-)observers. But this is only wild speculation.