Dear Tommaso,

I think your essay is very interesting.

I was wondering if you could clarify something: You say, "There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate...made of indivisible atoms, or seeds." How do you reconcile this with the fact that photons of any wavelength travel at the same rate through space? Would photons of a smaller wavelength be more impeded by the atoms of space? (Forgive me if you have already addressed this.)

Also, I feel I should clarify my response to your question about my essay. I do believe that discrete space and time are the ultimate bottom layer of nature, but when I say 'nature,' I'm thinking of the universe as a working system, not its basic, fundamental components. Only through discrete space and time do you get matter, force, (relative) energy, and all the workings of the universe. However, on a fundamental level, I believe space at least is a continuous object. Its discreteness would come from particular one-, two- or three-dimensional regions becoming more 'timelike' in their nature, though they would continue to be space...in my opinion.

All the best with the contest,

Lamont

    Hi Williams,

    you ask whether photons of a smaller wavelength would be more impeded by the atoms of space.

    I wish I were already there!

    The general question behind this, I guess, would be: what, really, are particles in a causal set, understood as a discrete model of spacetime?

    A general answer would be: since the only tool we have for building the universe is a discrete spacetime -- a directed graph made of nodes (events without attributes) and causal relations among them -- a particle is a trajectory, a worldline, a periodic pattern made of events.

    But talking about the SPEED of a particle in a causal set is already quite difficult, since we need a reference frame, and that is not easy to define either, since we cannot enjoy the advantages of a continuous setting, such as Minkowski spacetime. One way to proceed would be to identify the origin of a reference frame with a particle, as defined above, so that you end up with a system of particles that observe each other and detect their mutual speeds... But I have not yet investigated this line of thought.

    Defining what a photon is in a causal set is indeed particularly challenging, for the following reason.

    While the definitions of time-like and space-like sets of events are immediately available, via the notions of chain and anti-chain for partial orders, the definition of light-like path is problematic, since you do not have a continuous background where you can take limits.

    The difficulty is as follows. Consider the set of nodes forming the photon's trajectory.

    If these are in a time-like relation, we have something for which Lorentz distance progresses (the Lorentz distance between two events in a causal set is the length of the longest directed path between them - this works fine, as the people in the Causal Set Programme know well; see the sketch below), but then we would have a progression of the photon's proper time, which contradicts its living on a null spacetime cone.

    If the points are in a space-like relation, no information can be carried by the photon, violating the idea that this particle is the fastest causality carrier.
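    (Incidentally, the Lorentz distance mentioned above is easy to compute on a finite causal set stored as a DAG. Here is a minimal Python sketch of mine -- the toy encoding and names are illustrative, not taken from my actual simulations:)

```python
from functools import lru_cache

# A toy causal set as a DAG: succ[e] lists the events immediately
# following event e. (Illustrative encoding, not the one used in
# my simulations.)
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}

def lorentz_distance(a, b):
    """Length of the longest directed path (chain) from a to b,
    or -1 if b is not in the causal future of a."""
    @lru_cache(maxsize=None)
    def longest(e):
        if e == b:
            return 0
        best = -1
        for n in succ[e]:
            d = longest(n)
            if d >= 0:
                best = max(best, d + 1)
        return best
    return longest(a)

print(lorentz_distance(0, 3))  # 2, e.g. via the chain 0 -> 1 -> 3
```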

    The attitude one assumes with the type of research I have described is to set up computational experiments and, basically, see what happens, without preconceived expectations. I find this approach justified in light of the highly creative power exhibited by emergence in simple models of computation. Of course the idea is then to establish relations between what emerges and familiar physical phenomena, as I suggested, for example, with the entanglement-like effect in Figure 3 of my essay.

    At the moment, photons and null cones appear to be still at large in my causets.

    Hi Don,

    1) Thank you.

    2) You refer to the JOUAL 2009 conference. Four of the six regular papers presented at the event are now published in Complex Systems. I could add that one of those authors - Alex Lamb - has just submitted an interesting essay to this contest, which again matches the JOUAL focus very well! Go take a look.

    3) That's what I call the 'Teilhard conjecture' (after T. de Chardin). We are still very far from its experimental validation, but if it turned out to be false, all the experiments I am running on computational, causet-based big bangs would be, mostly, wasted CPU time...

    4) I fully agree.


    Dear Tommaso Bolognesi,

    "There exists a tiniest scale at which the fabric of spacetime appears as a pome-granate (Figure 1), made of indivisible atoms, or seeds. This view is reflected in models such as Penrose's spin networks and foams, and is adopted in theories such as Loop Quantum Gravity [14] and in the so called Causal Set Programme [6, 13]...."

    Thank you for pointing out that this is a view.

    "At that level, a universal computation keeps running. We do not know yet the program code, but, in accordance with a fundamental principle of minimality ('Occam razor'), we like to believe that it is small, at least initially."

    And it grows by what means?

    "Perhaps it is a self-modifying program: code and manipulated data might coincide. ..."

    Self? In other words by mechanical magic?

    "...This does not mean that we have to postulate the existence of a divine digital Computer that sits in some outer space and executes that code, for the same reason that, under a continuous mathematics viewpoint, we do not need a transcendental analog Computer that runs the Einstein field equations for animating spacetime. Computations may exist even without computers (and, incidentally, the concept of computation is much older than computer technology). 1"

    Of course computations may exist without computers. Your remark about "outer space" could be applied equally well to extra dimensions and other universes.

    I have printed off your essay and am going to read it; but, I must admit that your beginning appears to be ideological rather than scientific. If I am incorrect, I will learn that by reading your essay and will return to apologize.

    James

      Dear James Putnam,

      your post triggers a number of reactions, probably (and unfortunately) more than I can put in a post here.

      The first general point I need to clarify is that, in an attempt to be concise in writing the essay, I adopted a style, especially in the opening that you quote, which may indeed sound more 'assertive' than that of a standard scientific paper. On the other hand, I believe that a bit of 'assertiveness' may be helpful for stimulating discussions in a context such as the FQXi forum.

      And in the rest of the essay, in particular in Section 3 - 'Experimental validation' - I do put my views in the right perspective.

      I did not feel like using more space just to support the validity of the idea of working with discrete models such as causal sets, in light of the abundance of solid papers that share that view as a very reasonable working hypothesis, to say the least.

      You ask: By which means does a discrete spacetime grow?

      Rideout and Sorkin [Phys. Rev. D 61, 024002, 2000] discuss classical dynamics for causal set growth, using *probabilistic* techniques.

      Out of the several ways I have explored (by simulations) for obtaining causal sets by *deterministic algorithms*, the one that I personally consider most attractive consists in letting a 'stateless ant' walk on a trivalent graph G while applying, step by step, one out of two possible graph rewrite rules (known also in Loop Quantum Gravity, and used by Smolin, Markopoulou & friends). G can be equated to a growing space, while spacetime corresponds to the causal set of the computation being performed on it (the relevant references are [1-5], [8], [14] and [17]).

      One of the two graph rewrite rules introduces a new node in G, so this rule would be responsible for the growth of space. But spacetime (the causal set) grows at *every* rewrite rule application, since:

      1 rewrite rule application = 1 computation step = 1 node (event) added to the causal set (spacetime).
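      In (pseudo-)Python, the bookkeeping behind this equation looks roughly as follows -- a minimal sketch of mine under simplifying assumptions: the rewrite rules themselves are omitted (they are in the references), and the ant's move is a placeholder rather than the actual deterministic rule:

```python
# Space: a graph G (adjacency map); here a toy trivalent seed graph.
# Spacetime: a causal set, one event per rewrite rule application.
G = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
last_event_at = {n: None for n in G}   # event that last touched node n
causet_preds = []                      # causet_preds[t] = causal predecessors of event t

def step(ant, t):
    """One computation step at the ant's position: apply a rewrite
    rule (omitted here), record event t, and link it causally to the
    events that last touched the affected nodes."""
    affected = {ant} | G[ant]
    preds = {last_event_at[n] for n in affected} - {None}
    causet_preds.append(preds)         # the causal set grows by one event
    for n in affected:
        last_event_at[n] = t
    # a node-creating rule would also grow G here, enlarging space
    return min(G[ant])                 # placeholder move for the ant

ant = 0
for t in range(10):
    ant = step(ant, t)
print(causet_preds)
```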

      In this respect, my approach is indeed in contrast with the following statement from your essay:

      "The universe evolves, but not because simplicity can generate complexity. Complexity can come only from pre-existing complexity. The greatest possible effects of complexity in the universe exist right from its start in a potential state."

      I tend to disagree with that, based on the phenomenon of deterministic chaos, or algorithmic randomness. The best example of this is perhaps Wolfram's elementary cellular automaton no. 30: in spite of the elementary initial condition and the very simple rule (coded by a Boolean function of three variables), the computation dynamics exhibit a surprising pseudo-random character, as if information were continuously injected into the process from outside.
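      (For the record, rule 30 takes only a few lines to reproduce. This is a quick sketch of mine, with periodic boundary conditions, not code from the essay or references:)

```python
def rule30_step(cells):
    """One synchronous update of elementary cellular automaton rule 30:
    new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

cells = [0] * 31
cells[15] = 1                  # elementary initial condition: a single 1
for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = rule30_step(cells)
```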

      I like to believe (ideology again!) that our universe started from nothing, or almost nothing, and that space, spacetime, and complexity did increase: they were not 'already there' at the beginning, unless you mean that all the complexity of the mentioned rule 30 computation is 'potentially' present in the few bits that code its initial condition and Boolean function.

      I remember a workshop at the Physics Department of the University of Padova (Jan 15, 2008) with lively discussions on String Theory vs. Loop Quantum Gravity. During the panel, Gabriele Veneziano and Carlo Rovelli did agree on one thing: they were both unable to point to a crucial validation/falsification experiment for their respective theories. I believe I am in good company in claiming that, in physics, both imaginative hypotheses and accurate experimental verification are necessary. If you call the former 'ideological', then I am afraid my essay has a lot of ideology in it. And, although I also provide simulation results (more in the references) that suggest interesting analogies with physical phenomena (e.g. the emergence of interacting objects!), my research on causal sets (as well as that of many others) is still very far from precise numerical predictions.

      Since this is getting long, I will answer the remark on the self-modifying program in the next post. Looking forward to your comments once you have read the essay. Thanks for your stimulating remarks!

      Tommaso

      Dear James,

      you quote my essay:

      "Perhaps it is a self-modifying program: code and manipulated data might coincide. ..."

      and ask:

      Self? In other words by mechanical magic?

      The funny thing is that I had removed this line from the quasi-final version of the essay, because I did not have enough space for expanding on it. But then I put it back, being ready to accept questions or criticism. Thank you for giving me an opportunity to explain.

      The idea of a self-modifying program is, again, an attempt to satisfy the requirement of minimality. While I support the computational universe idea, what I find a bit annoying is the separation between (i) a (fixed) program P and (ii) the data D that it manipulates. Under this view, D represents our Reality, while P would be the rule that governs it, without itself enjoying the status of a Real entity. They are two things, and one of them is even 'unreal'. Two is bigger than one. If the program operates on itself, we would have only one thing: P = D = Reality. That would be more elegant. I believe that the Mathematical Universe idea by Max Tegmark also achieves this unity: there is only one thing, namely a mathematical structure.
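      Just to give the flavor of P = D, here is a toy of my own (not one of the machines from my experiments): a 'program' stored in the very memory it rewrites, so that running it can change its own instructions:

```python
# Code and data coincide: the interpreter reads tape cells as
# instructions AND rewrites them as data. (Purely illustrative.)
tape = [1, 0, 2, 5, 1, 4, 0, 0]
pc = 0
for _ in range(20):                  # bounded run, in lieu of halting
    op, arg = tape[pc], tape[(pc + 1) % len(tape)]
    if op == 0:                      # opcode 0: halt
        break
    elif op == 1:                    # opcode 1: increment cell arg
        tape[arg % len(tape)] = (tape[arg % len(tape)] + 1) % 8
    elif op == 2:                    # opcode 2: jump to cell arg
        pc = arg % len(tape)
        continue
    pc = (pc + 2) % len(tape)        # any other opcode: no-op
print(tape)  # the first instruction has rewritten its own opcode (1 -> 2)
```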

      By the way, the concept of a self-modifying program is quite familiar in computer science, e.g. in logic programming (in the Prolog language etc.). Furthermore, self-reference is a recurrent concept when dealing with formal systems (Gödel's theorem) and computation (universal Turing machines), not to mention consciousness. I would not be surprised at all if it played a crucial role in an ultimate, computational theory of physics.

      If you claim that there is something magic in a program that modifies itself (= Reality), then I'd expect you to claim the same for a program that modifies 'external' data (= Reality). In my opinion, there is no more magic in a program that runs our Reality than in a set of differential equations that does essentially the same thing.

      Cheers. Tommaso

      PS

      So far I've done only a few experiments on self-modifying Turing machines, without exciting results. In all the experiments mentioned in the essay, data and program are separated, and the latter is fixed.


      Yes, of course, and a micro BH also, because the singularity says that.

      I am also a musician and a poet; my father was a bus driver, and he has now died. And on top of that, at the age of 20 I was in a coma. No, but frankly, hihihihi, several papers and this and that... a big joke, yes, and big publicity.

      Hop, under review. Christi, hihihi, you see I play everywhere like a child now; I love this platform and the vanities of scientists.

      Computers vs. the rationality of our universe. Big Bangs with an S? No, but frankly: what are you simulating, a universe or your universe? A big joke, all that. It's just computing, not physics. On that, goodbye.

      Don't be offended, I am just a little crazy. Hop, I am going to take my meds. Until soon.

      Best

      Steve

      Dear Tommaso,

      Perhaps it would help to bring up the contribution of Alan Turing and his concept of universality, which unifies data and programs. While one can think of a machine input as data, and a machine as a program, each as separate entities, Turing proved that there is a general class of machines of the same type (defined in the same terms) capable of accepting the description of any other machine and simulating its evolution for any input, hence taking program+data as data, and unifying both.

      This is why one can investigate the 'computational universe' today either by following an enumeration of Turing machines, or by using one (universal Turing) machine running an enumeration of programs as data inputs: both approaches are exactly the same, thanks to Turing's universality.
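      One can make this concrete with any fixed interpreter. In the Python sketch below (my own toy encoding, not a canonical one), the machine description and its input are both plain data handed to a single fixed program:

```python
def run(machine, tape, steps=100):
    """A fixed interpreter: 'machine' (a transition table) and 'tape'
    are both plain data, so machine+input together form a single data
    input to one universal program -- Turing's unification."""
    state, pos, cells = 'A', 0, dict(enumerate(tape))
    for _ in range(steps):
        if state == 'H':                     # halting state
            break
        sym = cells.get(pos, 0)
        write, move, state = machine[(state, sym)]
        cells[pos] = write
        pos += 1 if move == 'R' else -1
    return [cells[i] for i in sorted(cells)]

# A tiny machine that flips 1s to 0s until it reads a 0, then halts.
flipper = {('A', 1): (0, 'R', 'A'),
           ('A', 0): (1, 'R', 'H')}
print(run(flipper, [1, 1, 1, 0]))  # -> [0, 0, 0, 1]
```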

      Best.

      - Hector Zenil


      Dear Tommaso Bolognesi,

      I have printed your response. I am impressed. Your response is not in agreement with me, but that is a minor point. It was directed at my questions and even referred to my own essay. I appreciate the time and effort you put into it. I will follow the leads you referenced, and I will respond when I have put something together worth your time.

      James


      Tommaso,

      Thanks for a fascinating and extremely well constructed essay. Since Wolfram is scheduled to speak at ICCS in Boston this summer, I think it might be interesting to see how your multi-level hierarchy compares to Bar-Yam's multiscale variety -- hierarchies of emergence vs. lateral distribution of information.

      Interesting conceptual equation, "spacetime geometry = order number." Suppose one were to make another equation: "order = organization feedback." Then one would get -- substituting terms in my equation for yours -- the theme of my ICCS 2006 paper ("self-organization in real and complex analysis") that begs self-organization of the field of complex numbers, z, in the closed algebra of C.

      One more comment (though I could go on; your paper is rich in quotable points), concerning global and local (4.1) time-dependent relations among point particles. Research in communication network dynamics (e.g., Braha and Bar-Yam 2006, Complexity vol. 12) often shows radical shifts in hub-to-node connectivity over short time intervals, while in the aggregate the system changes very little over time. Taking point particles as network nodes, perhaps something the same or similar is happening.

      Good luck in the contest. (I also have an entry.) I expect that you will rank deservedly high.

      All best,

      Tom

        Dear Tom,

        thanks for the positive comments. Following your links I reached Robert Laughlin's 2005 book 'A Different Universe: Reinventing Physics from the Bottom Down', in which emergence is given an important role in theoretical physics. Good to hear; another book on the pile!

        In the equation 'spacetime geometry = order plus number', introduced by people in the Causal Set programme, 'number' simply refers to counting the number of events in a region of the causal set, which is then equated to the volume of that region. And 'order' is the partial order among events. You mention self-organization in the context of the field of complex numbers, and this does not seem much related to 'number' in the above sense (if this is what you meant to suggest). But of course I am curious about everything that has to do with self-organization.
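        (To make the 'number' half of the slogan concrete: the volume of the causal interval between two events is literally a count. A small illustrative sketch, with my own encoding:)

```python
def future(e, rel, seen=None):
    """All events reachable from e along the relation rel (including e)."""
    if seen is None:
        seen = {e}
    for n in rel[e]:
        if n not in seen:
            seen.add(n)
            future(n, rel, seen)
    return seen

# 'Order': the partial order, given by immediate successors and,
# transposed, by immediate predecessors.
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
pred = {0: [], 1: [0], 2: [0], 3: [1, 2]}

# 'Number': the volume of the interval [a, b] is just the count of
# events x with a <= x <= b.
a, b = 0, 3
print(len(future(a, succ) & future(b, pred)))  # 4 events in [0, 3]
```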

        Usually a self-organizing system is conceived as a multitude of simple active entities. Does this happen in your ICCS 2006 paper?

        One peculiarity of the 'ant-based' (or Turing-machine-like) approach discussed in my essay is that you actually have only ONE active entity -- the 'ant' -- and expect everything else to emerge, including the multitude of interacting particles or entities that one normally places at the bottom of the hierarchy of emergence.

        Ciao. Tommaso

        • [deleted]

        Dear Tommaso,

        Thank you for your essay. You write a lot about emergence and computation, chaos, self-organization and automata: all elements I have touched on in my essay, because they are closely connected to the evolution of the spacetime concept.

        E.g. you write: "Computations may exist even without computers". It seems to have something in common with Computational LQG by Paola Zizzi. In my essay I have even quoted Paola.

        My own view is that the universe is a dissipative coupled system that exhibits self-organized criticality. This structured criticality is a property of complex systems where small events may trigger larger ones. It is a kind of chaos where the general behavior of the system can be modeled at one scale, while smaller- and larger-scale behaviors remain unpredictable. A simple example of this phenomenon is a pile of sand.
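        (That sandpile is the Bak-Tang-Wiesenfeld model; a minimal sketch of my own shows how a single grain can trigger an avalanche of almost any size:)

```python
import random

N = 11
grid = [[0] * N for _ in range(N)]   # grains per site

def drop(x, y):
    """Drop one grain at (x, y); any site holding 4+ grains topples,
    shedding one grain to each neighbour (grains fall off the edges).
    Returns the avalanche size: usually tiny, occasionally huge."""
    grid[y][x] += 1
    topples = 0
    unstable = [(x, y)] if grid[y][x] >= 4 else []
    while unstable:
        i, j = unstable.pop()
        if grid[j][i] < 4:           # may have been listed twice
            continue
        grid[j][i] -= 4
        topples += 1
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < N and 0 <= nj < N:
                grid[nj][ni] += 1
                if grid[nj][ni] >= 4:
                    unstable.append((ni, nj))
    return topples

sizes = [drop(random.randrange(N), random.randrange(N)) for _ in range(5000)]
print(max(sizes))  # a few huge avalanches among mostly trivial drops
```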

        While QM and GR are computable and deterministic, the evolution of the universe (a naturally evolving self-organized critical system) is non-computable and non-deterministic. This does not mean that computability and determinism are related; Roger Penrose argues that computability and determinism are different things.

        Let me try to summarize: the actual universe is computable at the Lyapunov time, so it is digital, but its evolution is non-computable, so it remains at the same time analog (the Lyapunov time is the length of time it takes a dynamical system to become chaotic).

        Your work seems to be an attempt to develop a computable model of the universe at the Lyapunov time. Good luck!

        Jacek


        Ciao Tommaso,

        Yes, I do mean to suggest that the non-ordered set z (the universal set of complex numbers) is organized to allow -- not a partial order of events -- but a well-ordered sequence in the specified domain of topology and scale, with analytic continuation over n-dimensional manifolds. It is nontrivial that this is accomplished without appeal to Zorn's lemma (axiom of choice). And time is given a specifically physical definition. I followed up at ICCS 2007 with a nonmathematical paper ("Time, change and self organization") that incorporated and expanded on some of these results.

        You pick up right away on the difference between the hierarchical distribution of information and multiscale variety. I am thinking that your 'multitude of entities' may be dual to the 'ant' analogy, because with activities occurring at varying rates at different scales, new hierarchies may form and feed back into the system dynamics.

        You know, Boston is very beautiful in the summer. :-)

        All best,

        Tom

        Hi again Tommaso,

        Just to let you know I dropped you a comment on Feb. 18, 2011 @ 21:07 GMT concerning the data vs. code question, just in case you hadn't seen it.

        Best.


        Dear Tommaso,

        Welcome to the essay contest. This essay contradicts quantum mechanics: how does your digital/computational universe conjecture handle the Heisenberg uncertainty principle? For this purpose your digital computer must know definite, absolute information about the position and momentum of every particle. Moreover, this ''computer'' must know all quantum information with absolute precision before events occur, which is forbidden by quantum mechanics. Also, to perform such processing, the exchange of information and the work of the computer must be faster than light. How does your digital computation theory explain the EPR paradox and nonlocality?

        I can prove a theory false by simply finding one example in which the theory does not hold. Let us analyze your statement: ''all the complexity we observe in the physical universe, from subatomic particles to the biosphere, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale''.

        I can show you a place where the digital/computational universe conjecture is wrong: 1) At the center of a black hole as described by general relativity lies a gravitational singularity, a region where the spacetime curvature becomes infinite. Thus, at the center of a black hole digital computation is not possible, because the spacetime curvature becomes infinite. You see, there are places and phenomena which exist without need of digital computation. Since I have found at least one place where digital computation cannot exist, this is a proof that the theory is wrong.

        Besides, to process an event the digital computation needs an exchange of information. Inside the black hole (event horizon) all paths bring the particle closer to the center of the black hole; it is no longer possible for the particle to escape. Since a signal can neither escape from a black hole nor move inside one, the exchange of information is not possible. Since the exchange of information near the event horizon is not possible, digital computation is also not possible. Note that digital computation is not possible even outside the black hole, near the event horizon, because the exchange of information is forbidden.

        The essay is inconsistent; I found propositions which contradict each other. For example: ''There exists a tiniest scale at which the fabric of spacetime appears as a pomegranate, made of indivisible atoms, or seeds''. ''all the complexity we observe in the physical universe, is a manifestation of the emergent properties of a digital computation that takes place at the smallest spacetime scale''.

        Suppose that at the level of indivisible atoms a universal computation keeps running and manages the external physical processes. A question appears: who or what manages this ''digital computer'' and the work of the indivisible atoms? This implies the existence of a deeper background structure which processes and manages the activity of the ''indivisible'' atoms of this ''digital computer''. If the universal computation sits at the bottom of a multi-level hierarchy of emergence, then where does the computation sit which manages ''the universal computation''? Thus, the idea of the digital/computational universe conjecture contradicts the idea of indivisible atoms.

        In conclusion, I agree with you that Reality is ultimately digital, but the digital/computational universe conjecture theory is wrong and inconsistent.

        Sincerely,

        Constantin

        • [deleted]

        The previous post is my post, by Constantin Leshan. The login did not hold.

        Sincerely,

        Constantin Leshan


          Hi Tommaso

          My rating is done and you got a good grade. A very well written essay. I agree with the essence of it, as I hope you can verify from my essay, even though my writing style is quite different.

          Now having said that, I would say:

          I agree our universe is made from some simple basic cellular automata and most things are emergent phenomena.

          I don't agree with identifying those automata with space-time and seeing particles and everything else emerge from there.

          My position is quite the opposite: I identify the basic automata with particles and see space and time as derived from their interactions. Unfortunately, I haven't yet worked out concrete definitions or experiments for my approach.

          I feel my approach may have the problem of requiring more complex automata, but it might be easier to encode relativity there.

          Could you comment?

          Regards

          Juan Enrique Ramos Beraud


            Dear Constantin,

            would it be wise to say that Quantum Field Theory is wrong because it does not predict the existence of unicellular organisms?

            This is not meant to be provocative, but only to express what I believe is the 'delicate' status of any conjectured theory of everything (QFT not even pretending to be one). Any such conjecture should maximize the number of explained physical phenomena while minimizing the machinery of the explanations, for example by getting rid of universal constants such as c and G, which should be derived, not assumed.

            I believe that the digital/computational reality conjecture (I wrote 'conjecture', not 'theory') could hardly be beaten in terms of simplicity -- any kid can understand and reproduce the steps of, say, a deterministic Turing machine moving on a binary tape, or on a graph -- and this is already a great incentive for investigating it. But it is equally clear that the number of 'proof obligations' assigned to the conjecture is explosive, offering much room for criticism until they are discharged.

            Then, looking at the long list of TODOs, let me first summarize the good news, and tick off the phenomena occurring in spacetime that the conjecture can comfortably explain via emergence in computation (see also the essay and references):

            - random-like behaviors;

            - periodicity, and co-existence of regular-periodic and random-like structures;

            - self-replication;

            - localized periodic structures that interact with one another, similar to particle scattering diagrams.

            All these can be observed in cellular automata as well as in algorithmic causal sets.

            What I find almost miraculous is that we get these features for free, that is, without coding anything of physical flavor into those simple models of computation. And I do hope that you agree to consider the above as fundamental PHYSICAL phenomena, in the broad sense that they qualitatively characterize our universe as we perceive it.

            I would not endorse any ToE proposal that does not perform VERY well at these tasks -- first qualitatively, and then, of course, also quantitatively. In my essay I additionally suggest that these properties, in duly varied forms, should manifest very early in the history of the universe.

            I realize that this conjecture represents a radical shift of perspective, in open violation of a principle supported by many scientists (e.g. Carlo Rovelli), also expressed somewhere in these blogs, that science always progresses incrementally, by smooth improvements of the best existing theories. To say that this has always been, and will always be, the case is quite a strong statement (proposal for the next FQXi Contest: 'Is the History of Physics Discrete or Continuous?'). But, if the 'continuous' solution is preferred, it would certainly be wise to widen the domain of theories considered for improvement or integration, including not only SR, GR and QM, but also Complex Systems, Self-Organization and, of course, Darwin.

            I have taken a larger tour than you probably expected. You raise specific points and I do want to answer them point by point. Take this as a preamble. I'll be back.

            Bye for now

            Tommaso

            Constantin (and Juan-Enrique), I have given a first answer to your objections up in the blog where you first raised them. The rest of my replies will hopefully come tomorrow. Look for it by scrolling up to that same place. Thanks. Tommaso.