Dear Mauro,

Thank you for the helpful and detailed response to my question. I understand you as saying an event is simply defined as information in the spaceless here and timeless now. Space and time then arise as ways to organize events in such a way that they are given a spatio-temporal structure. Is this accurate?

If so, it is still not clear to me how information for an event is measured without any reference to space and time. I can imagine a reference frame where local space and time are defined, and then using that to develop a global spacetime structure as Einstein did. But how are the measurements made in the observer's reference frame without even time and space defined for it?

To illustrate, you used the example of space arising from comparisons of the sizes of an image on the retina. But the measurement of size on the retina seems to presuppose space. And the example of time arising from comparing a clock measurement now with a remembered clock measurement also seems to presuppose a temporal distinction between the now and the past memory. It also seems to presuppose space, in order to measure the movement of the clock's dials.

Your further clarification would be appreciated.

Best regards,

Tom

  • [deleted]

Very nice presentation (clear and well written), congratulations!

However, I'm having some problems understanding how space-time (time in particular) can emerge from the causal structure of the network: I really don't see how you can define "cause" and "effect" without the notion of "time", because a cause must by definition precede IN TIME an effect. (Of course there are also further requirements for such a definition.)

In other words, it seems that the clock time tau of the quantum computer's computation is "sneaked in" to constitute the elementary building block of time, which then does not really emerge, but is "sneaked in". [By "clock" here, I mean the clock in a computer, namely the time it takes for a single elementary operation to complete, which in normal computers determines the processor speed.]

I have similar concerns also for the emergence of space.

Probably I'm missing something! Thanks in advance for the clarifications...

    • [deleted]

    Dear Anonymous Reader,

    thank you very much for your kind appreciation!

    Here's the point that is needed to understand the emergence of space-time from the causal network.

    First, causality must be defined in a way that is independent of the arrow of time: otherwise, you cannot consider even the mere possibility that information could be sent from the future. Or, equivalently: you cannot even imagine the possibility of time-travel!

    Causality is defined in my Ref. [8] (and also in [10]) without reference to time. There you have events in input-output connection: causality is the assumption that the marginal probability of any event does not depend on the set of events connected at its output. We then assume that the time-arrow coincides with the causality arrow, i.e. with the in-out direction. In short: cause and effect are defined simply through an asymmetric dependence of marginal probability.
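
    A minimal numerical sketch of this asymmetry (the distributions below are invented for illustration; this is not the construction of Refs. [8] and [10], only its flavour): the marginal of the "input" event is insensitive to which operation is wired at its output, while the marginal of the "output" event does depend on what is wired at its input.

    import numpy as np

    # A toy causal link: event "a" (input) wired into event "b" (output)
    # through a conditional probability p(b | a).
    def marginals(p_a, p_b_given_a):
        joint = p_a[:, None] * p_b_given_a           # p(a, b) = p(a) p(b | a)
        return joint.sum(axis=1), joint.sum(axis=0)  # marginal of a, marginal of b

    channel = np.array([[0.9, 0.1],                  # rows: value of a, columns: value of b
                        [0.2, 0.8]])
    other_channel = np.array([[0.5, 0.5],
                              [0.5, 0.5]])

    # (1) Change what is connected at the OUTPUT of "a": the marginal of "a" is unchanged.
    p_a = np.array([0.7, 0.3])
    print(marginals(p_a, channel)[0])        # [0.7 0.3]
    print(marginals(p_a, other_channel)[0])  # [0.7 0.3] -- independent of the downstream choice

    # (2) Change what is connected at the INPUT of "b": the marginal of "b" does change.
    print(marginals(np.array([0.7, 0.3]), channel)[1])  # [0.69 0.31]
    print(marginals(np.array([0.1, 0.9]), channel)[1])  # [0.27 0.73]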

    The emergence of time (as well as space) should now be regarded as the emergence of the Minkowski "metric" from pure "topology" through event-counting. And this can be done via the building-up of foliations over the quantum circuit. The time tau and the distance a are just the digital-analog conversion from pure dimensionless numbers (event counts) to the usual seconds and meters.
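
    To make the event-counting idea concrete, here is a toy sketch (my own simplification, not the construction of the essay): on a 1+1-dimensional causal network, distances are first pure counts of elementary events, and only the conversion factors tau and a (hypothetical values below, of the order of the Planck scale) turn them into seconds and meters.

    # Toy causal network of a 1+1-dimensional circuit:
    # event (t, x) causes events (t + 1, x - 1) and (t + 1, x + 1).
    def successors(event):
        t, x = event
        return [(t + 1, x - 1), (t + 1, x + 1)]

    def causal_distance(origin, target, max_steps=100):
        """Minimal number of elementary steps from origin to target (None if not in its causal future)."""
        frontier, steps = {origin}, 0
        while frontier and steps <= max_steps:
            if target in frontier:
                return steps
            frontier = {s for e in frontier for s in successors(e)}
            steps += 1
        return None

    # Pure event counting (dimensionless):
    n_steps = causal_distance((0, 0), (4, 2))
    print(n_steps)                      # 4 elementary steps

    # "Digital-to-analog" conversion: the counts become seconds and meters only after
    # multiplying by an elementary time tau and an elementary length a (assumed values).
    tau, a = 5.4e-44, 1.6e-35
    print(n_steps * tau, "seconds along the foliation")
    print(2 * a, "meters of spatial separation on the leaf t = 4")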

    I hope that I answered your question!

    You can convince yourself that space-time is always referred to events (not that events happen within space-time) by taking the lesson from Einstein literally: time and space must be defined operationally through measurements. Then, ultimately, each measurement is referred to a single observer, the AYPT (at your place and time), through a history of previous observations (please read my answer to Thomas). Thus whatever happens in the four-dimensional Minkowski space-time is precisely contained in a zero-space-dimensional local memory. It is like the stream of bits of a 3D movie.

    Let me know your opinion now!

    Cheers

    Mauro

    • [deleted]

    Thanks for the clarification! I'm still a little confused though. If events are "facts of the world describable by the basic language obeying the rules of predicate logic", I'm not sure how I can assign a probability to an event (and hence calculate a marginal). The event either happens, or it doesn't happen. Probabilities pertain to our predictions only (namely to our ignorance of some fact). What exactly do you mean by "probability of an event"?

    Also, when you speak of the events connected to the input and to the output, you are implying that the input happens before (IN TIME) the output. I would say that is implicit in the notion of input-output. Can you instead define input and output without resorting to time?

    In other words, I'm sorry, but I still don't see how you can relate events without assuming time...

    • [deleted]

    Dear Anonymous,

    From your answer I infer that for you the impossibility of time-travel is a tautology, yet many authors believe that time-travel is possible!

    In a time-travel the input is in the future and the output is in the past...

    Cheers

    • [deleted]

    Dear Tobias,

    sorry for not having replied to you sooner, but I didn't see your further post. I did some experiments for larger circles, and I noticed that I cannot have more than 12 sides. This may be connected to your point. In that case, this idea doesn't work and one needs other ways, such as using the depth of events due to clock imprecision, or some other idea. In the meantime I noticed your wonderful paper, and I'm going to leave my feedback on your blog.

    Cheers

    Dear Mauro,

    thanks for getting back to it! It's good to sort this out; luckily it's a mere mathematical point and therefore has a clear and unequivocal answer.

    In fact, using the idea of Minkowski sum it is relatively simple to prove that a regular lattice will never give an isotropic propagation speed. Let me know in case I should explain more details.

    The figures are amazing!! How did you make these?

    I think I understand the first one, but the second one I unfortunately could not make sense of...

    What I forgot to say yesterday is that the essay nevertheless is one of the most fascinating ones and contains some ingenious insights :)

    Dear Tobias,

    thank you for your nice words. Regarding the figures, I have even better ones: the ones that I posted were done only for the sake of this discussion with you, and I spent no more than 30 minutes making them, just using Xfig (available for Unix/Linux, or for the Mac via Fink).

    Coming back to Physics: it seems to me that our two works are much more connected than may appear at first sight.

    I'm very interested in the mathematical proof you mention that a regular lattice will never give an isotropic propagation speed (this clearly refers only to space dimensions d>1). Indeed, the only notion of "Minkowski sum" that I know is an operation between subsets of an affine space. Can you give me more information, e.g. a place to look for the proof you mention, or can you please give me more details?

    What you say is very interesting. However, at first sight it seems to contradict the possibility of simulating the Dirac equation (which is covariant!) by a quantum computer with a periodic network of gates. This is also the case for your graphene simulator. I believe that your proposal of the graphene simulator is a great idea, and I want to prove it correct. But how do we reconcile a quantum-computer simulation of Dirac with an anisotropic maximum speed of propagation of information?

    I will also post a reply on your blog, continuing our two parallel discussions.

    Let me say that, from my positive experience with these blogs, FQXi's idea for this contest is starting to pay real dividends to research in terms of interesting discussions.

    Cheers

    Mauro

    Giacomo

    I enjoyed reading your article. I have been interested in how the quantum computer would combine the digital and the analog properties.

    Relative to the mass-dependent refraction index of the vacuum, what would the effect be if a different Planck mass were employed, one on the order of the mass of the electron or of the proton, while at the same time keeping the Planck length? See my article, and review the connection between the Planck-length realm and the electron-proton realm.

    Guilford Robinson

      • [deleted]

      Dear Mauro,

      sorry for the delay, sometimes it's difficult when one has a day job, but I suppose you know that ;)

      So about proving anisotropy of propagation speed in a regular lattice: The notion of Minkowski sum I mentioned is indeed the one you are familiar with. The anisotropy proof goes as follows: think of the lattice as projected onto space, ignoring the time dimension. Designate a certain starting point as the origin. Then define the "ball" B_n to be the set of points which can be reached in n steps from the origin. Clearly, every B_n is a polytope, i.e. is the convex hull of finitely many points. For a certain n (n=2 in your case), the extreme points of B_n are all translates of the origin. Then how far can we get in n+n=2n steps? From the origin, we can get to all the outer points of B_n; from each outer point, we can then traverse another n steps. And then the distance traversable in these n steps is precisely given by a translated copy of B_n! Therefore, B_2n is the Minkowski sum of B_n with itself. Hence B_2n coincides with B_n scaled by a factor of 2. The same argument applies inductively to show that

      [math]B_{kn}=k\cdot B_n[/math]

      In particular, the shape of the balls B_kn is independent of k.
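
      A quick numerical illustration (my toy code, for the simplest case of the square lattice with nearest-neighbour steps; not part of Tobias's argument): the rescaled balls B_n / n all keep the same diamond shape, so the maximum propagation speed depends on the direction.

      import numpy as np

      # B_n = points reachable in at most n nearest-neighbour steps on the square lattice.
      moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]

      def ball(n):
          reached = {(0, 0)}
          for _ in range(n):
              reached |= {(x + dx, y + dy) for (x, y) in reached for (dx, dy) in moves}
          return reached

      # The rescaled balls B_n / n all have the same diamond shape |x| + |y| <= 1:
      for n in (2, 4, 8):
          pts = np.array(sorted(ball(n))) / n
          print(n, np.max(np.abs(pts).sum(axis=1)))   # the 1-norm "radius" is always exactly 1

      # So the wavefront reaches Euclidean distance n along the axes but only n / sqrt(2)
      # along the diagonals: the maximum propagation speed is not isotropic.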

      Concerning the comparison to graphene, yes, that's an excellent question! One difference is that we are now looking at wave functions instead of classical point particles. Then the characteristic quantity of the system is the energy-momentum relation E(p) of the (quasi-)particles. A Taylor expansion of this quantity yields precisely something of the form

      [math]E(p)=M_{ij}p^i p^j + O(p^3)[/math]

      where M_ij is something like an "inverse mass tensor" and summation is implied. When

      [math]M_{ij}=\delta_{ij}[/math]

      holds, then the low-energy excitations have isotropic propagation speed! And as I mentioned in my essay, it is in fact only the low-energy excitations for which the whole emergence of the massless Dirac equation holds. (In light of this discussion, this is a point which I should have emphasized more...) For higher-energy excitations, isotropy does not hold. In the graphene case, anisotropies occur which are known as "trigonal warping"; I haven't been able to find a good reference for this, but google turns up a whole lot of papers on that.
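
      For what it is worth, here is a small numerical sketch of this point on the standard nearest-neighbour tight-binding dispersion of graphene (my code, with the hopping t and the nearest-neighbour distance set to 1; these conventions are an assumption, not taken from either essay): near a Dirac point the energy is almost direction-independent at small momenta, while the directional spread, i.e. the trigonal warping, grows with the momentum.

      import numpy as np

      # Nearest-neighbour vectors of the honeycomb lattice and the dispersion
      # E(k) = |sum_i exp(i k . delta_i)| (hopping t = 1, nearest-neighbour distance = 1).
      deltas = np.array([[0.5, np.sqrt(3) / 2],
                         [0.5, -np.sqrt(3) / 2],
                         [-1.0, 0.0]])

      def energy(k):
          return np.abs(np.exp(1j * deltas @ k).sum())

      K = np.array([2 * np.pi / 3, 2 * np.pi / (3 * np.sqrt(3))])   # a Dirac point: energy(K) = 0

      # Sample E(K + q) at fixed |q| in twelve directions: the min/max spread is tiny
      # for small |q| (isotropic cone) and grows at larger |q| (trigonal warping).
      for q_mag in (0.01, 0.1, 0.5):
          E = [energy(K + q_mag * np.array([np.cos(th), np.sin(th)]))
               for th in np.linspace(0, 2 * np.pi, 12, endpoint=False)]
          print(q_mag, round(min(E), 4), round(max(E), 4))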

      The previous post is mine, my login had expired...

      Also, the energy-momentum relation is missing a square root.

      Dear Tobias,

      thank you very much for your simple and clever mathematical proof!

      I think, however, that there must be a way to improve the isotropy, especially for the massless case, which you comment on in your blog in reply to my last post. Here is a mechanism that I recently devised. As you may know from my paper, in 1+1 dimensions there is a renormalization of the speed c coming from the mass coupling between the left and right field operators, due to unitarity. In short, the sum of the square moduli of the matrix elements of the local U (in 2 steps) must be one, and this turns out to be the sum of the squared mass and the squared speed. I thought that there may be a way that in d>2 space dimensions the coupling with a larger number of modes provides more non-vanishing matrix elements (you must have a larger matrix in larger dimensions, with dimension 4 in 3+1, with the gate shaped as two pentachorons (5-cells), connecting 4 wires with 4 wires in space). One then has the chance of an anisotropic refraction index coming from unitarity, curing the problem. And this may also be the way to cure the massless case, which still needs more matrix elements in U, even without the mass coupling: even without the mass coupling there is the need to couple the four field modes exiting from the vertices of the pentachoron in order to recover the three second-order partial derivatives from a 4x4 matrix (H=U-U^\dag).
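
      To make the unitarity constraint explicit, here is a toy 2x2 one-step unitary of the Dirac-automaton kind in 1+1 dimensions (my own parametrisation for illustration, not necessarily the one of the paper): the "speed" amplitude n multiplies the shift phases, the "mass" amplitude m couples the left and right modes, and unitarity forces n^2 + m^2 = 1, i.e. speed squared plus mass squared equals one.

      import numpy as np

      def step(k, n, m):
          # One step of the walk in momentum space: n = "speed" amplitude, m = "mass" coupling.
          return np.array([[n * np.exp(1j * k), 1j * m],
                           [1j * m, n * np.exp(-1j * k)]])

      n, m = 0.8, 0.6                       # chosen so that n**2 + m**2 == 1
      for k in (0.0, 0.3, 1.0):
          U = step(k, n, m)
          print(np.allclose(U.conj().T @ U, np.eye(2)),   # True: U is unitary
                np.sum(np.abs(U[0]) ** 2))                # 1.0: speed^2 + mass^2

      # Keeping the bare speed n = 1 while switching on the mass breaks unitarity,
      # which is the sense in which the mass coupling renormalizes the speed:
      V = step(0.3, 1.0, 0.6)
      print(np.allclose(V.conj().T @ V, np.eye(2)))       # False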

      I cannot believe that the massless field has no digital analog: there must be a way! Otherwise we are proving that the world is not digital!!

      Dear Guilford,

      thank you for your interest and your compliments! I just downloaded your paper: I'll take a closer look at it (it doesn't look easy to follow at first sight).

      The refraction index of the vacuum is a function of the ratio between the Compton wavelength and the distance 2l between two next-neighbor in-out independent gates, and the same ratio expressed in terms of masses gives the Planck mass if you take 2l equal to the Planck length. One would indeed need another good reason to choose 2l as the Planck length: the only thing that I can say is that it is the minimum distance in principle between causally independent events. Clearly, if you take 2l larger than the Planck length, you may incur imaginary refraction indices (corresponding to absorption?), which is odd. Hence the Planck mass must be the largest possible mass of the field, and information halts at such a mass value!
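
      Just as a back-of-the-envelope check of the scales involved (dimensional analysis only, not the refraction-index formula of the essay): the reduced Compton wavelength hbar/(mc) equals the Planck length exactly at the Planck mass, so the ratio of the Compton wavelength to 2l ~ the Planck length is simply m_Planck / m.

      import scipy.constants as const

      hbar, c, G = const.hbar, const.c, const.G
      planck_length = (hbar * G / c**3) ** 0.5
      planck_mass = (hbar * c / G) ** 0.5

      def compton(m):
          return hbar / (m * c)    # reduced Compton wavelength

      print(compton(planck_mass) / planck_length)   # ~1.0: they coincide at the Planck mass
      print(compton(const.m_e) / planck_length)     # ~2.4e22 for the electron ...
      print(planck_mass / const.m_e)                # ... which is exactly m_Planck / m_electron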

      I hope that this is what you were looking for. Please let me know.

      Dear Giacomo,

      In a last-minute 'trawl' of essays I hadn't read, I was pleased to come across yours. Your clear and lucid description of a quantum computer was very interesting and refreshing, and gave a new angle on my own model.

      I hope you'll read my rather analogue version of what seems to be QC=SR, entirely equivalent, explaining special relativity logically with a quantum mechanism and deriving Equivalence with a = g.

      Probably too late for you to vote now, but I'd like your take on it anyway. The lower string gives some good analogies. http://fqxi.org/community/forum/topic/803

      Best wishes

      Peter

        Tobias,

        there is something odd which I cannot understand in your beautiful proof (which I'd like to be correct, since it would be a simple argument). Apparently your assertion that the "ball" B_n (the set of points which can be reached in n steps from the origin) is a polytope is not true. See e.g. the figure attached here. Where am I wrong?

        Attachment #1: Minkowski_small.pdf

        Dear Peter,

        thank you for your appreciation.

        I downloaded your paper. It looks very nice, but with a lot of physics that I cannot check myself.

        Best regards,

        Mauro

        • [deleted]

        OK, this may become a very long post... maybe we should switch to email? Or is anyone else following this discussion here on the forum?

        First of all, I think the statement is true in both your Minkowski2.pdf (as I interpret it) and also in your Minkowski3.pdf. Note that I was not claiming B_kn = k*B_n to be true for all n. Rather, I said that there exists a certain n such that this holds for all k. In Minkowski3.pdf, you have drawn all B_n from B_1 to B_5. The relevant value for n here is n=2. And we have indeed B_4 = 2*B_2, as claimed.

        However, your previous post did point out a problem in my proof. So I have been going back to the drawing board and thought about it all again. By now, I think I have a mathematically precise formulation of the statement as well as a rigorous proof. Here it comes.

        The setting is the following: let us consider any periodic graph, that is, an infinite, connected and locally finite graph G=(V,E) together with an embedding of G into R^d, where d is arbitrary. The main assumption is that this embedding is periodic: there is a group Z^d acting by translations on R^d which maps the embedded graph to itself. Hence R^d decomposes into isomorphic unit cells of finite size, which are all translates of each other. In your example, the unit cell can be taken to be a hexagon made up of 6 equilateral triangles.

        Now fix any point of the graph as origin and take B_n to be the set of all vertices of the graph which can be reached from the origin by traversing at most n edges. So in contrast to my previous terminology, B_n is only a set of vertices, and not a polytope anymore; in particular, talking about "convexity" of B_n is meaningless. B_n is the set of points which can be reached in n time steps.

        We are interested in how the shape of B_n / n depends on n. In particular, whether it is possible that this "velocity set" tends to a Euclidean ball as n --> oo.

        *Claim:* The set

        [math]

        \lim_{n\rightarrow\infty}\frac{1}{n}B_n

        [/math]

        is a polytope. (More accurately: there is a polytope P such that the Hausdorff distance between P and B_n / n converges to 0 as n--> oo.)

        *Proof:* For simplicity, let us consider first the case where every unit cell contains only one vertex of G. Then any vertex can be mapped into any other by a translation preserving the graph. In this case, I will now prove that the velocity polytope is precisely the convex hull

        [math]

        P = \mathrm{conv}(B_1)

        [/math]

        To see this, note that, as in the previous "proof", we get B_2 by translating a copy of B_1 to all the vertices of B_1. We obtain B_3 by translating copies of B_1 to all vertices of B_2. And so on. Hence,

        [math]

        B_n = \{x_1+\ldots+x_n\:|\:x_1,\ldots,x_n\in B_1\}

        [/math]

        Similarly,

        [math]

        \frac{1}{n}B_n=\left\{\frac{1}{n}x_1+\ldots+\frac{1}{n}x_n\:|\:x_1,\ldots,x_n\in B_1\right\}

        [/math]

        This clearly lies in the convex hull of B_1; moreover, as n grows, we can approximate any point in the convex hull of B_1 by a point of this form. (Such a point is a convex combination of elements of B_1. Approximate the coefficients of this convex combination by rational numbers with denominator n.)
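
        (A quick numerical check of this single-cell statement, with toy code that is not part of the argument: on the square lattice, B_1 = {0, ±e1, ±e2}, its convex hull is the diamond |x| + |y| <= 1, and the rescaled balls B_n / n indeed stay inside that diamond while filling it up as n grows.)

        # Toy check on the square lattice: B_1 = {0, +-e1, +-e2}, conv(B_1) = diamond |x|+|y| <= 1.
        B1 = {(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)}

        def ball(n):
            B = {(0, 0)}
            for _ in range(n):
                B = {(x + dx, y + dy) for (x, y) in B for (dx, dy) in B1}
            return B

        for n in (3, 6, 12):
            scaled = {(x / n, y / n) for (x, y) in ball(n)}
            inside = all(abs(x) + abs(y) <= 1 + 1e-12 for (x, y) in scaled)
            print(n, inside, len(scaled))   # always inside the diamond, with more and more points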

        This proves the claim in the case that each unit cell contains exactly one vertex of the graph. The argument for the general case now follows. Define an "admissible velocity" to be a vector

        [math]

        \vec{v}=\frac{\vec{s}}{t}

        [/math]

        where \vec{s} and t are given as follows: there needs to exist a path in the graph which begins at the origin x, ends at a vertex which is a translate Tx of x, and does not traverse any other translate of x, such that t is the number of edges in the path, and \vec{s} is the direction vector from x to Tx.

        Then it is clear that there is only a finite number of admissible velocities. The convex hull of these velocities is a polytope Q. The claim now is that

        [math]

        Q=\lim_{n\rightarrow\infty}\frac{1}{n}B_n

        [/math]

        To see this, we prove the two inclusions separately. So why is the left-hand side contained in the right-hand side? The reason is that admissible velocities can be concatenated with each other any number of times, and this gives convex combinations of velocity vectors as above. Why is the right-hand side contained in the left-hand side? For any very long path which begins at the origin x, adding or removing a few edges does not change much, so we can assume that it ends at some translate Tx of the origin. But then we are back to a convex combination of velocity vectors.
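
        (And a toy check of the general case, again code of mine rather than part of the proof: a one-dimensional periodic graph with two vertices per unit cell, A_c at position c and B_c at position c + 1/2, with edges A_c -- B_c and B_c -- A_{c+1}. The only admissible velocities are ±1/2, one unit cell per two edges, and the rescaled balls B_n / n indeed reach exactly out to ±1/2.)

        def neighbours(v):
            c, kind = v
            # A_c is adjacent to B_c and B_{c-1}; B_c is adjacent to A_c and A_{c+1}.
            return [(c, 'B'), (c - 1, 'B')] if kind == 'A' else [(c, 'A'), (c + 1, 'A')]

        def position(v):
            c, kind = v
            return c if kind == 'A' else c + 0.5

        def ball(n):
            frontier, seen = {(0, 'A')}, {(0, 'A')}
            for _ in range(n):
                frontier = {w for v in frontier for w in neighbours(v)}
                seen |= frontier
            return seen

        for n in (4, 16, 64):
            print(n, max(position(v) for v in ball(n)) / n)   # 0.5 = the maximal admissible velocity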