Dear Ben,

My warmest congratulations on your first - and most well deserved - rating in this contest. I have spent a few days in rural southern Sweden, hardly within reach of the Internet. But today I have re-read the above conversation, and learnt more from it. Now, when everything is settled about the final essays, I look forward to the possibility of a more peaceful conversation, in which I have a slight chance to catch up.

Best regards!

Inger

Hello again Ben and Friends,

I'm taking up your invitation to post at any time, with due respect to John Baez, who is not my cousin in real life.

A story:

They say my uncle is crazy, and cousin John tells me some family members wanted to lock old Uncle Octonius up in the attic, but I think he is only eccentric because he's seen the universe, and knows its secrets. For years we thought he just wouldn't associate with the other family members at all, but somehow we worked out how to do it safely. You see, Octonius is very persuasive, and can make people do almost anything - so he can't be trusted, or rather no one person can ever see him alone. And when we send two, they always disagree on what was said. Therefore, we always visit Uncle Octonius in committees of three. But the first time a group of us visited, he insisted that he must see all the family members - with equal frequency - and that there always be someone in common between any two visits. Luckily, this worked out, because there are seven of us.

The thing is, Octonius is incredibly wealthy and knows the secrets of the universe, but we were all so afraid of him that we never knew why he seemed so crazy. You see, he always liked to break the laws of algebra - or insist on things being backwards sometimes - whenever we tried to use the associative and commutative rules to simplify expressions for him. But we never understood why that was, until we attempted to rank ourselves - thinking that both the greatest and slightest within our family needed to be included, within each committee, to assure trust. Then Octonius explained that committees follow a rule that is non-commutative, and then if you include everyone at once things become non-associative, because there can be disagreements between members or committees - but there is also a hierarchy or ordering of and within any committee.

Though we are still not sure we can trust him, Uncle Octonius tells us this is as fair as it can be, and now he is teaching us the secrets of the universe. So who could complain? I'm glad cousin John didn't let the others lock him up in the attic, or we would never have learned of his vast wealth and untold secrets.

end of story
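
For anyone who wants to check Uncle Octonius' committee arithmetic: seven members, committees of three, any two committees sharing exactly one member, and every member serving equally often matches the structure of the Fano plane, which also indexes the multiplication triples of the octonions. A small sketch of my own to verify this (the particular labeling of the triples is just one common convention):

from itertools import combinations

# One common labeling of the seven Fano-plane triples (committees of three
# drawn from seven family members, labeled 1..7).
committees = [
    (1, 2, 3), (1, 4, 5), (1, 6, 7),
    (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6),
]

# Any two committees share exactly one member.
for a, b in combinations(committees, 2):
    assert len(set(a) & set(b)) == 1

# Every member serves on exactly three committees (equal frequency).
for member in range(1, 8):
    assert sum(member in c for c in committees) == 3

# Every pair of members sits together on exactly one committee.
for pair in combinations(range(1, 8), 2):
    assert sum(set(pair) <= set(c) for c in committees) == 1

print("Seven committees of three satisfy Uncle Octonius' rules.")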

Jonathan

    Dear Ben - first of all I want to thank you again for your supportive remarks on John's and my submission! Thankfully, after checking my records, I found your essay on the list of those I loved as well. I am happy to see that enough people also loved your essay. Honestly, the rating and voting was overwhelming to me, and I admire your thoroughness in this essay contest. Hopefully you win a prize, both for your actual essay submission and for your engagement in the contest. It will be well deserved.

    On a sidenote, after flying over the 300 or so essay titles, and zipping through some 30 essays without actually reading them, there were some 10-15 essays that I actually read and felt competent voting on, with an average score somewhere in the 8's, since - by selection - I chose only to read essays that already looked interesting on first sweep. It should be needless to say that I voted honestly, if overwhelmed by the mass of entries.

    Regarding your causal metric hypothesis, there is a scenario on which I would love to ask your opinion: whether or not it may be compatible with your hypothesis. Let me attempt:

    (1) What you describe as finite set members in your universe are representations of actual particles. Rather than placing particles into a somehow finite spacetime model, spacetime only exists (if emergent) at places where there is a particle. Each particle has well-defined attributes, some of which are "location-like" parameters used to model when and how strongly these particles interact; others are properties that characterize the strengths of interaction under the various fundamental forces (charge, mass). Just as there is no such thing as a continuous charge or mass between any two interacting particles, there is no such thing as continuous space or time between any two particles or interactions, either.

    (2) Particle location parameters are coordinates in a sense similar to spacetime coordinates in General Relativity: they are unobservable in principle but model something ultimately observable. There is not necessarily a distinct time-like coordinate or three distinct space-like coordinates, though. Instead, the 4 (or more, but not many more) coordinates will eventually appear space-like or time-like under observation, as defined in (3) and (4) below.

    (3) Human bias evaluates physical forces by observing the motion of electromagnetically bound objects (atoms, molecules). It makes us humans believe that there is a Lorentzian base manifold that the other forces act upon. Accordingly, conventional physical law has electromagnetism as genuinely describable on Lorentzian base manifolds. What we usually call "observation" is in fact a projection of the entire space of particles and parameters into a smaller subspace; the causal set is projected into a narrower parameter space of itself, where that subspace has: (a) one time-like coordinate, (b) one space-like coordinate, and (c) a genuinely Minkowskian metric. However, this is not a genuine property of your universe, but merely an extraneous projection that mimics human experience.

    (3a) Granted, it is a very useful projection since it would be hard to observe any kind of force in a lab that explicitly does not use atoms or molecules ...

    (4) If you model interaction between any two particles, you do so by understanding all the properties of the two corresponding causal set members. Causality, and partial order, is defined by the effective change of properties of each set member: interaction is unique and well defined; therefore, the changes of parameters of your set members through the interaction are well-defined, "causal" relations. The location parameters determine how strong the interaction is, based on the set geometry (warning: weasel word "geometry"!). Pairwise calculation between any two particles (any two set members) gives you an effective physical interaction, a force so to speak, which in turn is modeled by modifying the particles' location parameters.

    (4a) In the language of causal sets, I believe that this would mean that physical interaction may change the (partial) order of your set members. I am not sure, though.

    (5) Projecting all such particle location parameters onto an overwhelmingly electromagnetic (Minkowskian) observer space results in the fundamental laws of nature we know today; a toy sketch of what I have in mind follows below.
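
    To make (2)-(5) a little more concrete, here is a toy illustration (only a caricature, with illustrative names and numbers, and certainly not a claim about your actual framework): particles carry abstract parameters, and the "observed" causal order is obtained by projecting two of those parameters onto a Minkowskian (t, x) plane.

    import random

    random.seed(0)

    # Toy "particles": each carries four abstract location-like parameters
    # plus interaction properties (charge, mass).  Names are illustrative only.
    particles = [
        {"params": [random.uniform(0, 10) for _ in range(4)],
         "charge": random.choice([-1, 0, 1]),
         "mass": random.uniform(0.1, 2.0)}
        for _ in range(6)
    ]

    def project(p):
        """Project onto an 'observer' plane: treat parameter 0 as time-like
        and parameter 1 as space-like, as in step (3)."""
        return p["params"][0], p["params"][1]

    def causally_precedes(p, q):
        """Minkowskian criterion in the projected plane (c = 1):
        q lies in the future light cone of p."""
        tp, xp = project(p)
        tq, xq = project(q)
        dt, dx = tq - tp, xq - xp
        return dt > 0 and dt * dt - dx * dx > 0

    # The induced partial order on the projected set, as in steps (4)-(5).
    relations = [(i, j) for i, p in enumerate(particles)
                 for j, q in enumerate(particles)
                 if i != j and causally_precedes(p, q)]

    print("causal relations in the projected (t, x) plane:", relations)

    The full parameter set plays no role in the projected order, which is the sense in which the Minkowskian appearance would be a feature of the projection rather than of the underlying set.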

    Do you think such a procedure is compatible with your causal metric hypothesis?

    Best wishes, Jens

      Jens,

      While I am interested in Ben's answers to your questions, I would like to assert that the belief that an "electromagnetic observer space" is "Minkowskian" is one of the fundamental assumptions needing review, the subject matter of this essay contest. As you know, my essay promotes a contrary opinion.

      Rick

      Dear All,

      I am suffering a bit of a backlog of deep and important points raised by a number of different people here over the last two weeks. Each of these points requires a careful and somewhat involved response, to whatever extent I am intellectually capable of providing one!

      This is midterm week at my university, and I have a throng of needy calculus students tugging at my coattails at present. It may well be the weekend before I am able to catch up on some of these communications. Some of the main priorities are the following:

      1. The implications of experimental results constraining certain types of nonmanifold structure/covariance breaking.

      2. The proper application of path summation in general contexts.

      3. Discussing what "particles" might look like in view of the causal metric hypothesis.

      4. Discussing some special algebraic structures of particular importance...

      In the meantime, please feel free to continue posting such remarks here; you're contributing to my education! Since this thread is at the top of the list, it's a reasonably convenient place for discussion. Many of you know more about some of these issues than I do, so feel free to post remarks in response to others' comments. While I will do my best to answer things myself, I am a bit greedy in the sense that my greater interest at present is absorbing, black hole-like, what everyone else is saying. I will endeavor to give off a little Hawking radiation, however!

      Finally, I am trying to compile a coherent email list; my email is bdribus@math.lsu.edu, and I'd appreciate hearing from any of you. I have already contacted a number of you who included email addresses on your essays. Take care,

      Ben

      Dear Jonathan,

      The following three posts are in response to a point you raised in your post of October 3, 05:33 GMT on my thread, on the subject of "Lorentz invariance violation." From reading your essay and from our other correspondence, I know that this is ground you have been well over, but I include some details here for general interest.

      My essay advocates an "order-theoretic interpretation of covariance," which is an example of what is usually called "Lorentz invariance violation" (LIV) or "covariance breaking." I prefer to regard this as a "reinterpretation" of the covariance principle, to extend it to a domain where continuous group symmetries are of doubtful applicability. But that is merely a choice of terminology.

      As you point out, there exist experimental means to test certain types of LIV, and some of these methods have placed tight constraints on these types of LIV in the last few years. In particular, you mention experimental results from the Fermi Gamma Ray Telescope and the INTEGRAL gamma ray observatory. For anyone who is interested, I link to a few references about this:

      [link:arxiv.org/abs/1106.1068]INTEGRAL[/link]

      [link:arxiv.org/abs/1002.0349]Fermi[/link]

      [link:http://arxiv.org/abs/0912.0500]Stecker[/link]

      [link:phys.org/news/2011-06-physics-einstein.html]Popular INTEGRAL article[/link]

      The first two articles are arXiv versions of recent papers describing the methods and results of INTEGRAL and Fermi in constraining LIV. The third is a somewhat general review of such methods, unfortunately dating from before the most recent results. The fourth is a popular article on the same subject.

      (continued below)

      (continued from previous post)

      The "popular simplification" of these results has been that "if Lorentz invariance violation exists, it must occur on scales much smaller than the Planck scale." Of course, such conclusions are model-dependent; see for instance the discussion in section III of the first article to see how much conventional physics is being assumed.

      As you point out, these results may be very problematic for the theory of causal dynamical triangulations (CDT), which is a much more constrained and structured approach to "quantum causal theory" than Sorkin's causal sets or my causal metric hypothesis. CDT has nontrivial fundamental elements, namely Lorentzian 4-simplices, and this allows for the use of a lot of familiar machinery in the theory. I do not know exactly to what extent Fermi/INTEGRAL doom CDT, but my impression is that at least some of these methods apply more or less directly and have negative implications.

      I don't think this is true for causal sets, but I would like to ask Rafael Sorkin about this. There is a fair bit of literature on photon dispersion in causal sets, but I doubt if the machinery cited in the Fermi/INTEGRAL papers has a definitive causal set analogue at this point.

      (continued below)

      Dear Jonathan,

      This is the last part continuing from above about Lorentz invariance violation...

      Let me say a few words about why the Fermi/INTEGRAL experimental results don't worry me from the perspective of my own work, though they certainly should be kept in mind as constraints on the details of nonmanifold models of fundamental spacetime structure.

      1. Just to be clear, although I think locally finite causal graphs are the most physically interesting "causal-metric" models at present, I don't think that "volume" arises from a constant discrete metric (a la Sorkin).

      2. In particular, "nonmanifold" does not imply "discrete;" the two concepts are merely different extremes. Furthermore, "discrete" can mean several different things. There are several different topologies that are relevant for such models, and the discrete topology is perhaps the least interesting of these. There is also a measure-theoretic meaning of discreteness (e.g. Sorkin's "order plus number equals geometry.")

      3. I doubt the arguments for the fundamental significance of the Planck scale.

      What I am arguing is that experimental results like Fermi/INTEGRAL should serve as guides in pursuing nonmanifold models of spacetime structure, not discouragements. The most obvious objection to what I've said here is that my models are too vague and general at present to be either confirmed or falsified by feasible experiments. This is true... I need to do much more work. However, my feeling is that these results rule out only a tiny sliver of the universe of interesting quantum causal models.

      Any additional thoughts you might have on this important point would be appreciated! In particular, I suspect you know more about exactly where Fermi/INTEGRAL leave CDT and other similar models. Take care,

      Ben

      The breakdown of Lorentz symmetry is something which was advanced by loop variable quantum gravity theorists. As Ben points out, the observations of distant bursters have put considerable doubt upon these theories. Spacetime appears to be absolutely smooth down to a scale of 10^{-45} cm or so. The graininess of spacetime that would result from violations of Lorentz symmetry should result in dispersion of light. Higher frequency light would interact more strongly with this graininess, so over billions of light years the subtle effect would be observed in different arrival times of light with different frequencies. This has failed to emerge in observations. The doubly special relativity proposed by Smolin and Magueijo is an example of how this might occur in special relativity. This is a sort of Planck scale obstruction to the boost operations of special relativity.
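
      To put rough numbers on the dispersion argument: at linear order in E/E_Planck, and ignoring the redshift integral one would use for a real cosmological source, the expected arrival-time difference over a distance D is roughly Δt ~ (E/E_Planck)(D/c). A quick sketch with illustrative numbers (not actual burst data):

      # Leading-order photon time delay from linear Planck-scale dispersion,
      # Delta_t ~ (E / E_Planck) * (D / c).  Numbers below are illustrative only.
      E_PLANCK_GEV = 1.22e19      # Planck energy in GeV
      C = 3.0e8                   # speed of light, m/s
      GPC_IN_M = 3.086e25         # one gigaparsec in meters

      def delay_seconds(photon_energy_gev, distance_gpc):
          """Arrival-time lag of a high-energy photon relative to a low-energy one."""
          return (photon_energy_gev / E_PLANCK_GEV) * (distance_gpc * GPC_IN_M / C)

      # A ~10 GeV photon from a source ~1 Gpc away:
      print(delay_seconds(10.0, 1.0), "seconds")   # of order a tenth of a second

      Sub-second timing of burst photons over gigaparsec baselines is what makes such Planck-suppressed effects observable at all.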

      We do not expect this on a number of grounds. If I were to accelerate a proton to the Planck energy, I would be surprised to find that I could not boost it further. In a thought experiment I could imagine boosting an identical apparatus to a gamma smaller than that of the proton. The proton might then come upon the apparatus, where in that frame I boost it, at high gamma, to the Planck energy. This obstruction would then say that if I boost back to the frame of the original apparatus, this proton would still be observed to have an energy equal to the Planck energy. This is an inconsistency.

      The idea is somewhat enticing, but if there is something going on with this I would expect the physical world to somehow cancel the effect. In what I write below that is just what is proposed.

      The paper by Smolin and Magueijo sets up postulates on the relativity of inertial frames, the equivalence principle, and the observer independence of the Planck scale of length and energy. A nonlinear Lorentz group is then proposed, with the standard Lorentz rotation generator J^i defined as

      [math]

      L_{ij} = p_i{\partial\over{\partial p^j}} - p_j{\partial\over{\partial p^i}}, J^i = \epsilon^{ijk}L_{jk},

      [/math]

      where the boost generator, modified with the dilatation operator

      [math]

      D = p^i{\partial\over{\partial p^i}}

      [/math]

      is

      [math]

      K^i = K^i_0 + L_p p^i D.

      [/math]

      These satisfy the standard commutation relationships for the Lorentz algebra

      [math]

      [J^i, J^j] = \epsilon^{ijk}J_k, [J^i, K^j] = \epsilon^{ijk}K_k, [K^i, K^j] = \epsilon^{ijk}J_k.

      [/math]

      The addition of the dilatation operator means there is the inclusion of a p^i in the boost, which means the action is nonlinear in momentum space. The entire nonlinear Lorentz boost in the x direction then gives

      [math]

      p_0^\prime = {{\gamma(p_0 - vp_x)}\over{1 + L_p(\gamma - 1)p_0 - L_p\gamma vp_x}}

      [/math]

      [math]

      p_x^\prime = {{\gamma(p_x - vp_0)}\over{1 + L_p(\gamma - 1)p_0 - L_p\gamma vp_x}}

      [/math]

      [math]

      p_{y,z}^\prime = {{p_{y,z}}\over{1 + L_p(\gamma - 1)p_0 - L_p\gamma vp_x}}.

      [/math]
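
      As a quick sanity check on these transformations (with the '+' signs restored in the denominators), here is a numerical sketch showing that the Planck energy p_0 = 1/L_p is a fixed point of the nonlinear boost, while ordinary Lorentz boosts are recovered when p_0 L_p << 1. This is only my own verification of the formulas as written, in units where c = 1:

      import math

      L_P = 1.0  # Planck length in natural units, so the invariant energy is 1/L_P

      def nonlinear_boost(p0, px, v):
          """Magueijo-Smolin style nonlinear boost in the x direction."""
          gamma = 1.0 / math.sqrt(1.0 - v * v)
          denom = 1.0 + L_P * (gamma - 1.0) * p0 - L_P * gamma * v * px
          return gamma * (p0 - v * px) / denom, gamma * (px - v * p0) / denom

      # A photon at the Planck energy (p0 = px = 1/L_P) is left invariant:
      print(nonlinear_boost(1.0 / L_P, 1.0 / L_P, 0.9))   # -> (1.0, 1.0)

      # At low energy the denominator -> 1 and the ordinary boost is recovered:
      p0, px, v = 1.0e-8, 0.5e-8, 0.9
      gamma = 1.0 / math.sqrt(1.0 - v * v)
      print(nonlinear_boost(p0, px, v))
      print(gamma * (p0 - v * px), gamma * (px - v * p0))  # nearly identical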

      The effect of the operator U(p_0) on the element of the Lorentz group g = exp(ω_{μν}L^{μν}) defines the nonlinear representation of the Lorentz group by

      [math]

      {\cal G}[\omega_{\mu\nu}] = U^{-1}(p_0)gU(p_0) = U^{-1}(p_0)(1 + \omega^{\mu\nu}L_{\mu\nu})U(p_0) + \dots =

      [/math]

      [math]

      1 + \omega^{\mu\nu}{{L_{\mu\nu}}\over{1 - L_p^2p_0^2}} + \dots = \exp{\Big(\omega^{\mu\nu} L_{\mu\nu}\frac{1}{1 - L_p^2p_0^2}\Big)} = e^{\omega^{\mu\nu}M_{\mu\nu}[p_0]}.

      [/math]

      This modifies the structure of general relativity. Let the vector e^a, where the index a indicates an internal space direction, define a tetrad basis by e^a_μ = ∂_μe^a. The tetrad exhibits the nonlinear realization of the transformation according to

        CONTINUED:

        [math]

        e^a_\mu \rightarrow e^{a\prime}_\mu = {\cal G}[\omega_{\alpha\beta}, p^b_0]e^a_\mu,

        [/math]

        where p^b_0 = e^b_0. For e^{aμ} the transformation involves {\cal G}^{-1}[p_0]. Similarly the differential operator

        [math]

        D_\mu \rightarrow {\cal G}[e_0](\partial_\mu + \omega^{a\nu}_\mu e^a_\nu),

        [/math]

        transforms locally under the nonlinear Lorentz group. This then gives

        [math]

        D_\mu e^a_\nu = {\cal G}[p_0](\partial_\mu e_\nu + \omega^a_{\sigma\nu}) + {\cal G}[e_0] \big({\cal G}[p_0] \partial_\mu{\cal G}^{-1}[p_0] e^a\big)_\nu,

        [/math]

        which for the local nonlinear transformation written according to indices gives the connection coefficients

        [math]

        {\omega^{a\nu}}_\mu[p_0] = {{\cal G}_\alpha}^\nu[p_0]{\omega^{a\alpha}}_\beta{{\cal G}^{-1\beta}}_\mu[p_0] + {{\cal G}_\alpha}^\sigma[p_0] \big(\partial_\mu{{{\cal G}^{-1\alpha}}_\nu[p_0]}\big) e^a_\sigma.

        [/math]

        There is then an additional connection term. For p_0 L_p << 1 these additional connection terms are correspondingly small. Define these additional connection terms

        [math]

        {\gamma^{a\mu}}_{\nu } = {\gamma^{a\mu}}_{\nu\sigma}e^\sigma = {{\cal G}^\mu}_\rho\partial_\nu{{\cal G}^{-1\rho}}_\sigma e^a

        [/math].

        The curvature tensor is then

        [math]

        {R^{a\alpha}}_{\mu\beta\nu}[p_0] = {{\cal G}^\alpha}_\sigma[p_0] {R^{a\sigma}}_{\mu\rho\nu}{{\cal G}^{-1}_\beta}^\rho + \partial_{[\beta}{\gamma^{a\alpha}}_{\mu\nu]} + \epsilon^{abc}{\gamma^{b\alpha}}_{[\beta}{\gamma^c}_{\mu\nu]}.

        [/math]

        The standard curvature is homogeneously transformed by the nonlinear term, where the additional curvature in the last two terms is labelled as

        [math]

        {\rho^{a\alpha}}_{\mu\beta\nu}.

        [/math]

        This additional curvature is then some gravity field effect induced by this extreme boost. A particle boosted to near the Planck scale might be expected to experience the cosmological constant, or the curvature-induced Einstein tensor G_{μν} = Λg_{μν}, more strongly. A Lorentz contracted curvature with Gaussian curvature R has curvature radius 1/sqrt{R}, and this would be contracted by the Lorentz factor γ, and so the curvature is "amplified" as R --> γ^2 R. I would then propose that the additional curvature term ρ^{aα}_{μβν} above captures this amplification. The cosmological constant is defined on the Hubble frame, which is due to the symmetry of the spacetime. The apparent "preferred frame" is not some violation of relativity. This highly boosted particle would then experience the cosmological curvature much more strongly.

        This does not appear to solve the 123 orders of magnitude problem. The boosted cosmological constant is ~10^{76} times larger. This boosted particle is interacting with the vacuum of the universe much more strongly, but it is still 47 orders of magnitude too small (123 - 76 = 47). In other words, returning to the unboosted frame would suggest the cosmological constant would be 10^{47} times larger than it is. However, if we were to boost a Planck mass we might expect that it interacts more strongly with the vacuum. This might then increase this "boost factor." At this point I have no particular idea of how to proceed.

        Cheers LC

        Dear Benjamin Dribus,

        The causal metric hypothesis is very much applicable to the Tetrahedral-brane scenario of the Coherently-cyclic cluster-matter paradigm of universe, in that time emerges with the eigen-rotational strings and a causality-effect continuum is expressional for an eternal universe; thus causal cycles are descriptive within this paradigm. Causality of three-dimensional structures is the effect of tetrahedral-brane expressions by eigen-rotational strings, in that spacetime emerges from eigen-rotations of one-dimensional string-matter segments. Thus in this paradigm, the metric properties of spacetime are described by the configuration space with string-length and time, in that the nature of spacetime is expressed differently.

        With best wishes

        Jayakar

        Thank You Ben and good Sir Lawrence!

        Good summary and references Ben, and explication of the territory Lawrence. Lorentz invariance violation (LIV) is likely a problem for some causally structured theories like CDT, but as LC pointed out, it first came out as a prediction by some LQG folks, as a possible means of validating the loops approach. Greatly summarized, the Fermi and INTEGRAL results detected the near simultaneous arrival of both very high and lower energy radiation pulses from the same distant gamma ray burster event. This does greatly constrain things, but as you pointed out Ben - it does not close the door on all causal approaches. And my conversations with a couple of LQG researchers would indicate that their approach is not ruled out either - only constrained.

        As I understand it; the very thing which makes causal dynamical triangulation (CDT) work is what makes it problematic, in terms of LIV. The timelike lines must line up at the boundaries of each simplex, as the simplicial fabric evolves, and so there is a local discrete arrow of time that has a particular direction in space. The CDT approach has a fixed clock as well, yielding a very definite grain, but Smolin and Markopoulou showed that a varying clock yields similar results that show evolving dimensionality. One of the CDT authors (Loll or Ambjorn) pointed out in correspondence that they don't think it is an exact model anyway, but rather a discrete simulation of the way the spacetime metric evolves.

        My guess is the CDT folks did not start with enough degrees of freedom, to end up with an invariant model, and that's where I think the Octonions come in. That would greatly increase the options at the outset. I have lots of ideas of how that would come together. I'll have to continue my comments on the morrow, though, as it is already very late here. I've downloaded the papers you provided links to, and also a few by Sorkin and colleagues on topics relating to this discussion. Though I think a lot of the work on the Causal Sets approach is entirely sound, I tend to believe that we are looking for something similar to - but not exactly like - it. I need to do more reading though, to see how far that program (CauSets) has come along, before I comment further.

        All the Best,

        Jonathan

          Hello Mr Dribus,

          In my humble opinion, entropy is not disorder. It is simply the infinite light and its physical distribution.

          A good Occam's razor permits us to sort out the false extrapolations. It permits us to see what these spheres of light, in evolution of mass, really are.

          A sphere for me is a planet, a star, an elementary particle, a water drop; the spheroids are so numerous. Fruits, glands, brains, cells, flowers, ... the universe also is a sphere with all its intrinsic spheres, quantic and cosmological. The gauge is a pure 3D, 3 vectors!!!!!

          The spheres are everywhere, in us, around us, above us... The spherization, my theory of spherization, shows us how these spheres of light build the spheres of mass!!! inside a pure 3D sphere and its central sphere, the most important BH. The uniqueness series is essential for a real understanding of quantizations and universal 3D proportions.

          The information is inside the main central spheres, quantum and cosmological. The building is a pure spherization of the universal sphere by quantum spheres and cosmological spheres. The codes are inside these singularities. The number is so important for the series of uniqueness. See that this number is the same for the two 3D gauges, quantum scale and cosmological scale. The 3D is essential. The road, for a real understanding of this pure light without motion, time and dimension above our physical walls, is rational and deterministic. We do not have pseudo convergences. The real interest is to analyze this universal physical sphere in evolution optimization, SPHERIZATION.

          The noncommutative geometries must be well extrapolated, like the superimposings, or this or that. If not, we have pseudo sciences. The 3D is essential. The strings can converge with a correct axiomatization of deterministic tools. The rest seems vain.

          Best Regards

          Loop variable quantum gravity, causal sets or nets, causal triangulation theories and related ideas are themselves, I think, constraint systems. For instance, loop variable theory is a spinor, or spinor field, form of the 3 space plus 1 time form of relativity, which has the Hamiltonian constraint NH = 0 and the momentum constraint N^iH_i = 0. The Wheeler-DeWitt equation HΨ[g] = 0 is a canonical quantization form of the Hamiltonian constraint, and loop variable theory is a spinor form of this type of theory. The vanishing of the Hamiltonian NH = 0, or the canonical quantization HΨ[g] = 0, is due to the fact that the manifold is global and there is no boundary from which one can compute with Gauss' law the mass-energy of the entire spacetime, or universe. It further has to be pointed out that loop quantum gravity (LQG) has yet to compute a one loop diagram properly. The reason is that if you have HΨ[g] = 0 it means you have ∂Ψ[g]/∂t = 0, and there is no dynamics! Computing a scattering amplitude, particles in --> process --> particles out, on a T-channel is not possible.
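
          To spell out the "no dynamics" step: if the wave functional obeyed a Schrödinger-type evolution equation, the Hamiltonian constraint would force it to be stationary,

          [math]

          i\hbar\frac{\partial\Psi[g]}{\partial t} = H\Psi[g] = 0 \quad\Rightarrow\quad \frac{\partial\Psi[g]}{\partial t} = 0,

          [/math]

          which is the usual "problem of time" reading of the Wheeler-DeWitt equation.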

          What do these constrain? Frankly I think they constrain string theory. Ed Witten has found that within string/M-theory there is a form of twistor theory. This is the so-called twistor "mini-revolution" that started a few years ago. Twistors are in some ways related to loop variables, but they have more spacetime content. I think this segues into thinking about these non-string approaches to quantum gravity as constraint systems. String theory uses a "time" τ that is a string time parameter along the string world sheet, with the parameter σ along the spatial extent of the string. This permits one to construct Hamiltonians of the form

          H = (T/2)[(∂X/∂τ)^2 + (∂X/∂σ)^2], T = string tension,

          for X the string variable. This contrasts of course with LQG, which has no explicit concept of time, because there is no way to define mass-energy in a global context. A Hamiltonian is the generator of time translations, which means the energy defined by the Hamiltonian is conserved (Noether's theorem). In string theory this corresponds to level matching of string modes, but in LQG E = 0 and there is no time translation. However, the spacetime is a target map from the string, which should correspond to HΨ[g] = 0 for a wave functional over the spacetime metric. LQG is then some type of constraint system.

          There are also some interesting possibilities for duality principles. Barbour and Alves have proposed a form of shape dynamics, which is a symmetrical theory. The spatial relationships between elements that define a shape in space are symmetrical. Causal sets involve asymmetrical relationships between nodes that are connected by lines or into graphs. These represent temporal ordering. The two approaches seem to represent something similar to Penrose's tensor space of symmetric and anti-symmetric tensors in a type of duality. The duality in some work by Sparling and others is supersymmetry. I then conjecture that the correspondence between shape dynamics and causal nets (sets) is a form of this duality, or is a categorical equivalence to SUSY. This may then be another form of constraint, in particular the SUSY structure which exists in the AdS_n ~ CFT_{n-1} correspondence.
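
          The symmetric/antisymmetric split itself is elementary, even if the conjectured duality is not. As a trivial finite illustration (my own, and only an analogy, not the categorical statement being conjectured): any real matrix encoding relations between elements decomposes uniquely into a symmetric part, of the shape-dynamics type, and an antisymmetric part, of the causal-ordering type.

          import numpy as np

          rng = np.random.default_rng(0)

          # A generic "relation matrix" among 5 elements: entry (i, j) encodes a
          # directed relation from element i to element j.
          M = rng.normal(size=(5, 5))

          S = (M + M.T) / 2.0   # symmetric part: undirected, "shape-like" relations
          A = (M - M.T) / 2.0   # antisymmetric part: directed, "order-like" relations

          assert np.allclose(M, S + A)   # the decomposition is exact and unique
          assert np.allclose(S, S.T)     # S is symmetric
          assert np.allclose(A, -A.T)    # A is antisymmetric
          print("unique split into symmetric + antisymmetric parts verified")

          Whether this toy split lifts to an actual duality, or a SUSY-like pairing of the two theories, is of course the open question.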

          Cheers LC

          Dear All,

          I want to thank Lawrence for the detailed info on deformed special relativity (DSR), Lorentz invariance violation, and the connection to the cosmological constant problem. His link to the original paper of Smolin and Magueijo does not appear to work; hopefully the following link fixes this:

          Smolin and Magueijo DSR

          I mentioned DSR briefly in my essay (it's one of the approaches involving noncommutative geometry). My understanding is that DSR has suffered a number of theoretical and experimental setbacks since it was introduced, but I think the issues it attempts to address are things which must be considered.

          I'll also remark that a number of other authors who submitted excellent essays to this year's FQXi competition were instrumental in the development of DSR and related approaches involving a minimal fundamental scale. These include Sabine Hossenfelder and of course Giovanni Amelino-Camelia.

            Dear All,

            Another point Lawrence re-raised is the possibility of duality between theories such as causal set theory, which involve anti-symmetric binary relations, and theories such as shape dynamics, which involve symmetric binary relations. This is one of the ideas arising from the discussion here (provided no one thought of it already!) that I hope will receive further attention and exploration.

            The evolution of this idea here is worth reading, but it is unfortunately scattered around the threads and there is too much to repost in one place. I think the discussion began with my comments on Daniel Alves' thread about symmetric, anti-symmetric, and asymmetric binary relations in shape dynamics and causal theory. From there the discussion branches out in several places.

            Lawrence has offered some important clues on making this idea precise. See his remarks on the axiomatization of space, Penrose tensor space theory, supersymmetry, fermionic and bosonic fields, etc. on Daniel Alves' thread, Sean Gryb's thread, and my thread. In particular, see his post of September 28 on my thread.

              The scalars for example cannot be utilized without a kind of universal 3D axiomatization. Let's take the equations of Friedmann-Lemaître and the correlated metric. The 3D sphere and its intrinsic quantum spheres and cosmological spheres are all in 3D. We cannot insert extra dimensions. The 3D is essential. The universal sphere is in evolution, spherization, in a pure 3D. The scalars, in my humble opinion, are not vectors. The spherical coordinates inside a closed system are essential. That's why the number of the universal uniqueness matters (see the fractal of spheres of light from the main spherical volume). It permits us to quantize the mass polarizing the light from a pure 3D general point of view. The quantum scale is in meters, the cosmological scale also. That's why a closed evolutive sphere is essential for the universal sphere. We arrive at an optimization of the model, isotropic and homogeneous, like for Einstein. The spherization can be seen with the help of the equations of Friedmann-Lemaître. The space is curved by this mass, and the more this mass increases due to evolution, the more the spherization acts; the spherization is general due to the increasing of mass from the polarization mass/light. If we do not have the number of uniqueness, it becomes more difficult for the quantization and the understanding of the spherization. The fact that this light is infinite in this aether without motion, dimension, and time shows us that this physicality is light in motion, so spheres in rotation. G, c and h can be seen in a pure 3D spherization. The series of uniqueness of quantum spheres implies an interesting road for the quantization of evolution correlated with information. The curves of spacetime are in fact coded by the singularities. The expansion is just a step; a maximum volume is an evidence. A contraction appears when the density is right for this contraction; the spherization in 3D is an evidence. The energy is correlated. The bosons and fermions can be seen as turning in opposite senses. The variables and parameters are relevant ... sorry, I must go, until soon

              Regards

              I will write more on the Penrose tensor space, or what I think is also called a modular space. Shape dynamics is really a form of Regge calculus. One could think about this according to light rays. In this way there is no matter of time involved with the "motion" of a shape, for null rays have no proper time. I illustrate this with two diagrams I attach to this post. The first is a flat spacetime description. This is also pictured in 2-space plus 1-time spacetime in 3 dimensions. Two points on a spatial surface emit light pulses. These converge on three points on a subsequent spatial surface. These then define a triangle on that spatial surface. The two points then emit subsequent light pulses and map the triangle onto a third spatial surface. In Minkowski spacetime this continues indefinitely.

              In the curved spacetime situation null rays are curved. Since the metric

              ds^2 = g_{00}c^2dt^2 - g_{ij}dx^idx^j

              is such that for ds = 0 we can have

              U^iU^j = (g_{00}/g_{ij})c^2dt^2,

              and the optical path change due to curvature has a c^2 term. Hence we can assume the triangles on the spatial surface are flat. The deformation of null rays will then map the first triangle on the second diagram I attach into the second. The picture here is then completely described by null rays which have no proper time.

              The time evaluated from the Jacobi variational principle

              δt = sqrt{m_iδx_iδx_i/(E-V)}

              is related to a proper time, or an interval. In the case of a null interval the spatial δx_i may be evaluated according to g_{00} and g_{ij} as above. The time computed by the Jacobi variation is then an "emergent" or computed quantity. This is a parameter which emerges from the "motion" of the triangle, or the map from the first to the second. This is related to Desargues' theorem, where null geodesics are projective rays and these are used to "lift" a shape from one spatial surface to another.
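
              A minimal numerical sketch of the "emergent time" idea for a single free particle, using the Jacobi expression above with the conventional factor of 2 in the denominator (an assumption on my part about the normalization): summing δt along the path reproduces the Newtonian elapsed time without ever introducing t as an input.

              import math

              # One free particle moving uniformly in one dimension; V = 0 throughout.
              m, v = 2.0, 3.0
              E = 0.5 * m * v * v            # conserved energy fixes the Jacobi measure

              # A purely spatial description of the path: successive positions only,
              # sampled at what would be Newtonian times 0..10 s.
              positions = [v * k * 0.01 for k in range(1001)]

              def jacobi_dt(dx, E, V=0.0):
                  """Emergent time increment delta_t = sqrt(m dx^2 / (2 (E - V)))."""
                  return math.sqrt(m * dx * dx / (2.0 * (E - V)))

              emergent_time = sum(jacobi_dt(b - a, E)
                                  for a, b in zip(positions, positions[1:]))
              print(emergent_time)   # ~10.0, the Newtonian elapsed time, from geometry alone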

              The causal net or set approach is stranger to me. However, the ordering principle behind it seems to demand an antisymmetry, since the ordering A > B is not equivalent to B > A. I ponder whether the two approaches have a relationship to each other. They should have some manner of equivalency if they both predict the same spacetime physics. If this is so, there seems to be a categorical relationship between this and supersymmetry. Penrose's modular space of tensors is one way one can look at supersymmetry. The problem is that as yet I do not see a tensor structure to causal net theory.

              Cheers LC