(continued from previous post)

The "popular simplification" of these results has been that "if Lorentz invariance violation exists, it must occur on scales much smaller than the Planck scale." Of course, such conclusions are model-dependent; see for instance the discussion in section III of the first article to see how much conventional physics is being assumed.

As you point out, these results may be very problematic for the theory of causal dynamical triangulations (CDT), which is a much more constrained and structured approach to "quantum causal theory" than Sorkin's causal sets or my causal metric hypothesis. CDT has nontrivial fundamental elements, namely Lorentzian 4-simplices, and this allows for the use of a lot of familiar machinery in the theory. I do not know exactly to what extent Fermi/INTEGRAL doom CDT, but my impression is that at least some of these methods apply more or less directly and have negative implications.

I don't think this is true for causal sets, but I would like to ask Rafael Sorkin about this. There is a fair bit of literature on photon dispersion in causal sets, but I doubt if the machinery cited in the Fermi/INTEGRAL papers has a definitive causal set analogue at this point.

(continued below)

Dear Jonathan,

This is the last part continuing from above about Lorentz invariance violation...

Let me say a few words about why the Fermi/INTEGRAL experimental results don't worry me from the perspective of my own work, though they certainly should be kept in mind as constraints on the details of nonmanifold models of fundamental spacetime structure.

1. Just to be clear, although I think locally finite causal graphs are the most physically interesting "causal-metric" models at present, I don't think that "volume" arises from a constant discrete metric (a la Sorkin).

2. In particular, "nonmanifold" does not imply "discrete;" the two concepts are merely different extremes. Furthermore, "discrete" can mean several different things. There are several different topologies that are relevant for such models, and the discrete topology is perhaps the least interesting of these. There is also a measure-theoretic meaning of discreteness (e.g. Sorkin's "order plus number equals geometry.")

3. I doubt the arguments for the fundamental significance of the Planck scale.

What I am arguing is that experimental results like Fermi/INTEGRAL should serve as guides in pursuing nonmanifold models of spacetime structure, not discouragements. The most obvious objection to what I've said here is that my models are too vague and general at present to be either confirmed or falsified by feasible experiments. This is true... I need to do much more work. However, my feeling is that these results rule out only a tiny sliver of the universe of interesting quantum causal models.

Any additional thoughts you might have on this important point would be appreciated! In particular, I suspect you know more about exactly where Fermi/INTEGRAL leave CDT and other similar models. Take care,

Ben

The breakdown of the Lorentz symmetry is something which was advanced by loop variable quantum gravity theorists. As Ben points out, the observations of distant bursters have put considerable doubt upon these theories. Spacetime appears to be absolutely smooth down to scales of 10^{-45}cm or so. The graininess of spacetime that would result from violations of Lorentz symmetry should result in dispersion of light. Higher-frequency light would interact more strongly with this graininess, so over billions of light years the subtle effect would be observed as different arrival times for light of different frequencies. This has failed to emerge in observations. The doubly special relativity proposed by Smolin and Magueijo is an example of how this might occur in special relativity. This is a sort of Planck-scale obstruction to the boost operations of special relativity.
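For a rough sense of the scale involved, here is a small back-of-the-envelope sketch, assuming the linear-in-energy dispersion ansatz commonly quoted in this literature, delay ~ xi (E/E_Planck)(D/c); the burst distance and photon energy below are illustrative placeholders, not actual Fermi/INTEGRAL data.

[code]

# Rough estimate of the LIV dispersion delay described above, assuming the
# commonly quoted linear ansatz: delay ~ xi * (E / E_Planck) * (D / c).
# Distance and energy are illustrative, not actual Fermi/INTEGRAL numbers.

E_PLANCK_GEV = 1.22e19     # Planck energy in GeV
C_CM_PER_S = 3.0e10        # speed of light in cm/s
LIGHT_YEAR_CM = 9.46e17    # one light year in cm

def dispersion_delay(e_gev, distance_ly, xi=1.0):
    """Arrival-time delay (seconds) of a photon of energy e_gev, relative
    to a low-energy photon, for linear-in-energy LIV dispersion."""
    return xi * (e_gev / E_PLANCK_GEV) * (distance_ly * LIGHT_YEAR_CM) / C_CM_PER_S

# A 10 GeV photon from a burst ~7 billion light years away:
print(dispersion_delay(10.0, 7.0e9))   # ~0.2 s

[/code]

The point is that the expected lag is a measurable fraction of a second over cosmological distances, which is why the non-observation of any such lag is so constraining.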

We do not expect this on a number of grounds. If I were to accelerate a proton to the Planck energy, I would be surprised to find that I could not boost it further. In a thought experiment I could imagine boosting an identical apparatus to a gamma smaller than that of the proton. The proton might then come upon the apparatus, where in that frame I boost it again to a high gamma, up to the Planck energy. This obstruction would then say that if I boost back to the frame of the original apparatus, this proton would still be observed to have an energy equal to the Planck energy. This is an inconsistency.
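To spell out the clash (my gloss on the thought experiment above): in ordinary special relativity rapidities simply add under composition of boosts, so the energy of a massive particle is unbounded,

[math]

\eta = \eta_1 + \eta_2, \qquad E = mc^2\cosh\eta \rightarrow \infty,

[/math]

and composing the proton's boost with a boost of the apparatus can always push the energy past any finite cap. An absolute obstruction at the Planck energy therefore cannot be imposed frame by frame without modifying the boosts themselves, which is what the proposal below does.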

The idea is somewhat enticing, but if there is something going on with this I would expect the physical world to somehow cancel the effect. In what I write below that is just what is proposed.

The paper by Smolin and Magueijo sets up postulates on the relativity of inertial frames, the equivalence principle, and the observer independence of the Planck scale of length and energy. A nonlinear Lorentz group is then proposed, with the standard Lorentz rotation generator J^i defined as

[math]

L_{ij} = p_i{\partial\over{\partial p^j}} - p_j{\partial\over{\partial p^i}}, J^i = \epsilon^{ijk}L_{jk},

[/math]

where the boost generator, modified with the dilatation operator

[math]

D = p^i{\partial\over{\partial p^i}}

[/math]

is

[math]

K^i = K^i_0 + L_p p^i D.

[/math]
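If I recall the Magueijo-Smolin construction correctly, this modified boost is obtained by conjugating the standard boost generator with the operator U(p_0) that appears below, namely

[math]

U(p_0) = e^{L_p p_0 D}, \qquad K^i = U^{-1}(p_0)K^i_0 U(p_0) = K^i_0 + L_p p^i D.

[/math]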

These satisfy the standard commutation relationships for the Lorentz algebra

[math]

[J^i, J^j] = \epsilon^{ijk}J_k, [J^i, K^j] = \epsilon^{ijk}K_k, [K^i, K^j] = -\epsilon^{ijk}J_k.

[/math]

The addition of the dilatation operator means there is the inclusion of a p^i in the boost, which makes the action nonlinear in momentum space. The entire nonlinear Lorentz boost in the x direction then gives

[math]

p_0^\prime = {{\gamma(p_0 - vp_x)}\over{1 + L_p(\gamma - 1)p_0 - L_p\gamma vp_x}}

[/math]

[math]

p_x^\prime = {{\gamma(p_x - vp_0)}\over{1 + L_p(\gamma - 1)p_0 - L_p\gamma vp_x}}

[/math]

[math]

p_{y,z}^\prime = {{p_{y,z}}\over{1 + L_p(\gamma - 1)p_0 - L_p\gamma vp_x}}.

[/math]
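A quick consistency check (my addition, not in the original derivation): for a photon at the Planck energy, p_0 = p_x = 1/L_p, the denominator reduces to 1 + (γ - 1) - γv = γ(1 - v), so

[math]

p_0^\prime = {{\gamma(1 - v)/L_p}\over{\gamma(1 - v)}} = {1\over{L_p}},

[/math]

and likewise p_x^\prime = 1/L_p. The Planck energy is thus the same in every frame, which is exactly the observer-independent scale the postulates require.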

The effect of the operator U(p_0) on the element of the Lorentz group g = exp(ω_{μν}L^{μν}) defines the nonlinear representation of the Lorentz group by

[math]

{\cal G}[\omega_{\mu\nu}] = U^{-1}(p_0)gU(p_0) = U^{-1}(p_0)(1 + \omega^{\mu\nu}L_{\mu\nu} + \dots)U(p_0) =

[/math]

[math]

1 + \omega^{\mu\nu}{{L_{\mu\nu}}\over{1 - L_p^2p_0^2}} + \dots = \exp{\Big(\omega^{\mu\nu} L_{\mu\nu}\frac{1}{1 - L_p^2p_0^2}\Big)} = e^{\omega^{\mu\nu}M_{\mu\nu}[p_0]}.

[/math]

This modifies the structure of general relativity. Let the vector e^a, where the index a indicates an internal space direction, define a tetrad basis by e^a_μ = ∂_μe^a. The tetrad exhibits the nonlinear realization of the transformation according to

    CONTINUED:

    [math]

    e^a_\mu \rightarrow e^{a\prime}_\mu = {\cal G}[\omega_{\alpha\beta}, p^b_0]e^a_\mu,

    [/math]

    where p^b_0 = e^b_0. For e^{aμ} the transformation involves {\cal G}^{-1}[p_0]. Similarly the differential operator

    [math]

    D_\mu \rightarrow {\cal G}[e_0](\partial_\mu + {\omega^{a\nu}}_\mu e^a_\nu),

    [/math]

    transforms locally under the nonlinear Lorentz group. This then gives

    [math]

    D_\mu e^a_\nu = {\cal G}[p_0](\partial_\mu e_\nu + \omega^a_{\sigma\nu}){\cal G}[e_0] + \big({\cal G}[p_0] \partial_\mu{\cal G}^{-1}[p_0] e^a\big)_\nu,

    [/math]

    which for the local nonlinear transformation written according to indices gives the connection coefficients

    [math]

    {\omega^{a\nu}}_\mu[p_0] = {{\cal G}_\alpha}^\nu[p_0]{\omega^{a\alpha}}_\beta{{\cal G}^{-1\beta}}_\mu[p_0] + {{\cal G}_\alpha}^\sigma[p_0] \big(\partial_\mu{{{\cal G}^{-1\alpha}}_\nu[p_0]}\big) e^a_\sigma.

    [/math]

    There is then an additional connection term. For p_0L_p << 1 these additional connection terms are correspondingly small. Define these additional connection terms as

    [math]

    {\gamma^{a\mu}}_{\nu } = {\gamma^{a\mu}}_{\nu\sigma}e^\sigma = {{\cal G}^\mu}_\rho\partial_\nu{{\cal G}^{-1\rho}}_\sigma e^a

    [/math]

    The curvature tensor is then

    [math]

    {R^{a\alpha}}_{\mu\beta\nu}[p_0] = {{\cal G}^\alpha}_\sigma[p_0] {R^{a\sigma}}_{\mu\rho\nu}{{\cal G}^{-1}_\beta}^\rho + \partial_{[\beta}{\gamma^{a\alpha}}_{\mu\nu]} + \epsilon^{abc}{\gamma^{b\alpha}}_{[\beta}{\gamma^c}_{\mu\nu]}.

    [/math]

    The standard curvature is homogeneously transformed by the nonlinear term, where the additional curvature in the last two terms is labelled as

    [math]

    {\rho^{a\alpha}}_{\mu\beta\nu}.

    [/math]

    This additional curvature is then some gravity field effect induced by this extreme boost. A particle boosted to near the Planck scale might be expected to experience the cosmological constant, or the curvature-induced Einstein tensor G_{μν} = Λg_{μν}, more strongly. A Lorentz-contracted region with Gaussian curvature R has curvature radius 1/sqrt{R}, and this would be contracted by the Lorentz factor γ, so the curvature is "amplified" by R --> R/γ^2. I would then propose that the above curvature term represents this amplification. The cosmological constant is defined on the Hubble frame, which is due to the symmetry of the spacetime. The apparent "preferred frame" is not some violation of relativity. This highly boosted particle would then experience the cosmological curvature much more strongly.

    This does not appear to solve the 123 orders of magnitude problem. The boosted cosmological constant is ~ 10^{76} times larger. This boosted particle is interacting with the vacuum of the universe much more strongly, but the effect is still 47 orders of magnitude too small. In other words, returning to the unboosted frame would suggest a cosmological constant 10^{47} times larger than it is. However, if we were to boost a Planck mass we might expect that it interacts more strongly with the vacuum. This might then increase this "boost factor." At this point I have no particular idea of how to proceed.

    Cheers LC

    Dear Benjamin Dribus,

    The causal metric hypothesis is quite applicable to the Tetrahedral-brane scenario of the Coherently-cyclic cluster-matter paradigm of the universe, in that time emerges with the eigen-rotational strings and a causality-effect continuum is expressible for an eternal universe; causal cycles are thus descriptive of this paradigm. Causality of three-dimensional structures is the effect of tetrahedral-brane expressions by eigen-rotational strings, in that spacetime emerges from eigen-rotations of one-dimensional string-matter segments. Thus in this paradigm the metric properties of spacetime are describable by the configuration space with string-length and time, in that the nature of spacetime is expressed differently.

    With best wishes

    Jayakar

    Thank You Ben and good Sir Lawrence!

    Good summary and references Ben, and explication of the territory Lawrence. Lorentz invariance violation (LIV) is likely a problem for some causally structured theories like CDT, but as LC pointed out, it first came out as a prediction by some LQG folks, as a possible means of validating the loops approach. Greatly summarized, the Fermi and INTEGRAL results detected the near simultaneous arrival of both very high and lower energy radiation pulses from the same distant gamma ray burster event. This does greatly constrain things, but as you pointed out Ben - it does not close the door on all causal approaches. And my conversations with a couple of LQG researchers would indicate that their approach is not ruled out either - only constrained.

    As I understand it, the very thing which makes causal dynamical triangulation (CDT) work is what makes it problematic, in terms of LIV. The timelike lines must line up at the boundaries of each simplex, as the simplicial fabric evolves, and so there is a local discrete arrow of time that has a particular direction in space. The CDT approach has a fixed clock as well, yielding a very definite grain, but Smolin and Markopoulou showed that a varying clock yields similar results that show evolving dimensionality. One of the CDT authors (Loll or Ambjorn) pointed out in correspondence that they don't think it is an exact model anyway, but rather a discrete simulation of the way the spacetime metric evolves.

    My guess is the CDT folks did not start with enough degrees of freedom to end up with an invariant model, and that's where I think the Octonions come in. That would greatly increase the options at the outset. I have lots of ideas of how that would come together. I'll have to continue my comments on the morrow, though, as it is already very late here. I've downloaded the papers you provided links to, and also a few by Sorkin and colleagues on topics relating to this discussion. Though I think a lot of the work on the Causal Sets approach is entirely sound, I tend to believe that we are looking for something similar to - but not exactly like - it. I need to do more reading though, to see how far that program (CauSets) has come along, before I comment further.

    All the Best,

    Jonathan

      Hello Mr Dribus,

      In my humble opinion, entropy is not disorder. It is simply the infinite light and its physical distribution.

      A good Occam's razor permits one to sort out the false extrapolations. It permits one to see what these spheres of light in evolution of mass really are.

      A sphere for me is a planet, a star, an elementary particle, a water drop; the spheroids are so numerous. Fruits, glands, brains, cells, flowers, ... the universe also is a sphere with all its intrinsic spheres, quantum and cosmological. The gauge is a pure 3D, 3 vectors!!!!!

      The spheres are everywhere, in us, around us, above us... The spherization, my theory of spherization, shows us how these spheres of light build the spheres of mass!!! inside a pure 3D sphere and its central sphere, the most important BH. The uniqueness series is essential for a real understanding of quantizations and universal 3D proportions.

      The information is inside the main central spheres, quantum and cosmological. The building is a pure spherization of the universal sphere by quantum spheres and cosmological spheres. The codes are inside these singularities. The number is so important for the series of uniqueness. See that this number is the same for the two 3D gauges, the quantum scale and the cosmological scale. The 3D is essential. The road, for a real understanding of this pure light without motion, time and dimension above our physical walls, is rational and deterministic. We have no pseudo-convergences. The real interest is to analyze this universal physical sphere in evolution optimization, SPHERIZATION.

      The noncommutative geometries must be well extrapolated, like the superimposings, or this or that. If not, we have pseudo-sciences. The 3D is essential. The strings can converge with a correct axiomatization of deterministic tools. The rest seems vain.

      Best Regards

      Loop variable quantum gravity, causal sets or nets, causal triangulation theories and related ideas are themselves, I think, constraint systems. For instance, loop variable theory is a spinor, or spinor field, form of the 3 space plus 1 time form of relativity, which has the Hamiltonian constraint NH = 0 and the momentum constraint N^iH_i = 0. The Wheeler-DeWitt equation HΨ[g] = 0 is a canonical quantization form of the Hamiltonian constraint, and loop variable theory is a spinor form of this type of theory. The vanishing of the Hamiltonian NH = 0, or the canonical quantization HΨ[g] = 0, is due to the fact that the manifold is global and there is no boundary from which one can compute with Gauss' law the mass-energy of the entire spacetime, or universe. It further has to be pointed out that loop quantum gravity (LQG) has yet to compute a one loop diagram properly. The reason is that if you have HΨ[g] = 0 it means you have ∂Ψ[g]/∂t = 0, and there is no dynamics! Computing a scattering amplitude, particles in --> process --> particles out, on a T-channel is not possible.
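      To make the frozen-dynamics point concrete, the Wheeler-DeWitt equation can be written schematically (up to operator ordering and sign conventions; this is a standard textbook form rather than anything specific to the argument above) as

      [math]

      H\Psi[g] = \Big(-16\pi G\,G_{ijkl}{{\delta^2}\over{\delta g_{ij}\delta g_{kl}}} - {{\sqrt{g}}\over{16\pi G}}\big({}^{(3)}R - 2\Lambda\big)\Big)\Psi[g] = 0,

      [/math]

      where G_{ijkl} is the DeWitt supermetric. No time derivative appears anywhere, so ∂Ψ[g]/∂t = 0 follows immediately, which is the absence of dynamics referred to above.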

      What do these constrain? Frankly I think they constrain string theory. Ed Witten has found that within string/M-theory there is a form of twistor theory. This is the so-called twistor "mini-revolution" that started a few years ago. Twistors are in some ways related to loop variables, but they have more spacetime content. I think this segues into thinking about these non-string approaches to quantum gravity as constraint systems. String theory uses a "time" τ that is a string time parameter along the string world sheet, with the parameter σ along the spatial extent of the string. This permits one to construct Hamiltonians of the form

      H = (T/2)[(∂X/∂τ)^2 + (∂X/∂σ)^2], T = string tension,

      for X the string variable. This contrasts of course with LQG, which has no explicit concept of time, because there is no way to define mass-energy in a global context. A Hamiltonian is the generator of time translations, which means the energy defined by the Hamiltonian is conserved (Noether's theorem). In string theory this corresponds to level matching of string modes, but in LQG E = 0 and there is no time translation. However, the spacetime is a target map from the string, which should correspond to HΨ[g] = 0 for a wave functional over the spacetime metric. LQG is then some type of constraint system.
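      For reference, the level-matching condition mentioned above is the standard closed-string constraint

      [math]

      (L_0 - \bar{L}_0)|\psi\rangle = 0 \quad\Longleftrightarrow\quad N = \bar{N},

      [/math]

      i.e. the left- and right-moving oscillator levels must agree, which expresses invariance under rigid shifts of the worldsheet coordinate σ.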

      There are also some interesting possibilities for duality principles. Barbour and Alves have proposed a form of shape dynamics, which is a symmetrical theory. The spatial relationships between elements that define a shape in space are symmetrical. Causal sets involve asymmetrical relationships between nodes that are connected by lines or into graphs. These represent temporal ordering. The two approaches seem to represent something similar to Penrose's tensor space of symmetric and anti-symmetric tensors in a type of duality. The duality in some work by Sparling and others is supersymmetry. I then conjecture that the correspondence between shape dynamics and causal nets (sets) is a form of this duality, or is categorically equivalent to SUSY. This may then be another form of constraint, in particular the SUSY structure which exists in the AdS_n ~ CFT_{n-1} correspondence.

      Cheers LC

      Dear All,

      I want to thank Lawrence for the detailed info on deformed special relativity (DSR), Lorentz invariance violation, and the connection to the cosmological constant problem. His link to the original paper of Smolin and Magueijo does not appear to work; hopefully the following link fixes this:

      Smolin and Magueijo DSR

      I mentioned DSR briefly in my essay (it's one of the approaches involving noncommutative geometry). My understanding is that DSR has suffered a number of theoretical and experimental setbacks since it was introduced, but I think the issues it attempts to address are things which must be considered.

      I'll also remark that a number of other authors who submitted excellent essays to this year's FQXi competition were instrumental in the development of DSR and related approaches involving a minimal fundamental scale. These include Sabine Hossenfelder and of course Giovanni Amelino-Camelia.

        Dear All,

        Another point Lawrence re-raised is the possibility of duality between theories such as causal set theory, which involve anti-symmetric binary relations, and theories such as shape dynamics, which involve symmetric binary relations. This is one of the ideas arising from the discussion here (provided no one thought of it already!) that I hope will receive further attention and exploration.

        The evolution of this idea here is worth reading, but it is unfortunately scattered around the threads and there is too much to repost in one place. I think the discussion began with my comments on Daniel Alves' thread about symmetric, anti-symmetric, and asymmetric binary relations in shape dynamics and causal theory. From there the discussion branches out in several places.

        Lawrence has offered some important clues on making this idea precise. See his remarks on the axiomatization of space, Penrose tensor space theory, supersymmetry, fermionic and bosonic fields, etc. on Daniel Alves' thread, Sean Gryb's thread, and my thread. In particular, see his post of September 28 on my thread.


          The scalars, for example, cannot be utilized without a kind of universal 3D axiomatization. Let's take the equations of Friedmann-Lemaître and the correlated metric. The 3D sphere and its intrinsic quantum spheres and cosmological spheres are all in 3D. We cannot insert extra dimensions. The 3D is essential. The universal sphere is in evolution spherization in a pure 3D. The scalars, in my humble opinion, are not vectors. The spherical coordinates inside a closed system are essential. That's why the number of the universal uniqueness matters (see the fractal of spheres of light from the main spherical volume). It permits one to quantize the mass polarizing the light from a pure 3D general point of view. The quantum scale is in meters, the cosmological scale also. That's why a closed evolutive sphere is essential for the universal sphere. We arrive at an optimization of the model, isotropic and homogeneous, like for Einstein.

          The spherization can be seen with the help of the equations of Friedmann-Lemaître. The space is curved by this mass, and the more this mass increases due to evolution, the more the spherization acts; the spherization is general due to the increasing of mass due to the polarization mass/light. If we do not have the number of uniqueness, it becomes more difficult for the quantization and the understanding of the spherization. The fact that this light is infinite in this aether without motion, dimension, and time shows us that this physicality is light in motion, so spheres in rotation. G, c and h can be seen in a pure 3D spherization. The series of uniqueness of quantum spheres implies an interesting road for the quantization of evolution correlated with information. The curves of spacetime are in fact coded by the singularities. The expansion is just a step; a maximum volume is an evidence. A contraction appears when the density is right for this contraction; the spherization in 3D is an evidence. The energy is correlated. The bosons and fermions can be seen as turning in opposite senses. The variables and parameters are relevant... sorry, I must go; until soon

          Regards

          I will write more on the Penrose tensor space, or what I think is also called a modular space. Shape dynamics is really a form of Regge calculus. One could think about this according to light rays. In this way there is no notion of time involved with the "motion" of a shape, for null rays have no proper time. I illustrate this with two diagrams I attach to this post. The first is a flat spacetime description. This is also pictured in 2-space plus 1-time spacetime in 3 dimensions. Two points on a spatial surface emit light pulses. These converge on three points on a subsequent spatial surface. These then define a triangle on that spatial surface. The two points then emit subsequent light pulses and map the triangle onto a third spatial surface. In Minkowski spacetime this continues indefinitely.

          In the curved spacetime situation null rays are curved. Since the metric

          ds^2 = g_{00}c^2dt^2 - g_{ij}dx^idx^j

          is such that for ds = 0 we can have

          U^iU^j = (g_{00}/g_{ij})c^2,

          and the optical path change due to curvature has a c^2 term. Hence we can assume the triangles on the spatial surface are flat. The deformation of null rays will then map the first triangle on the second diagram I attach into the second. The picture here is then completely described by null rays which have no proper time.

          The time evaluated from the Jacobi variational principle

          δt = sqrt{m_iδx_iδx_i/(E-V)}

          is related to a proper time, or an interval. In the case of a null interval the spatial δx_i may be evaluated according to g_{00} and g_{ij} as above. The time computed by the Jacobi variation is then an "emergent" or computed quantity. This is a parameter which emerges from the "motion" of the triangle, or the map from the first to the second. This is related to Desargues' theorem, where null geodesics are projective rays and these are used to "lift" a shape from one spatial surface to another.
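          As a sanity check (my own, using the conventional normalization in which 2(E - V) appears in the denominator): for a single free particle with V = 0 and E = (1/2)mv^2,

          [math]

          \delta t = \sqrt{{m\,\delta x\,\delta x}\over{2(E - V)}} = {{\delta x}\over{v}},

          [/math]

          so the Jacobi time reduces to ordinary Newtonian time, which is the sense in which it is an "emergent" or computed quantity rather than something fundamental.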

          The causal net or set approach is stranger to me. However, the ordering principle behind it seems to demand an antisymmetry, since the ordering A > B is not equivalent to B > A. I ponder whether the two approaches have a relationship to each other. These two should have some sort of equivalence if they both predict the same spacetime physics. If this is so, there seems to be a categorical relationship between this and supersymmetry. Penrose's modular space of tensors is one way one can look at supersymmetry. The problem is that as yet I do not see a tensor structure to causal net theory.
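          One way to make the symmetric/antisymmetric contrast concrete is the following toy sketch (my own illustration, not anything from the shape dynamics or causal set literature): a causal relation can be encoded as a Boolean matrix that is antisymmetric and transitive, while a shape-dynamics-style relation, such as a distance matrix, is symmetric.

          [code]

          import numpy as np

          # A causal relation on 4 events: R[i, j] = 1 means "i precedes j".
          R = np.array([[0, 1, 1, 1],
                        [0, 0, 1, 1],
                        [0, 0, 0, 1],
                        [0, 0, 0, 0]])

          # Antisymmetry: i < j and j < i never both hold.
          assert not np.any(R & R.T)

          # Transitivity: any length-2 chain i < j < k implies the relation i < k.
          assert np.all((R @ R > 0) <= (R > 0))

          # A shape-style relation is instead symmetric, e.g. distances between points:
          pts = np.random.rand(4, 2)
          D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          assert np.allclose(D, D.T)   # d(x, y) = d(y, x)

          [/code]

          This does not, of course, supply the missing tensor structure, but it does display the two kinds of binary relation side by side.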

          Cheers LC

          Dear Steve,

          It took me a long time to realize you had posted here because it is so high up in my thread!

          I agree that "disorder" is not a very good description of entropy. In some important cases entropy is actually a measure of symmetry.

          By the way, another place in which the sphere arises is in quantum information theory; the space of states of a single qubit is a sphere called the Bloch sphere in this context. It can be identified with the Riemann sphere which is obtained by adding the point at infinity to the complex numbers.
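          For concreteness, here is a small sketch of the map from a qubit state to its Bloch vector, and of the Riemann-sphere identification via the ratio of amplitudes (my own illustration, assuming only the standard definitions n_i = <ψ|σ_i|ψ>).

          [code]

          import numpy as np

          def bloch_vector(a, b):
              """Bloch vector (n_x, n_y, n_z) of the qubit state a|0> + b|1>,
              computed as the expectation values of the Pauli matrices."""
              norm = np.sqrt(abs(a)**2 + abs(b)**2)
              a, b = a / norm, b / norm
              ab = np.conj(a) * b
              return (2 * ab.real, 2 * ab.imag, abs(a)**2 - abs(b)**2)

          print(bloch_vector(1, 0))   # (0.0, 0.0, 1.0): |0> sits at the north pole
          print(bloch_vector(1, 1))   # (1.0, 0.0, 0.0): (|0>+|1>)/sqrt(2) on the equator

          # The Riemann-sphere point is the ratio b/a in the extended complex
          # plane, with |1> (a = 0) mapping to the point at infinity.
          def riemann_point(a, b):
              return complex(b) / complex(a) if a != 0 else np.inf

          [/code]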

          My feeling about dimension is that it is still rather mysterious why dimension 3 is so important. There are a lot of mathematical arguments for this, but I haven't yet seen a convincing explanation. Take care,

          Ben

          Mr Dribus,

          The entropy principle is a concept so difficult to encircle in its pure generality, but so simple also. This entropy is like an infinite energy. The steps of fractalization are so numerous inside the pure physical 3D sphere. If this infinite light has created a physical 3D sphere in spherization of mass-gravitation, then we have available steps of energy correlated with our principle of equivalence. The mass, it is the energy. It is so a pure measure of symmetries between the gravitation and the polarized light (due to evolution spherization) and its steps. I believe that the spheres of light in their pure series of uniqueness are quanta of pure energy correlated with the infinite singularity without motion, time and dimension. That said, and it is paradoxical, the physicality is a finite system in increasing of mass, so the entropy, physical, increases. See that this entropy inside the physicality is so under the physical laws. The infinite entropy so is a reality for both systems. But the real interest is to utilize the available energies. A kind of taxonomy becomes essential. The volumes of entangled spheres are so essential. The gravitation polarizes the light. The correlated synchronizations seem proportional with energies; the spherical volumes of stability so are relevant, considering the main central sphere as the most important volume of the series of uniqueness. The cosmological number of spheres inside the universal sphere is the same as an ultimate entanglement of spheres. The finite groups are essential. My two equations become very relevant considering a closed isotropic and homogeneous Universal Sphere.

          You say "By the way, another place in which the sphere arises is in quantum information theory; the space of states of a single qubit is a sphere called the Bloch sphere in this context. It can be identified with the Riemann sphere which is obtained by adding the point at infinity to the complex numbers."

          The spheres are everywhere. I believe that the simulations of quantum information can be optimized. The Bloch sphere seems relevant considering the qubit information. It can be optimized in a pure 3D convergence with the spherical volumes, furthermore, of the series of uniqueness and its pure finite universal number. The complexes at infinity seem relevant also. They are tools.

          About the dimensions: I believe that it is very, very important to consider a universal axiom for our 3 vectors implying a pure 3D sphere. The metric is a pure 3D. The proportions need to have these 3 vectors. If not, we cannot have the pure thermodynamical correlations, universal between all rotating 3D spheres. A closed evolutive system in 3D is essential for our proportions.

          Furthermore, the fact that we have special relativity, implying c and its limits for bosons, shows us the road of the perception of 3D creations. c is essential for our contemplations, if I can say. General relativity tells us that the mass curves the space. This 3D is so still essential. The more the mass increases, the more the physical entropy increases, the more the spherization of the universal sphere increases. The SR and GR are ok if and only if we have these finite groups inside a closed evolutive sphere. The dimension 3 is at all scales; it is the reason why we have our Planck scale in meters and our universal sphere and its bounded limit also in meters. It is essential for all our universal proportions. The fractal of scales is always in 3D.

          I wish you all the best in this contest.

          Regards


          Benjamin

          I noticed your essay is top of the list, so I read it.

          The first step is to understand how we detect existence, and hence what our reality is and how it must occur. We can only know our reality, as we cannot transcend our own existence.

          "The first few assumptions I reject are... that systems evolve with respect to an independent time parameter"

          Not so. The two fundamental knowns in respect of our reality (ie not some metaphysical conceptualisation) are that a) it exists independently of the sensory systems that detect it, and b) it alters. This means our reality is existential sequence, which entails:

          1 It is comprised of elementary substances, these having physical existence which is not further divisible (there may be more than one type thereof).

          2 These elementary substances have at least one innate property each which has a propensity to alter, of itself &/or under external influence, in its existent condition.

          3 In any given sequence of physical existence, only one physically existent state (ie a reality) can occur at a time, and this has a definitive physical presence.

          4 No phenomenon can have physical influence and not have physical presence.

          5 There must be a particular relationship between a previously existent state(s) and a currently existent state for it(they) to be the cause, in terms of sequence occurrence and spatial position, as physical influence cannot 'jump' physical circumstance.

          Now, what these simple rules mean 'in practice', is another matter. But one fundamental characteristic of our reality is that it is sequence (or system), timing being an extrinsic measuring system which calibrates the relative speed of changes. It is also easy, on the basis of the above, to discern, generically, what dimension can be, and indeed what any of the other concepts mentioned can be, if anything.

          Paul

          Dear Paul,

          Thanks for reading my essay and for your remarks. I think I can agree with part of what you say; the rest I am not quite sure about. Let me be more precise.

          1. You say "we can only know our own reality." True enough; every observation is mediated by our interpretation, and every argument is mediated by our reason. But all of science is based on the assumption that there exists an external reality with characteristics accessible to any competent observer; i.e., that competent interpretation and reason will converge on objective answers. I don't dispute this assumption; there is no use in doing science if one does.

          2. I am not sure if your "not so" means you don't believe this is one of the first assumptions I reject (I assure you that it is!), or that you believe it should not be rejected. I will assume the latter.

          3. Your first "fundamental known" is part of what I said above, that there exists an objective reality. I agree.

          4. Your second "fundamental known" is less clear. What does it mean for "something" to "alter?" Well, you must have two different states, one you call "before," and one you call "after," but that is not enough. You must have some way of identifying the two states as representing the "same thing, only having altered." At the very least, this requires a relation pointing from the "first" state to the "second" state. The family of such relations is precisely a partial order as I described in my essay.

          5. I am afraid that your points 1-5 seem to include many imprecise concepts and implicit time-related assumptions. For instance, what precisely is an "elementary substance?" Is it a set, or an element of a set, or a field on a manifold, or what? What precisely is an "innate property," and how is it connected to its corresponding "substance?" What precisely is a "propensity?" If a "property" "alters," how do you identify it as an altered form of the same property rather than an unrelated property? How do you define and model "influence?" When you say "only one state can occur at a time," you must already have an assumption about what "time" is for the statement to mean anything. Later you say that time "calibrates the relative speed of changes," but what does "speed" mean, and what are speeds "relative" to? What does "spatial position" mean? What does it mean to "jump?" And so on and so on...

          You refer to these points as "simple rules," but I don't think they're simple. They sound simple in English, but if one tries to make them precise, one sees that they are quite complicated and include assumptions about the very things they are intended to define. This makes it hard to evaluate what you are actually claiming, and this is before one can even begin to think about whether or not the rules are likely to be valid or useful. Perhaps you could explain the meaning of these things a bit more clearly. Take care,

          Ben


            Benjamin

            Thanks for you prompt response, and the constructive nature thereof.

            Re 1: But this is not what I am saying. We can only investigate a specific form of existence. In short!:

            Any form of existence invokes the possibility of an alternative, (ie if A, there is always the logical possibility of not-A). But, any form of existence other than our reality is inherently unknowable, since we cannot transcend our existence. Therefore, we can only analyse our reality. That being existence in the form of what we can detect (either directly or indirectly), which is dependent on the sensory processes.

            Therefore, our reality comprises those physically existent phenomena which are potentially sensorially detectable by any organism, and the existent phenomena which are proven to have caused them. The caveat of potentiality referring to physical, not metaphysical, issues with the mechanics of the sensory processes, because there are known problems with them. That is, instances where sensory detection either cannot be effected, or not completely accurately and/or comprehensively.

            In which case, dependence on the sensory processes does not necessitate objective knowledge being limited to validated direct experience. Where there are known functionality issues, what occurred must be hypothesised, but still be based on, and assessment of consequent outcomes referenced to, validated direct experience. That is, objective knowledge of our reality must always be subservient to direct experience, ie either proven to be directly experienceable, or proven to be potentially so.

            Essentially, the problem arises when there is a presumption that we can know existence, which we cannot. The confusion being between what, while not directly validatable, is properly inferable from other direct experience, and what is based on no substantiated experienceability.

            Re 2: the "not so" referred to the fact that our reality must occur as existential sequence, and that means there is only one physically existent state at a time. That is, for the successor to exist, the predecessor must cease. So the assumption you rejected: 'that systems evolve with respect to an independent time parameter', is true. Or at least is so given the immediate grammatical meaning of the phrase. Frankly, I was not so sure about the others, and hence your view on them, and would need more understanding as to what they are meant to mean.

            Re 3: But, as said at 1, this is not what I am saying. In short (again):

            Being reliant on these sensory processes limits, it has to be assumed, the form of existence we can know, but not what occurs within that. Because the sensory systems receive, not create, physically existent phenomena in their detectable form (albeit these result from interactions between other existent phenomena, which is what is usually meant by reality), and create information. So the sensory processes can have no influence on our reality, only on information about it.

            Re 4: See 2 above concerning sequence, and then:

            Change concerns how realities differ, and is therefore not an attribute of any given reality. It is not existent, and neither is the difference. Only physically existent states are existent, it being comparison of these states which reveals difference. Logically, change involves: 1) substance (ie what changed), 2) order (ie the sequence of differences), 3) frequency (ie the rate at which change occurred). The latter being established by comparing numbers of changes occurring over the same duration. This could involve realities in any sequence (including different aspects of the same sequence), and have either occurred concurrently, or otherwise. This is timing.

            Re 5: There are no time related assumptions, what is being said is a function of what must be the fundamental nature of our reality (remember, we are trapped in an existential loop). So, to answer one specific question as an example, I do not know what could actually constitute an elementary substance. It is just that, given how our reality is constituted, there must be one, or indeed probably several types. Etc, etc.

            On time and timing. The phrase "at a time" refers to occurrence, not time/timing. In respect of the latter: timing compares the number of changes, irrespective of type, in different sequences. This can be done directly, or with respect to a common denominator. So if you are using a quartz watch, it is crystal oscillations. In other words, what is being established is the rate (speed) of a rate (change). And it is all about change, which concerns difference between, not a feature of. There can be no change in whatever constitutes a physically existent state, because otherwise it would involve more than one such state, and physical existence can only occur in one state at a time.

            Finally (well I could expand on dimension and space, but have said enough for now!), in respect of "but I don't think they're simple": you are quite right, but not in the sense you mean. At the generic level, these are precise; when applied, not only is it difficult to discern what, within our reality, they could be, but their simplicity belies their significant consequences. In simple terms, physics has failed to understand the fundamental nature of what it is investigating.

            Paul

            Dear Paul,

            Thanks for the clarification. I believe I understand much better now where you are coming from. Also, your view does not seem to be quite as much at variance with mine as I had thought from your original comments. In particular, I think we completely agree that time is a way of talking about sequence (i.e. order). However, each event (or observer) has its own local sequence, so the order is a partial order, not a single linear order.

            You seem to be suggesting that it is the reality we perceive, not your principles, which is complicated, and this seems to be a valid contention. What I should have said is that it's complicated to translate your principles into meaningful statements about physics, but the same could be said of mine.

            You write, "physics has failed to understand the fundamental nature of what it is investigating," and you're quite likely correct. My causal metric hypothesis is a guess about that fundamental nature. Likely it's too simple, but I hope it can explain some current conundrums when properly developed.

            I chose not to write about observational and measurement-related issues not because I regard them as unimportant, but because there was simply not enough space. Also, these issues verge on the interface between consciousness and the physical world, which is something I regard as far beyond our abilities at present. I certainly don't feel confident writing about it! A few other brave contestants (Janko Kokosar, David and Julie Rousseau, Sara Walker, etc.) touched on some of these issues. Take care,

            Ben


              Ben

              I am not sure we do agree on the nature of time. Because you say, "However, each event (or observer) has its own local sequence, so the order is a partial order, not a single linear order". And you reject the principle "that systems evolve with respect to an independent time parameter".

              But the entirety of our reality is an existential sequence; one can, to maintain sanity(!), conceptualise constituent sequences. However, there is no time in a reality, because this concept is about the calibration of change anyway and is therefore non-existent; but more importantly, there is no change in a reality, otherwise it cannot exist, and then alter. Changes are occurring at different speeds, but that is irrelevant. Timing is an extrinsic measurement system, time being the unit of measurement. For this to function properly there can only be one time/one system, which is applicable to the entirety of reality. Put simply, there is only ever in existence what is commonly known as a present, ie our reality is only in one physically existent state (a reality) at a time. Timing just provides a reference to establish what that was, and a means of relating various disparate rates of change.

              Furthermore, I noticed the sentence: "The fundamental structure of spacetime is the central focus of both the rejected assumptions and the new principles". Now, as a representational device of our reality this model is invalid. As stated above, time is in no sense (ie even when properly expressed in terms of change) a factor within any given reality. This means that a reality has purely spatial properties. But, dimension has been misconceived. Three spatial dimensions is the absolute minimum number conceivable whilst maintaining ontological correctness at that level of conception, but not what is physically existent. In effect, dimension involves the concept of reality being divided into a spatial grid, the smallest unit of which equates with the smallest substance. The dimension, ie spatial footprint-size/shape, of any given entity being the relative spatial positions 'occupied' at any time (ie in its physically existent state). The number of possible dimensions in reality is half the number of possible directions that the smallest substance could travel from any given spatial point, because dimension relates to direction, either way.

              I could ask rather superficial questions about the causal metric hypothesis, ie why are there two versions when there is only one reality, why a binary relationship in causality, but the more important issue is what is the model of reality which underpins this. Because that comes first, and then everything else follows, ie those 6 principles, once understood in grammatical terms, can then be translated into what they can mean in reality, given how that is fundamentally constituted.

              Finally, one does not need to know about consciousness, etc, as such. These are just unfortunate(!) interference factors. The physics of our reality is unchanged. We, and all sentient organisms, receive a physical input. This exists whether we receive it or not. We create (ie the output) knowledge of reality, not reality. The trick is to discern (ie eradicate the interferences) what was received, and hence what caused that, given knowledge as to how the phenomena involved behave physically.

              Paul