• [deleted]

Ben

At the risk of causing a riot, a problem with that Smolin/Magueijo paper is that it presumes SR includes gravitation in the first place, which it does not. So in that sense, there is no issue to resolve. Einstein defined SR several times, and the definitive version is not the 1905 paper.

In 1905, the two postulates are "apparently irreconcilable", which is a bizarre statement when juxtaposed against the assertion that "These two postulates suffice for the attainment...theory...electrodynamics of moving bodies". The irreconcilability is because light is presumed to be in vacuo, whilst everything else is not, being subject to dimension and momentum variance; and the cause of that is subsequently revealed to be gravitational forces. In other words, light and matter cannot be co-existent, and so 1905 is not a singular, cohesive theory. So in expounding GR, where everything (ie including light) is subject to gravitation, Einstein refers to a special, entirely theoretical, circumstance where there is no gravitation, ie SR. This is why SR only involves uniform rectilinear and non-rotary motion, light that travels in straight lines at a constant speed, and rigid bodies.

Einstein: Relativity 1916 section 18: "...the special principle of relativity, i.e. the principle of the physical relativity of all uniform motion..."

Einstein: Foundation of GR 1916 section 3: "...the case of special relativity appearing as a limiting case when there is no gravitation"

Einstein: Relativity 1916 section 28: "The special theory of relativity has reference to Galileian domains, ie to those in which no gravitational field exists...In gravitational fields there are no such things as rigid bodies with Euclidean properties; thus the fictitious rigid body of reference is of no avail in the general theory of relativity"

Einstein: Relativity 1916, section 22: "Let us further investigate the path of light-rays in a statical gravitational field... Let us find out the curvature which a light-ray suffers. [equation (74) refers] ...a curvature of rays of light can only take place when the velocity of propagation of light varies with position..."

Paul

  • [deleted]

The relationship between position configuration variables and momentum can first be seen by looking at the diagram I attach. This illustrates a scattering process with 5 input momenta and 5 output momenta. The "blob" is the region with virtual or off-shell processes, which can be realized by on-shell processes via BCFW recursion; I will ignore that matter for the moment. The momenta labeled 1, 2, 3, ..., 10 must all add to zero if we reverse the sign of the outgoing momenta. This is a trick used in working out the S, T, U channels. These momenta can then form a polygon. It is tempting, of course, to think of the configuration variables as defining the momenta by p1 = x1 - x2, p2 = x2 - x3, and so forth. However, this polygon has a dual polygon, which may be constructed by drawing a line through the midpoints of the momentum edges and then finding where these lines intersect. This dual polygon is then the set of vectors which represent the position variables of these particles. This is a more sophisticated representation, for it does not rely upon an explicit reference to either set of variables to derive the other, but rather depends upon a duality principle.
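A minimal numerical sketch of the polygon construction (toy two-dimensional momenta drawn at random, purely for illustration; closure of the polygon is exactly momentum conservation):

import numpy as np

# Toy example: five 2-D momenta constrained to sum to zero
# (outgoing momenta sign-reversed, as described above).
rng = np.random.default_rng(0)
p = rng.normal(size=(5, 2))
p[-1] = -p[:-1].sum(axis=0)      # enforce total momentum conservation

# Configuration variables x_i with p_i = x_i - x_{i+1}:
# walking the edges traces out the polygon, and momentum
# conservation is equivalent to the polygon closing.
x = np.vstack([np.zeros(2), np.cumsum(-p, axis=0)])
assert np.allclose(x[-1], x[0])  # the polygon closes

# Midpoints of the momentum edges, through which the lines
# defining the dual polygon would be drawn.
midpoints = 0.5 * (x[:-1] + x[1:])
print(midpoints)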

In three-dimensional space the diagram is more complex, and the polygon is replaced with a polytope in three dimensions. Further, since the diagram on the left is really a spacetime diagram, with time running to the right, the polygon is really replaced with a polytope in four dimensions. The fundamental polytopes in four dimensions are the 24-cell, which is self-dual, and the 120-cell, which is dual to the 600-cell. In order to construct a one-to-one self-duality between momentum and position configuration variables, the 24-cell is the obvious model. For systems with more particles than can be represented by a single 24-cell, tessellations of 24-cells may then be considered.
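As a quick sketch of the 24-cell's structure (using its standard realization, taken up in the next paragraph, as the 24 unit Hurwitz quaternions), one can check directly that the vertices close under quaternion multiplication into a group of order 24, the binary tetrahedral group:

import itertools

def qmul(a, b):
    # Hamilton product of quaternions written as (w, x, y, z)
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

# The 24 unit Hurwitz quaternions = vertices of the 24-cell:
# (+-1, 0, 0, 0) and permutations (8 of them), plus
# (+-1/2, +-1/2, +-1/2, +-1/2) (16 of them).
units = set()
for i in range(4):
    for s in (1.0, -1.0):
        v = [0.0] * 4
        v[i] = s
        units.add(tuple(v))
for signs in itertools.product((0.5, -0.5), repeat=4):
    units.add(signs)

assert len(units) == 24
assert all(qmul(a, b) in units for a in units for b in units)
print("24 Hurwitz units close under multiplication")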

The 24-cell is a representation in Hurwitz quaternions of the exceptional group F_4. The F_4 group is related to B_4 = SO(9) through the fibration

1 --> Spin(9) --> F_4 --> OP^2 --> 1,

where OP^2 = F_4/Spin(9) is the 16-dimensional octonionic projective plane (dim F_4 = 52, dim Spin(9) = 36, and 52 - 36 = 16).

And of course the spin groups are double covers of the orthogonal groups,

1 --> Z_2 --> Spin(n) --> SO(n) --> 1.
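For a concrete low-dimensional illustration (a sketch for n = 3, where Spin(3) is the group of unit quaternions), the antipodal quaternions q and -q map to the same SO(3) rotation, exhibiting the Z_2 kernel:

import numpy as np

def rotation_from_quaternion(q):
    # SO(3) rotation matrix built from a unit quaternion (w, x, y, z)
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

q = np.array([0.5, 0.5, 0.5, 0.5])   # a unit quaternion, i.e. a point of Spin(3)
R = rotation_from_quaternion(q)
assert np.allclose(R @ R.T, np.eye(3))                 # orthogonal
assert np.isclose(np.linalg.det(R), 1.0)               # determinant +1
assert np.allclose(R, rotation_from_quaternion(-q))    # q and -q: same rotation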

The group SO(9) plays an important role in string theory and holography. Physics in an infinite momentum frame, that is, physics observed under an enormous boost, reduces the relevant physics to one dimension less, and by time dilation the observed physics is effectively nonrelativistic. I illustrate this briefly below. This reduces the 10 dimensions of supergravity to 9, and the symmetry group of this spacetime is then SO(8,1) ~ SO(9).

It is easy to see how an extremely boosted system appears nonrelativistic. We consider the invariant mass relation

m^2 = E^2 - p^2,

here with c = 1. The energy is then

E = sqrt{p_x^2 + p_y^2 + p_z^2 + m^2}

We then consider the momentum p_z as enormous, with p_z >> p_x, p_y. We factor this out:

E = p_z sqrt{1 + (p_x^2 + p_y^2)/p_z^2 + m^2/p_z^2},

where the binomial theorem gives us, to leading order,

E ≈ p_z + (p_x^2 + p_y^2)/(2p_z) + m^2/(2p_z).

The momentum p_z then acts as a time dilation factor approximating a Lorentz factor. We may group all of this to define a new energy or Hamiltonian

H = 2p_z(E - p_z) = p_x^2 + p_y^2 + m^2,

where the right hand side of this equation is a nice classical nonrelativistic Hamiltonian. The mass squared then plays the role of a potential energy.
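A quick numerical check of this limit (toy values for the transverse momenta and mass, c = 1): as p_z grows, 2p_z(E - p_z) approaches p_x^2 + p_y^2 + m^2.

import numpy as np

px, py, m = 0.3, 0.4, 1.0            # arbitrary toy values
H_limit = px**2 + py**2 + m**2       # the claimed nonrelativistic Hamiltonian

for pz in (10.0, 100.0, 1000.0):
    E = np.sqrt(px**2 + py**2 + pz**2 + m**2)
    H = 2 * pz * (E - pz)            # the boosted, rescaled energy
    print(f"p_z = {pz:7.1f}   H = {H:.8f}   limit = {H_limit}")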

Cheers LC

Attachment #1: scattering_polygon.GIF

  • [deleted]

Lawrence/Ben

Bear with me while I put up what may appear to be a simplistic comment, but sometimes a person without all the background sees the 'wood for the trees'. I have no idea how this could be represented as a model, let alone analysed in practice, but the logic of our reality must be as follows:

In establishing what constitutes dimension, distance and space in our reality, it must be recognised that we are, in effect, conceiving of any given physical reality (ie physically existent state) as if it were being divided into a grid of spatial positions. And in order to deconstruct it to its existential level, the 'mesh' size of this grid would have to be equivalent to the smallest physical substance in our reality. [Note: there is no form of change within any given state of physical existence within our reality, only spatial characteristics, because it can only occur in one such state at a time].

Only physically existent states exist, being comprised of physical substance. That is, concepts either reflect that physicality, or are an artefact of it. By definition (ie what constitutes physical existence within our reality), any given physically existent state must have a definitive dimension/size/shape (ie spatial footprint), this being a function of its constituent physical substance. That, with reference to the conceptual grid, can be defined as spatial positions 'occupied'.

It could be argued that a direct comparison between states is possible, and therefore there is no need for the concept of a grid. But this is a fallacy, because logically the two circumstances are the same. The physically existent state used as a reference is just a surrogate grid. Indeed, in order to ensure compatibility with other comparisons, that state would have to be maintained as the reference (ie a de facto grid).

'Mapping' other states that were existent at the same given time would obviously reveal not only the spatial footprint of those states and their comparability with each other, but also distance. That is an artefact, a function of the physicality of the particular existent states involved. It is a difference, defined by comparison. So, there cannot be a distance between physically existent states which existed at other times, because there cannot be a distance to a physically existent state which does not exist. Distance is usually measured between the two nearest dimensions of the existent states, but could include any combination of dimensions. And depending on the spatial relationship of the states involved, distance could involve a relationship in terms of separation of the states, or one within another, that again being with respect to specified dimensions.

Dimension is a specific aspect of spatial footprint (ie spatial positions 'occupied' when existent). It relates to the distance along any possible axis. So, three is the absolute minimum number of spatial dimensions that is still ontologically correct at the highest level of conceptualisation of any given physical reality. But that is not what is physically existent. At that existential level, the number of possible dimensions that any given physical reality has is half the number of possible directions that the smallest substance in our reality could travel from any given spatial point.

Paul

  • [deleted]

Largely what Dribus connects with is causal nets, where physics can be established within a minimal set of postulates and is based on the causal succession of events. Then there is shape dynamics, which involves polyhedra in space and their dynamics. One involves time (causal nets) and the other involves space (shape dynamics), and I think there is some possible duality between these pictures. I have suggested this might have some categorical relationship to supersymmetry.

Cheers LC

  • [deleted]

Dr. Crowell, and Essay Author Benjamin F. Dribus,

Very interesting thread. I appreciate the opportunity provided by FQXi.org to follow it. I hope there will be additions to it. Thank you.

James

  • [deleted]

In respect of any given physically existent state, ie the existential state of our reality (or constituent parts thereof) at any given time, there are only spatial characteristics, these being a function of the physical substances which comprise our reality. Space, in the sense of not-substance (ie rather than spatial footprint), which is not to be confused with just a different form of substance, needs to be proven. By definition, since a physically existent state involves no change, ie it is one state in the sequence, it only has one set of spatial characteristics.

Any circumstance where there is 'time' involves change, ie sequence, sequence order and the rate of turnover thereof. That is, the characteristics revolving around what and how a physically existent state is altering, which is derived from comparison of more than one of them in the sequence. Alteration must function in accord with specific rules relating to sequence order and spatial position, ie any given physically existent state can only be the potential cause of another if that is physically possible. Causality relates only to direct physical influence, since, by definition, all physically existent states which comprise our reality at any given time are ultimately physically interrelated.

The issue is what constitutes a physically existent state. And given the logic of our reality, that must be associated with being in any one of the possible conditions that the properties of the elementary substances could attain. The problem is that we do not conceptualise our reality at its existential level, and therefore fail to identify the entirety of a sequence and confuse the relative timings at which any given state in a sequence occurred.

Paul

Dear All,

Another idea I'd like to inject regarding the possibility of shape/causal duality occurred to me while trying to understand the "top-down causation" philosophy presented by George Ellis and others. I struggled with this a bit after first reading about it; in particular I seemed to run into trouble thinking about how one might make it precise. I recall Lawrence expressed some agnosticism about this on his thread too.

Anyway, it occurred to me that, very generically, it might be problematic to expect duality between a theory involving classical holism (e.g. shape dynamics) and a theory with complete reductionism at the classical level (e.g. causal sets). Ellis' top-down causation, suitably represented in a pure causal theory, might incorporate a degree of classical holism into causal theory and make the idea of duality more feasible.

(continued below)

(continued from previous post)

Whether this idea would work or not, one might still wonder how top-down structure could be incorporated into pure causal theory in a sensible way, and what the actual quantitative differences would be. The answer, I think, is that top-down causation elevates the binary relation generating the causal order to a "power-space relation." The causal metric hypothesis still applies here, but the vertices of causal graphs no longer represent spacetime events. Below, I copy the relevant material from the post I made about this on George Ellis' thread.

(continued below)

(continued from previous post)

After initially struggling with the idea, I've been thinking a bit about how your [George's] top-down causation idea might look from the perspective of nonmanifold models of fundamental spacetime structure that emphasize the role of causality. It seems that top-down causation might provide an interesting new perspective on such models. For definiteness and simplicity, I use Rafael Sorkin's causal sets approach as an example.

Causal sets, as currently conceived, are by definition purely bottom-up at the classical level. Causality is modeled as an irreflexive, acyclic, interval-finite binary relation on a set, whose elements are explicitly identified as "events." Since causal structure alone is not sufficient to recover a metric, each element is assigned one fundamental volume unit. Sorkin abbreviates this with the phrase, "order plus number equals geometry." This is a special case of what I call the causal metric hypothesis.
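(As an illustrative toy sketch of these axioms, with an arbitrary four-element "diamond" poset of my own choosing, not part of Sorkin's formalism:)

# Toy causal set: 'prec' is the set of pairs (a, b) meaning
# "a causally precedes b", given here already transitively closed.
elements = {0, 1, 2, 3}
prec = {(0, 1), (0, 2), (1, 3), (2, 3), (0, 3)}   # a "diamond" poset

def irreflexive(rel):
    return all((a, a) not in rel for a in elements)

def acyclic(rel):
    # no pair of elements related both ways
    return all((b, a) not in rel for (a, b) in rel)

def interval(a, b, rel):
    # the causal interval: everything strictly between a and b
    return {c for c in elements if (a, c) in rel and (c, b) in rel}

assert irreflexive(prec) and acyclic(prec)
# Interval-finiteness is automatic on a finite set. Each element carries
# one fundamental volume unit, so "order plus number equals geometry":
# the volume of an interval is simply its cardinality.
print(len(interval(0, 3, prec)))   # -> 2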

In the context of classical spacetime, top-down causation might be summarized by the statement, "causal relationships among subsets of spacetime are not completely reducible to causal relations among their constituent events." In this context, the abstract causal structure exists at the level of the power set of classical spacetime, i.e., the set whose elements are subsets of spacetime. Discrete models very similar to causal sets could be employed, with the exception that the elements would correspond not to events, but to families of events. Two-way relationships would also come into play.

Initially this idea bothered me because of locality issues, but such a model need not violate conventional classical locality, provided that appropriate constraints involving high-level and low-level relations are satisfied.

This idea is interesting to me for the following reasons.

1. The arguments for top-down causation presented by you [George] and others are rather convincing, and one would like to incorporate such considerations into approaches to "fundamental theories," particularly those emphasizing causality.

2. One of the principal difficulties for "pure causal theories" is their parsimony; it is not clear that they contain enough structure to recover established physics. Top-down causation employed as I described (i.e. power-set relations) provides "extra structure" without "extra hypotheses" in the sense that one is still working with the same (or similar) abstract mathematical objects. It is the interpretation of the "elements" and "relations" that becomes more general. In particular, the causal metric hypothesis still applies, although not in the form "order plus number equals geometry."

3. There is considerable precedent, at least in mathematics, for this type of generalization. For example, Grothendieck's approach to algebraic geometry involves "higher-dimensional points" corresponding to subvarieties of algebraic varieties, and the explicit consideration of these points gives the scheme structure, which has considerable advantages. In particular, the scheme structure is consistent with the variety structure but brings to light "hidden information." This may be viewed as an analogy to the manner in which higher-level causal structure is consistent with lower-level structure (e.g. does not violate locality), but includes important information that might be essential in recovering established physics.

4. As far as I know, this approach has not yet been explicitly developed.

    Lawrence,

    Does the OP^2 factor relate to known physics in any particular way? I'm sure Jonathan Dickau would be interested to know this too. Take care,

    Ben

    Paul,

    I'm sure you gathered this by reading my essay, but the approach I am working on does not take spatial relations to be fundamental. In this view, spacelike separation is a way of talking about events that are not causally related. "States" are not fundamental either.

    I think it's much more natural to consider as fundamental "what actually happens" (i.e. interactions involving cause and effect) than to imagine a spatial manifold (or lattice or grid or whatever structure one wishes to work with). The events in such a "spatial section" don't interact, and physics is principally about describing interaction.

    However, it's entirely possible that the "space first" view is equally valid, or even equivalent. My "shape/causal duality" wishful thinking is based on this idea.

    "I think it's much more natural to consider as fundamental 'what actually happens' (i.e. interactions involving cause and effect) than to imagine a spatial manifold (or lattice or grid or whatever structure one wishes to work with)."

    Well put, Ben. It's the relation among events (e.g., those that happen in a simply connected, vs. multiply connected, space) that determines the interaction, rather than the form of the surface. Events infinitely separated in time may still be correlated by an identical rate of change (Einstein spoke of this phenomenon in "Geometry and Experience"). Advances in topology today allow us to make the relation rigorous by obliterating the distinction between local and global. "Natural" doesn't always mean "intuitive," at least in the sense of naive realism.

    Tom

    • [deleted]

    In later thinking about this, the top-down approach might make sense from the perspective of computation theory. It might be that there simply is no way one can compute complex structures at the top from the bottom. This may extend to matters of manifold compactification and the number of possible cosmologies in the so-called multiverse. The possible number of windings in 6 dimensions is a large number, 10^{1000}, which can be estimated from topology. The "choice" of a possible winding index is a set of Boolean procedures, such as a 0 or 1 in a tape slot. A search through all possible such configurations of Calabi-Yau manifolds could not be accomplished by any computer running at any speed that is physically possible.

    The busy-beaver function or map is an extremization of algorithmic space or state change. This is an algorithmic variant of the calculus of variations, which pertains to trajectories in space with a continuum measure. A space evolving as a foliation or evolute in a de Sitter spacetime of inflation will exhibit nucleation bubbles, and the vast array of possible CY manifolds and brane wrappings which construct local vacua is a manifestation of possible outcomes that are not computable in polynomial time. The question then is how the path integral of the universe in effect makes the extremization, in a classical sense, or, in a quantum setting, how it weights the amplitudes of the various paths in the path integral.

    In these cases, whether it is the halting problem or the busy beaver problem or the minimal compression problem, there is a computation that in principle can be done to look at the behavior of some algorithm. All one does is track its performance step by step and record its computation or output. What does not exist is a single computable means to do this for all possible algorithms in each of these classes of algorithms. There does not exist a universal machine that can check whether all possible algorithms of a certain type pass the test or not, such as halting or non-halting. In the above situation with CY manifolds, what is not computable in polynomial time is the set of all possible discrete groups or orbifold configurations.

    It has to be remembered that the "universe as a computer" is a model system one has in the "tool box." If we were to think of the universe as a computer or algorithm which evaluates a data stack that is the same algorithm, then we have issues similar to a universal Turing machine emulating all possible Turing machines and itself, which means it emulates itself emulating itself and all other TMs, and so forth. What can of course be done is to think of the universe as cut off at a finite set of qubits. In that way the universe is not caught in a Gödel loop, at least up to the limits of what we are able to observe.

    My essay here at FQXi references the idea that there is only one electron in the universe, which appears in multiple configuration variables that are holographic projections. The same holds for all other particles: quarks and leptons as well as gauge bosons and so forth. In this way the number of degrees of freedom in the universe is very small, 512 or 1024, instead of the vast number we appear to observe. The number of possible particles that emerges in the multiverse, and the complexity of their internal configurations, grow faster than any ability to compute them from fundamental states. This means there is an element of "top-down" structure to the universe, where not all the states, or configurations of states and structures, can be computed from below.

    We might consider there to be only 16 basic elementary particles: 3 lepton doublets = 6 particles, 3 quark doublets = 6 particles, and the photon, Z and W^{+/-} for four gauge bosons. Again following the thread in my essay, the graviton is the "square" of gauge theory; in particular the graviton is an entanglement between two gluons in a color-neutral state, so we can ignore the graviton. This is a total of 16 elementary particles. We now consider these plus their supersymmetric partners, for a total of 32 elementary particles. Now divide 32 into 128, 256, 512 and 1024, giving 4, 8, 16 and 32 respectively, corresponding to the number of supersymmetries. If we remove the weights of each of these (4, 8, 16, 32) and the number of supersymmetries, we have the root space dimensions 120 (icosian quaternion space), 240 (the root dimension of E_8 or SO(32)), 480 (E_8 x E_8) and 960. The last is more difficult to interpret.
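    The counting can be checked line by line (a trivial arithmetic sketch of the numbers above):

particles = 3 * 2 + 3 * 2 + 4    # lepton doublets + quark doublets + gauge bosons = 16
states = 2 * particles           # with supersymmetric partners = 32

for dim in (128, 256, 512, 1024):
    n_susy = dim // states              # 4, 8, 16, 32 supersymmetries
    root_dim = dim - n_susy - n_susy    # remove the weights and the supersymmetry count
    print(dim, n_susy, root_dim)        # root_dim -> 120, 240, 480, 960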

    I was going to get into adinkra diagrams and their possible functorial relationship with causal nets (sets); however, time probably prevents that. Adinkra diagrams are ways of looking at entanglements of states with supercharges. This has been worked out for 4 supercharges, where these are GHZ entanglements, which can be constructed from more elementary trinary (cat state) and bipartite entanglements. I will defer that until tomorrow.

    The NP-complete computation of all possible configurations means the de Sitter spacetime of inflation produces bubble nucleation regions (Coleman's vacuum instability that results in so-called pocket universes) according to an extremization principle. However, how this happens is strange: the incomputability of a universal Turing machine that decides whether an algorithm is halting, or busy beaver, or in this case the computation of a winding topology, means there is no "extremization." Yet by some means the cosmic wave function(al) does obey it.

    Cheers LC

    • [deleted]

    The Fano plane, the projective plane over the field with two elements, describes the discrete system of quaternion triplets within the octonions. There are seven such triplets of quaternions, forming the seven lines of the plane; these are termed collineations. I will try to write more about this in the coming days.
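    A quick combinatorial sketch of that structure (using one standard labeling of the seven lines on points 1 through 7): every pair of points lies on exactly one line.

from itertools import combinations

# One standard labeling of the seven lines (the collineation triplets)
# of the Fano plane on the points 1..7.
lines = [(1, 2, 3), (1, 4, 5), (1, 6, 7),
         (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]

for a, b in combinations(range(1, 8), 2):
    assert sum(a in L and b in L for L in lines) == 1
print("7 points, 7 lines, each pair of points on exactly one line")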

    Cheers LC

    Hello to both of you.

    The computation begins to interest me; perhaps it is due to you and Professor Ellis also. I am understanding the 2D convergences better. Thanks for that.

    If we want to compute the system of uniqueness correctly, then it becomes very relevant for the algorithms of superimposings and sortings.

    Determinism is essential for all extrapolations. The universal Turing spherical machine is possible by inserting this series of volumes, spheres decreasing from the main central sphere, number 1.

    The causalities at all scales can be made. The information can be classed with the volumes and my equations. If a binary relation is the conductor of the orders giving the geometrical behaviour for the computing, I would say that the volumes of the series of uniqueness are the conductor there.

    The computation is important for our simulations, but the universal convergences must be precise and deterministic. These orders must respect the universal sphere and its intrinsic laws of geometrical building. The universal causalities are the same at all scales of the 3D fractalization. The codes of information are inside the main singularities, the central spheres.

    What I say is simple: the universal 3D algorithm of spherization exists, and it can indeed be computed. But the determinism of causalities must always be essential. If the number of uniqueness is not inserted, it is not possible to reach the correct causal parameters.

    A mathematical diagram or a geometrical algebraic extrapolation must respect several universal axioms. If not, the extrapolations shall be false in their pure generality and their fundamental intrinsic laws. The metric implied must therefore respect the pure general distribution.

    Let's take the simple example of dimensions. A dimension is a vector, and when we have 3 vectors, we have 3D. The scalars are different. The axiom of dimensionality is, to my knowledge, essential. How can we calculate the proportions if these 3 dimensions (x, y, z) do not exist?

    A quaternion follows the same logic, respecting this 3D; if we insert a scalar, time for example, then we have evolution. But we never have 4 dimensions; time is a scalar. Let's take another example, a simple symmetry and reversibilities. We have a possible reversibility of time if we take a mathematical play, but in reality time is irreversible in its generality. Maths are a tool; like notes of music, we must respect the harmony of the distribution of oscillations. The categorification can thus be made respecting the quantization, and we can see the evolution with a pure determinism. The Universe is unique and possesses a specific number of spheres. The rotations and the volumes imply the evolution of the spheres of light. But if the series of uniqueness is not inserted for all 3D scales, the quantization is not possible, so we cannot quantize the information of evolution either.

    The computation can be universal with the correct algorithmic superimposings. A kind of algorithm of deterministic sortings will be relevant. A kind of algorithm of spherization optimization will also be relevant for the improvement of the metric of evolution. The universal causation is possible with the correct series of uniqueness and the finite groups.

    The multiverses seem difficult to accept when we want to compute our universal sphere and its quantum and cosmological spheres.

    That said, all these discussions are interesting.

    Regards

      • [deleted]

      Ben

      I would stop worrying about top-down, oscillation, feedback, reaction, or any other concept that involves a hypothesised relationship which cannot occur.

      What is physically existent is so at any given time. At the next point in time, at the existential level, 'it' will be significantly different, which points to the ontologically incorrect way in which we conceptualise physical existence, ie as 'its'. Really these are sequences (systems) of physically existent states (ie an elementary and specific physical condition of substance), which when conceived by us at a higher level appear to persist in existence over time. But it is really only the superficial physical features which we deem to constitute any given 'it' that do so. St Paul's Cathedral has a significantly different physical state as at each point in time, but it 'looks' the same, and we gradually notice the manifestation of alteration.

      The more important point here is that at any given time only one physical state exists (aka present) in any given sequence; what did exist has ceased to do so (aka past), and there is no physically existent state commonly referred to as the future. Any concept which involves the notion of change to it, or that it can have some physical influence, is incorrect, because there is nothing in existence to affect, nor anything to invoke an effect. The notion of changing the future is properly expressed as the situation where a physically existent state occurred which is different from that which would otherwise have done so, had the causal factors been different. Which is meaningless physically, as by definition, any given state is a function of certain previously existent states.

      At most there could be a repetition of a previously physically existent state as the sequence progresses, but this is still different because it occurred at a different time. Although even that is likely to be superficial, ie due to the level of conceptualisation (but possibly correct at that level). Physically, it is probably impossible that a configuration of any given physically existent state, in its entirety, will re-occur.

      All this means that in any given instance of causality the cause must be from amongst physically existent states that exist concurrently, were the immediate predecessor of the subsequent state in sequential order, and had a spatial position which enabled a direct physical influence on the 'changed' state (ontologically it is a different state which when compared with a previous one reveals difference).

      Paul

      Dear Paul,

      The difficulty is that you and I have different assumptions about what is fundamental. I regard "what happens" as fundamental and view "spatial sections" or "antichains" or "Cauchy surfaces" as ways of talking about collections of events that don't interact. You regard "what is" as fundamental and regard "what happens" as a way of talking about "changes" in "what is."

      I don't know which view is right; maybe they both are! Surely the fact that I proposed the possibility of duality between the two (see my September 19 remark on Daniel Alves' thread) shows that I'm open-minded about the matter. However, I think that your view faces serious physical challenges.

      The relativity of simultaneity invoked by Einstein seems to be a feature of the real world, and this seems to preclude a "naive" notion of an external time parameter. You can choose a time parameter, but it will be different for different observers, while the causal structure is invariant. Any number of different ways of slicing things spatially are equally valid, so taking a spatial slice to represent "fundamental reality" puts one on very shaky ground.

      I use the qualifier "naive" because something like a preferred family of frames could arise from much more detailed considerations, and this is precisely the point of the whole discussion of Lorentz invariance violation. Once again, I'm open-minded about this; recall that I reject the symmetry interpretation of covariance (which puts all frames on an absolutely equal footing). But there's a big difference between rejecting the absolute equivalence of all frames and insisting on the priority of a single universal frame.

      A lot of this comes down to philosophical or aesthetic choices about what "makes sense" or "looks right." Ultimately nature has no respect for such choices, however, so it's better to take very seriously what nature seems to be saying before making them! Take care,

      Ben

      Dear Steve,

      Just to be clear, I'm not proposing a multiverse in the sense of the string theory landscape, but only in the sense of the superposition principle. In ordinary quantum theory, the sum of any two states (e.g. particle states) gives you another state, but in a truly background-independent ("single substance") theory, the different particle states result in different "spacetimes." That's all I mean by the "classical universes." They interfere to give a single quantum universe. A relevant motto for this (and in some ways, quantum theory in general) is "E pluribus unum," i.e., "out of many, one." Take care,

      Ben

      • [deleted]

      Ben

      As per a post below, spatial relationship of concurrent phenomena is a critical condition in causality. Put simply, something cannot have a direct physical effect on something else, and thereby cause (or contribute to) the next something in the sequence, unless spatially adjacent (and concurrently existent). Physical influence cannot jump a physical circumstance. And causality can only proceed 'one go at a time'. Obviously if not adjacent it can exert influence indirectly, but then so does everything, because everything is ultimately interrelated, so that concept is meaningless.

      A physically existent state, ie a unique condition of the substance, is fundamental. It is what exists at any given time. That is, it is "what actually happens" at a given time. When compared, the subsequent one in the sequence is revealed to have differences. The question then being what changed and why?

      Causality is entirely conditioned by considerations about spatial relationship, because there is only spatial relationship in any given physical reality. Any concern about sequence order revolves around ensuring that nothing other than concurrently existent phenomena are considered as potential sources of the cause, for the very simple fact that something cannot have physical influence on something else unless they are concurrently existent. Something cannot affect something which has ceased to exist or has yet to exist. Indeed, whatever was causal must be in the preceding existent state to that which involves what is deemed to be the effect, and must have ceased to exist when that became existent.

      Paul

      Paul,

      That is certainly how physics has viewed causality and locality in the past, from Newton worrying about action at a distance all the way to Bell/EPR. But I think that puts the cart before the horse. In my view, classical locality should be defined in terms of causality. If two things interact, then they are local. That is much of the motivation for the causal metric hypothesis. The "metric" aspect of "spacetime" is a way of talking about the ways in which things do, in fact, interact.

      I agree that classical action at a distance is absurd. But I don't propose to solve the problem by invoking some imaginary metric manifold with just the necessary properties to arrange all the interacting events I see. Instead, I take the interactions at face value, and ascribe our notion of locality to the interactions themselves.

      You require two structures: one structure is given by the events themselves, which is what we actually observe, and all we ever can observe. The other structure is an imaginary "shadow space" that organizes the events.

      I propose to do away with the shadow space and recognize only one structure. The events organize themselves, and the metric properties of the shadow space are just a way of talking about how they interact. Take care,

      Ben