Akinbo,

That your digital wristwatch outputs a discrete value doesn't mean that it operates by the quantum principle of superposition. The values on the dial of your analog clock completely correspond to the digital output, as a continuous record of elapsed time.

"ALL the experiments and postulates of Special relativity were not carried out in a straight line! According to GR, no line on earth is straight! Maybe only slightly curved but definitely not straight."

Mathematically, a straight line is a special case of a curve. Light always travels (by Fermat's principle of least time) along the straightest line it can. A light ray leaving parallel to the Earth's surface travels essentially straight out into space; Earth's gravity is far too weak to bend it measurably. In the case of Einstein lensing, however, a light ray passing a very strong gravity field, such as the sun's, is bent ever so slightly, so that the source appears slightly displaced in the sky from its real position. The corrections you're worried about don't matter except at relativistic distances and speeds. Otherwise, Newtonian physics works just fine; time and space can be treated as if they were absolutely flat and straight.
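
(For scale, here is a quick back-of-envelope check using standard constants, nothing specific to any model in this thread: the deflection of a ray grazing the sun, alpha ~ 4GM/(c^2 b), comes out near the famous 1.75 arcseconds. A few lines of Python reproduce it.)

# Rough check of solar light deflection, alpha ~ 4*G*M / (c^2 * b).
# Standard constants; b is taken as the solar radius (grazing ray).
G = 6.674e-11        # m^3 kg^-1 s^-2
M_sun = 1.989e30     # kg
c = 2.998e8          # m/s
b = 6.963e8          # m, solar radius
alpha_rad = 4 * G * M_sun / (c**2 * b)
print(alpha_rad * 206265)   # radians -> arcseconds, ~1.75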

"And there was no quantum proposition as at the time SR and GR were formulated."

There certainly was. And it was due to Einstein himself, through his work on such things as Brownian motion and the photoelectric effect. See Einstein, 1905, "On a Heuristic Point of View Concerning the Production and Transformation of Light."

Best,

Tom

John,

The notion that clocks are irrelevant is equivalent to collapse of superposition. Time drops out of the quantum mechanical equations.

Best,

Tom

Tom,

I realize this cross references GR and QM, but does that mean "blocktime" is collapsed to a point?

In other words, you referred to the scale of time we see looking out across the cosmos as the vector of time, while collapsing the superposition is measuring the location of the particle at a point...

Wouldn't that mean you are also collapsing the measurement of space as well, considering that space is "collapsed" by gravity and lots of particles are "measuring" each other?

This goes back to my previous observation that quanta (of energy) are not necessarily point particles, but logically expand to fill their container. So measuring them is a way to confine the container. (As would balancing them with an opposite energy.)

So essentially the detector screen in the two-slit experiment does collapse their superposition, but the setup will affect how it is collapsed.

Regards,

John M

Eckard,

They would seem to be in his camp, but he would be providing ammunition for their future.

Regards,

John M

" ... does that mean 'blocktime' is collapsed to a point?"

A plane, not a point.

And since there are as many points in any plane as there are in the entire universe, one should be able to see that all instantaneous events in the block time universe are included.
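
(That point-counting claim is just Cantor-style cardinality: roughly speaking, interleaving the digit expansions of the coordinates pairs the points off one-to-one, so a plane, ordinary 3-space, and the real line all have the same number of points, 2^aleph_0.)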

Best,

Tom

Tom,

Is that the plane of the present, separating past from future?

Regards,

John M

Why do you think the present separates past and future, John?

(As opposed to uniting past and future?)

Tom,

That might be a more appropriate way of describing it.

Wouldn't this present as plane contradict the idea of no simultaneity?

I know one is QM and the other is GR, but doesn't that raise the issue that both are models of reality? If so, it's not a problem when their various approximations conflict, as opposed to both being parts of some platonic super-structure, where the parts' failure to fit together creates deep metaphysical angst.

(Yes, physical present as plane, whether connecting or dividing past from future, is a rough model, not some elemental aspect of reality. A plane has no depth....)

Regards,

John M

John,

"Wouldn't this present as plane contradict the idea of no simultaneity?"

No, but one would need to understand the idea of covariance to get a proper understanding of relative instantaneous values.
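
(To make "relative instantaneous values" concrete, here is nothing more than the textbook Lorentz transformation with made-up numbers: two events that are simultaneous in one frame are not simultaneous in another.)

# Two events with dt = 0 in frame S but spatial separation dx, seen from
# a frame S' moving at v = 0.6c: dt' = gamma * (dt - v*dx/c^2).
c = 3.0e8                  # m/s
v = 0.6 * c
dx, dt = 300.0, 0.0        # metres, seconds (illustrative values)
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
dt_prime = gamma * (dt - v * dx / c ** 2)
print(dt_prime)            # ~ -7.5e-7 s: not simultaneous in S'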

Best,

Tom

John C,

Ground up is OK, but solid foundations are much harder to find. I'm not sure I grasp your thought process but think I know what you mean, and I agree, all is relative.

But I've found a far simpler solution works. Come down 6 storeys with me to bedrock and check it out.

Let's assume the ions we find everywhere in a vacuum, the foundation of matter, each have: 1) a rest frame; 2) the job of keeping EM fluctuations going; and 3) the property of doing so at one speed (relative to the frame of each), whatever the relative 'arrival' speed. We'll call the emission speed 'c'. Each bunch of relatively at-rest particles then forms a 'discrete field' (model - DFM).

Now play with that scenario in your mind for a bit and see if you can stop it resolving every single anomaly and paradox in physics. If you have any dynamic visualisation skills I predict you will fail completely. But careful where you point it; all confusion and stupidity melts away. (But it won't go through the deep shifting sand of oblivion the heads are buried in!)

As you say, all velocity less than c is relative, but so is c itself. You can't measure something without interacting and changing its speed to max c.
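
(For what it's worth alongside the DFM picture, the standard composition rule already behaves that way: compose any speed with c, using an arbitrary illustrative v = 0.9c, and the result is still exactly c. A short Python check:)

# Relativistic velocity addition: w = (u + v) / (1 + u*v/c^2), in units of c.
c = 1.0
v = 0.9                       # speed of the other frame (illustrative)
for u in (0.5, 0.99, 1.0):    # 1.0 is light itself
    w = (u + v) / (1 + u * v / c ** 2)
    print(u, round(w, 4))     # u = 1.0 gives w = 1.0 exactly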

Let me know how you get on. My last 3 essays build it up to above ground level.

Best wishes.

Tom.

Ooops, what a Freudian slip! Those shifting sands of oblivion meant I forgot my own name for a moment!

Peter

OK Pete, I see what your meaning of discrete field implies. That is pretty much how I visualize things, and why it's so difficult to keep any kind of generality of spacetime from coming apart at the particles.

A solution to the problem of how to account mathematically for a quantity of energy precipitating into a rest mass with a continuous variation of density, in accord with the inverse square law yet not reaching infinite density at the center of mass (and consuming the entire energy quantity), can be achieved by first defining inertia in general terms. Rather than an operation between two masses, what is it about mass that exhibits inertia that is essentially the same thing for any mass? If the answer is that some portion of the energy will exist at a density that is proportional to the total energy quantity, then that proportion would be true of any mass. That portion would exist at a constant density in a central volume, and thus prescribe a finite quantity of energy, while the greater part of the energy quantity is distributed outward at a continually decreasing density down to a limit density of coherence. That distribution can be plotted along a radius as an exponential function of deceleration from 'c' at the outer limit down to nil extension of time at the horizon of the relative inertial density core, so the distribution of energy quantity to density variation in a spherical volume resolves as an exponential root of 'c'.

My thinking about why light velocity is that specific velocity is pretty vague, but we tend to see it as an upper limit we have arrived at. On the other hand, it is the velocity where spacetime becomes distinguishable, where the electric and magnetic field strengths are indistinguishable, where acceleration stops. From there on down to relative rest, things become differentiable, space becomes Euclid Square, time becomes Then and Now, electric charge becomes select. That's a specimen in a jar, but I might be getting to where I can hit the jar. jrc

    Hi John,

    Good points to raise. Re http://phys.org/news/2013-09-mathematics-effective-world.html :

    1) "And that is Abbott's main point (and most controversial one): that mathematics is not exceptionally good at describing reality":

    Clearly regularities, i.e. information categories, information relationships and information balance, exist at all levels of reality. But mathematical symbols are best for representing the regularities found in simple fundamental reality: I think with more complex reality it can get too difficult and unwieldy to attempt to precisely represent the regularities with mathematical symbols, or even to uncover the regularities in the first place. Recognizing and representing the regularities of simple fundamental reality with mathematical symbols (law of nature equations) has allowed us to send rockets to the moon, and build modern bridges.

    But clearly, this fundamental underlying regular structure of reality (which we represent with mathematical symbols) is not as extensive as the mathematical extremists would have us believe - reality does NOT consist of all possible structures: reality is more like a selection made out of all possible structures.

    2) "Einstein, a mathematical non-Platonist, was one scientist who marvelled at the power of mathematics. He asked, "How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality?" ":

    Einstein's error seems to be this: "human thought...is independent of experience". Human thought is NOT independent of experience; human thought is NOT independent of the "objects of reality": human thought is MADE OUT OF the "objects of reality". Clearly, this is why "mathematics...is so admirably appropriate to the objects of reality". Mathematics merely symbolizes and generalizes the basic subjective "objects of reality" like information categories, information relationships and information balance. I think it is clear that even the supposedly-Platonic numbers are nothing more than a type of information category relationship - i.e. they are somewhat similar in structure to "laws of nature".

    3) Mathematical symbols can be used to represent aspects of reality, especially fundamental reality. But the nature of reality is not a mathematical universe, but a universe of information, categories, relationships and balance (AND choice). I.e. reality is more like a living thing than a dead mathematical structure.

    What do you think: a living universe or a dead universe??

    Cheers,

    Lorraine

    Lorraine,

    I think a more immediate issue isn't just whether we can effectively use symbols for concepts and manipulate them (rather than just using symbols for phonetic vocalizations, as with natural languages), but how belief systems quickly come to dominate how those symbols are to be used.

    One of my main arguments here is that time is not the vector from past to future, but the process by which future becomes past, which makes it similar to temperature: time is to temperature what frequency is to amplitude. Now we have a brain, divided into two hemispheres, that reflects this relationship, with a linear, time-like left brain and a scalar (non-linear), thermostat-like right brain.

    Instead we are forced to believe nature is composed of a three-dimensional space-vector system, with a narrative vector added on, because as mobile points of reference that is how we experience reality. Meanwhile the whole side of our brain built for non-linear thought processing gets dismissed as intuition and emotion, for not recognizing this religion. Intuition does incorporate new information. We still see the sun moving across the sky, but can intuitively understand that it is the earth rotating in the opposite direction. Baseball players and physicists have different intuitive responses. So now we have spent generations pursuing reductio ad absurdum arguments and ending up in many worlds, because physicists can never be wrong.

    Regards,

    John M

    John RC,

    "difficult to keep...spacetime from coming apart at the particles." Impossible I'd say. 'Spacetime' means quite different things to different people. After all the nonsensical interpretations that many still cling to Einstein ended up precisely where Minkowski started;

    "not 'space' but infinitely many 'spaces' in relative motion" This gives the discrete inertial system or 'field' model (DFM). The particle interactions then divide them.

    I couldn't penetrate far through the haze into your sample jar, so it's best to offer some simplifications which I think are compatible with that and the above.

    Inertia is simply gyroscopic. It's then not bizarre to find a ton weight accelerating under G at the same rate as a pea. Imagine a ton of spinning gyroscopes fixed to a framework, and beside it one tiny pea-sized gyroscope. Which is easier for you to accelerate by pushing? Correct, the tiny one. So why do we expect gravity to do the opposite and accelerate the big one faster?
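
    (For comparison, the standard Newtonian bookkeeping gets the same equal rate simply because the test mass cancels, with no mechanism offered at all: a = F/m = (1/m)(GMm/r^2) = GM/r^2, the same for the ton and the pea.)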

    That shows the fundamental errors in all our assumptions. Errors we've become so familiar with that most are unable to challenge them. I did, and a very simple model emerged, but it seems most can't, so confusion remains.

    If you wish to see the model you only need to read the essays. The limit on propagation speed of EM waves is then simple and relates to minimum wavelength gamma, which is at optical breakdown mode plasma density. (I'll post the link to the paper on that if you wish).

    Best wishes

    Peter

    PS. It will make perfect sense to you when you read it, but if you don't then also 'rehearse' it, the whole dynamic will evaporate, as our neural networks don't have a pre-set default mode to 'hang it on'.

    Akinbo,

    Light does c through all matter systems wrt the rest frame of the matter. Simple as that. The G field then has only a secondary effect, because it affects particle density. That resolves Q1.

    "The concept speed is inherently relative, but ONLY relative to the matter it propagates in (including lenses)." These are then local 'discrete frames' or fields.

    Q2: What is absolute? Time rate locally, NOT emitted signals universally. Propagation speed, but again LOCALLY, wrt the propagating medium only. Space, which may be quantized, is then logically modelled as a fluid, or multiple sets of two frames (2-fluid plasma).

    You ask: What is a 'good' vacuum? You should know a perfect vacuum cannot exist. A good vacuum has insignificant massive particles at n=>1. It will always still have free electrons at n=1, but though their coupling constant is high they are of course 'invisible' spectroscopically. If there are a lot of them and they are blowing across the light path they'll 'drag' the light (actually "rotate the optical axis of re-emissions" is the more precise description). This is the well-known kSZ effect, also giving elliptical polarity and Faraday Rotation (IFR), all 'anomalous' under current theoretical assumptions.
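
    (For scale, and reading n there as the refractive index: the textbook cold-plasma figure, nothing model-specific, says that an assumed density of one free electron per cubic metre gives a plasma frequency of only ~9 Hz, so at optical frequencies the index departs from 1 by roughly one part in 10^28, which is one way of quantifying 'invisible spectroscopically'.)

import math

# Cold-plasma refractive index: n = sqrt(1 - (f_p/f)^2), with
# f_p = (1/2pi) * sqrt(n_e * e^2 / (eps0 * m_e)); SI constants below.
e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12
n_e = 1.0                         # electrons per m^3 -- an assumed 'good vacuum' figure
f_p = math.sqrt(n_e * e ** 2 / (eps0 * m_e)) / (2 * math.pi)
f_opt = 5.0e14                    # Hz, visible light
departure = 0.5 * (f_p / f_opt) ** 2    # 1 - n, to first order
print(round(f_p, 1))              # ~9.0 Hz plasma frequency
print(departure)                  # ~1.6e-28: the index is 1 to within ~1e-28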

    Make sense?

    Peter

    Tom,

    I agree SR's maths may be 'complete'. But it's clear that it doesn't completely describe nature. SR as finally defined by Einstein, however, reduced to the postulates, does so perfectly, with the soupçon of uncertainty inherent in the 'local' quantum mechanism and the LT.

    All empirical evidence supports that definition of SR. It does not support the assumed mathematical description beyond that. If you think you can suggest some evidence that does so please specify it and I'll show you where the wrong assumption lies.

    Peter

    Pete,

    Actually I'd like to have a link to your papers; it looks like what you are developing is what gets too complex for me after the rationalization of the volumetric determination for rest-mass energy distribution in what I've modeled.

    Beg pardon... the brief synopsis of the method for determining distribution to density variation was just that. It starts with an (intentionally) simplistic model that is 'ballpark' parametric, to establish density values for magnetic response; then G as a 'c' proportion lower density, which is assumed to be the limit of cohesive coherence of energy; a 'c' proportion greater density than magnetic, with the characteristic of translating electric response; and another 'c' proportion greater density, with an assumed characteristic of inelasticity as the kinetic particle boundary. The parametric particle volume is then rationalized to a 'real' base radius by theory terms applied to a formulation of Coulomb's Law, and then the exponential radial set of density difference equations is applied. The differentiation comes from the inertial density in relation to the kinetic density. If inertial > kinetic, the result is particulate; if inertial < kinetic, the electrical density allows the charge to expand to fill its container. In this manner the EM spectral range is equated the same way mass accumulation in particulate matter is equated.

    In the range of particulate matter (the model produces a rationale for an upper limit of stable mass quantity at 263.11 amu), the difference between inertial and kinetic density, and the volumetric requirement of energy quantity at a constant density in the central core, results in the same "Incredible Shrinking Area of a Surface of a Volume" that Tom keeps trying to get people to understand is what is meant by 'curvature of space'. It's not that GR removes the field from the volume; it just uses acceleration instead of force to define its size.

    If I get around to dusting it off, I'll be looking at what appears to be an optimal differential that results in some mass quantities, and the volumes those quantities prescribe, having a greater propensity than other quantity values. It's been 26 years and nobody but Reagan's SDI was interested; the best I could do as an amateur was to make sure that the D.O.D. knew first that I was tinkering on something, so nobody could use it to sneak into DARPA.

    Your own efforts in treating discrete fields in aggregate get into a complexity that naturally must treat inertia as an operation between masses. The resemblance to all those studies Einstein made of Brownian motion is what always dissuades me from attempting the math. But the overriding conclusion is that in aggregate, domains develop, and in relation to any dominant gravitational frame, gram molecular weight is one thing but size matters.

    NASA has some very interesting results from Voyager's transition to deep space you might want to look at. jrc

    p.s. I really have to get started on a noisy wheel bearing.