Hi Yuri

I had not noticed your post; I am sorry for that. I have considered the problem, and Vesselin Petkov has a view on it, though it is a little bit different.

I will take a look at your work as soon as possible.

Best regards and good luck in the contest

Israel

Daryl,

How is it that space is expanding, yet a constant metric, the speed of light, is used to measure it?

If two galaxies, x lightyears apart, were to grow to 2x lightyears apart, that is not expanding space, but increasing distance in stable space.

Also, the effect attributed to expansion is balanced by the contracting effect attributed to gravity, so it would seem space is like a rubber sheet that, when pushed in at one spot, expands out in a corresponding manner in other areas. Since the overall result is flat space, whatever model is used, there doesn't seem to be any logic in assuming the universe as a whole is expanding.

As for dark energy: if this expansion is a form of cosmological constant, originally proposed to balance gravity, then we are only seeing the light that travels between galaxies, and thus through those expanded areas, where the effect compounds, creating the impression that the farther the source, the faster the recession. Since it would be a constant effect of space and not a residual force from an initial event, there is no need to explain why it doesn't slow down at the rate prescribed by Big Bang theory, and so no need for dark energy.

Israel,

The original formulation of "tired light" was still based on the notion of light as point particles, and it was considered that they must be slowed by encountering some medium, but there was insufficient scattering to show this. The notion of light as a wave that expands was not part of the original refutation.

Since redshift is proportional to distance, some form of lensing effect makes the most sense. In this regard, the cosmic background radiation, which is observed originating from the edges of the visible universe, is the logical solution to Olbers' paradox, as it would be the light of stars over the horizon line of visibility, redshifted completely off the visible spectrum.

Just because galaxies are redshifted must mean they are moving away, would be like assuming gravitational lensing actually causes the source to move around at fantastic speeds, not that the light from that source has been bent.

Hi Israel,

The term "recessional velocity" is given as synonymous with redshift. While this is somewhat true (according to the expanding universe model, since physical distances between galaxies should increase with time in an expanding universe) it is also quite misleading since redshifts are supposed to be caused solely by the expansion of space in which the galaxies are supposed to be all at rest, remaining forever at the same coordinates; i.e., it's not the galaxies that fly apart, but space itself that expands. Furthermore, the idea of redshifts arising due to an actual recessional velocity makes no sense at all when values are often larger than 1.

Slipher initiated a programme for measuring redshifts from spectra of the "spiral nebulae" in 1912-1913, but even in 1917 there were still only three reliable measurements available (one blue-shifted). It was only in 1922, when his extended list was published in Eddington's book, that there was any kind of reliable evidence for expansion, as they really were predominantly redshifted. But then it was only when Hubble confirmed that the redshifts actually increase linearly with distance (in our neighbourhood) that the expanding universe idea found serious support. Please try to understand Eddington's account of this. He's saying it's not that the desks are actually flying apart, but the space between them is expanding.

But we're way past the initial indication now, and we've got very reliable measurements of redshifts well above z=6. Those can't be due to actual recessional velocities of objects moving through space (more than six times faster than the speed of light!), and the idea that space itself expands, with galaxies consequently "receding" through the growth of physical distance, makes a lot of sense as an explanation of that phenomenon. The light is supposed to be redshifted as the wavelengths of photons increase while travelling through expanding space. But still, none of this yet touches on general relativity explicitly.
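
To put a number on the parenthetical above, and only as an illustration: in the naive kinematic reading z = v/c, so z = 6 would correspond to v = 6c; the special-relativistic Doppler formula, 1 + z = sqrt((1 + v/c)/(1 - v/c)), would keep v below c for any z, but that is not what is meant here either. In the expanding-space picture, 1 + z is simply the factor by which distances have grown between emission and observation, with no velocity through space involved.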

Hope this helps,

Daryl

John,

"If two galaxies, x lightyears apart, were to grow to 2x lightyears apart, that is not expanding space, but increasing distance in stable space." You can't explain redshifts greater than 1 in such a model. Please read my last post, which I was writing at the same time as you.

"Just because galaxies are redshifted must mean they are moving away, would [be] like assuming..." And that's why it took Hubble's confirmation of a *redshift-distance relation* to provide the convincing evidence for expansion. Not everyone agreed already in 1922 that the Universe is expanding.

Daryl

Daryl,

That's my point. There is still the assumption of a standard metric. Yes, beyond z=1, it doesn't make sense. If the theory says that in billions of years these galaxies will be so far away that their light can no longer reach us (i.e., assuming a standard speed of light), what is the basis of this standard, if the very fabric of space is being stretched? If the speed of light is the most basic measure of intergalactic space and this space is actually being stretched, wouldn't necessary proof of this be that the measure itself is also stretched? Otherwise it is just expanding distance in these standard units.

Of course, then if the galaxies were always x lightyears apart, because the speed of light increased to match the stretched space, then the universe wouldn't appear to expand!!!

As they say, can't have your cake and eat it too.

An increasing redshift-distance correlation would be symptomatic of a lensing effect, as it would compound over distance, since it would further magnify what had already been magnified.

An interesting article from the point of view of an engineer who had to work with cosmologists.

Israel

You said: ''1) If one assumes space not as empty vessel but as a fine continuous massive fluid (aether or quantum vacuum as you prefer to name it) then motion of physical objects turns out to be truly relational. If one recognizes this it is quite legitimate to grant this type of space the status of a PSR.''

What I have tried to argue with the snapshot example is that it is not legitimate to grant this type of space the status of a PSR. That is because you cannot identify space points with field values. You can imagine 2 configurations of the universe with field configurations ''translated 10 meters to the left'' in relation to absolute space. What absolute space does is define an equilocality relation between those snapshots, and the presence of a quantum field does not solve the problem. This argument is not new, but is due to Julian Barbour.

It could be solved if there were any procedure by which we could find our ''preferred position'' (in relation to the PSR), but experimental evidence, at least to the extent that I know, has never produced such information.

You said: ''I see no problem assuming space as the PSR in as much as this helps us to solve the problems that we have. I would say that it is premature to affirm that my proposal would produce statements that can never be verified.

If you could put onto the table some specific examples I would appreciate it.''

I agree that we may introduce a PSR if it is useful, but the statements that cannot be verified that I mentioned are precisely those about a ''preferred position''. No experiment has ever revealed a ''preferred position'', but a theory built upon a PSR would necessarily refer to such positions (I can't see how it could be done otherwise; if you have any idea please tell me). So this is why I concluded that the concept of a PSR must be REALLY useful if we are going to introduce it.

You have said:

''The fact the special relativity (SR) has no preferred frames forbids us to state that relativistic effects "REALLY" occur. The words "ACTUALLY" or "REALLY" in SR are prohibited since this implies accepting a PSR.''

I don't quite understand that. In relativity there is only one space-time manifold, but different bases in which we may write 4-vectors and so on. So yes, time dilation DOES occur (see the experiment where clocks on the earth and in an airplane measure different intervals for a round trip around the earth), length contraction DOES occur.
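
(For concreteness, and just as the standard textbook statement: a clock moving at speed v with respect to an inertial frame accumulates time at a rate reduced by the factor sqrt(1 - v^2/c^2), and a rod moving along its length is shortened by the same factor; the airplane experiments I mention are consistent with this, once gravitational effects are also included.)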

And finally:

''As you noticed this picture appears, at first sight, not to be satisfactory as an explanation of some other gravitational effects such as clocks. However, we may elaborate further this idea making some valid considerations''.

I agree with you in this point.

Best regards, and thanks for an exciting discussion.

dear Israel

it was good advice when you suggested, in commenting on my essay, that I should have a look at your essay

I really enjoyed reading it. We have different views on preferred frames and relativistic theories, but we share the intuition that there are interesting issues at the interface between fundamental physics and philosophy of science

best wishes for the competition

Giovanni

    Dear Daryl

    This is part 1.

    Unfortunately you did not answer any of my questions. Your reply was only alluding to the explanation of the redshift according to the expansion model. I asked the questions because I am interested in understanding, above all, the rationale that led physicists and astronomers to reach the conclusion that distant galaxies were moving away from us.

    I have some papers that date back to 1913-1917. The first one (1913) is due to Slipher. The title is: The radial velocity of the Andromeda Nebula. He reported four measurements made in the fall of 1912. The average is -300 km/s. The minus sign means approaching, or blueshifted. It seems that this was the very first estimation of the velocity of a nebula. He then concluded:

    That the velocity of the first spiral observed should be so high intimates that the spirals as a class have higher velocities than do stars and that it might not be fruitless to observe some of the more promising spirals for the proper motion. Thus extension of the work to other objects promises results of fundamental importance...

    Now, I would like to focus on two points. The first is in regard to the link between the frequency shift of spectral lines (i.e. blue or red shift) and the radial velocity of stars and nebulae. The second issue has to do with the conclusions drawn from this correlation.

    One of my pivotal questions was: how did astronomers calculate the radial velocities of galaxies? This question is equivalent to asking: under what theoretical and conceptual framework were the calculations of velocities performed?

    The answer dates back to the end of the 19th century. Astronomers did not directly measure velocities v; the data they really obtained were spectra of the light emitted by the luminous object under study. They realized that the corresponding spectral lines were shifted with respect to a reference spectrum. The theoretical framework they used to link the frequency shift df with the velocity of an object was provided by the well-known Galilean Doppler effect (DE). Indeed, on the basis of this relationship the most NATURAL inference to make from the evidence is that objects either approach or move away. Thus, by the end of the century astronomers were routinely and successfully using the DE to estimate the velocity of celestial bodies just by paying attention to a shift in the spectral lines. Starting in 1905 the aether was rejected and the DE was generalized to the relativistic case. So, with no aether in mind, astronomers continued to make the same inference of radial velocity upon noticing a df corresponding to a celestial body.
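
    To make the procedure concrete (a minimal illustration, not a quotation from any of the papers): for shifts small compared with unity the Doppler relation they applied is simply

        v ≈ c (Δλ/λ) = c z,

    so Slipher's -300 km/s for Andromeda corresponds to a fractional blueshift of about Δλ/λ ≈ -0.001.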

    On the other hand, before 1908 astronomers used to estimate distances by the parallax method. This method, as we know, is limited to some parsecs (probably some hundreds). From 1908-1912 Henrietta Leavitt overcame this problem by means of the Cepheid variable method. With these tools astronomers were able to estimate distances of objects of the order of thousands and even millions of pc. In 1915 Slipher published another article entitled: Spectrographic observations of nebulae. Here he reported the results of studies of 15 spiral nebulae: two of them with negative velocities (approaching), one unknown, and the rest with positive velocities (moving away). In 1915-16, G. Pease also published articles in relation to the radial velocities of nebulae. In 1917 Slipher reported the study of the radial velocities of 25 nebulae estimated from 40-50 spectrograms, i.e., roughly two measurements per nebula. He found that 21 had positive velocities and 4 negative velocities. The positive velocities ranged from 150 km/s up to 1100 km/s. From these data he concluded that the average velocity, 570 km/s, is about 30 times the average velocity of the stars, and much greater than that known for any other class of celestial bodies.

    Let's halt for a moment to make a brief analysis of the previous statements. So far, all the calculations were carried out based on the DE, and therefore the conclusion that the galaxies are approaching/moving away follows naturally. The important point to stress here is that the majority of galaxies appear to be moving away. This fact could be taken as an argument to support the hypothesis that nebulae are not part of the Milky Way. The other crucial point is that this evidence begins to suggest that, if most of the nebulae are moving away, it is probable that we are at the center of the universe, or of some explosion. This is one of the most natural conclusions on the basis of the prevailing conceptual-theoretical framework of that time. And therefore astronomers had some conceptual elements with which to conceive the idea of space expansion.

    By 1916 Einstein had met de Sitter in Holland. Each of them proposed a model of the universe. Einstein supported a static universe and de Sitter an expanding one. Both universes were unstable, but the de Sitter model required that the average density of matter be close to zero. One of the peculiarities of this model is that it predicted a frequency shift towards the red as a function of the expansion of space. Actually, they interpreted this not as an expansion of space but as an increase of distance in the sense of a Euclidean space, which within the context of special relativity is equivalent to saying that the galaxies are moving away. However, the astronomical evidence was not enough to settle the issue. In 1917 they published their results, as you cited in your essay.

    to be continued...

    Israel

    This is part 2

    Three years later, in 1920, the Shapley-Curtis Debate took place and in 1922 and 1924 Friedman published his solutions.

    During this period Eddington appeared on the scene (1923). From the paragraphs you quoted it can easily be grasped that astronomers had by then estimated a considerable number (80) of radial velocities as well as the corresponding distances. Eddington said: ...the results seem to agree very well with a linear law of increase, the velocity being simply proportional to the distance [this is of course Hubble's law]. Then in 1927 Lemaitre put forward his expanding solution and, finally, Hubble made his report in 1929 with more reliable data.
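
    (Expressed as a formula, the linear law Eddington refers to is what we now write as v = H_0 d. Purely for illustration, with the modern value H_0 ≈ 70 km/s per Mpc, a galaxy at 100 Mpc would be assigned a radial velocity of about 7000 km/s; Hubble's own 1929 value of the constant was several times larger.)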

    Here it is valid to question how physicists came up with the idea of correlating distances (d) with v (which, judged in retrospect, appears to be wrong), but whatever the reasons were, it is evident that the conclusions astronomers such as Eddington were reaching were based on the kinematics of special relativity. Hence radial velocities can only have meaning within this framework and consequently have NO single relationship with the notion of expansion. It is worth noticing that Eddington had already realized that it is quite strange that most galaxies appeared to be moving away from the sun, following the more or less linear relation between d and v. So, if we insist on following this line of thought, the picture one would arrive at is that our galaxy is at the center of some sort of explosion and, as you contend, it could seem natural to entertain the hypothesis of expansion. The previous analysis has revealed to us the error in the conceptual reasoning. The mistake was to consider that the df is proportional to v at any value of the distance. Slipher, Eddington, Hubble, etc. were following inductive reasoning in believing that the same physical interpretation granted to the case of planets and nearby stars also applied to distant galaxies. At cosmological distances this criterion is no longer plausible.

    In the following paragraphs I elucidate how physicists made the connection between Hubble's law and expansion. To this purpose I shall quote what Einstein wrote in his little book Relativity: The Special and General Theory (in a passage added to later editions, after Hubble's observations). There he proposed two hypotheses to state his arguments as to the cosmological problem:

    My original considerations on the subject (cosmological problem) were based on two hypotheses:

    (1) There exists an average density of matter in the whole of space which is everywhere the same and different from zero.

    (2) The magnitude (radius) of space is independent of time [not expanding].

    However, already in the twenties, the Russian mathematician Friedman showed that a different hypothesis was natural from a purely theoretical point of view. He realized that it was possible to preserve hypothesis (1) without introducing the less natural cosmological term [lambda] into the field equations of gravitation, if one was ready to drop hypothesis (2). Namely, the original field equations admit a solution in which the world radius depends on time (expanding space). In that sense one can say, according to Friedman, that the theory demands an expansion of space.

    A few years later Hubble showed, by a special investigation of the extra-galactic nebulae (milky ways), that the spectral lines emitted showed a red shift which increased regularly with the distance of the nebulae. This CAN BE INTERPRETED IN REGARD TO OUR PRESENT KNOWLEDGE only in the sense of Doppler's principle, as an expansive motion of the system of stars in the large -- as required, according to Friedman, by the field equations of gravitation. Hubble's discovery can, therefore, be considered to some extent as a confirmation of the theory.

    The last paragraph is the key to understanding how Einstein (and many other theoreticians and astronomers) linked Hubble's law to the new theoretical framework (TF) provided by the Friedman solutions and, in general, by the GTR. In the Friedman solution, similarly to the de Sitter case, the df is directly related to expansion. I intentionally emphasize CAN BE... ...KNOWLEDGE with the aim of stressing the fact that these interpretations are bound to the TF in which the Doppler effect is embedded. So, Hubble's law expressed as a correlation between v and d is meaningless and even misleading within the context of expanding spaces (Friedman, FLRW, etc.). Under the expansion programme Hubble's law has a straightforward meaning only as a relation between df and d. The rest is the story that we all know today: big bang, dark energy and dark matter, CMBR, etc.
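
    To state the distinction in formulas (this is just the standard relation of the expansion programme, not anything new of mine): within the Friedman-type solutions the observed shift tracks the scale factor a(t),

        1 + z = λ_observed / λ_emitted = a(t_observation) / a(t_emission),

    whereas the kinematic reading v ≈ c z is only an approximation valid for z much smaller than 1. This is why I insist that, under expansion, Hubble's law is meaningful only as a relation between df and d.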

    to be continued

    Israel

    This is part 3

    Ok, I understand that expansion is one explanation of df vs d. Then, what do I argue? Based on the preceding historical discussion, I ask: is expansion the only possible explanation? If, according to the conclusions drawn from the essay, the aether is reconsidered, the answer is clearly no. As we have seen, all that is required is a df that grows as the distance increases. The theory of waves can easily reproduce this, and it tells us that the larger the distance, the more energy is dissipated/scattered by the aether, and therefore light will appear redshifted to an observer on the earth. In this model there is no expansion and space is essentially Euclidean, avoiding in this way the horizon and flatness problems. This also explains Olbers' paradox even if the universe were infinite in extension. The CMBR is not interpreted as relic radiation but simply as the signature of a thermodynamic system in equilibrium. Since the universe had no beginning, it has had enough time to create the chemical elements required to form the stars and galaxies, etc.
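
    Just to illustrate the kind of relation such a dissipative aether would give (the exponential form and the attenuation length L below are assumptions made for this sketch only, not results derived in my essay): if the wave loses a fixed fraction of its energy per unit distance travelled, then

        E(d) = E_0 exp(-d/L),   and so   1 + z = E_0 / E(d) = exp(d/L) ≈ 1 + d/L   for d much smaller than L,

    which already reproduces a linear redshift-distance law at moderate distances, of the kind Hubble reported.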

    This model also implies that not only the determination of redshifts is in need of corrections but also the distances, for the luminosity is a function of the light energy per unit area per unit time. Moreover, this model offers us another great advantage over expansion: since space is essentially Euclidean, we do not have the conceptual difficulties that John Merryman points out. It is not necessary to compute the distance at the time of emission, when space was less stretched, and so forth. We can see that this model is quite simple, explains many problems, is more consistent, and could lead to new insights.

    Israel

    Dear Giovanni

    Thanks for reading my essay and for the comments. You may be interested in joining the discussion above with John Merryman and Daryl Janzen in relation to the physical interpretation of the redshift that led physicists to the idea of expansion and thus to the big bang model, which in turn has led to the present problems in physics. There I explain the misunderstandings, and how the aether assumption can solve most of the present problems all at once.

    Best regards

    Israel

    Israel,

    One possibility that might be worth considering is that light is the medium and waves are the features/information content of this expanding medium, rather than a stable aether as the medium, with light as the waves traveling through it. This would explain why sources are so clear from literally billions of lightyears away.

    What that would mean is that it is the simple radial expansion of volume with distance that causes the light to expand and weaken. So when detectors/telescopes receive a quantum of light from this field, it is a sample of the field, not a particular corpuscular quantum of light which traveled individually for billions of lightyears and thus would be far more prone to scattering.

    Not only would this fit with Christov's paper and the various loading theories of quanta, but in my digital vs. analog essay I point out another factor: since light is received as quanta, past a certain point of luminosity, where there is so little light that it is being received as individual quanta, the loading will take longer, thus stretching out the reception. My analogy was to a dripping faucet. As you close it, up to a certain point there is just a decreasing stream of water, but once it starts dripping, since the size of the drips remains constant, the times between each drip grow longer. If we are treating these quanta as waves, the effect is redshift.
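
    To get a feel for the numbers behind this (all values below are assumed purely for illustration, not taken from any essay or paper), here is a rough estimate of how sparse the photon arrivals from a single Sun-like star become at a cosmological distance:

        # Rough illustration of how widely separated individual photon
        # arrivals become at large distances; all inputs are assumptions.
        import math

        h = 6.626e-34            # Planck constant, J*s
        c = 3.0e8                # speed of light, m/s
        wavelength = 5.0e-7      # assume 500 nm visible light
        E_photon = h * c / wavelength        # energy per photon, J

        P_source = 3.8e26        # assumed luminosity of a Sun-like star, W
        d = 1.0e25               # roughly one billion light years, in m
        area = 1.0               # assumed 1 m^2 collecting area

        flux = P_source / (4 * math.pi * d**2)     # W per m^2
        rate = flux * area / E_photon              # photons per second
        print(f"photon rate: {rate:.2e} per second")
        print(f"mean gap between photons: {1/rate/86400:.1f} days")

    With these assumed numbers the detector receives on the order of one photon every couple of weeks, which is the regime the dripping-faucet analogy is meant to describe.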

    Dear Daniel, this is part 1

    You: What I have tried... ...the problem

    The argument of the snapshot has some issues. The first is that it refers to macroscopic situations. In this sense I agree with you that you cannot tell the difference, because macroscopic objects do not change much from one moment to the next. But at the microscopic level this no longer holds, due to the continuous activity of the vacuum. One unrealistic aspect that I found in the idea of a snapshot is that one cannot take instantaneous pictures of an event, basically for two reasons: first, because the uncertainty principle will play a role in the outcome of the snapshot, and second, because an instant implies an interval of zero time, which is physically inconceivable and experimentally unrealizable (although mathematically it is possible). The idea that a snapshot captures an instant of time is misleading. One can only capture intervals of time different from zero (this is also expressed in the uncertainty relation delta t delta E ~ h). And the problem comes from the mathematical representation of space and time as a continuum. Weierstrass assigned to each value of a variable t a corresponding value of the function x and defined a one-to-one correspondence between an element of a domain and its counterpart in the image set. By doing this he got rid of the problem of infinitesimals (small intervals), which misled physicists into thinking that a physical object can occupy a point in space and time. Mathematically this is correct, but physically it is inconsistent, since physical objects occupy space intervals (volumes) and things occur within time intervals. In a point of time (an interval of zero magnitude) nothing occurs, everything seems to be frozen, and if nothing changes how can we justify motion? If one assumes that space-time is physically continuous (composed of dimensionless points) one arrives at the well-known Zeno's paradoxes. This is a topic I do not wish to address here. And like I said, despite these paradoxes, the continuum conception has been useful so far.

    You: but experimental evidence, at least to the extent that I know, has never produced such information... ...No experiment has ever revealed a preferred position, but a theory built upon a PSR would necessarily refer to such positions (I can't see how it could be done otherwise, if you have any idea please tell me). So this is why I concluded that the concept of a PSR must be REALLY useful if we are going to introduce it.

    There are many experiments claiming the detection of the PSR, but since the PSR is not even recognized by the mainstream of physics they are not widely known. My reference 17 (Eq. 3.14, for instance) shows that in principle the PSR can be detected. There I explain that the measurement of velocities is not as trivial a task as most people think.

    Your arguments to refute the PSR are the same arguments that have been given against the PSR since special relativity was put forward. I have laid down some arguments in my previous reply to you and I think that my essay gives some others. Special relativity has used the principle of relativity to establish that there is no PSR, to argue that there is no privileged observer for the description of physical phenomena. All systems of reference are equivalent. And I think this is misleading. They are equivalent not because there is no PSR but because an experiment gives the same result no matter the state of motion, absolute rest or absolute motion. I am going to express how the principle of relativity should really be understood. Just keep in mind that, above all, physics is not only a theoretical science but also an experimental one. So, imagine an observer equipped with his measuring instruments at rest in the PSR, i.e., in the vacuum/aether. He then performs a series of experiments to find the relations among the different physical quantities. From these results he arrives at the formulation of the laws of physics. He then puts his whole equipment in a rocket and moves at a constant speed in relation to the PSR. Then he performs the same experiments and the same operations in the rocket to find the laws of physics. To his surprise he finds the same laws as those he found while in the PSR. He then arrives at the reasonable conclusion that the laws of physics should be the same in any other system of reference and hence establishes the principle of relativity. So far so good, but here comes the pressing question: what experimental reasons does the observer have to reject the PSR, despite the fact that he cannot identify with his experiments whether he is really at rest or in motion relative to the PSR? One will easily realize that there is no experimental argument to reject the PSR, since he knows that the same laws will be found everywhere. If our friend accepts the existence of any other system, why should he reject the PSR? Do you have an experimental argument to reject it?

    Israel

    This is part 2

    Of course one can argue as you did: if I cannot determine whether I am at rest or in motion, it is meaningless to ask ''what is my absolute position?''. Above all, this is just a technical problem, and it does not imply that the PSR does not exist. In my previous post I asked you to take a look at my reference 17. There I explain, for instance, that the one-way speed of light cannot be experimentally determined and has never been measured. So, if I followed your same line of reasoning, I could argue that the second postulate of special relativity is meaningless because it can never be experimentally verified. Again, the determination of the one-way speed of light is a technical problem, but the fact that it cannot be measured does not imply that (in an isotropic and homogeneous space) the speed of light is not isotropic. In this same article I carried out the calculation of the measurement of the one-way speed of light. I showed that it is necessary, if one wishes to be coherent, to introduce a special system of reference (the isotropic system) in which it is assumed that the one-way speed of light is isotropic. Then, if an observer in the isotropic system judges the operation of measuring the speed of light by another observer in a system moving at constant speed v relative to the isotropic system, he will find that the observer in motion should measure a one-way speed of light dependent on v, i.e., anisotropic. But since the observer in motion can only measure round-trip speeds, the average speed he will find is c, in agreement with actual experimental observations. So the observer in motion thinks that in his system the speed of light is also isotropic. Hence, from the point of view of the observer in motion, he assumes that his system is the isotropic system and concludes that in the initial isotropic system the one-way speed of light is anisotropic, although the two-way speed of light remains constant. Again we have another paradox, since no observer can decide which system is the isotropic system; both are isotropic and both are anisotropic. If you are really interested in this problem you should take a look at my reference and the references therein. There you will become familiar with the perplexities of special relativity. And so, probably, you will understand why one has to reintroduce the PSR; this is one way to eliminate all these antinomies.
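
    To make the round-trip point explicit (this is only the standard kinematic bookkeeping, with the anisotropy parameter k introduced purely for the illustration): suppose that over a path of length D the outbound and return one-way speeds are c_+ = c/(1 - k) and c_- = c/(1 + k). The round-trip time is then

        T = D/c_+ + D/c_- = D(1 - k)/c + D(1 + k)/c = 2D/c,

    so the measured two-way speed 2D/T equals c for any value of k; round-trip experiments by themselves cannot fix the one-way speed.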

    You: I don't quite understand that. In relativity there is only one space-time manifold, but different bases in which we may write 4-vectors and so on. So yes, time dilation DOES occur (see the experiment where clocks on the earth and in an airplane measure different intervals for a round trip around the earth), length contraction DOES occur.

    My above comments are related to this paragraph. What I can figure out is that you are confusing the experimental implications of the theory (i.e., the predictions of the theory) with the internal consistency of the theory. From the experimental point of view relativistic effects are real, they do occur (and they are real because the PSR must exist), but strictly speaking and in theoretical terms they are apparent. They are apparent because special relativity denies the PSR, and therefore there is no real motion and no real effects (or absolute effects, as you understand them).

    Israel

    John

    You: is that light is the medium and waves are the features/information content of this expanding medium.

    It is hard for me to figure out how light can be a medium instead of a wave. From the perspective of wave mechanics, light is a feature of the medium. The medium is the thing that exists and the wave is a feature of the medium. How can the wave be the medium and the medium be the wave? I do not get this and I do not see why we should interchange the roles. If you have a reference in which this idea is elaborated further I would appreciate it. In any case, as I understand it, you bring in expansion as the main ingredient to explain the frequency shift. Whether space or light expands, I see both as equivalent, because in both cases the mechanism is expansion. I clearly understand that expansion is one possible explanation of the red shift. However, in my previous posts I argue in favor of a model in which there is no expansion; light loses energy simply because the aether, like any other medium, is dissipative/dispersive, and light (seen as a wave) is absorbed as the distance grows, just as a water wave dies away in a lake as it propagates. My point is: why should we resort to expansion if it is not necessary to explain the observations?

    Israel

    Israel,

    " The medium is the thing that exist and the wave is a feature of the medium. How can the wave be the medium and the medium be the wave?"

    I'm saying light is the medium and waves are features of it and its interaction with mass and the resulting measurements.

    Here is an interesting interview with Carver Mead, in which he makes a similar argument for electrons and other quantum phenomena.

    A water wave in an open area pool will dissipate much more quickly, as it spreads out, than one traveling down a narrow channel. That's what I mean by expansion being dissipative.

    Hi John,

    I am sorry, I think I misunderstood what you said. Your last paragraph helped me to understand your point. As I understand it, you are talking about the expansion of a wave as it propagates outwards. I agree with this: the energy per unit area decreases as the inverse square of the distance. But there is an additional factor due to dissipation that should be added. On the other hand, one can assume, for the sake of simplicity, that the aether is at rest, but in general it can be in motion. So this may be in agreement with your ideas.
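
    In formulas (the dissipation length λ_d below is a free parameter assumed only for this sketch): geometric spreading alone gives a flux F(d) = P / (4 π d^2) for a source radiating power P, and the additional dissipative factor I have in mind would multiply this by something like exp(-d/λ_d), negligible for d much smaller than λ_d but dominant at cosmological distances.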

    Israel

    Israel,

    I have no problem with an aether, but I think it is an effort to give space a material quality that obscures some very important factors. Consider centrifugal force: if you had an object spinning in deep space, the centrifugal force affecting something on its surface would not be due to some distant point of reference, but to motion relative to an inertial state. I think that inertial state is also a fundamental factor in the speed of light. Light appears to travel at c in any frame, but that is due to clocks running slower in frames in motion, which points back to that inertial state. Physics likes to say it's all about measurement and observation, but empty space isn't easily measurable or observable. That doesn't mean it should be denied just because there are conceptual biases against it, any more than an aether should be denied just because it is difficult to detect.
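
    (For reference, the effect in question is just the usual centripetal/centrifugal acceleration a = ω² r = v²/r felt at radius r on the spinning object; the issue I am raising is what the angular velocity ω is measured relative to.)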

    John

    Sometimes in ordinary language I use the word "aether" as a synonym of "quantum vacuum", "space" or "zero-point field" (ZPF), although I know that the aether has had different connotations throughout its history. The sense in which I mean to use the word "aether" is to allude to the existence of a substance (which one can say is space itself) pervading the universe and at the same time serving as the representative of the PSR. To be consistent with the prevailing view in physics we can agree to name such a substance the ZPF. The Casimir effect or the Lamb shift can be considered as proof that the ZPF (i.e. space or aether, etc.) exists. Then we have a medium for light waves, and this medium is slightly dissipative due to its massive nature. For relatively short distances (a few parsecs) its effect on light is negligible, but at cosmological distances its effect is predominant. As to the speed of light, you should take a look at my reference 17; there I explain why the speed of light is always measured to be c by any observer. So, we have the elements to hold that there is a pervading massive substance.

    Cheers

    Israel

    Israel,

    An aether doesn't explain centrifugal force. If it did, then the more an object spins, the more it would cause the aether in its vicinity to swirl, and this would presumably reduce the centrifugal force, which isn't how it works. No matter how much it spins, when the object on the surface is released, it flies off in the direction in which it is released.