• Cosmology
  • A Self-Gravitational Upper Bound on Localized Energy

I should add..

While mass-energy was a unified quantity near the Planck scale, I should state clearly that the differentiation of matter and energy does not happen only at the moment of decoupling/recombination, when the CMB is released. Rather, differentiation begins with the appearance of the first particles, and continues as the hundreds of thousands of years of decoupling unfold. Then, at the point the matter-energy soup becomes transparent, we see the final decoupling, and the energy associated with the CMB is released into the universe of matter.

So the question on the table is how the self-gravitation of undifferentiated mass-energy influences things, given that even pure energy partakes of gravitation. Of course, anywhere but near the Planck scale this effect is incredibly small, and can be ignored. But in the realm of the incredibly small, or at the outset of the universe's story, this effect is dominant or deterministic.

Have Fun!

Jonathan

  • [deleted]

Constantinos,

What I mean is, it would be sufficient to say, as Einstein (and Descartes before him) said, "No space is empty of field." (EEK and I have discussed this extensively.)

So it's "turtles all the way down" for field theory. Remember, though, that mass and energy are equivalent, so quantum field theory should be entirely able to fill all the gaps without -- as Edwin pointed out -- a proliferation of fields of different varieties. That leads back to a unified theory -- no particles required.

Tom

Everyone seems to first apply their own idea of what the title means, as did I above. After studying Kauffmann's paper I'd like to add to my above remarks. First, he is talking about the interaction of gravity with energy (including the energy of the gravitational field). But more specifically he is saying that localized energy -- such as the 'virtual' particles of infinite energy that appear in QED -- will have a mass equivalence that generates its own gravitational field, and this field, if energy is to be conserved, will not be 'extra' energy added to the situation but will be energy of the particle that is effectively 'converted' to gravitational energy. He then proceeds to analyze the correction to the particle energy based on this appreciation of the problem, to establish an upper bound on localized energy. It is this 'self-gravitational' effect that is responsible for the upper bound in the title of his paper. I believe that one should also read arXiv:0908.3024 to better understand this process.
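A rough sense of the scale of such a bound can be had from a back-of-envelope heuristic (my own illustration, not Kauffmann's derivation; the `e_bound` helper is mine): ask what energy, localized within a radius r, would make r equal its own Schwarzschild radius, since beyond that point self-gravitation plainly dominates.

```python
# Back-of-envelope heuristic only -- NOT Kauffmann's actual derivation.
# Energy E localized in radius r has Schwarzschild radius r_s = 2*G*E/c**4;
# demanding r_s <= r caps the localized energy at r*c**4/(2*G).
G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

def e_bound(r):
    """Heuristic upper bound (joules) on energy localized within radius r (m)."""
    return r * c**4 / (2 * G)

print(e_bound(1e-15))  # a proton-sized region: roughly 6e28 J
print(e_bound(1.0))    # a one-metre region: roughly 6e43 J
```

Kauffmann's actual bound comes from the self-consistent treatment in the paper; this heuristic only shows why the answer scales as r c^4 / G.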

Some of his math gets complicated, but I don't see anything that seems flat-out wrong or ridiculous; although I have not followed every step of his derivation, it looks OK to me. A particularly interesting approach to avoiding perturbation theory, which is the source of all the infinities appearing in field theories, is to define an iterative approach based on a partial fraction expansion. His paper developing this approach is arXiv:1301.1647. From the papers I've looked at, Kauffmann has an incredibly broad (and deep) background, and a unique approach to many of the current problems with physics theories. I don't know how he retains arXiv publishing "privileges", as he seems to be independent of institutions, and therefore can depart from the dogma without suffering punishment. I'm very impressed with his work and thank Jonathan for bringing this to our attention. I hope Kauffmann can be persuaded to take part in the FQXi essay contests and threads. He would make a great addition and raise the level of discussions.

Edwin Eugene Klingman

Hi Peter,

Much as I appreciate your contributions, I don't think that it's the gravitational attraction between electrons and protons that Kauffmann is discussing. I've added additional interpretation in a comment below. Also, you object to his "spherically symmetrical field". As you go through the paper you'll find that he does this for simplicity, but then he generalizes this to 'any static energy density tensor', specifically refraining from the assumption of spherical symmetry (on page 9 in his relativistic treatment).

His is a pretty dense treatment with lots of implications, including dark energy. I hope this article gets the attention it deserves, and am grateful that FQXi has decided to allow us to bring topics of interest to new threads.

Best,

Edwin Eugene Klingman

Yes indeed Ed,

Your detailed description in the first paragraph above is spot on. It's not extra energy, but the portion effectively converted to gravitation, which would otherwise be unaccounted for. And I agree it helps to read his papers on 'Orthodox quantization of Einstein's gravity' and 'Nonperturbational "Continued Fractions"' if one seeks a detailed understanding of this paper. The continued fractions paper explains in detail the way he arrives at the final equation in the 'Self-gravitational upper bound' paper.

I agree Steven would be an asset to these discussions, and also that his knowledge is both deep and broad. I've got a working hypothesis that there is a cognitive advantage to being a mature or elder scientist today - with a broader understanding of a subject like Physics - when the object is to consider foundational questions. Having a detailed understanding within a single area of specialization does not confer the same degree of interdisciplinary awareness. This thread will continue in a bit...

Regards,

Jonathan

I'll start a new thread here..

Perhaps an elder scientist like Steven Kauffmann is better equipped to be an innovator - in some areas of Physics - than most younger researchers, simply because his knowledge is both broad and deep, with sufficient mastery of the Maths essential to his purpose. Plus, as you pointed out Edwin, he no longer has a university affiliation to safeguard, and is somewhat more free to explore what topics he may.

At the end of FFP11, Physics professor Jaime Keller asked me, "Why, at a major conference with Nobel laureates and other top scholars speaking, were there so many dumb questions?" I told him about RPI Chemistry professor John Carter's experience, where his students did not even want to hear the explanation of why things work as they do, but instead wanted only the equations to memorize and the answers for the test.

Unfortunately, Jaime is no longer with us. During his life, Keller started out in Chemistry, wanted to know how things work so learned Physics, then delved into Maths for a deeper understanding still - becoming an advocate of Clifford algebras in Physics. Given the time investment per subject, I guess that only a mature researcher could follow such a path. But it may be the only way to learn some subjects adequately.

All the Best,

Jonathan

    I wanted to add this;

    It sadly happens too often that researchers in Physics approach their retirement with ideas to develop, and the hope that the extra time will afford them opportunities to hone some works for publication - only to find out that it is a difficult road for a retired scientist. I too am glad the arXiv folks continue to let Steven post his papers as pre-prints, even if some never see publication in journals. The quality of his work is almost always excellent, and indeed worthy of publication.

    Regards,

    Jonathan

    Hi Jonathan,

    You're welcome. For sure, Schiller's work supports what Steven is presenting. I think Steven took it a bit further. However, as I alluded to above, I suspect there is more to maximum force than what Planck units might be showing us. I believe there probably is a maximum force in nature, but it probably is not quite so clear-cut as c^4/4G. With the advent of Joy's work and also Michael Goodband's work, we probably should account for extra spatial dimensions. And torsion. I am hoping that this will actually pull down Nature's maximum force from the Planck scale. Realistically, there is no experimental evidence at all that the Planck length, etc. mean anything whatsoever. And Newton's G is one of the most poorly known constants, and only over a limited range. So we may have a long way to go here, but speculation is always fun. And... have fun we must. :-)
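    For concreteness, the Planck-scale value in question is easy to put a number on (a quick sketch; whether this really is Nature's maximum force is, as Fred says, an open question):

```python
# Numeric value of the conjectured maximum force c^4/(4G)
# from Schiller's work, mentioned above.
G = 6.674e-11   # Newton's constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s

F_max = c**4 / (4 * G)
print(F_max)    # roughly 3.0e43 newtons
```

    Notably, this value involves no Planck length or Planck mass at all, only c and G, which is part of why the bound is argued to be scale independent.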

    Best,

    Fred

    Edwin and Jonathan,

    Thanks, I'd read it only once, quickly. Always limiting.

    A lovely apparent dichotomy seems to emerge; that dark energy contributes gravitational potential as well as the opposite expansion force. Is that a fair point?

    I actually find much agreeable. Indeed I've also suggested, for instance, that the Unruh effect is nothing to do with 'acceleration' per se but with motion through the medium, so it resolves to propagation of matter (yes, virtual or 'photoelectrons') via photoionization, and is thus speed dependent. I'm only able to do so as I've allowed the QV, Higgs field etc. and dark energy a local kinetic identity.

    If gravity then emerges topologically, i.e. as a dark energy density gradient, then does not the 'cold spot' cluster theory also fit nicely into place? Do you have any particular citations for that one, Jonathan? I haven't picked it up in that way. I'm months behind with my AAS & RAS paper reviewing, but I think it's just a good different characterisation I've missed that seems to fit the topographical 'energy density' model. In a nutshell, the energy for matter is provided locally - leaving a 3D Dirac/Newton/Yukawa shaped 'cold zone'?

    Edwin, thanks re morphology. I should have focussed my comment more on the dynamic aspects. As in the 'Montevideo interpretation' of QM it's incalculable, but I'm quite convinced we're missing an important trick in resolving anomalies by ignoring its effects.

    Finally, Jonathan, I agree. ArXiv, as most science, is rather too parochial to academia, and Steven's excellent work is proof of that.

    Best wishes

    Peter

    At the beginning of the article, Kauffmann notes that:

    "But the uncertainty principle of quantum theory can manifest a disconcerting predilection to throw up infinite energies, and if we understandably quail at abandoning so firmly established a principle, it behooves us to at least try to ponder its self-gravitational implications."

    I do not believe that the uncertainty principle needs to be abandoned. But I do believe that it is time to recognize that few physicists are familiar with its mathematical origins, the assumptions built into it, and the consequently limited circumstances to which it can be applied. Its predilection to throw up all sorts of quantum oddities is largely due to misapplying and/or misinterpreting it, usually by violating one of the assumptions deep within its foundations.

    Rob McEachern

      Rob,

      I agree that the uncertainty principle is based as much on Fourier analysis as on physics. I believe the key aspect of 'reality' underlying this principle is the apparent fact that nature does nothing below a certain threshold of action. This, in my view, is what keeps the whole thing together. I've tried to imagine a universe with no minimum action, where anything goes, at any level down to zero (all noise, no signal?), and it's inconceivable to me that structure would survive in this situation.

      The dimensional aspect of ( M*L*L ) / T leads to convenient formulas in terms of position-momentum, energy-time, and angular momentum, and the ability to describe energy as h/T fits in perfectly with Fourier frequency analysis. But in my mind there is no necessity to generate infinite energy based on this fact; yet I resist postulating a "minimum time", so I've not been quite certain where the Fourier "prediction" breaks down for such high-frequency components, as it must. I rather like Kauffmann's natural approach to self-limiting energies.
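      The Fourier side of this can be checked numerically. A minimal sketch (my own illustration, not from Kauffmann): for a Gaussian pulse, the product of the RMS widths in time and frequency comes out to 1/(4*pi), the minimum-uncertainty floor that Fourier analysis puts beneath the energy-time relation.

```python
import numpy as np

dt = 0.001
t = np.arange(-10, 10, dt)           # 20,000 samples over 20 s
sigma = 0.5
g = np.exp(-t**2 / (2 * sigma**2))   # Gaussian pulse centered at t = 0

# RMS width in time, weighting by the pulse power |g|^2
p_t = np.abs(g)**2
p_t /= p_t.sum()
dt_rms = np.sqrt(np.sum(p_t * t**2))

# RMS width in frequency, from the FFT power spectrum
spec = np.fft.fftshift(np.fft.fft(g))
f = np.fft.fftshift(np.fft.fftfreq(t.size, d=dt))
p_f = np.abs(spec)**2
p_f /= p_f.sum()
df_rms = np.sqrt(np.sum(p_f * f**2))

print(dt_rms * df_rms, 1 / (4 * np.pi))  # both ~ 0.0796
```

      The Gaussian is the equality case; any other pulse shape gives a strictly larger product.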

      Of course much of this problem is predicated on the possibility of virtual particles, which may have made sense with a vacuum energy 123 orders of magnitude greater than seems to be the actual case, but which I find to be highly unlikely. Yet assuming there is some corner of the universe where these energies actually exist, say some future super-super-LHC, it's still nice to know that there's a natural limiting mechanism.

      As you can tell by my previous comments, I find Kauffmann's work fascinating (probably because he is so in line with my own bias) and I think you would also. I would be interested in any comments you might have after looking at some of his other work.

      Best,

      Edwin Eugene Klingman

      Edwin,

      Let me be more specific about the nature of the problem. The uncertainty principle is a statement about how much information an observer can obtain from an observation; it says the minimum number of bits obtainable is 1 - anything less, and an observation has failed to occur, which is of course possible. It is not a statement about *any* characteristic, attribute or property of the entity being observed. It is merely a statement about observations of such properties.

      Now consider Kauffmann's statement, on page 7, that:

      "Upon quantization, each such oscillator has a minimum positive energy...being completely mandated by the quantum uncertainty principle...always has infinite energy."

      The uncertainty principle mandates *nothing* of this sort. It is a statement about how much information about the oscillator energy can be *observed* (how many significant bits are contained within the energy measurement), not how much energy the oscillator *has*. Consequently, what the principle mandates in this situation is:

      *IF* you can successfully make an observation of each oscillator's energy, *THEN* that observation must, of necessity, contain a minimum of one bit of information about the amount of energy detected, *BUT*, you may fail to succeed in making any such observation, and thus obtain 0 bits of information.

      The correct use of the uncertainty principle cannot enable one to deduce "infinite energy". There is no "infinite energy", that must somehow be explained away.

      Rob McEachern

      I beg to differ;

      The uncertainty principle refers to how pairs of measurements yield a result that depends on the order in which the two observations are made, such that any one definitive measurement clouds subsequent measurements of other quantities, and there is thus a minimum uncertainty in the product of the two. Of course, this is a non-commutative relation, where the two measurements are usually taken to be of conjugate properties - say position and momentum.
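      The order dependence described here is easy to make concrete with matrices. A minimal numpy sketch, using the Pauli spin matrices as the standard example of non-commuting observables:

```python
import numpy as np

# Pauli matrices: observables for spin along x, y and z
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Applying the two in different orders gives different results...
print(np.allclose(sx @ sy, sy @ sx))            # False: order matters

# ...and the mismatch (the commutator) is itself an observable: [sx, sy] = 2i*sz
print(np.allclose(sx @ sy - sy @ sx, 2j * sz))  # True
```

      The Robertson form of the uncertainty relation then bounds the product of the two spreads from below by half the magnitude of the expectation value of this commutator.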

      However, experiment shows we can bend the rules somewhat, by taking repeated weak measurements, as explained in this article by Steinberg et al.:

      In Praise of Weakness from Physics World

      There is some question in my mind, though, about whether uncertainty is a property intrinsic to sub-atomic particles. Is there, in fact, a situation of their being loosely defined - except in relation to other forms? We have a kind of observer bias, from the fact that any definitive measurement we make is taken from a platform that occupies a certain location in space at a particular moment in time. While one could argue that observation is irrelevant to the state of a system, one can also say that the system's state is defined by its interactions with its surroundings.

      More on this later,

      Jonathan

      On the roots of uncertainty;

      It is my understanding that Heisenberg first came to discover this principle when studying the Rydberg-Ritz combination principle and the process of using two spectral lines to find a third. He discovered that the order in which they are specified yields a unique result, and thought this was curious enough to deserve further study. He first discovered a principle about pairs of quantum measurements. Then, later, he came to find out there was a minimum limit to the combined uncertainties thereof.
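      The 'two spectral lines to find a third' procedure recalled here is the Ritz combination rule, and it is quick to verify numerically from the Rydberg formula (a sketch; the Rydberg frequency below is rounded):

```python
import math

R = 3.2898e15  # Rydberg frequency for hydrogen, Hz (rounded)

def nu(n_lo, n_hi):
    """Frequency of the hydrogen line for the n_hi -> n_lo transition."""
    return R * (1 / n_lo**2 - 1 / n_hi**2)

# Ritz combination: adding the 2->1 and 3->2 lines yields the 3->1 line
print(math.isclose(nu(1, 2) + nu(2, 3), nu(1, 3), rel_tol=1e-12))  # True
```

      In Heisenberg's hands the line frequencies became entries of an array indexed by the two levels, and combining lines became multiplying those arrays - the matrices whose non-commutativity underlies the principle.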

      Please correct me if my history (from my recollection of Connes' retelling) is inaccurate. In fairness, I should have used the word determinations instead of quantum measurements, because in QM any measurement is a participatory process, much like constructive geometry. But I'll return to this point, if there is time.

      Have Fun,

      Jonathan

      • [deleted]

      Jonathan,

      "We have a kind of observer bias, from the fact that any definitive measurement we make is taken from a platform that occupies a certain location in space at a particular moment in time. While one could argue that observation is irrelevant to the state of a system, one can also say that the system's state is defined by its interactions with its surroundings."

      It seems to me that all perception requires some form of frame, just as taking pictures requires a specific shutter speed, filtering, aperture, direction, distance, lensing, etc. Otherwise there is blurring, washing out, etc., as the amount of available information quickly goes to infinity and the result is white noise.

      This goes to the relationship of energy to information and that while information defines energy, energy manifests information. So when we combine energy, there are the resulting canceling effects, so combining the resulting information also causes canceling. Much as a top down/generalized view tends to blend the details and a bottom up specialized view cannot see the larger context.

      So locating a particle means having to filter out its motion and measuring motion means blurring its details.

        Jonathan,

        You have correctly stated the conventional misunderstanding of the principle. What you said is true. But it does not change the fact that it is frequently possible to measure a third "thing" and then infer, without measurement, what the two observations whose product forms the principle *must* be, with far greater accuracy than they could ever be directly measured. Things like FM receivers do this all the time. They accomplish this by exploiting *a priori* information about what the *CORRECT* mathematical model for the observations *IS*. The uncertainty principle was derived under the assumption that no such a priori information is being exploited.

        But more importantly, the principle merely states a limitation upon what an observer can know about an entity. That is not the same as stating that the entity has a corresponding limitation. It may indeed have such a limitation. But the uncertainty principle says nothing about it.

        More specifically, the principle has much more to do with "resolution" than with "accuracy". As an analogy, a telescope has a resolution that depends upon the size of its aperture. When one uses a telescope to observe a binary star, one may not be able to observe more than one speck of light. But that does not enable one to deduce that there is only one star. What is "uncertain" is how many entities there *are* to be observed within the two-parameter space, not the values of those two parameters. Two spectral lines may be far too close together to *resolve* them in a short time period, but this fact does not prevent a dual-frequency FM receiver from determining the two frequencies, with great accuracy, within the same time period.
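        The point about a priori models beating raw Fourier resolution can be illustrated in a few lines (a hedged sketch, not a real FM receiver): given the prior model "the data is a single sinusoid plus noise", a one-second record whose Fourier resolution is only 1 Hz still pins the frequency down to a few millihertz.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 1000.0, 1.0                  # 1 s record: Fourier resolution = 1/T = 1 Hz
t = np.arange(0, T, 1 / fs)
f_true = 10.237
x = np.sin(2 * np.pi * f_true * t) + 0.1 * rng.standard_normal(t.size)

# A priori model: exactly one sinusoid. Scan a fine frequency grid and keep
# the frequency whose complex correlation with the data is strongest.
f_grid = np.arange(9.0, 11.0, 0.001)
power = [abs(np.sum(x * np.exp(-2j * np.pi * f * t))) for f in f_grid]
f_est = f_grid[int(np.argmax(power))]
print(f_est)   # ~10.237, far finer than the 1 Hz Fourier bin width
```

        The gain comes entirely from the prior: the same data cannot *resolve* two unknown sinusoids 0.1 Hz apart, but it can *estimate* one known-form sinusoid to millihertz precision.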

        Rob McEachern

        • [deleted]

        Edwin,

        In your Mar. 7, 2013 @ 18:30 GMT reply to Rob you write, "describe energy as h/T". Please help me get this 'buzzing phrase' out of my mind. What does it mean? You ask why I ask? Because in The Thermodynamics in Planck's Law I derive that the duration of time required for an 'accumulation of energy' h to occur is given by h/kT. So how does this relate to "energy as h/T"?

        Constantinos

        • [deleted]

        Jonathan is correct. Heisenberg uncertainty does not refer to a system's information content. If it did, we would know with certainty the point where quantum phenomena become classical.

        Tom

        Thanks John,

        I think the camera analogy is rather appropriate here. There is always a question about what you are trying to capture or emphasize. Is sharpness of focus on the subject more important, or is depth of field paramount because we need to see the background to establish context? Is sharpness of time definition of greatest value, as when determining the winner of a race, or do you want to preserve the blur of motion for artistic effect, and leave the shutter open longer?

        Some phenomena are too faint to photograph without a long exposure, and others are too swift for anything but the shortest exposure possible. So you correctly point out that even a single observation involves a trade-off of sorts. I'll have to think more on this.

        Regards,

        Jonathan