• [deleted]

Robert,

You stated: "In a very real sense, Newton's law of gravity is perfectly analogous to the JPEG image compression algorithm. It is a "lossy" description of the original data, no more no less; the reconstructed, predicted "image" is slightly different from the original. In contrast, Einstein's theory of gravity appears to be a "lossless" compression algorithm."

I think that this sentence might be counter to everything else you are stating in your essay. By definition, the process of anti-differentiation (which Newton used to map gravitational force into his law of gravity) is a reduction of the possible information to a single interpretation of the data, and that loss has likely carried over into GR (i.e. it is still lossy). My essay has some questions, but a sketch I just posted may be a simpler visual explanation.

Regards,

Jeff

  • [deleted]

Hi Robert,

Being an old copywriter, I cannot help but try to catch your wonderful essay in a one-liner: "Confusing Mathematics for Physics is to Complicate the Simple and Simplify the Complex."

We are several authors in this contest who - from different points of view - question the role of mathematics in physics. Mathematics has long been its lingua franca, and the "shut up and calculate" promoters are not just a few. The funny thing is that during the same period of time, since the mid-seventies, when calculation capacity has been increasing exponentially, physical theory-building has been comparatively meagre. More is less, it seems.

Best regards,

Inger

Robert

"The speed of light is not directly observable; it is not an observable phenomenon at all. There is always a "privileged" observer."

"...When a light wave is first created, it is created in the reference frame of this privileged observer. It is created at the frequency observed by that observer, not the Doppler shifted frequency of an observer in a different frame of reference. And the privileged observer is always at rest with respect to itself."

"...Hence, when all other observers transform their actual observables to the privileged observer's frame, they too must infer the same constant speed of light."

These phrases and meanings are precisely common to our essays. But I'm awestruck by your logical analysis of mathematical limits. I abandoned the study of maths from intuition about its shortcomings in modelling reality, for which you now give the precise reasons, beautifully argued and written. I now far better understand WHY the Fourier transform fails, though I even wrote of FM receiver mechanisms some time ago.

My route to this end was logic and ontology, also observational from optics and as an astronomer. I dare to suggest I've also pushed a little further than you, in maverick style, to find curved space-time and even pre big bang conditions. Though I hazard that you too report far from all your findings.

I hope you'll find time to read my essay. Quite different to yours as it simply analyses the mechanistic evolution of real systems without abstraction, avoiding the limitations you so brilliantly identify. I'd like to cite you, but saw no references. Do you have anything on this published?

There are a couple of areas where we diverge, and I'd like to scrutinise those, along with a couple of new kinetic considerations, such as the relationship of f and wavelength for different observer frames, and the severe limitation I derive for the spatial limits of the emitter's frame.

Very Best wishes

Peter

    Sorry, I didn't mean to say that handedness is not a real property, only that it is not the ONLY property of gloves. The question remains for me: why do quantum entities possess/reveal so little information?

    • [deleted]

    Dan,

    I totally agree with you....

    Dan, you asked:

    "Why do quantum entities possess/reveal so little information?"

    Why does a coin have only two sides? Why not?

    Lack of information is equal to lack of complexity; it is simple. And simple objects can be described via a simple set of symbols, ultimately just a single bit.

    So your question boils down to "Why do such simple entities exist?" Neither I nor anyone else can offer anything other than pure speculation, for an answer. Here is mine:

    Simple entities exist so that complex entities can evolve, as combinations of them.

    Peter,

    Most of the basic ideas in my essay can also be found in a book I wrote, twenty years ago:

    "Human and Machine Intelligence: An Evolutionary View"

    After reading Roger Penrose's book, "The Emperor's New Mind", I was inspired to write my own book as a rebuttal to his. His was a best seller, mine fell into a black-hole shortly after being published. Used copies can still be found on amazon and other on-line book sellers.

    The book is mostly about the nature of intelligence, viewed as an outgrowth of sensory signal processing. The quantum aspects came about, in order to refute Penrose's claim that intelligence is probably the result of some as yet undiscovered quantum oddity.

    Robert

    I also disagree we have any intelligence, as a result of Penrose's quantum oddity or otherwise! I can however see both sides of that one. Having arrived at a recycling cosmology with largely (but not solely) re-ionized matter, there is some certainty that, with infinitely many recyclings, some particles of our brains may have originally been part of the brain of some intelligent being! Pretty crazy stuff I know, but I use rigorous logic, from which it emerges as implicit! So there may be hope for us yet. Of course mechanistically I agree entirely with you, which I'd expect is 99.99999% of the game. (I wonder if homeopathy really works; we may find out in the UK as our new Health Secretary is a fan!)

    I'll track down a copy of your book. I'm sure it's more readable than Penrose. I'd also like to cite it, or perhaps your essay, in a current paper. I wasn't implying anything by my comment except the 'great minds think alike' commonality of our conclusions, and coming from such very different directions must imply some fundamental veracity. I've found some massive importance in your truths, touched on in my essay.

    I do hope you'll get to read it, can find and extract the real gold nuggets, and give me your views. I also need a bit of maths thrown at the concepts as I'm averse to going too near the stuff myself.

    Well done for yours. Certainly a top score coming from me I think. Have you read Tom Ray's yet, also nicely debagging Bell a la Joy Christian.

    Peter

    • [deleted]

    P S

    Dear Robert, please don't mistake my one-liner as a disrespectful and slipshod simplification of the complexity of your essay! Should it be so, I apologize. D S

    As for your old book, I will try to find and purchase it. Not only to be able to better understand your essay, but also because there might be a possible connection with my PhD thesis, which deals with the contrast/controversy between the simplifications of artificial intelligence and the complexity of human knowledge.

    • [deleted]

    Hi Robert,

    You were saying:

    "If you look at the relations given for the Uncertainty Principle and Shannon's Capacity, for the single particle case mentioned, in which S/N =1, then the uncertainty principle boils down to the statement that "1 = maximum number of bits of information that can be extracted from an observation, in the worst case."

    Duh

    So what is the big deal? What makes this so significant?"

    If I'm understanding this correctly, and my calculation is right, you're saying that the boundary between the quantum and classical world is fuzzy, and that the commutator vanishes when the signal (particle count) increases and particle count error stays constant.

    So, if I had a ball of 1e200 fundamental particles (and I know this count precisely because I know their individual masses and I weighed the ball real good, etc, etc), this would give me a commutator with the value of 0.001, not 1. So, I would be 999/1000th into the classical regime.
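A quick numerical check of that figure (a sketch only; I am plugging the particle count straight in as S/N, per my reading of the relation dt*df = 1/log2(1 + S/N)):

```python
import math

# Hypothetical ball of 1e200 precisely-counted particles, treated as S/N
S_over_N = 1e200

# Robert's relation, as I read it: dt*df = 1 / log2(1 + S/N)
dt_df = 1.0 / math.log2(1.0 + S_over_N)

# dt_df comes out on the order of 0.001-0.002 -- deep in the classical
# regime, compared with dt_df = 1 at the quantum limit (S/N = 1).
```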

    To me, that is an absolutely fantastic calculation, because:

    1) It makes sense to me even from a purely classical point of view: http://www.phys.unsw.edu.au/jw/uncertainty.html

    2) As far as I can tell it's novel, because I read about theoretical physicists humming and hawing about what may or may not constitute the boundary between the quantum and classical regime all of the time, in not so specific language as yours.

    Holy cow dude!

      • [deleted]

      (yes, I realize that particle count is not a constant, but I'm sure there is a slick way to calculate a mean based on the various different field configurations... the point is that you totally just blew my mind)

      • [deleted]

      No, I believe I had it wrong just then.

      I believe that in your formula for the commutator relationship

      [math]\delta t \cdot \delta f = \frac{1}{\log_2(1 + S/N)},[/math]

      S seems to be the actual particle count, and N seems to be the idealized particle count. For instance, if you had all kinds of particles bunched together and considered them to be a single baseball, then N = 1. On the other hand, if you consider every particle individually, then N = S (the actual particle count). As S/N goes to 1, a transition from the classical to the quantum regime occurs. I believe this to mean that when S is held constant, a token amount of random quantum noise is added into the system whenever N is incremented by 1.
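If that reading is right, the transition can be sketched like this (my own toy numbers, not anything from the essay):

```python
import math

# Hold the actual particle count S fixed and vary the idealized count N;
# dt*df = 1/log2(1 + S/N) then moves from the classical regime (small)
# toward the quantum limit (exactly 1) as N approaches S.
S = 1e6  # an arbitrary illustrative count

def dt_df(N):
    return 1.0 / math.log2(1.0 + S / N)

classical = dt_df(1)   # everything lumped into "one baseball": about 0.05
quantum = dt_df(S)     # every particle counted individually: 1/log2(2) = 1
```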

      Maybe you could talk to S Hossenfelder about this. She has some very interesting things to say about gravity and singularities and a vanishing Planck action (h; the main commutator coefficient). Perhaps this is the mechanism in question. I believe that string theory kind of says that black holes are "one" particle, insomuch that they are a continuation of the particle spectrum, and so your calculation would probably apply to both electrons and black holes -- in a unified kind of way.

      Altogether, I think that it's extremely impressive. I do believe it when you say that it's a mathematical truth, and also the physical interpretation is pretty convincing.

      There is a much simpler way of looking at the Shannon Capacity relation:

      dt specifies the duration of the observation

      df specifies the number of samples per second that must be taken to preserve the entire information content of the continuous, band-limited signal being observed.

      The log term specifies the number of bits needed, within each sample (it is counting bits, that is why it is always given as base-2) to preserve all the information. Obtaining more bits per sample would not significantly increase the total number of recoverable bits of information.

      So the Shannon Capacity boils down to the statement that the MAXIMUM number of recoverable bits of information, cannot exceed the number of bits required to "digitize" the signal in the first place.

      The uncertainty principle, at the other end, boils down to the statement that the MINIMUM number of recoverable bits of information, corresponds to a signal that can be "digitized" with exactly one sample, with exactly one bit per sample = one bit.

      If you obtain fewer bits than that, then you have failed to make any observation at all.
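The counting argument above can be sketched directly (a minimal sketch of the two limits, assuming the capacity is simply the number of samples times the bits per sample):

```python
import math

def max_recoverable_bits(dt, df, snr):
    """Shannon capacity as a bit count: observation duration dt times
    sampling rate df gives the number of samples; log2(1 + snr) gives
    the number of bits needed per sample."""
    return dt * df * math.log2(1.0 + snr)

# Worst case (the uncertainty-principle end): one sample (dt*df = 1)
# at S/N = 1 yields exactly one bit.
worst_case = max_recoverable_bits(1.0, 1.0, 1.0)
```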

      So, of course the uncertainty principle represents a fundamental limit. The point is that the limit is true by definition of what is meant by a bit of information. It says nothing at all "interesting" about the nature of reality. Heisenberg mistakenly thought he had discovered some deep, underlying mystery of nature. In fact, he merely discovered a very peculiar way of restating the definition of a bit of information.

      Rob McEachern

      • [deleted]

      Hi Rob,

      Thanks for responding with some clarifications. Perhaps I can see now why you have such an interest in Lorraine's essay.

      - Shawn

      • [deleted]

      Perhaps I am wrong, but my hunch then is that the factor of 1/2 in the Heisenberg uncertainty principle is directly related to the concept of Nyquist frequency?

      • [deleted]

      Dear Robert and All,

      I follow this discussion with great interest - understand some of it, misunderstand some of it, and probably non-understand a great deal.

      My question is: What is the reasonable role of mathematics in physics? That is, a role in which mathematics is not confused for physics, but is an effective tool for it?

      I take the double-slit experiment as an example:

      The emitter is designed to emit electrons with pre-specified and well-defined physical properties (physical initial conditions). The corresponding mathematical properties (mathematical initial conditions) are used for solving the mathematical model in order to predict the outcome of the experiment.

      The receiver is a screen that lightens when hit by an electron. In between the emitter and the receiver there is a modulator - the double-slit - characterized by the distance between the slits and the geometrical form of them. The physical design of the modulator is adapted to fit the pre-defined mathematical initial conditions of the emitted electrons. Can the modulator, by this, also be regarded as an additional part of the mathematical initial conditions? (If so, since the modulator is macroscopic, these initial conditions cannot be handled by Schrodinger equation, but that is not what is at stake here.)

      If you wish to observe an interference-like pattern on the screen, do not insert a particle detector at either of the slits, to "spy" on the electrons and see which slit they really go through. The physical design of this particle detector is adapted to fit the pre-defined mathematical initial conditions of the emitted electrons. Can it also be seen as an additional part of the modulator? Does it, by this, add further to the mathematical initial conditions, and further complicate the mathematical solution? That it dramatically changes the physical outcome of the experiment is a truism.

      A reasonable question is: Can the particle detector be regarded as an additional part of the modulator? Or has it taken the role as the receiver in a completely different experiment? It seems reasonable to see it this way, since both the "first" screen and the particle detector say "particle here", whilst the screen in the second case says "particles here, there and everywhere" - no interesting information.

      Of course, an alternative - and more matter of fact - way to look at the whole thing is that inserting the particle detector into the channel between the electron emitter and the receiving screen means adding a strong source of noise to the channel. As such, it unavoidably changes the outcome of the original experiment in a radical way.

      Am I completely confused? If so... "Better a witty fool, than a foolish wit." (Shakespeare, Twelfth Night)

      Best regards,

      Inger

        Robert

        I agree waves are still poorly understood. I've written the odd paper from optics and more original viewpoints. I eventually resolved to the term 'signal velocity' for the purposes of c, more equivalent to group velocity. I have no clue how this may relate to information theory. Does it?

        An aspect poorly considered in representations is that, considering a soliton as a 'wave bundle' moving at c, the phase of the waves moving within the bundle only has a relevant speed wrt the rest frame of the bundle. Indeed they 'die out', as they are only 'fluctuations', or in a 3D particle model represent 'spin', so may 'return symmetrically' anyway. This puts to bed the issue of electron spin being superluminal. It's also consistent with light waves being re-emitted on an 'optical axis' NOT normal to the causal wavefront plane (as proved in Calcite crystals etc.), rather than as 'rays' made up of photons on 'vectors'!

        So the 'flight time' across space of the 'entity' or wavefront containing the information would remain the tool we have to work with wrt c.

        But now my point. We have to consider the propagation 'medium', which we can consider as Boscovich's 'particles with sphere of influence' or Einstein's 'mass spatially extended' without invoking 'ether'.

        My proposition is that massive bodies, from particles upwards, can move, WITH their local 'spaces' (i.e. Earth atmosphere & ionosphere, and the Sun's Heliosphere), which is why we find the dense astrophysical shocks at the boundaries (See Kingsley Essay Fig 2.)

        Now everything else falls into place. And I do mean everything. CSL, CMB anisotropy and frames last scattered, the BCI/ECI frame issue, the aberration problem, Pioneer/Voyager/Flyby anomalies, Local Reality, Twins Paradox, etc.

        But as those are all swept under the carpet what value has a model that explains them?

        Does that make any sense to you?

        Peter

        (Re-posted answer to a comment on my blog. A response ref waves may be interesting - see also earlier response above.)

        Inger,

        Let me tell you a little story.

        Once upon a time, there was a man named Rene Descartes. Rene had written a wonderful, new essay, which he called "Meditations on First Philosophy." Being an ambitious, but cautious man, Rene sought help from the Powers-that-Be, to promote his essay. He wrote to them thus:

        "To those Most Wise and Distinguished Men, the Dean and Doctors of the Faculty of Sacred Theology of Paris, Rene Descartes Sends Greetings."

        Rene informed those illustrious men, how his new essay would surely convince all unbelievers to believe. He just needed a little help. So he asked those worthies to "assist me with your patronage", and that "its errors would be corrected by you" and then, having rendered this assistance, "these arguments ought to be regarded as the most precise of demonstrations." Then and only then, "you may be of a mind to make such a declaration and publicly attest to it."

        Rene went on to complain that, to date, unbelievers had not found his essay to be sufficiently believable. With great indignation, he described how unbelievers "attack not so much on my arguments regarding these issues, as on my conclusions." In his darker moments, Rene assumed that they did this, from malice. But he understood that unbelievers could not be trusted to understand logic and reason; after all, if they did understand his arguments, they would believe. But why would they attack only his conclusions, and not point out any errors in his logic? Surely, it was because they could find none!

        But the unbelievers were gentlemen and scholars. They deeply respected Rene's formidable powers of reasoning. Consequently, it would only be logical for them to assume that there could be no errors in his stated arguments. Surely, neither his stated premises nor his clever deductions could possibly be in error. Nevertheless, they were confident that the weirdness of his conclusion rendered that conclusion an obvious absurdity. So, being scholars, they consulted the method of "Reduction to an absurdity", and came to the realization that Rene must have made an erroneous assumption. But surely, any stated assumptions, from a mind such as Rene's, could not be in error. Hence, the assumption must have been unstated! With impeccable logic, these scholars and gentlemen thus realized that there was little point in even reading Rene's story - they must look elsewhere for his unstated assumptions.

        I too am an unbeliever. I do not believe the metaphysical stories told by physicists; the conclusions are absurd. But like Descartes's unbelievers, I concede that the physicist's stated logic, the actual physics, is likely to be impeccable, at least as far as it goes. So the actual experiments and mathematical theories need not be questioned. Rather, the unstated assumptions underlying the metaphysical stories, must be discovered, and their story must be told. That is what my essay attempts to do.

        Consequently, my essay is not really about either the math or the physics. Rather, it is about the associated metaphysics, that has been mistaken for physics. By metaphysics, I mean all the stories being told, about the "meaning" and the "significance" and the "interpretation" of the actual physics. I call these stories metaphysics, because they are almost entirely independent of the physics. The actual physics, is what it is, with or without these stories (hence the dictum "Shut up and compute!"). But it is this metaphysics that is the source of all the supposed weirdness. So you can eliminate this weirdness, by simply eliminating the current metaphysical story, and replacing it by a different story. The physics remains unchanged. But in order to get people to prefer any new story, over the old one, it has to be a more compelling story. A story devoid of absurdities, which offers an alternative, common-sense "meaning", may provide this compulsion. I aim to tell such a story.

        Now to address some of your specific questions:

        Math plays the same role in Physics, that it plays in Digital Photography. It can be used to clean up data, summarize data, and describe data, in a compressed, symbolic form (equations). Physicists, almost as a matter of pride, like to assume that it does much more than that. Regarding this point, here is a copy of a post I made in response to Matt Visser's essay:

        "In your summary, you ask "Exactly which particular aspect of mathematics is it that is so unreasonably effective?" in describing empirical reality.

        I would argue, that is not an aspect of mathematics at all, but rather, an aspect of physics. Specifically, some physical phenomena are virtually devoid of information. That is, they can be completely described by a small number of symbols, such as mathematical symbols. Physics has merely "cherry picked" these sparse information-content phenomena, as its subject matter, and left the job of describing high information-content phenomena, to the other sciences. That is indeed both "trivial and profound", as noted in your abstract."

        Here is my alternative story of the double slit experiment:

        Let's begin by making an analogy between a radio signal, beamed towards a receiver, and particles beamed through the double slits, towards a detection screen. Suppose the received radio signal behaves oddly. Perhaps it has been distorted while traveling between the emitter and the receiver. Would the odd behavior vanish if the receiver was moved closer to the emitter? In the double slit experiment, odd behavior is said to occur. It is said that the particles behave like waves, when detected at a great distance from the slits. An "interference" pattern appears, and individual particles seem to have flown through both slits at once! My my! Does this odd behavior persist if the detection screen is moved closer? Does an interference pattern appear, when the screen is placed in contact with the slits? I don't think so. Nor does a particle appear to have flown through both slits. Why the difference? Because the particles are scattered by tiny electromagnetic forces produced by the atoms making up the structure of the slits.

        To visualize this, suppose we replaced the electromagnetic forces, with a stronger version of the gravitational force. Now suppose a particle is beamed through a single slit. If the particle travels exactly through the middle of the slit, it will pass straight through, undeflected, since the gravitational forces, due to the massive sides of the slits, will be symmetrical, and cancel each other out. But any particle that travels through the slit, slightly off-center, will be pulled slightly to one side; it will be scattered. When the slit is in contact with the screen, this slight deflection is unnoticeable. But when the screen is far from the slit, the slight angular deflection will cause the particle to strike the screen far to one side of the center. Now imagine that the deflection angles are quantized; particles will now only strike the screen at discrete angles. An "interference-like" pattern has appeared. Now add a second slit. This reduces the mass and hence the gravitational force on one side of the slit through which the particle travels, but not the other. So deflecting forces become less symmetrical and the "interference-like" pattern changes in comparison to that of a single slit.
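This picture can be sketched as a toy simulation (my illustration only, with made-up deflection and quantization numbers; nothing here is derived from real slit physics):

```python
import random

# Toy sketch: a particle passing off-center through a slit receives a small
# deflection angle; quantizing that angle produces discrete bands on a far
# screen. All constants below are hypothetical, chosen for illustration.
random.seed(0)
ANGLE_STEP = 0.01          # hypothetical quantization step (radians)
SCREEN_DISTANCE = 1000.0   # slit-to-screen distance (arbitrary units)

positions = []
for _ in range(10_000):
    offset = random.uniform(-0.5, 0.5)     # where in the slit it passes
    raw_angle = 0.05 * offset              # deflection grows off-center
    quantized = ANGLE_STEP * round(raw_angle / ANGLE_STEP)
    positions.append(SCREEN_DISTANCE * quantized)

# Far from the slit, the hits cluster into discrete, interference-like
# bands; with the screen in contact (distance ~ 0) the bands collapse.
bands = sorted(set(round(p, 6) for p in positions))
```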

        As I have noted elsewhere in these posts, the magnitude of the Fourier Transform of the slit geometry looks just like an "interference" pattern. This is a mathematical identity. I conclude from this, that it is the attributes of the slit's geometry that are ultimately responsible for the "interference" pattern, not the attributes of the waves or particles beamed through the slits. If you change the geometry, you change the pattern, even though no change has occurred in any attributes of the particles.

        This type of alternative story, does not change either the experimental results, or the theoretical results. It is simply a slapped-on "interpretation." But so is the existing absurd story. I remain an unbeliever in absurd stories.

        Dear Robert,

        Thanks. That is very much the way I see the two-slit experiment.

        I have not yet found very much data on various slit-widths, distances, separations, wave-lengths, momenta, etc. to test this perspective, and it is absolutely certain that no one has analyzed the wave function that I present in my current essay, The Nature of the Wave Function, in these terms. Therefore I too am an unbeliever in the various weirdnesses that various physicists insist must be believed in, and am ready to question their logic rather than give up local realism.

        By the way, I enjoy your comments and would be very interested in any overall observation you might have deriving from the almost 300 essays submitted questioning the fundamental assumptions of physics. When one considers the fact that a large number of these come from quite competent individuals, this seems rather momentous to me.

        Edwin Eugene Klingman

        Dear Robert,

        first of all, congratulations for your very impressive essay. I think you very properly explicated the logical flaws many scientists fall into, and furthermore explicated what, for example in this forum, is more and more realized as being inevitable: namely that, due to the inaccessibility of the physical details "as they really are out there", we attach our physical meanings (for example about space and time or particle properties) to the consequences of measurements of those physical details - and infer that our attached meanings are also valid in between those measurements. There are even some people who assume that in between those measurements the properties in question do change, due to some constructed rule or even randomly (but in accordance with QM probabilities). Those attempts are highly questionable from my point of view, outlined in my own essay.

        So, for me, your essay is one of the better ones in this contest.

        Nonetheless, may I make some remarks about your interpretation of the double-slit experiment?

        You wrote:

        "Now imagine that the deflection angles are quantized; particles will now only strike the screen at discrete angles. An "interference-like" pattern has appeared. Now add a second slit. This reduces the mass and hence the gravitation force on one side of the slit through which the particle travels, but not the other. So deflecting forces become less symmetrical and the "interference-like" pattern changes in comparison to that of a single slit."

        This explanation unfortunately is not at all in accord with the experimental results. If you decide to close one slit before the "thing" hits the slit-aperture, this would be somewhat consistent with your explanation. But if both slits are open before and after the "thing" interacts with the double-slit aperture, and just an instant before the "thing" hits the final measurement device far away from the slits, it turns out that *dependent* on whether we want to "determine" the "thing's" path or its interference pattern, the "thing" instantly has another probability distribution at the final device. This means you cannot explain this result by local conditions at the two open slits, be those conditions gravitational, electromagnetic or other "local realistic" forces.

        All the best,

        Stefan Weckbach