
Hector,

Regarding the "Unreasonable Effectiveness of Mathematics", in an earlier post, under Matt Visser's essay, and repeated somewhere under my own, I wrote:

"In your summary, you ask "Exactly which particular aspect of mathematics is it that is so unreasonably effective?" in describing empirical reality.

I would argue that it is not an aspect of mathematics at all, but rather an aspect of physics. Specifically, some physical phenomena are virtually devoid of information. That is, they can be completely described by a small number of symbols, such as mathematical symbols. Physics has merely "cherry picked" these sparse-information-content phenomena as its subject matter, and left the job of describing high-information-content phenomena to the other sciences. That is indeed both "trivial and profound", as noted in your abstract."

Regarding the effectiveness of "Shannon's information theory" as compared to "algorithmic information theory", I am very strongly of the opinion that the former is much more effective than the latter in all the areas, like measurement theory, that have much real relevance to "an observer". The difference lies in the relative importance of "Source Coding" vs. "Channel Coding"; lossless compression algorithms, in spite of any "astonishing fact", are virtually useless in the realm of observing and measuring. One of the biggest problems hindering the advancement of modern physics is that physicists "don't get it"; contrary to popular belief, observation and measurement are not about preserving source information, they are about distinguishing "relevant information" from "irrelevant information" as quickly and efficiently as possible. A lossless Source Coder, with sunlight as its input, would preserve huge amounts of information about the solar spectrum that is absolutely irrelevant to any human observer other than an astrophysicist. That is why the channel coders in the visual pigments of the retina totally ignore this "irrelevant information". The same is true of auditory information; well over 99% of the "Source Information" is discarded before the information stream ever exits the inner ear. While this information has great relevance to a modern telephone modem, it has none at all to a human observer.
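To make the Source-vs-Channel Coding distinction concrete, here is a minimal numpy sketch, with hypothetical numbers of my own choosing (a toy, not a model of the retina or inner ear): a "channel coder" that keeps only the band an observer cares about discards the vast majority of what a lossless source coder would preserve.

```python
import numpy as np

# Toy "source": one second of broadband signal, with energy at all frequencies.
rng = np.random.default_rng(0)
fs = 44100                               # sample rate in Hz (hypothetical)
source = rng.standard_normal(fs)

spectrum = np.fft.rfft(source)
freqs = np.fft.rfftfreq(fs, d=1.0/fs)

# Toy "channel coder": keep only the band relevant to the observer
# (here, a telephone-like 300-3400 Hz band); discard everything else.
keep = (freqs >= 300) & (freqs <= 3400)
coded = np.where(keep, spectrum, 0.0)

print(f"spectral bins discarded: {1 - keep.mean():.1%}")  # roughly 86% in this toy
```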

Since, as Shannon demonstrated, all channels have a limited information-carrying capacity, it is imperative for any multi-stage information-processing observer to remove capacity-consuming "irrelevant information" as quickly as possible. This presents a "chicken and egg" dilemma that has been debated since at least the time of Socrates, 2500 years ago: how can you find what you are looking for, when you don't even know what you are looking for?

Nevertheless, as I pointed out in the essay, when you do know, such as when you have created an experiment in which only a single frequency or energy exists, looking for (attempting to observe and model) a Fourier superposition, rather than a single frequency or energy, is a positively dumb thing to do. It is no wonder that there is so much "weirdness" in the interpretations given to such inappropriate models.

You stated that "Your view... seem to suggest most of the world information content is actually algorithmic random, hence not "capturable" by mathematical equations". That is not my view. My view is that much of the "world information content" is very predictable. HOWEVER, the function of any good design for a sensor, instrument, or observer, in other words a Channel Coder, is to make sure that its output is devoid of all such redundant predictabilities. Hence, although the world may not be random, any good channel coder will render all observations of that world into algorithmically random output. One does not need to observe the solar spectrum today, precisely because one can predict that it will look the same tomorrow.

Evolution via natural selection has ensured that biological observers do KNOW what they are looking for, and actively and very, very effectively avoid ever looking at anything other than what they are looking for. Consequently, equations may very well be able to capture the "Source Information" about observed elementary particles. But they cannot capture the "Source Information" of a human observer attempting to interpret any information. Such an observer has spent its entire life recording outputs, and basing all its behaviors, on sensory channel coders that are attempting to produce algorithmically random inputs to the brain. The brain's function is then to look for "higher level" correlations between these multi-sense, multi-time inputs, in order to generate higher-level models of exploitable, predictable correlations.
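A minimal sketch of that last point, using a toy first-order predictable "world" of my own construction: a coder that transmits only what its own prediction fails to account for produces a nearly uncorrelated, random-looking stream, even though the world itself is highly predictable.

```python
import numpy as np

rng = np.random.default_rng(1)
n, a = 10_000, 0.99                      # a = sample-to-sample predictability
e = rng.standard_normal(n)
world = np.zeros(n)
for t in range(1, n):                    # a highly redundant "world" signal (AR(1))
    world[t] = a * world[t - 1] + e[t]

# Toy channel coder: output only the unpredictable part of each sample.
residual = world[1:] - a * world[:-1]

def lag1(x):                             # lag-1 autocorrelation
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

print(f"world    lag-1 autocorrelation: {lag1(world):+.3f}")     # near +1: redundant
print(f"residual lag-1 autocorrelation: {lag1(residual):+.3f}")  # near 0: looks random
```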

Unfortunately, for the better part of a century, quantum physicists have thought they should be looking for superpositions. But as Josh Billings (Henry Wheeler Shaw) once said:

"The trouble with people is not that they don't know, but that they know so much that ain't so."

Rob McEachern

Dear Robert,

You present a very interesting, and, I believe, useful point of view here. I particularly appreciate your remarks about Bell's theorem. I must confess that I am not yet quite sure what I think about all your conclusions, but I certainly agree that equations (at least, the usual differential equations that make up much of the language of modern physics) carry negligible intrinsic information, and that a legitimately information-theoretic view of fundamental physics is needed. Of course, fundamental physics (particularly quantum gravity) is already trending in this direction, but I believe that paradigms predating the information age still exert significant inhibitory influence. Personally, I think that covariance (Lorentz invariance, etc.) has more to do with order theory than with Lie group symmetry, and this viewpoint is more congenial to an information-theoretic perspective. In any case, I enjoyed reading your work! Take care,

Ben Dribus

    Benjamin,

    With regard to "what I think about all your conclusions", bear in mind that my main conclusion is this:

    1) QM claims to be a good description of how "elementary particles" behave; they have a "wave-function"

    2) QM claims to be a good description of how "human observers of elementary particles" behave; they too have a "wave-function"

    I believe the first proposition is true. But the second is false.

    The problem is not that the particles behave weirdly, but that the physicists behave weirdly when they attempt to interpret their own observations and theories. They have completely misinterpreted what their own equations actually "mean". While applying the concepts of information theory to the behaviors of "the observed" would be helpful, applying them to the behaviors of "the observers" is imperative.

    On a more technical level, my conclusion is that, while treating reality as a "Fourier Superposition" may be "sufficient" for many purposes, it is neither "Necessary" nor even "Desirable", for many others. Physicists have yet to appreciate that fact.

    Rob McEachern

    Rob,

    Once again you put your finger on the problem. I agree that the first proposition is true, the second false. As I note in my essay, The Nature of the Wave Function, the assumption that wave functions are Fourier superpositions of sine waves has 'built into it' the assumption of single-frequency sinusoids of infinite extent. This has (mis)led some physicists to speak of "the wave function of the universe", and confused John Bell, who claimed: "nobody knows just where the boundary between the classical and quantum domain is situated" [p.29, 'Speakable...']. He claimed the "shifty split" between microscopic and macroscopic defies precise definition.

    And yet the physical wave described in my essay has finite extent [p.5]. It has real dimensions, and the 'trailing vortex' is finite -- typically the length of an atomic orbit [see essay]. Fourier decompositions, being of infinite extent, are believed by many to be limitless. With a real field, there is a real boundary.

    Keep fighting the good fight.

    Edwin Eugene Klingman

    Shawn,

    Personally, I do not see that much significance in the holographic principle. It is another manifestation of the problem I noted in my essay: a description of the world/reality, and the world/reality itself, have different properties. Is the holographic principle a property of the description, a property of the world, or both?

    I previously noted that the Shannon Capacity simply corresponds to the number of bits required to digitize a band-limited signal. But what does "band-limited" mean? It means the signal has been passed through a filter, which introduces correlations between "nearby" measurements of the signal; indeed, any sample-measurements made in between samples taken "sufficiently close" together (at the Nyquist sampling rate) will be so highly correlated with the Nyquist-rate samples that their values can be predicted, with arbitrarily high accuracy, from the Nyquist samples. Hence, they produce no additional "information"; thus, a higher sampling rate will not increase the amount of information in the digitized signal.
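    A minimal numeric check of that prediction claim, with a toy band-limited signal of my own choosing: sampling at the Nyquist rate, the value at any in-between instant is recovered from the Nyquist samples alone, via Whittaker-Shannon (sinc) interpolation.

    ```python
    import numpy as np

    B = 4.0                                  # signal band-limited to |f| <= B Hz
    fs = 2 * B                               # Nyquist rate
    n = np.arange(-200, 201)                 # a finite window of Nyquist-rate samples

    def signal(t):                           # toy band-limited signal (1.3 Hz and 3.7 Hz)
        return np.sin(2*np.pi*1.3*t) + 0.5*np.cos(2*np.pi*3.7*t)

    samples = signal(n / fs)

    # Predict the value at an arbitrary in-between instant from the samples alone.
    t = 0.123
    predicted = np.sum(samples * np.sinc(fs*t - n))
    print(f"true {signal(t):+.6f}   predicted {predicted:+.6f}")
    # The two agree closely; truncating the (in principle infinite) sum is the only
    # source of error, so an extra in-between measurement adds no new information.
    ```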

    Now consider an observable signal, expanding away from a source. At any given radius, R, from the source, how many samples does an observer have to take, on the surface of the sphere of radius R, in order to digitize all the information in the signal? The answer depends on whether the band-limiting filter (which may be a temporal filter, a spatial filter, or both) is at the source or at the observer. If it is a spatial filter at the source, then whatever correlations the filter produced between samples will expand along with the surface of the sphere. Consequently, the number of spatial samples required to capture all the possible information is independent of the size of the sphere. But if the filter is applied by the observer, and the same filter is used for all radii, then the number of spatial samples will increase in proportion to the area of the sphere. So which is it? According to Wiki - Holographic Principle:

    "The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects which have fallen into the hole can be entirely contained in surface fluctuations of the event horizon."

    and

    "The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary."

    Those statements imply that the Holographic Principle assumes the filter is an attribute of the observer, rather than the source. Consequently, the principle is a statement about an attribute of an observer's description, not an attribute of the source.
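    A toy version of the counting argument above, with an arbitrary correlation-patch size of my own choosing: when the filter sits at the source, the correlation patch expands with the sphere and the sample count stays constant; when it sits at the observer, the count grows with the area.

    ```python
    import numpy as np

    # Samples needed ~ (sphere area) / (area of one correlation patch).
    for R in (1.0, 10.0, 100.0):
        area = 4 * np.pi * R**2
        patch_at_source = 0.1 * R**2         # patch expands along with the sphere
        patch_at_observer = 0.1              # patch fixed by the observer's filter
        print(f"R={R:6.1f}  filter@source: {area/patch_at_source:8.1f} samples   "
              f"filter@observer: {area/patch_at_observer:12.1f} samples")
    ```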

    Rob McEachern


    Robert

    I think this article by Frank Wilczek will be interesting for you.

    Total Relativity: Mach 2004

    http://ctpweb.lns.mit.edu/physics_today/phystoday/%28356%29Total%20Relativity.pdf

      5 days later

      Robert,

      Yuri Manin is one of the smartest people of our time, an expert on the relation between physics and mathematics.

      http://www.emis.de/journals/SC/1998/3/pdf/smf_sem-cong_3_157-168.pdf

      http://www.ams.org/notices/200910/rtx091001268p.pdf

      I hope they are interesting for you.

      If you do not understand why your rating dropped, here is how ratings in the contest appear to be calculated. Suppose your rating is [math]R_1[/math] and [math]N_1[/math] is the number of people who have rated you. Then you have [math]S_1=R_1 N_1[/math] points. When someone then gives you [math]dS[/math] points, you have [math]S_2=S_1+dS[/math] points, and [math]N_2=N_1+1[/math] is the total number of people who have rated you. At the same time you will have [math]S_2=R_2 N_2[/math] points. From here, if you want [math]R_2>R_1[/math], there must be [math]S_2/N_2>S_1/N_1[/math], or [math](S_1+dS)/(N_1+1)>S_1/N_1[/math], or [math]dS>S_1/N_1=R_1[/math]. In other words, if you want to increase anyone's rating, you must give him more points [math]dS[/math] than the participant's rating [math]R_1[/math] was at the moment you rated him.

      From this it is clear that the contest has unusual rules for ratings, and hence the confusion of some participants about what has happened to their ratings. Moreover, since community ratings are hidden, some participants, unsure how to increase the ratings of others, give them the maximum 10 points. But in that case the scale of 1 to 10 points does not work, and some essays are overestimated while others drop down. In my opinion this is a bad problem with this Contest's rating process. I hope the FQXi community will change the rating process.
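      A quick numeric check of the arithmetic above (the numbers are illustrative only): a vote raises the rating only if it exceeds the current rating R1.

      ```python
      def new_rating(R1, N1, dS):
          """Rating after one more vote dS, given current rating R1 from N1 voters."""
          S1 = R1 * N1
          return (S1 + dS) / (N1 + 1)

      R1, N1 = 7.0, 10                     # illustrative current rating and vote count
      for dS in (6, 7, 8, 10):
          R2 = new_rating(R1, N1, dS)
          print(f"vote dS={dS:2d} -> R2={R2:.3f} ({'up' if R2 > R1 else 'not up'})")
      # Only dS > R1 increases the rating, exactly as derived above.
      ```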

      Sergey Fedosin

      Robert,

      I'm just a pedestrian bystander here, but I'd like to attempt a couple of observations about particle-wave duality, the double-slit experiment, and entanglement.

      - As I understand it, 'particles' can only traverse spacetime as propagating waves.

      - Conversely, particles are manifested when their propagation energy is localized (I think physically reconfigured as rest mass-energy) or absorbed.

      - Even a single particle emission propagating as a wave can simultaneously pass through two (proximal) slits, to be manifested as a single localized particle upon detection.

      Regarding entanglement, I agree with your assessment. As I understand it, entangled particles are most often physically produced from a single particle. I think that the particles' properties, or their manifestation frequencies, are entangled during that initial process...

      Jim

        Dear Robert,

        Reading your essay, I tend to agree with many of your conclusions, but there was one particular passage that does not seem correct to me. In your description of the double-slit experiment, you describe the particle detectors as only counting particles. I think it is critical to consider that they also record the location of each particle detection, which, as it happens, coincides with the apparent path of a wave. You stated:

        "In double-slit experiments, much is often made of the fact that the distribution of particles looks like an interference pattern, even when the particles are sent through the apparatus one at a time, as if that actually matters. Well, it might matter if a detector tried to measure a wave-like property like frequency or phase or a superposition. But neither of the systems just described even attempt to do that. They simply count particles and infer the frequency from the counts. It does not matter if the particles arrive all at once or one-at-a-time."

        "Why does an 'interference pattern' occur at all? Because the slits do not have Gaussian responses. They have Sinc-function-like responses, whose combination just happens to look like an interference pattern. There is no wave behavior. There are just particles in a correlated energy/frequency state. But detectors like those described do not really distinguish between particles and waves; in effect, they just count received energy quanta and then make an inference, not a measurement; an inference based on a priori information."

        IMO, the interference pattern occurs because the spatial distribution of particle detections occurs only along the path of their propagating wave forms. It is the obvious interference pattern that provides physical evidence of the wave distribution from each slit, even when a single quantum passes through the system at any given time.

        It is for this reason that I conclude that particles propagate only as waves, and are localized as discrete particles.

        BTW, I'm only a retired information systems analyst - not at all a physicist or mathematician, so my perspective may differ. I found your article interesting to the extent that I could follow it, but frankly I had envisioned a somewhat different discussion based on the title.

        As Benjamin Dribus discussed in his essay, theories are evaluated on the basis of their success in explaining and predicting physical phenomena. However, I think that far too often a mathematical formulation that successfully predicts the outcomes of physical processes is merely presumed to accurately represent the explanation of the causal process. In the example I'm most interested in, general relativity (GR) is thought to accurately describe how gravitation physically works, in contrast to Newton's 'attractive force'. However, IMO GR very successfully describes only the effects of gravitation, in the context of the described abstract system of dimensional coordinates. I certainly do not think that the dimensions of spacetime directly cause any physical effects...

        Similarly, I think that a mathematical formulation that accurately predicts observed physical phenomena, like those requiring compensatory dark matter or dark energy, for example, does not necessarily describe the actual physical processes producing those effects, any more than Ptolemy's ability to predict the motions of the planets in the sky was proof of the physical process presumed to produce those motions. Well, maybe just a little...

        Yours is a very interesting essay - I enjoyed it.

        Sincerely, Jim

        Dear Robert!

        Great essay and profound ideas! Obviously the new physics of the information age is not just "physics formulas" and "physics forms"... The highest score. Good luck in the contest. Sincerely, Vladimir

        James,

        Interpreting the double-slit pattern as being produced by interfering waves has been standard for decades. But I perceive two major problems with that interpretation:

        First, why does it look like the Fourier Transform of the double-slit geometry? This "pattern" is independent of the existence of particles, waves, physics, or physicists. In other words, the information seems to come from the slits, not from the entities passing through the slits, whether particles or waves. The latter seem to merely act like the carrier of a radio transmission, while the information modulated onto the carrier comes entirely from the geometry of the slits.

        Second, as described in the second half of my post on Sept. 7, in response to Inger Stjernqvist, the pattern can just as easily be described as a "particle scattering pattern" as a "wave interference pattern". Consequently, it is not NECESSARY to view the latter as the only possibility.

        Lastly, regarding my choice of title: formulae that are mathematically identical can be interpreted as corresponding to very, very different physical realities. As I pointed out in other posts, one cannot actually observe a "wave-function"; one can only observe a probability distribution that seems to correspond to the magnitude of the "wave-function". But that magnitude is mathematically identical to the output of a filter-bank that simply histograms particles (hence the correspondence with probability distributions); and that filter-bank does not depend on the existence of "wave-functions", de Broglie frequencies, Fourier superpositions, entanglement, or any of the other supposed wavelike properties. In other words, none of those properties are NECESSARY to explain what is going on. They are merely SUFFICIENT.
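        A minimal numeric sketch of both points, using hypothetical slit dimensions of my own choosing: the fringe pattern is computed from the slit geometry alone (the squared magnitude of its Fourier transform), and a histogram of particles drawn from that distribution one at a time converges to the same pattern, with nothing wave-like measured per arrival.

        ```python
        import numpy as np

        # Far-field pattern from the slit geometry alone: |FT(aperture)|^2.
        N = 8192
        x = np.linspace(-1, 1, N)        # aperture-plane coordinate (arbitrary units)
        w, d = 0.02, 0.2                 # hypothetical slit width and separation
        aperture = ((np.abs(x - d/2) < w/2) | (np.abs(x + d/2) < w/2)).astype(float)
        pattern = np.abs(np.fft.fftshift(np.fft.fft(aperture)))**2
        pdf = pattern / pattern.sum()    # the fringes, as a probability distribution

        # Particles arrive one at a time; the detector merely counts where each lands.
        rng = np.random.default_rng(2)
        arrivals = rng.choice(N, size=50_000, p=pdf)
        counts, _ = np.histogram(arrivals, bins=200)
        # 'counts' reproduces the fringes of 'pattern': the "interference" is inferred
        # from a histogram of counts; no frequency, phase, or superposition is measured.
        ```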

        Rob McEachern

        Dear Robert,

        You are right in emphasizing that the amount of information written in states is much more than that corresponding to evolutions, i.e. the physical law. But you overlooked the huge algorithmic compression of the physical law. Moreover, the goal of physics is to connect preparations with observations, i.e. to make predictions with initial conditions known. Inducing a mechanism from just observations is speculative, as it often is in cosmology.

        My best

        Mauro

          Mauro,

          Far from overlooking "the huge algorithmic compression", I noted that it is the very sparse information content of the phenomena that physicists have chosen to observe that has made such large compressions possible.

          I then noted that the "problem of interpretation" is that the behavior of the "observer", as opposed to the "observed", has never been based on "predictions with initial conditions known", precisely because, unlike the "observed", the "observer" is not a sparse-information-content phenomenon. By assuming they can treat the "observer" in the same manner that they treat the "observed", physicists have made a very bad assumption.

          Rob McEachern

          Rob,

          I read through your Sept. 7 story - I empathize with your perspective (see my brief essay). As basically a retired information systems analyst myself, it seems to me that the slits encode additional 'particle' location-selection information within the separated signals.

          Not being indoctrinated by any physics education, IMO the fundamental difference between detected particles and particles propagating as waves is that detected particles are physically localized, while propagating waves are physically distributed in space and time. As such, a localized particle cannot physically traverse spacetime, and the location of a propagating wave cannot be definitively determined without producing a detected, non-propagating particle.

          Why would the interference pattern disappear if the detection screen were moved too near the slits? Let me put it simply: if you shine your flashlight on the house across the street, a much larger area will be (dimly) illuminated by the reflection of dispersed photons than if you put your hand in front of the lens.

          Waves passing through two slits must disperse through spacetime before their signals can physically interact. Likewise, if two slits are separated by a distance that exceeds the amplitude of the input emitted wave, particles will be detected behind no more than one slit.

          Regardless of what the consensus of physicists is, I think these observations support the interpretation that waves propagate; particles do not.

          Thanks for your consideration, Jim

          Dear Rob,

          As I said in my first comment, your essay was one of the best. I'm glad everyone else agreed with me.

          Edwin Eugene Klingman


          Many thanks for your fine essay!

          I suspect that many of the FQXi authors have experienced criticisms from referees and editors who have not considered your argument. I too have a small collection of journal referee comments stating that my nuclear model is "inconsistent with the uncertainty principle" and therefore "not quantum mechanical" and therefore simply wrong - no matter what kind of agreement with experimental data is found.

          The antidote for what has become a worldwide scandal would be to append your essay to every discussion of the uncertainty principle in the textbooks!

          5 days later

          Dear Rob McEachern,

          Your focus on the uncertainty principle receives some support in Physical Review Letters 109, 100404 (7 Sept 2012) in which the authors experimentally observe a violation of Heisenberg's "measurement-disturbance relationship" and demonstrate Heisenberg's original formulation to be wrong.

          Edwin Eugene Klingman


          Robert,

          You certainly mistook me. I never claimed that a transformation measures. While time is commonly considered a basic physical quantity, mathematically trained EEs like me do not have a problem with the alternative choice of frequency as a basic physical quantity. Neither the measurable (elapsed) time nor the measurable frequency may change its sign. This physical restriction is not made in the mathematical model if we are using positive and negative numbers (IR). There is a tailor-made mathematics in IR+: the complex Fourier transformation (FT) belongs to IR, while the real-valued cosine transformation (CT) belongs to IR+. A Hendrik van Hees blamed me for damaging the reputation of my university because I argued that there is no loss of information, except for the arbitrarily chosen point of reference, when FT is replaced by CT. MP3 proves me correct.

          I would appreciate it if you were in a position to agree or disagree with my Fig. 2. Notice that CT does not need Heaviside's trick. It is just a clean mathematical flip-flop: the CT of something already cosine-transformed yields the original function. The FT of a measured function of time includes the addition of something unreal. The same is true for the FT of a measured frequency.
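          A minimal sketch of the no-loss claim, using an orthonormal DCT-II of my own construction (here the transpose serves as the inverse; for the symmetric DCT-I variant the transform is, as Eckard says, its own flip-flop): the real cosine transform is exactly invertible, so nothing is lost by dropping the complex FT except the arbitrary point of reference.

          ```python
          import numpy as np

          def dct_matrix(N):
              """Orthonormal DCT-II matrix: rows are sampled cosines, C @ C.T == I."""
              k = np.arange(N)[:, None]
              n = np.arange(N)[None, :]
              C = np.sqrt(2.0 / N) * np.cos(np.pi * (n + 0.5) * k / N)
              C[0, :] /= np.sqrt(2.0)       # rescale the DC row for orthonormality
              return C

          N = 256
          C = dct_matrix(N)
          x = np.random.default_rng(3).standard_normal(N)  # any real "measured" signal
          x_rec = C.T @ (C @ x)                            # inverse transform = transpose
          print(np.max(np.abs(x - x_rec)))                 # ~1e-15: no information lost
          ```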

          Therefore I see a bug in the interpretation of quantum mechanics.

          Eckard