I was not being very precise in the previous post when I stated "the Fourier Transform of the Southern slit, another sinc function, but offset in latitude." The Fourier Transform is a complex function. By "offset in latitude", I mean the latitude is encoded into the phase of the Fourier Transform. Hence, the Fourier Transforms of the North and South slits differ in phase, but not in magnitude.
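To be concrete about that statement (an illustration I am adding, idealizing each slit as a rectangular aperture of width w centered at latitude x0): the standard Fourier shift theorem gives

\[ \mathcal{F}\left\{\Pi\!\left(\frac{x-x_0}{w}\right)\right\}(k) \;=\; w\,\operatorname{sinc}\!\left(\frac{kw}{2}\right)e^{-ikx_0}, \]

so shifting the slit by x0 multiplies the same sinc function by the pure phase factor e^{-ikx_0}. The magnitude is identical for the North and South slits; only the phase records the latitude.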

Dear Robert,

If the slit geometry is/was the routing system that is/was responsible for the routing distributions (the frequencies of the particles' impacts), the inconsistency remains.

This is because you write "Consequently, the slit geometry 'is' the routing system, whenever you change it, you change the routing". Obviously, following from your statements, in delayed-choice experiments the slit geometry must have changed *after* a particle successfully went through the aperture with both slits open - because the arrival distributions have suddenly changed for those cases, despite the fact that both slits were open.

But you assumed "The routing distributions are given by the Fourier Transform of the slit geometry". Even if I consider the system to be an analog computer in the classical fashion, this computer would be limited to propagating its information, from the measurement plane ("Europe") back to the slit geometry at the "east coast of the U.S.A." (the double-slit geometry), at no more than the speed of light. If that were the case, it would contradict the experimental observations obtained in delayed-choice experiments, which rule out any backwards propagation of information at up to the speed of light. The contradiction is that, logically, your routing system cannot at the same time be 'X' and be 'NOT X', if you think of it in classical terms like analog computers and such.

Consequently, you can think of the system neither as equipped with wave-like entities nor as equipped with particle-like entities which "travel" between the U.S.A. and Europe.

Best wishes,

Stefan

    Stefan,

    An inconsistency does indeed remain. But that too is caused by additional misinterpretations of what is happening. Physicists have been using the wrong analogies, to think about this, for a very long time.

    I want you to look at a diagram, while I describe this. But to avoid violating someone's copyright, I don't want to copy and paste it here. Please open another browser window. Under Wikipedia's page for "Wheeler's delayed choice experiment", there is a heading for "External links", the first of which is "Wheeler's Classic Delayed Choice Experiment by Ross Rhodes". If you click on that, it should take you to "www.bottomlayer.com/bottom/basic_delayed_choice.htm". The figure of interest is on that page.

    Imagine "1" to be a radio transmitter antenna. "5" are two radio receivers, attached to highly directional, large, parabolic antennae. The slits at "2" create "Multi-Path" Interference. The radio antenna is simultaneously transmitting multiple stations, or channels. So there is also "Multi-Channel" Interference.

    To receive a signal with no interference, you must use a frequency band-limiting filter to remove the "Multi-Channel", and a spatial beam-limiting filter to remove the "Multi-Path". The telescopes (large parabolic antennae) at "5" act as the spatial filters.

    Next, consider that, in addition to producing the "Multi-Path", the two slits act as a crude diffraction grating, dispersing the spectrum into the interference "fringes". The fringes are the different "channels" that, when combined together, result in the "Multi-Channel"; so by "tuning" to just one fringe, you have effectively tuned to just one channel, and thereby eliminated the "Multi-Channel" Interference. Consequently, there is no surprise that you see no interference, since the apparatus filtered out both the "Multi-Path" and the "Multi-Channel". So-called "Quantum erasers", in effect, merely remove the filters to restore the interference.
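    To sketch why the fringes behave like "channels" (using the textbook grating relation, an illustration I am adding rather than a claim about the exact geometry of the figure): for slit separation d, the bright fringes of order m lie at angles satisfying

    \[ d\,\sin\theta_m \;=\; m\,\lambda , \]

    so, for a fixed nonzero order, each wavelength lands at a slightly different angle. Selecting a narrow range of angles (one "fringe") therefore preferentially selects a narrow band of wavelengths, i.e., one "channel".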

    The last sentence in the Introduction, on the Wiki page cited above, reads "The fundamental lesson of Wheeler's delayed choice experiment is that the result depends on whether the experiment is set up to detect waves or particles." Yes indeed it does, and for exactly the reasons given in my essay and the related posts under both my essay and Lorraine Ford's. More specifically, it depends on whether the experiment was set up to detect a Fourier Superposition (waves) or a single frequency (particle). Thus, on page 6 of my essay, I stated that "...the correct model for the observations is not a superposition, but is indeed a single frequency wave..." Having the correct model matters. It matters a LOT; why it matters was discussed, at some length, in the posts under Lorraine Ford's essay. Basically, it boils down to this: knowing what to look for enables one to recognize and filter out all the "crap" that does not look like what one is looking for.

    This brings us to the two Gaussian filters described in my essay. Look again at the figure cited above. Replace each of the two receiver/telescopes at "5" with a pair of identical receiver/telescopes. Carefully design the frequency filter passbands to have Gaussian responses, as described in the essay. Now, as described, you have a pair of particle counters that can be used to INFER, not measure, the single frequency, via the ratio of particle counts, with an accuracy that greatly exceeds the uncertainty principle. That is why having the correct model matters.
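    To see how the ratio encodes the frequency (a minimal sketch, assuming for illustration that the two Gaussian passbands share a common RMS width sigma and are centered at f1 and f2; the essay defines its own passbands): with a(f) = exp(-(f-f1)^2/(2 sigma^2)) and b(f) = exp(-(f-f2)^2/(2 sigma^2)),

    \[ \ln\frac{a(f)}{b(f)} \;=\; \frac{(f-f_2)^2-(f-f_1)^2}{2\sigma^2} \;=\; \frac{(f_1-f_2)\,(2f-f_1-f_2)}{2\sigma^2}, \]

    which is linear in f, so

    \[ f \;=\; \frac{f_1+f_2}{2} \;-\; \frac{\sigma^2}{f_2-f_1}\,\ln\frac{a(f)}{b(f)} . \]

    A noiseless ratio thus pins down the frequency exactly; with counting noise, the accuracy depends on how well the ratio of counts can be estimated.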

    Next, consider the following:

    The Fourier transform of a Gaussian function is itself another Gaussian function. Furthermore, a Gaussian function yields the minimum possible time-bandwidth product, and thus the minimum "uncertainty" in the Fourier uncertainty principle.
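    In symbols (using ordinary frequency f, and RMS widths of the squared magnitudes): for g(t) = exp(-t^2/(2 sigma^2)),

    \[ G(f)=\int_{-\infty}^{\infty} g(t)\,e^{-i2\pi f t}\,dt \;=\; \sigma\sqrt{2\pi}\;e^{-2\pi^2\sigma^2 f^2}, \]

    another Gaussian, and the duration-bandwidth product is

    \[ \Delta t\,\Delta f \;=\; \frac{\sigma}{\sqrt{2}}\cdot\frac{1}{2\sqrt{2}\,\pi\sigma} \;=\; \frac{1}{4\pi}, \]

    which is exactly the lower bound in the Fourier uncertainty relation \( \Delta t\,\Delta f \ge 1/(4\pi) \).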

    Returning, for the moment, to the radio signal analogy: remove the slits at "2", digitize the signal, multiply it by a Gaussian-shaped "window" (windowing), and compute a Discrete Fourier Transform (DFT). Voila! You have just constructed not just a pair of Gaussian filters, but an entire "filter bank", each consecutive pair of which can be used as described in my essay. Each of these pairs can be used to estimate the signal frequency. But, of course, due to the narrow bandwidths, pairs that are tuned closest to the signal frequency have much higher signal-to-noise ratios, and thus provide much more accurate frequency estimates. In the formula for a Fourier Transform, multiplying by the complex exponential represents a tuning operation. The subsequent integration is a lowpass filtering operation. Multiplying by the Gaussian shapes the frequency response of that lowpass filter to be another Gaussian function. Hence, a Fourier Transform can be viewed as a filter bank; a large number of tuned receivers. In a DFT, these receivers may be numbered 1, 2, 3, ... n, and are spaced "D" Hz apart.
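    Written out, one bin of the Gaussian-windowed DFT is

    \[ X_k \;=\; \sum_{n=0}^{N-1} w[n]\,x[n]\,e^{-i 2\pi k n/N}, \]

    where the complex exponential shifts ("tunes") the frequency k f_s/N down to zero, the sum acts as the lowpass filter, and the Gaussian window w[n] shapes that lowpass response into another Gaussian. For N samples taken at rate f_s, the bins are spaced D = f_s/N Hz apart, which is the spacing "D" referred to below.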

    In the formula in the essay, f = nD + D/2 - (cD/2)ln(a(f)/b(f)), the term nD + D/2 is the center frequency of a receiver pair. The amplitude ratio term, in effect, yields a fine frequency interpolation. The reason I mention all this is to point out that the amplitude (particle count) ratio only yields the frequency offset from the pair's center frequency. The complex exponential in the formula for the Fourier Transform acts as a single-stage frequency tuner, known in the old radio receiver literature as a heterodyne. Now put the two-slit "diffraction grating" back, and you have introduced a second stage of frequency tuning: a superheterodyne.
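    Here is a small numerical sketch of that interpolation (my own illustration, not the essay's exact setup: I assume a Gaussian window of RMS width sigma_t, so that each bin's magnitude response is a Gaussian of RMS width sigma_f = 1/(2 pi sigma_t) in frequency, and I take c = 2 sigma_f^2 / D^2 so that the formula above is consistent with those assumed Gaussians):

import numpy as np

# Sketch: a Gaussian-windowed DFT is treated as a bank of Gaussian "receivers"
# spaced D Hz apart, and the tone frequency is recovered from the log-ratio of
# two adjacent bin magnitudes via f = n*D + D/2 - (c*D/2)*ln(a/b).
fs = 1000.0                          # sample rate, Hz (arbitrary)
N = 1000                             # DFT length, so bins are D = fs/N = 1 Hz apart
D = fs / N
f_true = 123.37                      # the single frequency to be inferred, Hz
t = np.arange(N) / fs

sigma_t = 0.05                       # Gaussian window RMS width, seconds
window = np.exp(-0.5 * ((t - t.mean()) / sigma_t) ** 2)
x = np.cos(2 * np.pi * f_true * t)   # noiseless single-frequency signal

X = np.abs(np.fft.rfft(window * x))  # magnitudes of the "filter bank" outputs

sigma_f = 1.0 / (2 * np.pi * sigma_t)   # assumed Gaussian width of each bin, Hz
n = int(np.floor(f_true / D))           # the pair of bins straddling the tone
a, b = X[n], X[n + 1]                   # the two "particle counts"
c = 2 * sigma_f ** 2 / D ** 2           # assumed value consistent with the Gaussians
f_est = n * D + D / 2 - (c * D / 2) * np.log(a / b)

print("coarse bin estimate :", n * D, "Hz")
print("ratio-interpolated  :", round(f_est, 3), "Hz")   # ~123.37 Hz

    The coarse bin only locates the tone to the nearest 1 Hz; the ratio of the two adjacent "receiver" outputs recovers ~123.37 Hz.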

    In effect, this entire apparatus is nothing more than a crude superheterodyne tuner, feeding into a spatial filter, followed by an AM receiver. Adding the paired AM receivers converts it into an FM receiver.

    Rob McEachern

    Like the 26 letters of the English alphabet, the representations created by De Broglie and Fourier are sufficient, but not necessary, to describe probabilities of detection. De Broglie associated a frequency with an energy. But that is not necessary. Fourier enabled superpositions to be described in terms of frequencies and phases. But that is not necessary. They are merely sufficient.

    It is often asked: what, exactly, is the QM probability distribution the probability of? Why does only the magnitude of the wave function matter? Because descriptions in terms of frequency and phase are merely sufficient. They are not necessary.

    It should be clear from my previous post that the magnitude of a Fourier Transform may be viewed as a filter-bank of tuned energy detectors. The point is, they can be DIRECTLY viewed as being tuned in ENERGY, rather than frequency. For example, in the Gaussian detection curves used to construct an FM detector, one can substitute (make a change of variables) energy for frequency, and everything will still work. Instead of being viewed as a Frequency Modulation detector, it can be DIRECTLY viewed as an Energy Modulation detector. De Broglie's association of a given Energy with a given Frequency becomes superfluous. It is sufficient, but not necessary.

    Consequently, instead of viewing the description as a Fourier superposition of sinusoids, each associated with a given energy, one can directly view it as a superposition of tuned Energy Detectors. If a detector, a receptor, has a probability of detection versus energy that is a Gaussian function, then the paired detectors may be used to infer estimates of that energy, from the ratio of a pair of detectors' "total energy received" outputs. The process bypasses the wave versus particle distinction as unnecessary; what matters is energy detection. Whether you view the energy as arriving in single particles, or waves of particles, becomes superfluous.
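    Stated as a formula (assuming, for illustration, that the two receptors' Gaussian detection-versus-energy curves share a common width sigma and are centered at E1 and E2): the energy estimate reads

    \[ E \;=\; \frac{E_1+E_2}{2} \;-\; \frac{\sigma^2}{E_2-E_1}\,\ln\frac{a}{b}, \]

    where a and b are the two detectors' accumulated outputs. It is the same ratio estimator as in the frequency case, with the axis simply relabeled; no frequency, and hence no De Broglie relation, appears anywhere in it.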

    In case it is not obvious, let me point out, that when the Discrete Fourier Transform is used, the Energy Detection Filter Bank, noted in the previous posts, produces detectors that are tuned to Discrete Energies. So, if Energy is quantized, you end up with detectors, with specified probabilities of detection versus energy, tuned to each of the quantized energy values.


    Dear Robert McEachern,

    I strongly support the argument you have made elsewhere, that objective reality must not be confused with its mathematical description. See my Fig. 1.

    As for your effort to explain the physics to physicists in a simplified manner, by means of a crash course in the theory of signal processing, I view this attempt with mixed feelings.

    Among the roughly 300 contestants, you seem to be the one who has outed himself as THE expert in the usual theory of signal processing. I see this theory as affected by the same serious flaw that affects physics.

    I ask you to look at my Fig. 2 and read the accompanying text, if necessary twice. Some figures in my previous essays may also help you to understand my point. You will certainly know that MP3 works well on the basis of the cosine transform.

    When you repeatedly wrote "sufficient but not necessary", you may have understood the redundancies, at least in part. Most experts I met at DAGA and ASA meetings were not completely aware of this.

    I am curiously waiting for your comment on my Fig. 2.

    Regards,

    Eckard


      Dear Robert,

      I suppose that physicists all over the world who are concerned every day with QM experiments of the kind we discuss here surely have investigated the simple picture of emitter and receiver you have drawn.

      The reason - at least for me - that they haven't adopted this picture is also simple: it does not fit the observed facts, even though it may or may not explain the classical double-slit experiment.

      You wrote:

      "An inconsistency does indeed remain. But that too is caused by additional misinterpretations of what is happening. Physicists have been using the wrong analogies, to think about this, for a very long time."

      The question then is, what is the "right" analogy?

      Please think about the use of your model when it comes, for example, to the double-double-slit experiment. Your description, to be consistent, should then be that if you want to measure the interference pattern at one double-slit, but you do not measure it in coincidence with what happens behind the other double-slit, you *don't* obtain the interference pattern.

      But if you do measure with the same devices at both sides (the same devices that are also used for the classical double-slit experiment), you do obtain the interference pattern, by measuring it as I described in a post above.

      I wrote this post as a new thread, because there are contestants who are also interested in this discussion, and it is sometimes hard to check all the hidden replies.

      Best wishes,

      Stefan

        Hi Robert,

        As promised, I read your (very thoughtful and nicely constructed) essay.

        You write, "physicists seek to predict how physical substances behave. But they could never have predicted that cars would stop at red traffic lights, anymore than they could predict that they would drive on one side of the road in England and on the other in the United States."

        I would argue, however, that -- on the assumption of symmetry -- physicists could predict negative acceleration that compels stopping as well as going; and they could predict that a two-way channel compels cars to drive on one side of the boundary or the other.

        On the other hand, we agree a great deal on information theory and its role in physics. Particularly, " ... physicists have lost sight of the fact that the math is devoid of information and thus has very little to say about anything *interesting*." Most mathematicians would agree, I think -- mathematics isn't "about" anything any more than the alphabet is more than arbitrary symbols plus their rules for combination.

        As regards relativity, though, we'll have to remain in absolute disagreement. If there is an observer in a privileged reference frame, the whole edifice of Minkowski space collapses. The lack of such an inertial frame doesn't mean that "all systems are equivalent," as you claim -- it means that one system can be smoothly and continuously transformed into another. Big difference, and the support for this is the assumption of symmetry as referenced above.

        Good read, though -- best wishes in the contest!

        Tom

        Eckard,

        Your concern regarding the question "What is a negative frequency?" raises another question: "What is frequency?" Until frequency is defined, one cannot tell whether a negative frequency is meaningful.

        In communications theory, frequency is DEFINED to be the first derivative, with respect to time, of a time-varying phase. Thus, a positive frequency corresponds to an increasing phase angle. So, for example, on the face of a clock, if a second hand rotates clockwise, it is said to have a frequency of +1 cycle/minute. But if it rotates counter-clockwise, it is said to have a frequency of -1 cycle/minute. That seems perfectly meaningful to me. Counting downwards is every bit as "real" as counting upwards.
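        In symbols (the standard communications-theory definition, applied to the clock example): for a signal written as A cos(phi(t)), the instantaneous frequency is

        \[ f(t) \;=\; \frac{1}{2\pi}\,\frac{d\phi}{dt}, \]

        so a second hand whose phase advances by one full turn per minute has \( f = \frac{1}{2\pi}\cdot\frac{2\pi}{60\ \mathrm{s}} = +1 \) cycle per minute, while one whose phase retreats by one full turn per minute has \( f = -1 \) cycle per minute. The sign simply records the sense of rotation.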


        How about positive- and negative-curvature 2D copies of 3D space?

        See my essays:

        http://www.fqxi.org/community/forum/topic/946

        http://fqxi.org/community/forum/topic/1413

        Stefan,

        I pointed out that the delayed choice experiment, as described in the references I gave, was not merely badly designed, but badly conceived. The telescopes block the path from a slit just as surely as if the slit had been closed. That precludes any possibility of this apparatus producing an "interference pattern".

        You then wrote "I suppose that physicists all over the world,... surely have investigated the simple picture of emitter and receiver you have drawn."

        You say they surely have. I say they surely have not. Show us a better design for the experiment. Show us one that removes the "interference pattern", by merely delaying some choice, any choice, other than the choice in which you choose to block one of the paths, just as surely as if you closed a slit. We await your better design.


        Dear Robert,

        you wrote

        "I pointed out that the delayed choice experiment, as described in the references I gave, was not merely badly designed, but badly conceived. The telescopes block the path from a slit just as surely as if the slit had been closed."

        Robert, you say the telescopes block the path from a slit just as surely as if the slit had been closed. I say, if one slit were indeed "closed", why are there two similar patterns behind each slit? Your "closed slit" approach would result in a different pattern, namely the pattern that is left over after one slit has really been *closed*.

        I am sure you have a "mechanism" that explains how the two distinct telescopes mutually exclusively indicate a hit, but never both at the same time. (I don't have such a mechanism myself, besides the interpretation I gave in my essay.)

        But that's not the point I want you to consider. The point is that, for example, the double-double-slit experiment (posted above) needs an explanation which fits into your explanatory scheme. Maybe you succeed, maybe not. Anyway, for the sake of better evaluating your model, it would be interesting to see what you can say about this experiment. That's all I wanted to remark.

        Best wishes,

        Stefan


        Robert,

        Don't confuse mathematics with physics. Mathematics has unfortunately acquired the bad habit of defining anything at will. Physics is, or at least should be, bound to nature.

        I think Dirac was not horribly wrong when he said that there is no negative frequency in reality. Of course, mathematicians do not have problems with anything negative. If there were 3 persons in a room before 5 left it, then 2 have to come in to make the room empty.

        You might feel more familiar with some theories than I do, because I do not uncritically accept them. Didn't your attempt to explain negative frequency by means of the definition you mentioned just shift the question? Is there a negative elapsed time, alias absolute phase? My ears are unable to hear future signals, because I am never drunk.

        May I invite you to try and refute what at least one of my five Figs. tries to tell?

        Eckard

        Eckard,

        The cosine transform has the same fundamental problem as the complex Fourier transform, and all other such transforms. The problem is that NONE of them even ATTEMPT to measure ANY frequency, positive or negative. NONE of the things they actually measure even have the units of frequency. Instead, they all, in effect, merely predefine a set of waveforms with predefined frequencies, then measure the degree of correlation between those predefined waveforms and the input waveform.

        If one wishes to treat frequency as an actual observable, then one needs to at least attempt to observe it. An FM receiver does that. NO transform ever even tries.
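        A small illustration of the distinction (my own sketch; a complex tone is used so the phase can be read off directly): a DFT bin outputs a correlation amplitude against a frequency that was chosen before any data arrived, whereas differentiating the measured phase returns a number that actually has units of Hz.

import numpy as np

fs = 8000.0
t = np.arange(4096) / fs
f_true = 1234.5                           # deliberately NOT on the DFT grid
x = np.exp(1j * 2 * np.pi * f_true * t)   # single-frequency complex tone

# (1) Correlation against predefined waveforms: each DFT bin correlates the
#     data with a complex exponential whose frequency was fixed in advance.
bins = np.fft.fftfreq(t.size, 1 / fs)
correlations = np.abs(np.fft.fft(x))
print("best-correlated predefined frequency:", bins[np.argmax(correlations)], "Hz")
# -> the nearest grid frequency (1234.375 Hz), not 1234.5 Hz

# (2) Discriminator-style measurement: frequency as the time-derivative of the
#     observed phase, in Hz.
phase = np.unwrap(np.angle(x))
f_measured = np.diff(phase) * fs / (2 * np.pi)
print("measured instantaneous frequency    :", f_measured.mean(), "Hz")
# -> ~1234.5 Hz, with no frequency grid predefined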


        Robert

        Superb essay. I was really lifted reading it. Perhaps there may be hope for us yet! I hope you'll read mine, as I take the 'low road' of interpreting the findings themselves rather than abstracting them into numbers, and find a simple way of explaining CSL logically, consistent with Peter Jackson's brilliant analysis and mechanism (which I don't think all have grasped). I see you saw some of the potential.

        I hope you do well - my score should help - and I also hope you'll read and comment on mine.

        Regards

        Rich

        Dear Robert,

        Very nice essay indeed. I think that the transition to a more "informational" physics is on its way. One can see it all over, especially in theories of quantum gravity, but also in more traditional subfields, and even in orthogonal disciplines such as biology. I find very interesting the way you present the idea that initial conditions have more information content than the equations describing a natural phenomenon. The idea is not completely new, although your treatment is better than what I had found before. A nice philosopher who has advanced similar ideas (with whom I may strongly disagree, but he makes a good case) is James McAllister from Leiden. Your view, as does McAllister's, seems to suggest that most of the world's information content is actually algorithmically random, hence not "capturable" by mathematical equations.

        Computer programs may do better than mathematical equations in this sense because, unlike what your JPEG example suggests, there is no clear-cut distinction between program and data in general (this follows from Turing universality). As you know, compression algorithms compress the data and also encode the instructions for the decompression in the same file, making it indistinguishable from the original data itself until it self-decompresses. But applying that to the real world is, of course, already making the assumption that one can fully capture the initial conditions in some compressed form, which, in agreement with McAllister, may just not be the case, given that all the models we have of natural phenomena are approximate and there is always some data that escapes the equations and eventually makes our theories useless in the long run, due to nonlinear dynamics. In this regard, the problem may not be the poor information content of the equations, but the poor measurement of the initial conditions at our coarse-grained macroscopic reality.

        You also rightly point out that physical laws are traditionally seen as compression algorithms; this is an old idea advanced by Greg Chaitin in numerous books and papers, in the context of algorithmic information theory (my own research field). And I think it is the working hypothesis of science, and no alternative seems better so far. The main divergence I have with your point of view (and McAllister's, for that matter) concerns what is traditionally known as the Unreasonable Effectiveness of Mathematics in the Natural Sciences, to quote Eugene Wigner. This translates, in the world of lossless compression algorithms, into the astonishing fact that we are able to compress much, if not most, of natural data, up to some point, including initial conditions, and to predict, at least in the short term, many natural phenomena. So even if some data is left out, laws seem to describe a good deal of the important (to us) behaviour of natural phenomena, that is, the regular behaviour that can be synthesised for our advantage (e.g. for prediction purposes).

        All in all a very nice essay. I only regret that information enthusiasts never go beyond Shannon's information theory, which is the theory as it was left almost 60 years ago, and which is a theory of communication. The cutting-edge theory of information is currently algorithmic information theory (Kolmogorov complexity, logical depth, algorithmic probability).

          Hi Robert,

          Here's another attempt at answering your question... "So what is the big deal? What makes this so significant?"

          After reading:

          - Your essay

          - 'The Heisenberg Uncertainty Principle and the Nyquist-Shannon Sampling Theorem' by Pierre Millette

          - 'An Introduction to Information Theory: Symbols, Signals and Noise' by John Pierce

          - 'Communication in the Presence of Noise' by Claude Shannon

          I am left with the impression that Shannon and Pierce predicted that the holographic principle would become a naturally accepted concept in physics. They detail how the volume of the signal space "creeps" away from the origin of the space as the dimension of the space increases, and how there is dimensional reduction in the message space when compensating for phase "differences" (same message, different phase) that can arise when sampling the signal. At first glance, this seems to hint at how to get rid of singularities at the centres of black holes.
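          One standard way to state that "creeping away from the origin" (added here as an illustration): in n dimensions, the fraction of a ball's volume lying within radius (1 - epsilon)R of the centre is

          \[ \frac{V_n\big((1-\epsilon)R\big)}{V_n(R)} \;=\; (1-\epsilon)^n \;\longrightarrow\; 0 \quad \text{as } n \to \infty, \]

          so for large n essentially all of the volume sits in a thin shell near the surface, which is why the typical signal points concentrate there.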

          Perhaps it's not quite the same thing. On the other hand, if it's the same thing, then that's quite significant. In any case, I note that Shannon was not directly referenced in 't Hooft's first paper called 'Dimensional Reduction in Quantum Gravity'.

          - Shawn


            Hector,

            Regarding the "Unreasonable Effectiveness of Mathematics", in an earlier post, under Matt Visser's essay, and repeated somewhere under my own, I wrote:

            "In your summary, you ask "Exactly which particular aspect of mathematics is it that is so unreasonably effective?" in describing empirical reality.

            I would argue, that is not an aspect of mathematics at all, but rather, an aspect of physics. Specifically, some physical phenomena are virtually devoid of information. That is, they can be completely described by a small number of symbols, such as mathematical symbols. Physics has merely "cherry picked" these sparse-information-content phenomena as its subject matter, and left the job of describing high-information-content phenomena to the other sciences. That is indeed both "trivial and profound", as noted in your abstract."

            Regarding the effectiveness of "Shannon's information theory" as compared to "algorithmic information theory", I am very strongly of the opinion that the former is much more effective than the latter, in all the areas, like measurement theory, that have much real relevance to "an observer". The difference lies in the relative importance of "Source Coding" vs. "Channel Coding"; lossless compression algorithms, in spite of any "astonishing fact", are virtually useless in the realm of observing and measuring. One of the biggest problems hindering the advancement of modern physics is that physicists "don't get it"; contrary to popular belief, observation and measurement are not about preserving source information, they are about distinguishing "relevant information" from "irrelevant information" as quickly and efficiently as possible. A lossless Source Coder, with sunlight as its input, would preserve huge amounts of information about the solar spectrum that is absolutely irrelevant to any human observer, other than an astrophysicist. That is why the channel coders in the visual pigments in the retina totally ignore this "irrelevant information". The same is true of auditory information; well over 99% of the "Source Information" is discarded before the information stream ever exits the inner ear. While this information has great relevance to a modern telephone modem, it has none at all to a human observer.

            Since, as Shannon demonstrated, all channels have a limited information carrying capacity, it is imperative for any multi-stage information processing observer to remove capacity-consuming "irrelevant information" as quickly as possible. This presents a "chicken and egg" dilemma that has been debated since at least the time of Socrates, 2500 years ago: How can you find what you are looking for, when you don't even know what you are looking for?
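            (For reference, the capacity limit being invoked, in its best-known special case - the additive white Gaussian noise channel of bandwidth B, signal power S and noise power N - is

            \[ C \;=\; B \log_2\!\left(1+\frac{S}{N}\right) \]

            bits per second.)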

            Nevertheless, as I pointed out in the essay, when you do know, such as when you have created an experiment in which only a single frequency or energy exists, looking for (attempting to observe and model) a Fourier superposition, rather than a single frequency or energy, is a positively dumb thing to do. It is no wonder why there is so much "weirdness" in the interpretations given to such inappropriate models.

            You stated that "Your view... seems to suggest that most of the world's information content is actually algorithmically random, hence not 'capturable' by mathematical equations". That is not my view. My view is that much of the "world information content" is very predictable. HOWEVER, the function of any good design for a sensor, instrument, or observer, in other words, a Channel Coder, is to make sure that its output is devoid of all such redundant predictabilities. Hence, although the world may not be random, any good channel coder will render all observations of that world into algorithmically random output. One does not need to observe the solar spectrum today, precisely because one can predict that it will look the same tomorrow. Evolution via natural selection has ensured that biological observers do KNOW what they are looking for, and actively and very, very effectively avoid ever looking at anything other than what they are looking for. Consequently, equations may very well be able to capture the "Source Information" about observed elementary particles. But they cannot capture the "Source Information" of a human observer attempting to interpret any information. Such an observer has spent its entire life recording outputs, and basing all its behaviors on sensory channel coders that are attempting to produce algorithmically random inputs to the brain. The brain's function is then to look for "higher level" correlations between these multi-sense, multi-time inputs, in order to generate higher-level models of exploitable, predictable correlations.

            Unfortunately, for the better part of a century, quantum physicists have thought they should be looking for superpositions. But as Josh Billings (Henry Wheeler Shaw) once said:

            "The trouble with people is not that they don't know, but that they know so much that ain't so."

            Rob McEachern

            Dear Robert,

            You present a very interesting, and, I believe, useful point of view here. I particularly appreciate your remarks about Bell's theorem. I must confess that I am not yet quite sure what I think about all your conclusions, but I certainly agree that equations (at least, the usual differential equations that make up much of the language of modern physics) carry negligible intrinsic information, and that a legitimately information-theoretic view of fundamental physics is needed. Of course, fundamental physics (particularly quantum gravity) is already trending in this direction, but I believe that paradigms predating the information age still exert significant inhibitory influence. Personally, I think that covariance (Lorentz invariance, etc.) has more to do with order theory than with Lie group symmetry, and this viewpoint is more congenial to an information-theoretic perspective. In any case, I enjoyed reading your work! Take care,

            Ben Dribus

              Benjamin,

              With regard to "what I think about all your conclusions", bear in mind that my main conclusion is this:

              1) QM claims to be a good description of how "elementary particles" behave; they have a "wave-function"

              2) QM claims to be a good description of how "human observers of elementary particles" behave; they too have a "wave-function"

              I believe the first proposition is true. But the second is false.

              The problem is not that the particles behave weirdly, but that the physicists behave weirdly when they attempt to interpret their own observations and theories. They have completely misinterpreted what their own equations actually "mean". While applying the concepts of information theory to the behaviors of "the observed" would be helpful, applying them to the behaviors of "the observers" is imperative.

              On a more technical level, my conclusion is that, while treating reality as a "Fourier Superposition" may be "sufficient" for many purposes, it is neither "Necessary" nor even "Desirable", for many others. Physicists have yet to appreciate that fact.

              Rob McEachern