If you do not understand why your rating dropped: as far as I can tell, ratings in the contest are calculated in the following way. Suppose your rating is [math]R_1 [/math] and [math]N_1 [/math] is the number of people who have rated you. Then you have [math]S_1=R_1 N_1 [/math] points. If someone then gives you [math]dS [/math] points, you have [math]S_2=S_1+ dS [/math] points, and [math]N_2=N_1+1 [/math] is the total number of people who have rated you. At the same time you will have [math]S_2=R_2 N_2 [/math] points. From here, if you want [math]R_2 > R_1 [/math], there must be: [math]S_2/ N_2>S_1/ N_1 [/math], or [math] (S_1+ dS) / (N_1+1) >S_1/ N_1 [/math], or [math] dS >S_1/ N_1 =R_1[/math]. In other words, if you want to increase someone's rating, you must give him more points [math]dS [/math] than the participant's rating [math]R_1 [/math] was at the moment you rated him. From this it can be seen that the contest has special rules for ratings, and this is why some participants misunderstand what has happened to their ratings. Moreover, since community ratings are hidden, some participants are not sure how to increase the ratings of others and give them the maximum of 10 points. But in that case the scale of 1 to 10 points does not work as intended: some essays are overestimated and some are dragged down. In my opinion this is a serious problem with the contest rating process. I hope the FQXi community will change the rating process.
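A minimal sketch of the arithmetic above (my own reconstruction of the contest's averaging, not official code; the function name is mine):

```python
def new_rating(R1, N1, dS):
    """Rating after one more vote of dS points, given rating R1 from N1 voters."""
    S1 = R1 * N1                  # total points accumulated so far
    return (S1 + dS) / (N1 + 1)   # S2 / N2, the new average rating

# A rating of 8.0 from 10 voters drops even after a fairly high 7-point vote,
# and only a vote above 8.0 can raise it:
print(new_rating(8.0, 10, 7))   # 7.909...
print(new_rating(8.0, 10, 9))   # 8.090...
```

So under these rules a respectable 7-out-of-10 vote still lowers an 8.0-rated essay, which is exactly the source of the confusion described above.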

Sergey Fedosin

Robert,

I'm just a pedestrian bystander here, but I'd like to attempt a couple of observations about particle-wave duality, the double-slit experiment, and entanglement.

- As I understand, 'particles' can only traverse spacetime as propagating waves.

- Conversely, particles are manifested when their propagation energy is localized (I think physically reconfigured as rest mass-energy) or absorbed.

- Even a single particle emission propagating as a wave can simultaneously pass through two (proximal) slits, to be manifested as a single localized particle upon detection.

Regarding entanglement, I agree with your assessment. As I understand, entangled particles are most often physically produced from a single particle. I think that the particles' properties, or their manifestation frequencies, are entangled during that initial process...

Jim

    Dear Robert,

    Reading your essay, I tend to agree with many of your conclusions, but there was one particular passage that did not seem correct to me. In the description of the double-slit experiment, you describe the particle detectors as only counting particles. I think it is critical to consider that they also record the location of each particle detection which, as it happens, coincides with the apparent path of a wave. You stated:

    "In double-slit experiments, much is often made of the fact that the distribution of particles looks like an interference pattern, even when the particles are sent through the apparatus one at a time, as if that actually matters. Well, it might matter if a detector tried to measure a wave-like property like frequency or phase or a superposition. But neither of the systems just described even attempt to do that. They simply count particles and infer the frequency from the counts. It does not matter if the particles arrive all at once or one-at-a-time."

    "Why does an 'interference pattern' occur at all? Because the slits do not have Gaussian responses. They have Sinc-function-like responses, whose combination just happens to look like an interference pattern. There is no wave behavior. There are just particles in a correlated energy/frequency state. But detectors like those described do not really distinguish between particles and waves; in effect, they just count received energy quanta and then make an inference, not a measurement; an inference based on a priori information."

    IMO, the interference pattern occurs because the spatial distribution of particle detections occurs only along the paths of their propagating wave forms. It is the obvious interference pattern that provides physical evidence of the wave distribution from each slit, even when only a single quantum passes through the system at any time.

    It is for this reason that I conclude that particles propagate only as waves, and are localized as discrete particles only upon detection.

    BTW, I'm only a retired information systems analyst - not at all a physicist or mathematician, so my perspective may differ. I found your article interesting to the extent that I could follow it, but frankly I had envisioned a somewhat different discussion based on the title.

    As Benjamin Dribus discussed in his essay, theories are evaluated on the basis of their success in explaining and predicting physical phenomena. However, I think that far too often a mathematical formulation that successfully predicts the outcome of physical processes is merely presumed to accurately explain the causal process. In the example I'm most interested in, general relativity (GR) is thought to accurately describe how gravitation physically works, in contrast to Newton's 'attractive force'. However, IMO GR very successfully describes only the effects of gravitation, in the context of its abstract system of dimensional coordinates. I certainly do not think that the dimensions of spacetime directly cause any physical effects...

    Similarly, I think that mathematical formulations that accurately predict observed physical phenomena, like those requiring compensatory dark matter or dark energy, for example, do not necessarily describe the actual physical processes producing those effects, any more than Ptolemy's ability to predict the motions of planets in the sky was proof of the physical process presumed to produce those phenomena. Well, maybe just a little...

    Yours is a very interesting essay - I enjoyed it.

    Sincerely, Jim

    Dear Robert!

    Great essay and profound ideas! Clearly, the new physics of the information age is not just "physics formulas" and "physics forms"... The highest score. Good luck in the contest. Sincerely, Vladimir

    James,

    Interpreting the double-slit pattern as being produced by interfering waves has been the standard interpretation for decades. But I perceive two major problems with it:

    First, why does it look like the Fourier Transform of the double-slit geometry? This "pattern" is independent of the existence of particles, waves, physics or physicists. In other words, the information seems to come from the slits, not the entities passing through the slits, whether particles or waves. The latter seem merely to act like the carrier of a radio transmission; the information modulated onto the carrier comes entirely from the geometry of the slits.
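    This can be seen directly in a few lines of code. The following toy sketch (my own construction, with arbitrary slit sizes) computes nothing but the squared magnitude of the FFT of a static two-slit aperture; no particles or waves propagate anywhere in it, yet the familiar fringes, nulls and all, are already present in the geometry:

```python
import numpy as np

N = 4096
aperture = np.zeros(N)
aperture[1000:1064] = 1.0   # first slit, 64 samples wide
aperture[1256:1320] = 1.0   # second slit, centers 256 samples apart

# far-field style "interference pattern": just |FFT|^2 of the geometry
pattern = np.abs(np.fft.fft(aperture))**2

# The fringe nulls fall exactly where the slit-separation cosine vanishes,
# at bin k = N / (2 * 256) = 8; bin 16 is an adjacent bright fringe.
print(pattern[8] / pattern[0])    # ~0 (null)
print(pattern[16] / pattern[0])   # bright fringe
```

    The fringe spacing depends only on the slit separation, and the envelope only on the slit width: pure geometry.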

    Second, as described in the second half of my post on Sept. 7, in response to Inger Stjernqvist, the pattern can just as easily be described as a "particle scattering pattern" as a "wave interference pattern". Consequently, it is not NECESSARY to view the latter as the only possibility.

    Lastly, regarding my choice of title: formulae that are mathematically identical can be interpreted as corresponding to very, very different physical realities. As I pointed out in other posts, one cannot actually observe a "wave-function"; one can only observe a probability distribution that seems to correspond to the magnitude of the "wave-function". But that magnitude is mathematically identical to the output of a filter-bank that simply histograms particles (hence the correspondence with probability distributions); and that filter-bank does not depend on the existence of "wave-functions", de Broglie frequencies, Fourier superpositions, entanglement, or any of the other supposed wavelike properties. In other words, none of those properties are NECESSARY to explain what is going on. They are merely SUFFICIENT.

    Rob McEachern

    Dear Robert,

    You are right in emphasizing that the amount of information written in states is much more than that corresponding to evolutions, i.e. the physical law. But you overlooked the huge algorithmic compression of the physical law. Moreover, the goal of physics is to connect preparations with observations, i.e. to make predictions with initial conditions known. Inducing a mechanism from observation alone is speculative, as it often is in cosmology.

    My best

    Mauro

      Mauro,

      Far from overlooking "the huge algorithmic compression", I noted that it is the very sparse information content of the phenomena that physicists have chosen to observe that has made such large compressions possible.

      I then noted that the "problem of interpretation" is that the behavior of the "observer", as opposed to the "observed", has never been based on "predictions with initial conditions known", precisely because, unlike the "observed", the "observer" is not a sparse-information-content phenomenon. By assuming they can treat the "observer" in the same manner that they treat the "observed", physicists have made a very bad assumption.

      Rob McEachern

      Rob,

      I read through your Sept. 7 post - I empathize with your perspective (see my brief essay). As basically a retired information systems analyst myself, it seems to me that the slits encode additional 'particle' location-selection information within the separated signals.

      Not being indoctrinated by any physics education, I see the fundamental difference between detected particles and particles propagating as waves as this: detected particles are physically localized, while propagating waves are physically distributed in space and time. As such, a localized particle cannot physically traverse spacetime, and the location of a propagating wave cannot be definitively determined without producing a detected, non-propagating particle.

      Why would the interference pattern disappear if the detection screen were moved too near the slits? Let me put it simply: if you shine your flashlight on the house across the street, a much larger area will be (dimly) illuminated by the reflection of dispersed photons than if you put your hand in front of the lens.

      Waves passing through two slits must disperse through spacetime before their signals can physically interact. Likewise, if two slits are separated by a distance that exceeds the amplitude of the emitted input wave, particles will be detected behind no more than one slit.

      Regardless of what the consensus of physicists may be, I think these observations support the interpretation that waves propagate; particles do not.

      Thanks for your consideration, Jim

      Dear Rob,

      As I said in my first comment, your essay was one of the best. I'm glad everyone else agreed with me.

      Edwin Eugene Klingman

      • [deleted]

      Many thanks for your fine essay!

      I suspect that many of the FQXi authors have experienced criticisms from referees and editors who have not considered your argument. I too have a small collection of journal referee comments stating that my nuclear model is "inconsistent with the uncertainty principle" and therefore "not quantum mechanical" and therefore simply wrong - no matter what kind of agreement with experimental data is found.

      The antidote for what has become a worldwide scandal would be to append your essay to every discussion of the uncertainty principle in the textbooks!

      5 days later

      Dear Rob McEachern,

      Your focus on the uncertainty principle receives some support in Physical Review Letters 109, 100404 (7 Sept 2012) in which the authors experimentally observe a violation of Heisenberg's "measurement-disturbance relationship" and demonstrate Heisenberg's original formulation to be wrong.

      Edwin Eugene Klingman

      • [deleted]

      Robert,

      You certainly mistook me. I never claimed that a transformation measures. While time is commonly considered a basic physical quantity, mathematically trained EEs like me have no problem with the alternative choice of frequency as a basic physical quantity. Neither the measurable (elapsed) time nor the measurable frequency can change its sign. This physical restriction is not respected in the mathematical model if we use positive and negative numbers (IR). There is a mathematics tailor-made for IR+: the complex Fourier transformation (FT) belongs to IR, while the real-valued cosine transformation (CT) belongs to IR+. One Hendrik van Hees blamed me for damaging the reputation of my university because I argued that there is no loss of information, except for the arbitrarily chosen point of reference, when the FT is replaced by the CT. MP3 proves me correct.

      I would appreciate it if you were in a position to agree or disagree with my Fig. 2. Notice that the CT does not need Heaviside's trick. It is just a clean mathematical flip-flop: the CT of something already cosine transformed yields the original function. The FT of a measured function of time includes the addition of something unreal. The same is true for the FT of a measured function of frequency.

      Therefore I see a bug in the interpretation of quantum mechanics.

      Eckard

      Eckard,

      I'm not sure what you mean by "agree or disagree on my Fig. 2". I agree that the cosine transform involves real-valued functions, and that the Fourier Transform involves complex-valued functions. But what follows from this? It does not follow that either is any better or worse than the other at describing real observations. They just describe them in different ways.

      What is much more significant is the number of "components" employed in those descriptions. Why choose to describe a single frequency as a "transform" involving a superposition of many frequencies, when you know, a priori, that there is only a single frequency present, by experimental design?

      If you wanted to, you could use either of the above transforms to describe a straight line segment. But why would you want to?

      Rob McEachern

      • [deleted]

      Robert,

      When you suggested describing a straight line segment equally well with CT or FT, you tacitly assumed a segment out of a line of all times or all frequencies from minus infinity to plus infinity. However, negative frequencies are obviously not measurable, and as my Fig. 1 illustrates, the same holds for negative elapsed time. I am trying to make you aware of the consequences of this given-in-reality restriction to one-sided quantities (IR+), which has been widely ignored so far. In principle it is possible to shift the points t=0 or omega=0 at will and use IR instead. However, you will certainly agree that the natural zeros of frequency, elapsed time, wave number, distance, etc. are reasonable, even if the block universe denies this. Use of IR instead of the tailor-made IR+ introduces a redundant apparent symmetry, cf. e.g. Ken Wharton. Our usual notion of time adds an arbitrarily chosen, non-natural point of reference.

      Perhaps I need not explain why we use integral transformations in signal processing. You argued yourself that it is more appropriate to consider a single frequency instead of infinitely many time components, i.e. points in the time domain. The other way round, a single step in the time domain can be thought of as infinitely many frequency components.

      Since you were trained as a physicist, you certainly learned to overlook a trifle: via FT, a function in the x domain that is not just real-valued but also unilateral corresponds to a quantity with Hermitian symmetry in the complex Y domain, and vice versa. You may substitute either time or frequency for x.

      You wrote: "I'm not sure what you mean by "agree or disagree on my Fig. 2". I agree that the cosine transform involves real-valued functions, and that the Fourier Transform involves complex-valued functions. But what follows from this? It does not follow that either is any better or worse than the other at describing real observations. They just describe them in different ways."

      First, for a one-sided original function of x, the CT does not introduce redundant (unphysical) data. A one-sided function of x can immediately be cosine transformed into a likewise one-sided function of y. I hope you will now agree on this.

      Secondly, the FT from x to Y implies a preparatory, fictitious analytic continuation from the objectively given IR+ into a selected IR. Heaviside's trick sets the data that are missing in IR+ (at negative arguments) equal to zero and then splits the function into mutually canceling even and odd components. A correct interpretation of the results one finally gets in the complex Y domain therefore requires a complete inverse FT, including inversion of the agreed analytic continuation. Neglect of this trifle can cause serious misinterpretation. Check this yourself with respect to QM.

      Thirdly, the often-uttered guess that the complex representation is the most general one is wrong. The Y domain contains an arbitrarily chosen, redundant x-continuation which does not immediately relate to the likewise arbitrary, redundant y-continuation in the X domain.
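      The first point is easy to check numerically. Here is a sketch (mine, independent of Fig. 2) using the DCT-II convention, one common discrete form of the CT: a real cosine matrix maps N real samples to N real coefficients and back, with no complex-valued, redundant data anywhere.

```python
import numpy as np

N = 8
n = np.arange(N)
C = np.cos(np.pi * np.outer(n, n + 0.5) / N)   # real cosine basis, one row per k

D = C @ C.T
assert np.allclose(D, np.diag(np.diag(D)))     # the real rows are orthogonal

x = np.random.default_rng(2).standard_normal(N)   # a finite one-sided record
y = C @ x                                         # N real coefficients
assert np.allclose(C.T @ (y / np.diag(D)), x)     # original record recovered exactly
```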

      Did you rethink your utterance "Counting downwards is every bit as "real" as counting upwards"? I think it does not matter whether you count your age upwards or downwards. You are permanently getting older and hopefully wiser.

      Eckard Blumschein

      Eckard,

      I did not tacitly assume "a segment out of a line of all time or all frequencies from minus infinity to plus infinity." I explicitly stated that I was considering only a finite line SEGMENT, rather than an infinite line, precisely to avoid any infinities of time.

      "Perhaps I need not explain why we are using integral transformations in signal processing." Most actual signal processing, involving transformations, is performed digitally, and uses discrete rather than integral transformations. Being discrete, they do not extend over either infinite times or infinite frequencies.

      The redundancies you mention are real. But they only exist when the function being transformed is a single, real function. When one considers pairs of real functions, like the real and imaginary components of a complex function, the redundancies no longer exist. Pairs of real functions are just as real as individual real functions. The decision to treat such pairs as a single complex function was merely a matter of convenience; in the days before computers existed, multiplying (via pencil-and-paper computation) complex exponentials was much easier than multiplying trigonometric functions. Hence the popularity of complex notation.
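      To make the point concrete, here is a standard FFT identity (my own illustration): the Hermitian redundancy in the spectrum of one real function is exactly the room needed to carry a second real function in the same complex transform, so for pairs of real functions no redundancy remains.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(256)    # first real signal
b = rng.standard_normal(256)    # second real signal

Z = np.fft.fft(a + 1j * b)          # one complex transform of the pair
Zr = np.conj(np.roll(Z[::-1], 1))   # conj(Z[-k]): the "redundant" reflection

A = (Z + Zr) / 2      # Hermitian part = spectrum of a
B = (Z - Zr) / (2j)   # anti-Hermitian part = spectrum of b

assert np.allclose(np.fft.ifft(A).real, a)   # both signals recovered,
assert np.allclose(np.fft.ifft(B).real, b)   # with no redundancy left over
```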

      Rob McEachern

      Robert,

      Why don't I manage to explain these serious mistakes to you? I will try again. Did you understand that IR stands for the set of all real numbers, while IR+ stands for the set of all positive real numbers?

      You argued that the problems with infinity can be avoided by using "discrete rather than integral transformations". Isn't a discrete CT or FT also an integral transformation? Aren't the (positive or negative) integer numbers and the (only positive) natural numbers IN also infinite?

      I write IR and IN for the blackboard bold letters, and IR+ for the positive reals.

      A segment can be part of IR or of IR+. You can imagine IR+ as an IR folded together symmetrically with respect to zero. However, does an IR that is symmetrical w.r.t. zero make sense? If every negative value is identical, except for the sign, with its positive counterpart, then this pair is a pair only in the mathematical sense, without physical meaning.

      Did you now understand why the CT is tailor-made, and why apparently physical symmetries are artifacts of mathematical tricks that were properly performed but not correctly interpreted?

      Eckard


      Eckard,

      You asked:

      "Did you understand that IR stands for the entity of all real numbers while IR stands for the entity of all positive real numbers?" Yes.

      "Isn't a discrete CT or FT also an integral transformation?" No. Integral Transforms exploit the fact that trigonometric functions are orthogonal with respect to continuous integration. Discrete transforms exploit a different fact; namely that trigonometric functions are also orthogonal with respect to discrete, finite summations.

      I fail to see how CT usage of only positive numbers solves any of the problems with physical interpretation. CT still relies on the principle of superposition, just like the FT. Assuming that the MATHEMATICAL principle of superposition is a PHYSICAL principle is the problem. The problem is not the usage of real versus complex functions.

      Rob McEachern

      • [deleted]

      Robert,

      As far as I know, orthogonality is a pretty general property: two vectors are orthogonal if and only if their dot product is zero. This holds in IR, IR+, and IN.

      Well, I should rather have said: "Isn't a DFT an FT, and a DCT a CT?" Of course, the DFT is a transform for Fourier analysis of finite-domain, discrete-time functions. This is, however, irrelevant to my argument.

      You wrote: "I fail to see how CT usage of only positive numbers solves any of the problems with physical interpretation. CT still relies on the principle of superposition, just like the FT. Assuming that the MATHEMATICAL principle of superposition is a PHYSICAL principle is the problem. The problem is not the usage of real versus complex functions."

      I am curious how you will explain why superposition is not a physical principle.

      Did you not understand my Fig. 1? Let me tell a joke that ridicules unphysical mathematical reasoning:

      There are three people in a room. Then five people are leaving this room. Ergo two people have to come in in order to make the room empty.

      The consequence I am trying to make aware of is found in the essay by Ken Wharton.

      Eckard

      8 days later
      • [deleted]

      Hi Robert,

      From what I've been told about the original papers on the holographic principle, it is a statement that there is a kind of gauge redundancy that ultimately separates the root states from the alias states brought on by noise and phase distortion. As far as I'm concerned, this is too similar to the dimensional reduction of signal space in Shannon's theory for Shannon's theory to be cast aside in a cavalier fashion.

      The major critical difference between this Shannon-esque point of view and the traditional pre-holographic view is that the Shannon POV shows that the states "leak" out of the black hole right from the start; there is no information-loss paradox when you go this non-traditional route. The traditional view sees the energy leak out due to the noise and phase distortion in a similar fashion, but there is ambiguity as to what happens to the states (do they stay behind with the black hole proper, do they leak out, do they vanish?). If you prefer to say that the holographic principle is bunk because it does not account for Shannon's work, then fine, but you're basically discrediting Shannon too, and I'm left unimpressed by the raw butchery.

      I'm not really sure if your other comment is for or against black hole complementarity.

      - Shawn

      • [deleted]

      P.S. Perhaps it's a more appetizing concept if I don't mention the word "state" and instead talk about signals, alias signals, and the Hawking radiation being a manifestation of the alias signals. Or, perhaps not.