How about positive- and negative-curvature 2D copies of 3D space?
See my essays:
http://www.fqxi.org/community/forum/topic/946
http://fqxi.org/community/forum/topic/1413
Stefan,
I pointed out that the delayed choice experiment, as described in the references I gave, was not merely badly designed, but badly conceived. The telescopes block the path from a slit just as surely as if the slit had been closed. They thereby preclude any possibility of this apparatus producing an "interference pattern".
You then wrote "I suppose that physicists all over the world,... surely have investigated the simple picture of emitter and receiver you have drawn."
You say they surely have. I say they surely have not. Show us a better design for the experiment. Show us one that removes the "interference pattern" by merely delaying some choice, any choice, other than the choice to block one of the paths just as surely as if you had closed a slit. We await your better design.
Dear Robert,
you wrote
"I pointed out that the delayed choice experiment, as described in the references I gave, was not merely badly designed, but badly conceived. The telescopes block the path from a slit just as surely as if the slit had been closed."
Robert, you say the telescopes block the path from a slit just as surely as if the slit had been closed. I say: if one slit were indeed "closed", why is there a similar pattern behind each slit? Your "closed slit" approach would result in a different pattern, namely the pattern that is left over after one slit has really been *closed*.
I am sure you have a "mechanism" that explains how the two distinct telescopes indicate a hit mutually exclusively, but never both at the same time. (I don't have such a mechanism myself, apart from the interpretation I gave in my essay.)
But that's not the point I want you to consider. The point is that, for example, the double-double-slit experiment (posted above) needs an explanation that fits into your explanatory scheme. Maybe you succeed, maybe not. Either way, for the sake of better evaluating your model, it would be interesting to see what you can say about this experiment. That's all I wanted to remark.
Best wishes,
Stefan
Robert,
Don't confuse mathematics with physics. Mathematics has unfortunately acquired the bad habit of defining anything at will. Physics is, or at least should be, bound to nature.
I think Dirac was not horribly wrong when he said that there is no negative frequency in reality. Of course mathematicians have no problem with anything negative: if there were 3 persons in a room and 5 left it, then 2 have to come in to make the room empty.
You might feel more familiar with some theories than I do, because I do not accept them uncritically. Didn't your attempt to explain negative frequency by means of the definition you mentioned just shift the question? Is there negative elapsed time, alias absolute phase? My ears are unable to hear future signals, because I am never drunk.
May I invite you to try to refute what at least one of my five Figs. tries to tell?
Eckard
Eckard,
The cosine transform has the same fundamental problem as the complex Fourier transform, and all other such transforms. The problem is that NONE of them even ATTEMPT to measure ANY frequency, positive or negative. NONE of the things they actually measure even have the units of frequency. Instead, they all, in effect, merely predefine a set of waveforms with predefined frequencies, then measure the degree of correlation between those predefined waveforms and the input waveform.
If one wishes to treat frequency as an actual observable, then one needs to at least attempt to observe it. An FM receiver does that. NO transform ever even tries.
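To make the distinction concrete, here is a minimal Python sketch (a toy of my own; the tone frequency, sample rate, and bin indices are arbitrary choices): part (a) computes a DFT bin, which is nothing more than a correlation against a predefined waveform, while part (b) computes an FM-receiver-style estimate whose output actually has units of frequency.

```python
import numpy as np
from scipy.signal import hilbert

# Toy example: a single-frequency input tone (all values arbitrary).
fs = 1000.0                                   # sample rate, Hz
t = np.arange(1024) / fs
x = np.cos(2 * np.pi * 123.4 * t)

# (a) A DFT "measurement" is just a correlation of the input against a
#     PREDEFINED waveform of a PREDEFINED frequency; its output is a
#     correlation value, not a quantity with units of frequency.
def dft_bin(signal, k):
    n = np.arange(len(signal))
    template = np.exp(-2j * np.pi * k * n / len(signal))
    return np.sum(signal * template)

print(abs(dft_bin(x, 126)), abs(dft_bin(x, 200)))    # strong vs. weak correlation

# (b) An FM-receiver-style estimate: track the rate of change of phase.
#     The output actually has units of frequency (Hz).
phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)
print(np.median(inst_freq))                          # ~123.4 Hz, a measured frequency
```

The first two numbers are merely degrees of correlation with two of the predefined templates; only the last number is a frequency.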
Robert
Superb essay. I was really lifted reading it. Perhaps there may be hope for us yet! I hope you'll read mine, as I take the 'low road' of interpreting the findings themselves rather than abstracting them into numbers, and find a simple way of explaining CSL logically, consistent with Peter Jackson's brilliant analysis and mechanism (which I don't think all have grasped). I see you saw some of the potential.
I hope you do well, my score should help, and I also hope you'll read and comment on mine.
Regards
Rich
Dear Robert,
Very nice essay indeed. I think that the transition to a more "informational" physics is on its way. One can see it all over, especially in theories of quantum gravity but also in more traditional subfields, and even in orthogonal disciplines such as biology. I find the way you present the idea that initial conditions have more information content than the equations describing a natural phenomenon very interesting. The idea is not completely new, although your treatment is better than what I had found before. A philosopher who has advanced similar ideas (with whom I may strongly disagree, but he makes a good case) is James McAllister from Leiden. Your view, like McAllister's, seems to suggest that most of the world's information content is actually algorithmically random, hence not "capturable" by mathematical equations.
Computer programs may do better than mathematical equations in this sense because, unlike what your JPEG example suggests, there is no clear-cut distinction between program and data in general (this follows from Turing universality). As you know, compression algorithms compress the data and also encode the instructions for decompression in the same file, making it indistinguishable from the original data itself until it self-decompresses. But applying that to the real world already assumes that one can fully capture the initial conditions in some compressed form, which, in agreement with McAllister, may just not be the case, given that all the models we have of natural phenomena are approximate and there is always some data that escapes the equations and eventually makes our theories useless in the long run due to nonlinear dynamics. In this regard, the problem may not be the poor information content of the equations, but the poor measurement of the initial conditions in our coarse-grained macroscopic reality.
You also rightly point out that physical laws are traditionally seen as compression algorithms; this is an old idea advanced by Greg Chaitin in many of his books and papers, in the context of algorithmic information theory (my own research field). And I think it is the working hypothesis of science, and no alternative seems better so far. The main divergence I have with your point of view (and McAllister's, for that matter) concerns what is traditionally known as the Unreasonable Effectiveness of Mathematics in the Natural Sciences, to quote Eugene Wigner. In the world of lossless compression algorithms, this translates into the astonishing fact that we are able to compress much, if not most, natural data, up to some point, including initial conditions, and to predict, at least in the short term, many natural phenomena. So even if some data is left out, laws seem to describe a good deal of the (to us) important behaviour of natural phenomena, that is, the regular behaviour that can be synthesised to our advantage (e.g. for prediction purposes).
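To make the "laws as compression" picture concrete, here is a small Python sketch (a toy of my own, not Chaitin's formal treatment; the data and the compressor are arbitrary stand-ins): data generated by a simple "law" compresses dramatically, while patternless data barely compresses at all.

```python
import math
import os
import zlib

# Toy illustration: "lawful" data versus patternless data, both 100,000 bytes.
N = 100_000

# Data generated by a simple "law": a sine wave with an exactly repeating
# 500-sample period, quantized to one byte per sample.
lawful = bytes(int(127 + 127 * math.sin(2 * math.pi * i / 500)) for i in range(N))

# Data with no regularity the compressor can exploit.
patternless = os.urandom(N)

# A lossless compressor acts here as a stand-in for "law as compression".
print(len(zlib.compress(lawful, 9)) / N)        # tiny ratio: the "law" captures it
print(len(zlib.compress(patternless, 9)) / N)   # ratio ~1: nothing to capture
```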
All in all, a very nice essay. I only rather regret that information enthusiasts never go beyond Shannon's information theory, which is the theory as it was left almost 60 years ago, and which is a theory of communication. The cutting-edge theory of information is currently algorithmic information theory (Kolmogorov complexity, logical depth, algorithmic probability).
Hi Robert,
Here's another attempt at answering your question... "So what is the big deal? What makes this so significant?"
After reading:
- Your essay
- 'The Heisenberg Uncertainty Principle and the Nyquist-Shannon Sampling Theorem' by Pierre Millette
- 'An Introduction to Information Theory: Symbols, Signals and Noise' by John Pierce
- 'Communication in the Presence of Noise' by Claude Shannon
I am left with the impression that Shannon and Pierce predicted that the holographic principle would become a naturally accepted concept in physics. They detail how the volume of the signal space "creeps" away from the origin of the space as the dimension of the space increases, and how there is dimensional reduction in the message space when compensating for phase "differences" (same message, different phase) that can arise when sampling the signal. At first glance this seems to hint at how to get rid of singularities at the centres of black holes.
Perhaps it's not quite the same thing. On the other hand, if it's the same thing, then that's quite significant. In any case, I note that Shannon was not directly referenced in 't Hooft's first paper called 'Dimensional Reduction in Quantum Gravity'.
- Shawn
Hector,
Regarding the "Unreasonable Effectiveness of Mathematics", in an earlier post, under Matt Visser's essay, and repeated somewhere under my own, I wrote:
"In your summary, you ask "Exactly which particular aspect of mathematics is it that is so unreasonably effective?" in describing empirical reality.
I would argue that it is not an aspect of mathematics at all, but rather an aspect of physics. Specifically, some physical phenomena are virtually devoid of information. That is, they can be completely described by a small number of symbols, such as mathematical symbols. Physics has merely "cherry picked" these sparse-information-content phenomena as its subject matter, and left the job of describing high-information-content phenomena to the other sciences. That is indeed both "trivial and profound", as noted in your abstract."
Regarding the effectiveness of "Shannon's information theory" as compared to "algorithmic information theory", I am very strongly of the opinion that the former is much more effective than the latter in all the areas, like measurement theory, that have much real relevance to "an observer". The difference lies in the relative importance of "Source Coding" vs. "Channel Coding"; lossless compression algorithms, in spite of any "astonishing fact", are virtually useless in the realm of observing and measuring. One of the biggest problems hindering the advancement of modern physics is that physicists "don't get it"; contrary to popular belief, observation and measurement are not about preserving source information, they are about distinguishing "relevant information" from "irrelevant information" as quickly and efficiently as possible. A lossless Source Coder, with sunlight as its input, would preserve huge amounts of information about the solar spectrum that is absolutely irrelevant to any human observer other than an astrophysicist. That is why the channel coding performed by the visual pigments in the retina totally ignores this "irrelevant information". The same is true of auditory information; well over 99% of the "Source Information" is discarded before the information stream ever exits the inner ear. While this information has great relevance to a modern telephone modem, it has none at all to a human observer.
Since, as Shannon demonstrated, all channels have a limited information-carrying capacity, it is imperative for any multi-stage information-processing observer to remove capacity-consuming "irrelevant information" as quickly as possible. This presents a "chicken and egg" dilemma that has been debated since at least the time of Socrates, 2500 years ago: how can you find what you are looking for, when you don't even know what you are looking for?
Nevertheless, as I pointed out in the essay, when you do know, such as when you have created an experiment in which only a single frequency or energy exists, looking for (attempting to observe and model) a Fourier superposition, rather than a single frequency or energy, is a positively dumb thing to do. It is no wonder that there is so much "weirdness" in the interpretations given to such inappropriate models.
You stated that "Your view... seems to suggest that most of the world's information content is actually algorithmically random, hence not 'capturable' by mathematical equations". That is not my view. My view is that much of the "world information content" is very predictable. HOWEVER, the function of any good design for a sensor, instrument, or observer, in other words a Channel Coder, is to make sure that its output is devoid of all such redundant predictabilities. Hence, although the world may not be random, any good channel coder will render all observations of that world into algorithmically random output. One does not need to observe the solar spectrum today, precisely because one can predict that it will look the same tomorrow. Evolution via natural selection has ensured that biological observers do KNOW what they are looking for, and actively and very, very effectively avoid ever looking at anything other than what they are looking for. Consequently, equations may very well be able to capture the "Source Information" about observed elementary particles. But they cannot capture the "Source Information" of a human observer attempting to interpret any information. Such an observer has spent its entire life recording the outputs of, and basing all its behaviors on, sensory channel coders that are attempting to produce algorithmically random inputs to the brain. The brain's function is then to look for "higher level" correlations between these multi-sense, multi-time inputs, in order to generate higher-level models of exploitable, predictable correlations.
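As a rough illustration of the channel-coding point, here is a toy Python sketch (not a model of the retina or the inner ear; every signal and parameter is an arbitrary stand-in of my own): a "channel coder" that keeps only the slow envelope an observer cares about discards well over 99% of the source samples, yet preserves everything the observer deems relevant.

```python
import numpy as np

# Toy "channel coder": the observer only cares about a slow envelope, so the
# coder keeps one number per 10 ms block and discards everything else.
fs = 44_100                                        # source sample rate
t = np.arange(fs) / fs                             # one second of "source" data
carrier = np.sin(2 * np.pi * 5000 * t)             # fine detail: irrelevant here
envelope = 1.0 + 0.5 * np.sin(2 * np.pi * 2 * t)   # the "relevant" information
source = envelope * carrier

block = 441                                        # 10 ms = 50 carrier cycles
coded = np.abs(source).reshape(-1, block).mean(axis=1) * (np.pi / 2)

# 44,100 source samples in, 100 numbers out: over 99% of the "Source
# Information" is discarded, yet the relevant envelope is preserved.
print(len(source), len(coded))
print(np.max(np.abs(coded - envelope[block // 2::block])))   # small error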
Unfortunately, for the better part of a century, quantum physicists have thought they should be looking for superpositions. But as Josh Billings (Henry Wheeler Shaw) once said:
"The trouble with people is not that they don't know, but that they know so much that ain't so."
Rob McEachern
Dear Robert,
You present a very interesting, and, I believe, useful point of view here. I particularly appreciate your remarks about Bell's theorem. I must confess that I am not yet quite sure what I think about all your conclusions, but I certainly agree that equations (at least, the usual differential equations that make up much of the language of modern physics) carry negligible intrinsic information, and that a legitimately information-theoretic view of fundamental physics is needed. Of course, fundamental physics (particularly quantum gravity) is already trending in this direction, but I believe that paradigms predating the information age still exert significant inhibitory influence. Personally, I think that covariance (Lorentz invariance, etc.) has more to do with order theory than with Lie group symmetry, and this viewpoint is more congenial to an information-theoretic perspective. In any case, I enjoyed reading your work! Take care,
Ben Dribus
Benjamin,
With regard to "what I think about all your conclusions", bear in mind that my main conclusion is this:
1) QM claims to be a good description of how "elementary particles" behave; they have a "wave-function"
2) QM claims to be a good description of how "human observers of elementary particles" behave; they too have a "wave-function"
I believe the first proposition is true. But the second is false.
The problem is not that the particles behave weirdly, but that the physicists behave weirdly when they attempt to interpret their own observations and theories. They have completely misinterpreted what their own equations actually "mean". While applying the concepts of information theory to the behaviors of "the observed" would be helpful, applying them to the behaviors of "the observers" is imperative.
On a more technical level, my conclusion is that, while treating reality as a "Fourier Superposition" may be "sufficient" for many purposes, it is neither "Necessary" nor even "Desirable", for many others. Physicists have yet to appreciate that fact.
Rob McEachern
Rob,
Once again you put your finger on the problem. I agree that the first proposition is true, the second false. As I note in my essay, The Nature of the Wave Function, the assumption that wave functions are Fourier superpositions of sine waves has 'built into it' the assumption of single-frequency sinusoids of infinite extent. This has (mis)led some physicists to speak of "the wave function of the universe", and confused John Bell, who claimed: "nobody knows just where the boundary between the classical and quantum domain is situated" [p.29, 'Speakable...']. He claimed the "shifty split" between microscopic and macroscopic defies precise definition.
And yet the physical wave described in my essay has finite extent [p.5]. It has real dimensions and the 'trailing vortex' is finite -- typically the length of an atomic orbit [see essay]. Fourier decompositions of infinite extent are believed by many to be limitless. With a real field, there is a real boundary.
Keep fighting the good fight.
Edwin Eugene Klingman
Shawn,
Personally, I do not see that much significance in the holographic principle. It is another manifestation of the problem I noted in my essay: a description of the world/reality, and the world/reality itself, have different properties. Is the holographic principle a property of the description, a property of the world, or both?
I previously noted that the Shannon Capacity simply corresponds to the number of bits required to digitize a band-limited signal. But what does "band-limited" mean? It means the signal has been passed through a filter, which introduces correlations between "nearby" measurements of the signal; indeed, any sample-measurements made in between samples taken "sufficiently close" together (at the Nyquist sampling rate) will be so highly correlated with the Nyquist-rate samples that their values can be predicted, with arbitrarily high accuracy, from the Nyquist samples. Hence, they produce no additional "information"; thus, a higher sampling rate will not increase the amount of information in the digitized signal.
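Here is a small numerical check of that claim (a toy Python sketch with arbitrary parameters; the brick-wall filter and the FFT-based reconstruction are stand-ins of my own choosing, not Shannon's derivation):

```python
import numpy as np

# Toy check: once a signal is band-limited, samples taken faster than the
# Nyquist rate are completely predictable from the Nyquist-rate samples.
rng = np.random.default_rng(0)
N = 4096
x = rng.standard_normal(N)

# Band-limit the signal: keep only frequencies below fs/8.
X = np.fft.rfft(x)
X[512:] = 0.0
band_limited = np.fft.irfft(X, N)

# The Nyquist rate for a bandwidth of fs/8 is fs/4: every 4th sample suffices.
nyquist_samples = band_limited[::4]              # 1024 of the 4096 values

# Predict ALL 4096 values from the 1024 Nyquist-rate samples alone.
D = np.fft.rfft(nyquist_samples)
U = np.zeros(N // 2 + 1, dtype=complex)
U[:len(D)] = D
reconstructed = 4.0 * np.fft.irfft(U, N)

print(np.max(np.abs(reconstructed - band_limited)))   # ~1e-13: numerically zero
```

All 4096 values of the band-limited signal are recovered, to within floating-point error, from the 1024 Nyquist-rate samples alone; the in-between samples carried no additional information.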
Now consider an observable signal expanding away from a source. At any given radius R from the source, how many samples does an observer have to take, on the surface of the sphere of radius R, in order to digitize all the information in the signal? The answer depends on whether the band-limiting filter (which may be a temporal filter, a spatial filter, or both) is at the source or at the observer. If it is a spatial filter at the source, then whatever correlations the filter produced between samples will expand along with the surface of the sphere. Consequently, the number of spatial samples required to capture all the possible information is independent of the size of the sphere. But if the filter is applied by the observer, and the same filter is used at all radii, then the number of spatial samples will increase in proportion to the area of the sphere. So which is it? According to the Wikipedia article on the Holographic Principle:
"The holographic principle was inspired by black hole thermodynamics, which implies that the maximal entropy in any region scales with the radius squared, and not cubed as might be expected. In the case of a black hole, the insight was that the informational content of all the objects which have fallen into the hole can be entirely contained in surface fluctuations of the event horizon."
and
"The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary."
Those statements imply that the Holographic Principle assumes the filter is an attribute of the observer, rather than the source. Consequently, the principle is a statement about an attribute of an observer's description, not an attribute of the source.
Rob McEachern
Robert
I think this article by Frank Wilczek will be interesting for you:
Total Relativity: Mach 2004
http://ctpweb.lns.mit.edu/physics_today/phystoday/%28356%29Total%20Relativity.pdf
Yuri,
Thanks. I did find it to be interesting.
Rob McEachern
Robert,
Yuri Manin is the smartest modern person, an expert on the relation between physics and mathematics:
http://www.emis.de/journals/SC/1998/3/pdf/smf_sem-cong_3_157-168.pdf
http://www.ams.org/notices/200910/rtx091001268p.pdf
I hope these are interesting for you.
In case you do not understand why your rating dropped: as far as I can tell, ratings in the contest are calculated in the following way. Suppose your rating is [math]R_1[/math] and [math]N_1[/math] is the number of people who have rated you. Then you have [math]S_1 = R_1 N_1[/math] points. If someone then gives you [math]dS[/math] points, you have [math]S_2 = S_1 + dS[/math] points, and [math]N_2 = N_1 + 1[/math] is the total number of people who have rated you, so that [math]S_2 = R_2 N_2[/math]. From this, if you want [math]R_2 > R_1[/math], you need [math]S_2 / N_2 > S_1 / N_1[/math], i.e. [math](S_1 + dS) / (N_1 + 1) > S_1 / N_1[/math], i.e. [math]dS > S_1 / N_1 = R_1[/math]. In other words, if you want to increase anyone's rating, you must give him more points [math]dS[/math] than his rating [math]R_1[/math] was at the moment you rated him.
So the contest has special rules for ratings, and this is why some participants misunderstand what has happened to their ratings. Moreover, since community ratings are hidden, some participants are not sure how to increase the ratings of others and give them the maximum of 10 points. But in that case the scale from 1 to 10 points does not work, and some essays end up overrated while others drop down. In my opinion this is a serious problem with the contest's rating process. I hope the FQXi community will change it.
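A quick numerical check of the claim, in Python, with toy numbers chosen arbitrarily:

```python
# Verify that a new vote raises the rating only when dS > R1.
R1, N1 = 6.0, 10                 # current rating and number of voters
S1 = R1 * N1                     # total points so far

for dS in (5, 6, 7):             # a vote below, at, and above the current rating
    R2 = (S1 + dS) / (N1 + 1)
    print(dS, round(R2, 3))      # the rating rises only when dS > R1
```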
Robert,
I'm just a pedestrian bystander here, but I'd like to attempt a couple of observations about particle-wave duality, the double slot experiment and entanglement.
- As I understand it, 'particles' can only traverse spacetime as propagating waves.
- Conversely, particles are manifested when their propagation energy is localized (I think physically reconfigured as rest mass-energy) or absorbed.
- Even a single particle emission propagating as a wave can simultaneously pass through two (proximal) slots, to be manifested as a single localized particle upon detection.
Regarding entanglement, I agree with your assessment. As I understand it, entangled particles are most often physically produced from a single particle. I think that the particles' properties, or their manifestation frequencies, are entangled during that initial process...
Jim
Dear Robert,
Reading your essay, I tend to agree with many of your conclusions, but there was one particular passage that does not seem to be correct to me. In the description of the double slit experiment, you describe the particle detectors as only counting particles. I think it is critical to consider that they also record the location of the particle detection which, as it happens, coincides with the apparent path of a wave. You stated:
"In double-slit experiments, much is often made of the fact that the distribution of particles looks like an interference pattern, even when the particles are sent through the apparatus one at a time, as if that actually matters. Well, it might matter if a detector tried to measure a wave-like property like frequency or phase or a superposition. But neither of the systems just described even attempt to do that. They simply count particles and infer the frequency from the counts. It does not matter if the particles arrive all at once or one-at-a-time."
"Why does an 'interference pattern' occur at all? Because the slits do not have Gaussian responses. They have Sinc-function-like responses, whose combination just happens to look like an interference pattern. There is no wave behavior. There are just particles in a correlated energy/frequency state. But detectors like those described do not really distinguish between particles and waves; in effect, they just count received energy quanta and then make an inference, not a measurement; an inference based on a priori information."
IMO, the interference pattern occurs because the spatial distribution of particle detections occurs only along the path of their propagating wave forms. It is the obvious interference pattern that provides physical evidence of the wave distribution from each slot, even when only a single quantum passes through the system at any one time.
It is for this reason that I conclude that particles propagate only as waves, and are localized as discrete particles.
BTW, I'm only a retired information systems analyst - not at all a physicist or mathematician, so my perspective may differ. I found your article interesting to the extent that I could follow it, but frankly I had envisioned a somewhat different discussion based on the title.
As Benjamin Dribus discussed in his essay, theories are evaluated on the basis of their success in explaining and predicting physical phenomena. However, I think that far too often a mathematical formulation that successfully predicts the outcome of physical processes is merely presumed to accurately represent the explanation of the causal process. In the example I'm most interested in, general relativity (GR) is thought to accurately describe how gravitation physically works, in contrast to Newton's 'attractive force'. However, IMO GR very successfully describes only the effects of gravitation, in the context of the abstract system of dimensional coordinates it describes. I certainly do not think that the dimensions of spacetime directly cause any physical effects...
Similarly, I think that a mathematical formulation that accurately predicts observed physical phenomena, like those requiring compensatory dark matter or dark energy, for example, does not necessarily describe the actual physical processes producing those effects, any more than Ptolemy's ability to predict the motions of the planets in the sky was proof of the physical process presumed to produce those phenomena. Well, maybe just a little...
Yours is a very interesting essay - I enjoyed it.
Sincerely, Jim
Dear Robert!
Great essay and profound ideas! Obviously the new physics of the information age is not "physics formulas" and "physics forms"? The highest score. Good luck in the contest. Sincerely, Vladimir