Dear Paul,
Congratulations on your splendid essay. I am sorry I did not manage to rate it in time. You may have a look at: http://vixra.org/pdf/1306.0226v2.pdf
Good luck with the experts' rating as well,
IH
Paul,
Thank you kindly for directing my attention to your essay; I only wish you would've done so prior to the rating deadline. I found the idea(s) expressed in your paper rather novel and interesting although I did not find them absurd. While reading your paper I couldn't help but think of Louis Kauffman's Virtual Logic where virtual is defined as "exists in essence but not in fact."
While I'm a big fan of Dr. Kauffman, I'm not such a big fan of the Aspect experiments. I believe Peter Jackson, in his current FQXi essay, explores rather well the ambiguities inherent in the Aspect results. I accept that non-locality is an aspect of nature largely due to the Aspect results IN CONJUNCTION WITH the Mach-Zehnder results of Herzog et al. I don't know why the Mach-Zehnder results don't garner more attention, given that they are much more amenable to statistical analysis (as opposed to the Aspect experiments) and, in the context of the Bell inequalities, they hold the same amount of sway. So, while I understand how your idea(s) deal with the Aspect results, I do not fully comprehend how they explain the Mach-Zehnder results as well; perhaps you could address this lack of understanding for me.
Now, to address your perception of my own essay. In my world view consciousness isn't emergent; rather, it's primal. Consciousness, like magnetism, permeates, and an entity's consciousness permeability is a function of its structural complexity (as defined by Ben Goertzel in his Pattern Theoretics). This position is largely a result of my long-running yoga and meditation practice, but it makes logical sense as well.
David Deutsch, a very interesting thinker in my opinion, observes that information can be transformed to suit a wide array of media and then asks, "What is the general form of information?" Dr. Goertzel, in my opinion once again, gives a compelling argument, with his Pattern Theoretics, that the general form of information is pattern; whether words in a book, bars in a barcode, or bit-strings in a sequence function, the relevant information manifests as pattern. In his book FROM COMPLEXITY TO CREATIVITY Dr. Goertzel conjectures that reality (IT) emerges from pattern dynamics; in fact, he conjectures that reality is, at a fundamental level, an evolving ecology of pattern. He models such an ecology as interacting systems of functions on Non-Well-Founded Sets. He calls it a magician system, with his aptly named magicians and anti-magicians running around casting spells on one another. Their spell casting leads to "structural conspiracies," which are simply patterns (structure) conspiring to maintain one another. These pattern dynamics lead to the attraction, autopoiesis, and adaptation of complexity science. What Dr. Goertzel doesn't, in my opinion once again, adequately address is, "Where do the primal patterns come from?" I suggest they come from primal consciousness.
As essay author Joaquin Szangolies has pointed out, the problem with dual-nature theories is, "What is the causal nexus between the two natures, between the mental and the physical, the Platonic and the phenomenal?" In many of his White Papers William Tiller, a very distinguished Stanford physicist, concedes that there is no broad consensus as to the definition of consciousness, but then points out that we all agree that consciousness manipulates information - pattern. So this leads me to my definition of consciousness: consciousness is the causal nexus between the mental and physical, the Platonic and phenomenal, with said causal ability manifesting as a result of its ability to manipulate (i.e. create, annihilate, transform) information - pattern.
So to conclude, the gist of my essay is that while, inferentially, the correspondence principle suggests IT from QUBIT, what makes IT interesting is emergence and in order for science to properly understand emergence it would seem necessary to sacrifice the, in my opinion once again, erroneous fundamental assumption that matter and the four known forces are primary. In fact, in my world view the four forces unify under the informative tag "conscious intent." Of course this is controversial . . .
Thank you once again for directing my attention to your essay and best regards,
Wes Hansen
Wes - Thank you for your comments. I enjoyed reading Peter Jackson's essay and commented as such on his page.
I very much enjoyed the virtual logic paper you referenced by Kauffman. The unity in multiplicity is at the heart of many paradoxes. "It is ONE for the global observer and MANY for the local observer."
Our mathematical formalisms in QM and GR tend to all be global "God's eye views" (GEV's) which mislead us as to what we can know and predict about the universe. The "Local observer view" (LOV) however can have many unities.
For example, Maxwell's equations have time symmetric solutions: the retarded exp(-iwt) and advanced exp(+iwt) waves. Wheeler & Feynman explored this in their 1945 absorber paper referenced in my essay. Despite their mathematics coming out correctly by using half-retarded (from the past) and half-advanced (from the future) as the field generated by each charge, they still used a GEV 4-vector (Minkowski) background of time.
The theory of virtual particles that came out of Feynman's work was intended to eliminate the notion of a field. He laid the groundwork for what we now see as an obvious next step: ditch the idea of a monotonic and irreversible GEV background and replace it with a LOV perspective in which photon traversals represent local increments and decrements of time. To relate this "LOV" subtime to what "we" view as "GEV" classical time, simply use the triangle inequality to sum the absolute values of each photon traversal, and voila: the relationship to the mathematics of quantum theory and entanglement becomes obvious.
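The subtime-to-classical-time relation described above can be sketched numerically. This is only a toy illustration of the triangle-inequality summation, not a calculation from the essay; the signed increments are hypothetical values:

```python
# Toy sketch: local photon traversals as signed "subtime" increments.
# Classical time (Tc) is recovered by summing the absolute values of
# the traversals; the triangle inequality guarantees |net subtime| <= Tc.
traversals = [+3.0, -2.0, +1.5, -1.5, +2.0]  # hypothetical increments/decrements

net_subtime = sum(traversals)                      # what the entangled pair "sees"
classical_time = sum(abs(t) for t in traversals)   # monotonic observed time

assert abs(net_subtime) <= classical_time  # triangle inequality
print(net_subtime, classical_time)
```

For an entangled pair exchanging equal and opposite traversals, the net subtime sums to zero while the classical time keeps accumulating.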
Within the subtime context we associate the departure of a photon from the transmitter atom with exp(-iwt) and the arrival of that same photon at the receiver atom as exp(+iwt). If entanglement does indeed turn out to be a photon hot-potato protocol as I postulated, then of course the net result is zero: energy and information are conserved in the entangled pair.
I will take a closer look at the Mach-Zehnder results, and examine whether the concept of subtime provides as much insight there as it does for entanglement.
I will also look at the other references you described. I'm not sure I am qualified to discuss autopoiesis or consciousness. It seems too far up the mesoscopic and macroscopic chain for a simple, direct analysis relative to subtime.
I feel even less qualified to post an opinion on the fundamental nature of the four standard model forces. It might be that subtime can be thought about in a similar way in all boson/fermion interactions.
Thank you.
Kind regards, Paul
Dear IH
Thank you for your comment. I have the paper you referenced printed and will read it this weekend.
Kind regards, Paul
Tom - thank you for your response. I concur. If there is no background of time, then any one-way measurement of traversal time between two different systems tells us nothing about the instantaneous speed of a single body. I liked your Kepler's orbitals example. Your point is well taken.
By the very definition of subtime, you cannot measure it without a two-way traversal. The photon has to bounce back and forth at least once, otherwise time (and hence any velocity measurement) is undefined. Even Einstein defined time as "the reading on a suitably-synchronized clock located at the same position as the event". Synchronization is a two-way event.
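The two-way requirement can be illustrated with a round-trip timing sketch. This is a generic illustration of Einstein synchronization with hypothetical numbers, not a subtime calculation from the essay:

```python
# Einstein's definition of time rests on synchronized clocks, and
# synchronization is a two-way (round-trip) procedure: only the round
# trip is directly measurable; the one-way time is fixed by convention.
c = 299_792_458.0   # speed of light in vacuum, m/s
distance = 30.0     # hypothetical emitter-absorber separation, metres

round_trip = 2 * distance / c   # the directly measurable quantity
one_way = round_trip / 2        # defined by convention, not measured
assert one_way == distance / c
```

The point is that `one_way` is derived from `round_trip` by definition; no single traversal yields it on its own.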
I don't readily grasp your connection between punctuated equilibrium and the quantum stroboscope, and wonder if I did not express myself well in the essay on this point.
I am familiar with Per Bak's work on SOC and Bar-Yam's work on complex networks. This is a good analogy to help us understand the sudden changes at the atomic level, while nothing appears to change at the macroscopic level.
However, the point of the quantum stroboscope is that "for each atom" there is no experience of the passage of time. The modular arithmetic of subtime recurrences is, for all intents and purposes, "eternal". The experience of time is therefore the chain of interactions which add or subtract information from the system.
Kind regards, Paul
Please find attached, a "one page" summary of the essay, along with the latest (corrected) version of the essay.
Kind regards, Paul

Attachment #1: TimeOneSummaryV1.1a.pdf
Attachment #2: 4_Borrill-TimeOne-V1.1b.pdf
David - thank you for your comments. I would not describe subtime as a second time axis. If anything, it is a "no time axis" solution: setting aside Minkowski's global background of time with a purely local "element of physical reality" between emitter and absorber. Unlike Minkowski space, time doesn't exist beyond both ends of each photon path.
I have thought about how to describe subtime mathematically. It seems obvious that Hilbert spaces have very little to say regarding time in their inner products, conservation, or unitary evolution. It's just a set of rules that gives us the right answer for statistical experiments. I think this is what David Mermin (Boojums All the Way Through) was getting at when he said that explicit "denial" is built into the mathematical formalisms of QT.
At the basic level, subtime doesn't seem to need any more than Euler's equation and the triangle identity (see attached summary) to describe it. As Feynman discovered, half the retarded exp(-iwt) wave plus half the advanced exp(+iwt) wave gets the right answer. However, I think Feynman missed it because he built QED on top of Minkowski space, giving rise to statements like "electrons going back in time".
In my view, nothing goes back in time. From the perspective of the emitter, it is time itself that is going backwards in the retarded wave along the photon path. To the absorber, time is going forwards as it receives what it sees as an advanced wave. The sign of "time" in this context specifies the direction of information transfer; it has nothing at all to do with what we call forwards or backwards, past or future. We need Born's modulus applied to subtime to yield observable evolution in classical time (Tc). This is nothing more than the triangle identity.
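The half-retarded/half-advanced combination and the modulus step can be checked in a few lines of Python. The frequency and subtime values below are arbitrary placeholders, chosen only to exercise Euler's formula:

```python
import cmath
import math

w, t = 2.0, 0.7  # hypothetical angular frequency and subtime increment

retarded = cmath.exp(-1j * w * t)   # departure of the photon from the emitter
advanced = cmath.exp(+1j * w * t)   # arrival of the same photon at the absorber

# Feynman's half-retarded + half-advanced combination is purely real:
# by Euler's formula it collapses to cos(wt), a standing wave.
combined = 0.5 * retarded + 0.5 * advanced
assert abs(combined.imag) < 1e-12
assert abs(combined.real - math.cos(w * t)) < 1e-12

# A Born-style modulus discards the sign of the subtime increment,
# yielding a non-negative contribution to classical time (Tc):
Tc = abs(t)
assert Tc >= 0
```

The sign of the exponent encodes the direction of information transfer; taking the modulus erases it, which is the sense in which classical time is sign-blind.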
You are right, the mathematics is there. But it's so simple it seems hardly necessary. I do plan to publish the paper, but I need to follow the rules of the fqxi contest first.
I would enjoy an extended conversation with you on this, and will take up your offer to contact you directly by email. However, we can also continue to interact on this web site; only the community voting was shut down the day you left your comment.
Kind regards, Paul
The latest (Corrected) version of the essay is attached, along with a one-page summary for those who do not have time to read the full essay.
Kind regards, Paul

Attachment #1: 2_TimeOneSummaryV1.1a.pdf
Attachment #2: 5_Borrill-TimeOne-V1.1b.pdf
Hugh - thank you for your compliments on my essay. I did review your software cosmos description and found it interesting. Yes, it does bear a similarity to my "demon" manipulating virtual machines.
In response to the section in your paper on "Detecting a Simulation".
As I mentioned in my description, a program cannot distinguish whether it is running in a virtual machine or on real hardware. Detecting whether or not you are in a simulation is thus not possible within a single measurement. However, with multiple (I hesitate to say simultaneous) experiments between different entangled systems (read: different hardware), we will start to notice that most systems are asleep most of the time. The speed of light is the key, but our experiments must find a way to measure it relative to subtime, not to what we imagine it to be on a classical background of time.
I like the cross-disciplinary dialog between computer science and physics.
Neither computer scientists nor physicists understand time. But at least some physicists will admit it.
Take a look at the new one-page summary I just uploaded to the site.
Kind regards, Paul
Steven - thank you for your complimentary comments on my essay. I'm glad you liked it. You might also enjoy the one-page summary I just uploaded.
When I first came across Everett's parallel universes I thought they were absurd. But when I saw the interpretation described more fully in Lee Smolin's early books and in David Deutsch's The Fabric of Reality, I realized there must be something to it. Lee Smolin doesn't believe it, but he is very good at describing why it is at least as good as any other interpretation.
However, I started wondering about what David Deutsch was trying to get at and read more of his papers. It then occurred to me that this is not unlike a computer program trying to find out if it is running on its own hardware, or is "sharing" the resources of a single set of hardware. I finally came to the conclusion that the universe might have just one set of hardware, but Deutsch's multiple universes can still co-exist, just like virtual machines in computer science coexisting on the same hardware. It didn't take me long to realize that if this is true, it might throw a wrench into the whole area of quantum computing, or at least give us another perspective on which to view it.
In the previous post, I described the need for experimenters to find a way to measure time of flight experiments relative to subtime, not to what we imagine it to be on a classical background of time. It is not difficult to conceive of many experiments to falsify the theory.
Be careful not to read too much into the (simplistic) description of the second experiment in my essay. Although the helical behavior of photons can manifest as eigenvalues marking off wavelengths, their phases will align with those atoms whose reflectivity and absorption match in the detection plane normal to the experiment (with, of course, an intensity proportional to the square of the amplitude). But this is important: this matching occurs at any random (or chosen) orientation. So experimenters will need to get creative to see the n-lambda effects.
Kind regards, Paul
Vladimir - Thank you for your kind comments on my paper. I describe the photon as the carrier of time, not just of information. I would not describe myself as a subscriber to the idea that the universe can be represented as one state that advances one tick at a time. Indeed, I believe that our current scientific results point to a total and absolute asynchrony between different entangled systems, and that when "new" information erases old information, it shows up as a dissipation of energy (and a corresponding increment or decrement of time). Mutual synchrony shows up only in entangled systems, and specifically between bipartite pairs of particles mediated by massless force carriers.
What happens in a vacuum? Nothing! Except (as Feynman believed) the traversal of particles with some probability of colliding. Yes, even photons may have a non-zero cross section for interaction with other photons. This is reminiscent of the Aharonov-Bohm effect, which suggests that local 'potentials' may be more fundamental than the 'field' (which, they originally suggested, may be derived or emergent from potentials).
Regarding your Beautiful Universe Theory, I have now put many hours over several days into reading and trying to make sense of your and Eric Reiter's documents. Your documentation is impressive and I can see you have put a lot of work into it. However, we do differ in a number of respects. I find it hard to subscribe to 'ether theories' simply from a minimalist perspective. I also have a hard time subscribing to a single (cubic) packing scheme when there are so many sphere-packing schemes in mathematics (e.g. the kissing number problem, http://mathworld.wolfram.com/SpherePacking.html, http://en.wikipedia.org/wiki/Sphere_packing or http://www.3doro.de/e-kp.htm) which appear to correspond much more realistically to modern atomic and molecular results.
Eric Reiter's proposition (and experimental yes/no question) rests on the presumption that (a) we don't have any other way of explaining waves and (b) we can reliably measure traversals throughout an apparatus, and "coincidence" in the arrival of photons at a detector. Both of these issues are dealt with in my essay, the latest version of which can be found here: TimeOne
Along with a one-page summary: One Page Summary
Domenico - thank you for your comment.
I think there are many ways in which 'information' can be transmitted (and received) in interactions between fundamental particles. From the thinking I have done so far on this topic, I am comfortable believing that 'time' can be locally incremented and locally decremented in the exchange of massless (gauge) bosons. For me, photons and gluons clearly play a similar role and are likely to represent the "elements of reality" that combine to create the time that we observe. This is because massless particles have only two degrees of freedom in their lowest angular momentum state, which matches the advanced and retarded traversals along a single dimension between a transmitter and a receiver entity.
Yes, a system decoheres rapidly if you shine light on it and try to measure its properties.
Thank you again for your comments. Take a look at the one page summary that I added to my post above.
Kind regards, Paul