• Ultimate Reality
  • Classical Spheres, Division Algebras, and the Illusion of Quantum Non-locality:

Please understand...

The above comment is in a context where computability is seen as a necessary factor that must have been present for the origin of the universe to proceed in the very first moments of cosmological time. The fact that I see the Octonions as exhibiting dynamism is also - in my view - a property which endows them with all the attributes of a processor of information. Stated a different way S7 can be seen as a peculiar object which is capable of computation (i.e. - a computer processor). And to honor Rick's preference; one could say it is the octonion algebra itself, which holds the magic. Either way; this fulfills the sense of what Gerard 't Hooft told me, that we don't need atoms of space to be computers, because the laws of nature do the calculating for us.

I'm still fleshing out the details, but I'm operating on the assumption that the ab initio and a priori models must match.

All the Best,

Jonathan

Jonathan,

This is a very cerebral discussion, rising out of a very long history of analyzing nature. I am not necessarily a very knowledgeable person, but it seems to me the conversation becomes focused on the details rather than the dynamics. I'm not disrespecting complexity, but it is fairly common and pervasive in nature. Can you contextualize complexity? It seems quite easy to become lost in it.

It's been my contention that information and energy form a dichotomy: information is necessarily static, and energy is inherently dynamic. Now it seems as though what is going on here is an attempt to pull apart all the wheels and gears of the machine to see how they work, yet that overlooks the actual dynamic that makes it work. Then, as with time, there is debate over what direction it goes, or even whether it goes in any direction at all.

I'm not going to disrupt the dynamic of this discussion, because it has quite a lot of momentum, but I do think there might be other ways and perspectives to look at things. Math likes to distill away all that is disorderly, but I think that just gives you the skeleton of the situation, not its seed. Then, when flesh gets imagined back onto those bones, we end up with something that fills most of the gaps but doesn't work very well.

Not that I have anything other than an outsider's perspective to offer. I've made the point that distilling time down to measures of interval focuses only on measures of change, not causes of change, but that doesn't get much traction, so I guess I'll just keep tabs on what is an interesting conversation and situation.

Regards,

John M

Thanks John,

I've been known to ramble unproductively about subjects tangential to the topic myself, so I can't claim to be entirely focused on what matters either. I try to jump in when something important is being missed or glossed over, like when someone makes an important point that gets ignored.

I rather like to point out that there is a middle ground, then jump away when people get into a heated argument over details. I benefit greatly from the contrasting input of expert opinions that address one aspect or another of the problem and its solution; then I try to form a coherent picture that highlights the areas of agreement.

I find this thread particularly interesting in that regard, as the subject is quite polarizing, but if you look past the disagreement there is a lot of truth emerging, and a surprising amount of agreement among the 'combatants.'

All the Best,

Jonathan

Jonathan,

Thanks. I'm somewhat of the same philosophy: to step back and try to see whether the two sides are not part of some larger whole. Wars get fought over such disputes. Communism and capitalism amount to different views of the same socio-economic puzzle.

I was just focusing on your comment that these geometries are evolutionary, and so presumably not static, which in my mind makes the progression of form dynamic and asymmetric. That obviously goes to my argument over the mistreatment of time.

Regards,

John M

    Hi James,

    By a "physical measurement result" one means the discrete record of events -- a click, a blip, a flash, particle tracks in a cloud chamber.

    Whatever one proposes as the mechanism for causing such events, the variables are reduced by experimental controls to only two, i.e., the choice between recording an event, or not. This is classical randomness, demanded both by EPR and Bell.

    The choice function of Bell-Aspect results is invested in the experimenter, meaning that the experimenter's choice of orientation determines the outcome -- say, click or no-click -- and the simultaneous correlation of the result between observers (A & B). Joy's framework of continuous measurement functions replicates the Bell-Aspect result by taking the choice function away from the experimenter and investing it in a natural physical space, orientable by its topology. That is, the space of measurement functions exists in such a way that it forces random events (with 50-50 classical randomness) to behave the same as the experimenter-determined orientation does in the Bell-Aspect result. Bell-Aspect depends on linear approximations; Joy's framework is one of nonlinear certainty.

    The difference is that Joy's framework shows there is no boundary between quantum and classical domains. The world is fundamentally classical, just as EPR claimed and as Bell hoped. The assumptions of standard quantum theory make the world fundamentally probabilistic, as if the experimenter's choice of orientation were equivalent to a roll of dice every time a measurement is made.

    Only a physical experiment could demonstrate that Joy's measurement framework is strong enough to be incorporated into a theory. The computer simulations, however, are like a proof of concept. Even more, though, the concept is strongly in favor of continuous functions; that it translates smoothly from one machine language to another, would obviate any hint of biasing away from the continuous function model.

    Best,

    Tom

    James,

    Let me try and explain it this way:

    You're right that the digital computer is far more accurate than the analog one; it is not capable, however, of computing an actual continuous function, even though a simulation to arbitrary accuracy is more than sufficient.

    Analog computers were commonly used in 20th century warfare to train the fire of unpowered projectiles. This is because boundary conditions are well defined -- there is a point of origin, say the deck of a warship, and a target where the shell explodes. Between these points, one calculates a continuous trajectory. Without variables -- say wind direction and speed, pitch and roll of the ship, etc. -- the trajectory is completely determined by the boundary conditions.
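    The two-point boundary-value idea above can be sketched with plain Newtonian mechanics. This is a hedged toy example, not any historical fire-control algorithm: it ignores drag, wind, and ship motion, and the muzzle speed and range are made-up numbers. With the variables removed, the boundary conditions alone fix the trajectory.

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def launch_angle(speed, target_range):
    """Low launch angle that lands an ideal (drag-free) projectile
    exactly at target_range; the two boundary points determine it."""
    s = g * target_range / speed**2
    if s > 1.0:
        raise ValueError("target out of range for this muzzle speed")
    return 0.5 * math.asin(s)

def impact_point(speed, angle):
    """Range of the resulting trajectory, from the same Newtonian formula."""
    return speed**2 * math.sin(2 * angle) / g

v, R = 300.0, 5000.0  # hypothetical muzzle speed (m/s) and target range (m)
theta = launch_angle(v, R)
assert abs(impact_point(v, theta) - R) < 1e-6  # boundary conditions satisfied
```

    The same speed also admits a second, high-arc solution (the complementary angle), which is why the low-arc root is chosen here.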

    Now think of shooting a projectile to the moon. In principle it's the same as firing a shell to a target on the beach: a two-point boundary value problem in 6 dimensions. There are a whole lot of variables that ground control has to continually correct for, such as gravitational anomalies -- corrections assisted by digital computers -- however, nothing more than good old Newtonian physics is required. There is one completely determined trajectory, calculated for least time and least fuel between the boundaries.

    The simulation of Joy Christian's continuous measurement function tells us that even microscale particles may have a completely determined trajectory, when boundary conditions are removed. In other words, once the experimenter-created boundary between quantum and classical domains is removed, no such boundary naturally exists.

    All best,

    Tom

    Hi Tom,

    I know you are a dogged fighter for your cause and this is not a bad thing.

    Talking of the "continuous trajectory" which we are trying to replicate and which "The simulation of Joy Christian's continuous measurement function tells us that even microscale particles (i.e. quantum domains) may have a completely determined trajectory, (like in the classical domains)":

    Will you regard such a continuous trajectory as occurring below the Planck length, 10^-35 m, or will you grant some discreteness below this scale? Does Joy's work give any indication of whether anything unusual could be present at this infinitesimal scale that could put continuous measurement to the test?

    Akinbo

    Akinbo,

    The only cause I'm interested in is good science.

    You ask, "Will you regard such a continuous trajectory as occurring below the Planck length, 10^-35 m, or will you grant some discreteness below this scale?"

    The Planck length is a practical limit, not a physical one. I grant discreteness at every scale -- and continuous measurement functions at every scale -- as classical physics always has.

    "Does Joy's work give any indication of whether anything unusual could be present at this infinitesimal scale that could put continuous measurement to the test?"

    If there is no boundary between quantum and classical domains, there is no domain restriction on measurement results. Results, of course, are always limited by available technology.

    Best,

    Tom

    Hi Jonathan,

    You write "The fact that I see the Octonions as exhibiting dynamism is also - in my view - a property which endows them with all the attributes of a processor of information."

    A processor has the capability of opening and closing logic gates at decision points, such that binary decision making at every point ensures the output result for which the computer was programmed.
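    A hedged illustration of that point about gates: the NAND gate is functionally universal, so binary decisions at each gate are enough to fix any programmed output. This is a toy sketch of ordinary Boolean logic, not anything specific to S^7 or Joy's model.

```python
def nand(a, b):
    """Universal logic gate: every other Boolean gate can be built from it."""
    return not (a and b)

# XOR assembled purely from NAND gates -- binary decision making
# at each gate determines the programmed output.
def xor(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

assert [xor(a, b) for a, b in ((0, 0), (0, 1), (1, 0), (1, 1))] == \
       [False, True, True, False]
```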

    "Stated a different way S7 can be seen as a peculiar object which is capable of computation (i.e. - a computer processor)."

    It cannot. S^7 is a topological generalization of the Euclidean sphere in 8 dimensions ("a 4-sphere's worth of 3-spheres," as Joy has characterized it). It is the topological *orientation* in Joy's model that accounts for the measurement function; left and right hemisphere choices of a parallelized 3-sphere occur at every decision point. Random natural decisions at any scale of measurement account for a smoothly continuous function that some call "super deterministic" (Bell's term). While this term carries an unwarranted connotation of fatalism, it really just means that nature continuously participates with us in decision making, as an inducement to equilibrium at every scale.

    "And to honor Rick's preference; one could say it is the octonion algebra itself, which holds the magic."

    There's no magic in Joy's framework. It's classical physics.

    "Either way; this fulfills the sense of what Gerard 't Hooft told me, that we don't need atoms of space to be computers, because the laws of nature do the calculating for us."

    Right. What he means is that every point of spacetime has to be physically real in order to process information in a deterministic manner. If Professor 't Hooft were Einstein or Joy Christian, even with a Nobel prize to his credit he wouldn't be a professor; he would be regarded by the physics community as a solitary crank. Nevertheless, he deserves due recognition for sticking to his guns in defending super determinism. Does he get it? Not so much.

    All best,

    Tom

    Tom,

    I am still not clear on what constitutes "experimental results" in the context of:

    Me (Paraphrasing Tom): "...a continuous measurement function with randomly changing boundary conditions...(simulation)...requires random input to a continuous trajectory, in order to replicate the function."

    I am wondering about the correctness of substituting "...requires random input to a continuous trajectory, in order to replicate the function" for "...a continuous measurement function with randomly changing boundary conditions...". I am not saying anything is wrong with it.

    The two concerns are: should this substitution be made? And what 'baggage' does one bring along with experimental results when they are used as inputs (if they are being used as inputs)? That was why I asked whether "experimental results" are being used as inputs.

    Are established physical experimental results of "...a continuous measurement function with randomly changing boundary conditions..." being used as inputs for "...random input to a continuous trajectory..."?

    On a different point:

    "...that it translates smoothly from one machine language to another, would obviate any hint of biasing away from the continuous function model."

    My earlier use of the words "machine language" referred to the ones and zeros that all computers actually operate with. I don't see that the choice of a computer language that serves as a translator between human language and machine language aids in proving anything about the validity of what is being modeled at the machine language level. What am I missing here?

    James Putnam

    Jonathan,

    I thought you would like this, as an example of the open-ended nature of complex systems and why trying to solve the issues becomes a kind of Möbius strip of cyclical solutions and problems.

    Regards,

    John M

    "I am still not clear on what constitutes 'experimental results'"

    You don't understand clicks, flashes, blips, etc.? Physical events.

    "Are established physical experimental results of '...a continuous measurement function with randomly changing boundary conditions...' being used as inputs for '...random input to a continuous trajectory...'?"

    You're overthinking it, James. Bell-Aspect results depend on input from a discrete linear orientation. A measurement function continuous from the initial condition allows nonlinear continuous input from random orientations.

    "My earlier use of the words 'machine language' referred to the ones and zeros that all computers actually operate with. I don't see that the choice of a computer language that serves as a translator between human language and machine language aids in proving anything about the validity of what is being modeled at the machine language level. What am I missing here?"

    Binary arithmetic (combinations of zeros and ones) is a universal software language that expresses data. Machine languages (Java, Mathematica, Fortran, Python, etc.) are adapted to specific computing requirements for assembling and processing data. Computer code is written in a specific machine language. That several varieties of code can get the same result when simulating a continuous measurement function in Joy's framework suggests to me that there is no bias from one language to another. As I told Florin (which he already knows, but it may be helpful to you), one can bias a function in any one program toward discrete or continuous; however, if one gets no bias using common programming parameters in several languages, it's a pretty good bet that the simulation is a true representation of the given program.
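    A crude analogue of this cross-implementation check can be sketched in one language: run the same algorithm with several independent random seeds and see whether all runs agree with the known analytic answer. This is a toy stand-in (a Monte-Carlo estimate of pi), not one of the actual simulations of Joy's model; the claim it illustrates is only that agreement across independent runs is evidence against implementation bias.

```python
import math
import random

def estimate_pi(n, seed):
    """Monte-Carlo estimate of pi from n random points in the unit square.
    The algorithm, not the seed (or the language), should determine
    the long-run answer."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n

# Independent "implementations" (different seeds) of the same algorithm
# all land near the analytic value -- no sign of bias.
estimates = [estimate_pi(200_000, seed) for seed in (1, 2, 3)]
assert all(abs(e - math.pi) < 0.03 for e in estimates)
```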

    Best,

    Tom

    "I am still not clear on what constitutes 'experimental results'"

    Tom: "You don't understand clicks, flashes, blips, etc.? Physical events."

    Me: How about a straight answer to this question:

    "Are established physical experimental results of '...a continuous measurement function with randomly changing boundary conditions...' being used as inputs for '...random input to a continuous trajectory...'?"

    Tom: "You're overthinking it, James. Bell-Aspect results depend on input from a discrete linear orientation. A measurement function continuous from the initial condition allows nonlinear continuous input from random orientations."

    My thinking will be over when I understand whether there is data representative of real physical results being used as inputs for '...random input to a continuous trajectory...'.

    The importance to me of making this answer clear is to determine whether or not there is 'baggage' that comes along with using established repeatable physical empirical data. Do you understand what I mean by 'baggage'?

    James Putnam

    Tom,

    There were points in Rick's message that I would like to see your response to.

    James Putnam

    Hi Rick, James, Jonathan, and Tom,

    It is about time I defended Tom in public. I am doing this not because Tom has defended me and my ideas countless times before, but because I think Rick is criticising him too harshly and too unfairly.

    Let me begin by noting that I think you, Rick, are quite a talented mathematician. I think you know a thing or two about mathematics, and can effortlessly pick up even those things in mathematics you are unfamiliar with. And yet, I do not see you as a professional mathematician, nor as a professional scientist. I could be wrong about this, but that is the impression I got from reading your FQXi essay.

    Now Tom is definitely not a professional mathematician, or a professional physicist, or a professional computer scientist. Neither does he claim to be. And yet he has been able to see some of the subtleties within my framework that even the self-proclaimed professionals haven't been able to see. It is quite shocking to me to witness the blind-spots they have for some of the most elementary things in physics and mathematics. Now sometimes Tom gets things wrong---but who doesn't? We are all guilty of getting things wrong sometimes---even the mighty ones among us. It is therefore too harsh and too unfair to criticise Tom for getting things wrong sometimes. If we all got everything right all the time, then there would be no point in having any of these discussions here, or elsewhere.

    Now something of substance: earlier in this thread Jonathan made an attempt to defend Tom with a substantive argument. I think that was an excellent defence of what Tom has been saying, even if Tom had put things in rather casual language compared to how Jonathan has put them. So there! You too, Rick, are sometimes unable to see things that others, like Jonathan in this case, are able to see so clearly.

    I rest my case!

    I certainly don't want to be drawn into this too deeply. As it is, there is enough on my plate.

    Best,

    Joy

    Joy, thank you for those even-handed comments.

    Tom,

    Me: "The importance to me of making this answer clear is to determine whether or not there is 'baggage' that comes along with using established repeatable physical empirical data. Do you understand what I mean by 'baggage'?"

    You: "No, and it doesn't matter, because the physical experiment hasn't been done. If you want to help, go to Joy's blog and chip in some cash."

    Maybe it doesn't matter and maybe it does. I can't say with reason unless I am certain about what constitutes what you referred to as "...actual, physical measurement results...".

    I am not pretending to understand the conversations with regard to theoretical interpretations. What I think I do understand is that an experiment will involve physical inputs leading to physical outputs. My question was directed at understanding whether or not the inputs to be used carry over meaning resulting from the methods used in the earlier experiments that yielded them.

    My second question was: Does the substitute method used in the computer simulations serve faithfully in place of the original method? Here maybe only theoretical discussion would make the answer clear. In that case, it probably won't be clear to me. That is ok, it is my own failing. None of this was intended to harm anyone's position. I am following and not leading.

    I assume that if these questions are relevant that they have been taken into consideration. Are they relevant or not? If they are relevant what was determined about them?

    James Putnam

    James,

    "My question was directed at understanding whether or not the inputs to be used carry over meaning resulting from the methods used in the earlier experiments that yielded them."

    Thank you -- this is one of the most important questions in the world to me, personally. A great deal of my own research is devoted to answering it.

    I have disclosed an extreme bias of mine, in rejecting Bayesian inference. I see it as the crux of your question, which I think can be reformulated as, "Do prior results determine future probabilities?"

    Most of those who work in quantum theory and computer science exercise Bayesian assumptions routinely -- and they say "yes": there is a definite probability for such and such an event on the closed interval [0,1], and it can be calculated from prior probabilistic results. Then the Bayesian belief in that result is calculated for the combined probability, which is tested against actual physical results, and so on.

    Probability is added to probability, based on one's faith in their own personal belief and experience.

    This creates an illusion that reality is foundationally probabilistic -- that probabilities are additive. The statistical method that you are probably most familiar with, and that supports the deterministic philosophy that you and I share, is called "frequentist." This is a purely empirical result, based on a long run of independent (Bernoulli) experimental trials, in which one's confidence in the real probability on [0,1] grows with the number of trials conducted (the law of large numbers).
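    The frequentist picture can be sketched just as briefly: the probability is read off the empirical frequency of a long run of independent trials, with no prior belief entering anywhere. A toy simulation of a fair coin (the seed and trial counts are arbitrary choices):

```python
import random

def frequency_of_heads(n, p=0.5, seed=0):
    """Empirical frequency of heads over n independent Bernoulli trials."""
    rng = random.Random(seed)
    return sum(rng.random() < p for _ in range(n)) / n

# Confidence grows with the number of trials (law of large numbers):
# the long run hugs the true value far more tightly than the short run can.
assert abs(frequency_of_heads(1_000_000) - 0.5) < 0.005
```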

    I take my disdain for Bayes' theorem a step further and demand absolute independence of the linguistic model (an equation, or a computer program) from the empirical result. Language and meaning must be shown to correspond 1-to-1, without probabilistic inference, for determinism to hold as a foundational principle.

    "My second question was: Does the substitute method used in the computer simulations serve faithfully in place of the original method? Here maybe only theoretical discussion would make the answer clear. In that case, it probably won't be clear to me. That is ok, it is my own failing. None of this was intended to harm anyone's position. I am following and not leading."

    I'm following, too. I have never tried to disguise the fact, though, that I follow from a specific point of view, a bias that resists introducing the bias of personal belief into the measurement framework of a theory.

    All best,

    Tom

    Tom,

    From Tom's link: Machine language; "A set of instructions for a specific central processing unit, designed to be usable by a computer without being translated. Also called machine code."

    This is what I meant, and what Rick said it was. Anyway, my point was that successful simulations in various computer languages do not add proof for the program being run. It also does no harm. What I think it does do is demonstrate that Joy's model can be simulated by a computer program.

    James Putnam

    James,

    While I agree with you to a certain extent, you are underestimating the sociological value of the five different simulations of my model, authored by four different people and written in four different programming languages.

    There were those who flatly denied that this was possible. They declared that my analytical model was a pure fantasy of mine. Some still remain in denial.

    Among other detractors, the panelist recruited by FQXi to evaluate my model flatly declared that my model predicts a constant (rather than sinusoidal) correlation, of value -1, for all detector directions a and b. The simulations prove all of the detractors wrong.
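    For context, the benchmark such simulations are checked against is the standard quantum-mechanical singlet correlation, E(a, b) = -cos(a - b), which is sinusoidal in the angle between the detectors rather than constant. A minimal check of that benchmark (this is textbook QM, not the analytical model itself):

```python
import math

def singlet_correlation(a, b):
    """Standard QM prediction for the spin correlation of a singlet pair
    measured at detector angles a and b (radians)."""
    return -math.cos(a - b)

# Fix detector a at 0 and sweep detector b from 0 to pi.
angles = [k * math.pi / 6 for k in range(7)]
values = [singlet_correlation(0.0, b) for b in angles]

assert values[0] == -1.0                       # -1 only at aligned detectors
assert any(abs(v + 1.0) > 0.4 for v in values)  # clearly not constant at -1
```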

    But you are right in that a computer simulation of a model is simply a numerical implementation of the model. It is neither its proof nor its disproof. In a sense, it is merely a feel-good factor. The real beef is the analytical model.

    Best,

    Joy