Declan,

Argh! Dang it! I was all ready to dismiss your 2012 essay out of hand as "obviously and immediately geometrically self-contradictory"... and then realized you've created a genuinely clever and self-consistent world with this idea, even if I'm still not convinced it's the same world we live in.

If I'm reading your idea rightly, what you have created is a rigid, isotropic 3D universe in which gravity becomes something very much like optical density in a gigantic cube of optical glass. In fact, for photons I'm not seeing much difference at all between the variable-index glass cube model and your model. Light would curve near a star because the optical density of the glass would increase near the star, and so forth for all other gravity fields. That's about as close a match between a model and what is being modeled as you can get.

But your truly innovative addition to such a model is the idea that since matter has a quantum wavelength, it is also subject to the same velocity and wavelength shifts in higher-optical-density space as are photons. Photon wavelengths shorten as the photons slow in denser glass, and similarly, so do your mass waves. But mass and total energy depend on these wavelengths, so you are using these changes to implement relativistic masses.

Once again, that sounds like it should be an immediate contradiction with the extremely well-proven results of SR... except that it is not. You have to compare any two frames relative to each other, not to your "primary" frame of the giant optical glass cube, and that should still give you self-consistent and SR-consistent results.

To make matters worse, even though you have clearly designated one inertial frame as being in some way "special", that does not necessarily mean that your model contradicts the enormous body of experimental observations on the exact equivalence of physics across all inertial frames.

Alas, the problem is not that simple, since it is most definitely possible to create asymmetric frame models that fully preserve SR. You just have to take more of a computer modeling perspective to understand how it works.

I think I've already noted elsewhere in these 2017 postings that from a computer modeling perspective it's not even all that difficult to create a model in which one inertial frame becomes the "primary" or "physical" inertial frame in which all causality is determined. All other inertial frames then become virtual frames that move within that primary frame. Causality self-consistency is maintained within such virtual frames via asymmetric early ("it already happened") and late ("the event has not yet occurred") binding of causality along their axes of motion relative to the primary frame. Speed-of-light constraints prevent anyone within such a frame from being aware of any causal asymmetry, since both the early (past) and late (future) binding events are guaranteed to have occurred by the time information about them reaches the observer.

Incidentally, one of the most delightful implications of asymmetric causality binding in virtual frames is the answer it produces for the ancient question of whether our futures are predetermined or subject to free will. The exceedingly unexpected answer is both, depending on what direction you are facing! For us, if one plausibly assumes that the CMB frame is the primary frame, the axis of predestination versus free will is determined by whether the philosopher is facing toward or away from a particular star in the constellation Pisces, though I don't recall offhand which is which. Direction-dependent philosophy for one of the most profound questions of the universe, I love it!

Even better is the fact that no one in any of the frames, primary or virtual, can tell by any known test whether they are or are not in the primary frame. Special relativity thus is beautifully maintained, yet at the same time having a single physical frame hugely simplifies causality self-consistency.

Bottom line: I can't even fault your idea for its use of what is clearly just such a singular frame, because I know that having such a singular frame can very beautifully support every detail of SR. Ouch!

So, ARGH! Your 2012 model is a lot harder to disprove than I was expecting... and please recall that the goal in science is always to try to destroy your own models, to prove that they really, truly can pass muster.

Well. Wow. I can't rate your 2012 contest model, which I think makes me happy, because it would take a much closer examination of your model before I could comment on it with confidence. You have a lot of equations and equation-level specificity there.

But it's late so I'm calling this a wrap. I won't forget your model. And the key defense you might want to keep in mind, since I'm sure your earlier attempt got tossed out for violating SR, is simply this: Having a primary frame in a physics model is not a sufficient reason to dismiss it, because there exist single-frame models that can be made fully consistent with all known results of special relativity. Given that such models are possible, any attempt to eliminate a model solely on that criterion is a bogus dismissal. You have to find a true contradiction with SR, one that flatly contradicts known results, rather than just offending people philosophically for making SR more like a computer model and less like an absolutely pristine mathematical symmetry. It's not the beauty of the symmetry that counts in the end, it's whether your model matches with and perfectly predicts observed reality, that is, whether it is Kolmogorov in nature (see my essay again).

Thank you for helping me tear my hair out in frustration!... :)

(Actually, seriously: Good work! But still... argh!)

Cheers,

Terry Bollinger, Fundamental as Fewer Bits by Terry Bollinger

Dear Terry

"The universe indisputably possesses a wide range of well - defined structures and behaviors that exist independently of human knowledge and actions." is what you say. I'm not asking for a proof of that naturalistic dogma (it does not exist), only a minimum level of critical attitude. Hilbert eventually understood that what a point or a line is doesn't fall into the realm of logic/mathematics. And the literature dealing with what a bit information-theoretically is worth multi-gigabytes...

Heinrich

Dear Terry,

You presented your essay from a viewpoint with which I have little familiarity, and as a result I truly enjoyed having familiar ideas examined from a perspective that was novel to me.

A few comments:

1. Your example involving the sequence which can be found in the decimal expression of pi reminded me of the fact that most irrational numbers are still unknown to us. But with the irrational numbers we do know, your example gave me the idea that one might try the following cookie-cutter approach, which requires little creativity, to help more efficiently compress a sequence: take a set of irrationals, take their representation in base 2, 3, etc. up to 10 (if one wants to incorporate the compression of sequences which contain letters, then go higher) and create a table which contains the numbers and their expansions up to some number of digits, say, 50 million (the larger, the better). It seems that one then has a ready-made 3D "coordinate system" in which the three coordinates are: a symbolic representation of the irrational number, its n-ary expansion, and the position of the first digit of the sequence in that expansion. The sequence could then be compressed by just giving its coordinates. Due to my ignorance in these matters, I am not sure if this is too naive, elementary, or unworkable an idea, but I believe one cannot learn if one does not take the risk of occasionally embarrassing oneself.

2. Your reconceptualization of theory development in physics as data compression strikes me as an abstraction that could be useful for comparing the historical development of different theories. Perhaps it has some unexpected use and application in the history and philosophy of science. Unfortunately, I know too little about data compression to be able to assess the merits of this possibility, but it seems that you might? Another idea you discussed for which I see connections with the philosophy of physics is the trampoline effect applied to the standard model, which reminds me a bit of Kuhn's crisis phase.

3. Your discussion of the Kolmogorov minimum at times reminded me of the variational principles. Do you know whether such connections exist?

4. With regard to your first challenge, I am glad that you, as what appears (to me, at least) a hard-nosed physicist, ask the meaning of the mathematics we use to model the phase of the quantum state. All too often I find that people are not even aware of how little we know about its physical origin. Saying that it is the time-dependent solution to Schrödinger's equation is to me little more than a fig leaf for our ignorance. I admit that my perspective is influenced by the fact that I have thought about this question quite a bit.

5. With regard to your second challenge, I think that there will be a convergence with respect to what from the Kolmogorov approach would be considered a simple answer and one that might in more qualitative terms be considered philosophically satisfying. I am glad that you called out the all-too-convenient method of "solution by denial that a problem exists".

6. With regard to your third challenge, I believe that a refactoring of the Standard Model will not happen before a paradigm change occurs. In my view, what is missing to discover a simpler understanding of the Standard Model is a conceptual framework which redefines its conceptual building blocks, analogous to how what was missing for the ancient Maya for a simpler understanding of astronomy was the concept of planets orbiting the sun. I am receptive to your call to lay off gravity when trying to simplify our understanding of the Standard Model, but that is only because I already hold the view (or bias) that if nature wanted gravity to be quantum, it would have given us more (actually, any) experimental evidence that it is quantum.

7. Your principle of Kolmogorov simplicity reminds me a little of Zeilinger's principle that the most elementary system carries only one bit of information. Any thoughts on the relationship between these principles?

Overall, I do agree that the way to advance our understanding of fundamental physics is to find simpler reconceptualizations. My background knowledge of Kolmogorov simplicity is too incomplete to be able to tell whether it is the definitive criterion for simplicity, but it certainly seems promising.


    Dear Heinrich Luediger,

    I took the liberty to read your "Context" essay before attempting to respond to your comments, to make sure that I understood fully what you are attempting to say. If you have read enough of my posting comments for this year's (2017) contest, you will surely be aware that I hold philosophy as an approach to life in high regard, and that some of my favorite essays this year were written by philosophers.

    My first warning that your essay might be rather unique was when you quoted a line from Kant that eloquently restates what every mother or father of an enquiring child already knows, which is that we humans like to ask "why" in situations where no one has an answer. Here is the Kant line you quoted:

    "... it is the peculiar fate of human reason to bring forth questions that are equally inescapable and unanswerable."

    From that simple observation you somehow (I do not yet see how) inferred this:

    "... we may read Kant's disturbing assertion as: human knowledge is without false floor, irreducible and hence not tolerating questions."

    I would estimate that well over 95% of readers would instead interpret that line from Kant as a gentle and basically humble reminder of how deeply ingrained curiosity is in most of us, and that the hard questions that such curiosity engenders are a good thing, rather than something to be discouraged. That you instead interpreted his comment as an assertion that people should stop asking questions is very unexpected.

    Thus I was genuinely curious to find out why you interpreted this line in this way, and so read your essay in detail to find out why.

    As best I can understand your worldview from that careful reading, you believe sincerely that special relativity, general relativity, and quantum mechanics are all unreal mathematical fantasies whose complex, incomprehensible mathematical structures are used by a small group of people, positivist scientists mostly, in positions of power and privilege. In contrast you believe that only the older Newtonian physics that is more accessible to your direct senses is valid. Finally, you believe that the same group that uses these false mathematical frameworks to maintain positions of privilege is also very worried that people such as yourself might join together to ask hard questions that would uncover the falseness of their mathematical fantasies, and so undermine their positions. You believe therefore that this same group works actively to keep people like you from even speaking about the falseness of their QM, SR, and GR frameworks.

    Let me be specific about which lines in your essay led me to the above inferences:

    Pages 2-3: "Since both SR/GR and QM are not associated with phenomena whatsoever, modern physics, by having taken us into the never-Here and never-Now, has become speechless, i.e. cannot translate logic and mathematics back to meaning other than by fantastic speculation and daring artistic impression."

    Page 3: "Hence it doesn't come as a surprise that mathematically driven physics moves tons of data just to remain void of experience. In other words, much of modern physics stands in false, namely affirmative-logical, relations to the rest of human knowledge."

    Pages 3-4: "So, I'm absolutely convinced that classical physics has not been falsified in the sense of contradicting human experience."

    Page 4: "Of course I'm not denying that there are instrumental observations that don't agree with classical physics, but that is not what theories primarily are about. Rather they are meant to 'make observable' novel domains of experience and in order not to 'sabotage' established domains of experience they are to be incommensurable, i.e. orthogonal, and thus additive."

    Page 4: "Positive, that is, logical knowledge does not permit rhetorical questions for the reason of creating strings or networks of affirmations and precipitating as unscientific whatever is not tractable by its analytical methodology. And by successively entraining us into its network we are becoming ants in an ant colony, fish in a fish school and politically-correct repliants of ever-newer, the less intuitive the better, opinions."

    The next-to-last quote above is to me the most fascinating. I was genuinely scratching my head as to how you were handling instrumental observations that do not agree with classical physics, of which there are, shall we say, quite a few. I see that you do not deny the existence of such observations -- I was actually a bit surprised by that -- but that you instead seem to interpret them as ultimately irrelevant data that have very little impact on everyday Newtonian-framework reality and observation, and so do not really mean much... except to the positivists, who jumped on them collectively (additively) to create huge nonsensical mathematical fantasies that make bizarre and incomprehensible predictions that are unrelated to reality.

    However, I think it is the last quote above that is the most evocative of how you feel about what you perceive as the situation, and your level of anger about it. You seem convinced in that quote that this group has dedicated itself to ensuring that even that tiny remaining group of true, reality-believing inquirers such as yourself, the ones who still believe in the readily observable reality of the Newtonian world of physics, will be scooped up relentlessly, utterly isolated, driven to silence, and made into nothing more than mindless, unquestioning ants.

    Such a perspective helps make more comprehensible your unexpected view of the simple observation from Kant, the one about the incessant and unanswerable curiosity of most humans. I suspect (but am not sure) that you are reading Kant's line not as some gently intended general observation on the nature of curiosity in both children and adults, but as some sort of subtle warning from Kant to his followers that there exist people such as yourself who understand what he and his followers are really up to -- creating indecipherable scientific fantasies that they can then use to build up a power base -- and that this group needs to be shut down to keep them from asking unanswerable questions that would expose the unreal nature of their mathematical fantasies.

    I'll end by pointing out that I think you have a serious inconsistency in your beliefs, one that leaves you with two choices.

    You say you do not accept the reality of quantum theory, yet your daily actions powerfully contradict that assertion. Even as you read this text you are reaping enormous personal benefits from these supposedly imaginary mathematical frameworks.

    Why? Well, are you or are you not using a personal computer, laptop, or cell phone to read this posting, and to post your own ideas?

    The problem is that the semiconductor chips on which all of these devices depend cannot even exist within classical physics. They can only be understood and applied usefully by applying materials theory and quantum theory. So, if you insist that only objects you can see with your own senses are real, look at what you are doing right now on your electronic devices. Ask anyone you can find with a solid-state electrical engineering background how such devices work. Take the time and effort to let them teach you the basic design of devices that you can see are real and right in front of you, both at the laptop level and by using a Newtonian microscope to look at the complexity of the resulting silicon chips. Let your own senses convince you, with the help of someone you can trust--and surely you can find at least one electrical engineer whom you know well enough on a personal basis that you trust them to be honest about how those clearly real chips were designed and built?

    There are other examples. Do you have lights that turn on automatically at night? Einstein helped create quantum mechanics when he explained why the light sensors behind such devices cannot be explained by classical waves.

    Do you recall the old cathode-ray tubes? Were you aware that the electrons that write images on the screens of such devices travel fast enough that you cannot design such devices without taking special relativity into account?

    But if you insist that none of this is real, I must ask: Shouldn't you then stop buying and using all such devices? Their very existence compromises your fundamental premise that they are based upon mathematics that are not real, and are designed only to perpetuate power. How then can you continue using them?

    The only other alternative I can suggest is that you examine more closely why you feel there is a conspiracy.

    For whatever it's worth, I assure you, as someone whose friends will testify to my honesty and who has worked in high-tech and science areas for decades, that until I read your essay today I had never before encountered the idea that QM, SR, and GR might be fantasies that some group of people uses to maintain power and suppress questions. The people I have known just found these mathematical constructs to be incredibly useful for building things (QM hugely, but also SR) and for understanding observational data (GR for astronomy). They would have been horrified (and literally unable to do their jobs) if someone had taken those tools away from them.

    Since you seem to be a thought leader for this idea that QM, SR, and GR are part of a large, centuries-old mathematical power conspiracy, I don't seriously expect you to be persuaded to abandon your belief in a conspiracy to promulgate false mathematics as physics. But I can attest, from my decades-long personal experience at many levels of science and applied technology, that I simply have not encountered anything that corresponds to the kind of false math or false intent that you describe. So, I at least want to point out to you the option of changing your mind.

    Sincerely,

    Terry Bollinger

    Fundamental as Fewer Bits (Essay 3099)

    Essayist's Rating Pledge by Terry Bollinger

      Dear Armin,

      Thank you for such a thoughtful and detailed set of comments! I'll take them out of order so I can address #3 first:

      ----------

      #3. Wow, good catch! Not only are variational principles relevant to straightening out the Kolmogorov path, I had to cut that section out due to length constraints!

      The variation of variational :) that I was originally planning to use began with an explanation of functionals (paths or trajectories) from Feynman's Quantum ElectroDynamics (QED). I then talked about how the way to tell when you were close to the optimal path was that nearby paths would have very similar phases, causing the overall bundle of paths to reinforce each other. Finally, that has to be translated into the idea that similar data sets or messages are also mutually reinforcing.
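      (For reference, the textbook form I was leaning on, written in my own shorthand: each path contributes a phase, and paths near the classical one, where the action is stationary, share nearly the same phase and add constructively.)

      [math]\mathcal{A}=\int\mathcal{D}x(t)\,e^{iS[x]/\hbar},\qquad \delta S[x_{\mathrm{cl}}]=0[/math]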

      That last point is where it got too complicated, too diverse, and frankly too new. Data sets can sometimes match up in a fairly direct way, e.g. when comparing two genes by seeing how well their halves combine in solution. But in other cases you would need first to find just the right "space" in which to compare the data set, an idea that is closely related to data visualization. Finally, in the case of messages in the more conventional sense of known programs, you get into the complicated and historically rather unsatisfying field of evolutionary programming, albeit with an interesting twist that might well be worth exploring. The idea would be to create a set of transformation operators that all guarantee the program will still provide the same outputs (data set), use the operators to create as huge and dense of a cloud of such equivalent programs as possible, then look for regions in the cloud of programs in which subsets end up all being very similar. Those regions would nominally represent the "least action" regions, and thus the core of the real message.

      The biggest problem I see with the cloud idea is that unless the variational program generator is designed carefully it could easily create artifacts--e.g. areas of varying "program density"--that could mess up the search. For efficiency you would probably want to start with some kind of sparse Monte Carlo generation to look for "interesting regions", then start increasing the program (message) densities within those regions to see if the trend holds, and to find more details.

      The overall process would not be terribly different from some other forms of evolutionary programming that also create equivalent or slightly different variations. However, here the focus would be on creating functionally identical programs, not variations, and then finding new ways to shorten or optimize them. The quality criterion would also be unusual and more automated, looking for message subsets that are common across messages and thus more likely to represent the key parts of the message.
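      To make that a bit more concrete, here is a toy sketch (entirely my own construction for illustration, with made-up rewrite rules and a generic similarity measure; nothing here comes from the essay itself): start from one tiny "program", apply output-preserving rewrites to grow a cloud of equivalent variants, then rank variants by how similar each is to the rest of the cloud, as a crude stand-in for the dense "least action" region.

import random
import difflib

def run(prog, x):
    """Interpret a tiny 'program': a list of (op, constant) steps."""
    for op, c in prog:
        x = x + c if op == "add" else x * c
    return x

# Output-preserving rewrites (the "transformation operators").
def insert_noop(prog):
    p = list(prog)
    p.insert(random.randrange(len(p) + 1), random.choice([("add", 0), ("mul", 1)]))
    return p

def split_add(prog):
    p = list(prog)
    adds = [i for i, (op, c) in enumerate(p) if op == "add" and c > 1]
    if adds:
        i = random.choice(adds)
        c = p[i][1]
        k = random.randrange(1, c)
        p[i:i + 1] = [("add", k), ("add", c - k)]
    return p

def cloud(seed, size=200, steps=4):
    """Grow a cloud of distinct programs all equivalent to the seed."""
    variants = set()
    while len(variants) < size:
        p = seed
        for _ in range(steps):
            p = random.choice([insert_noop, split_add])(p)
        variants.add(tuple(p))
    return [list(v) for v in variants]

def similarity(a, b):
    return difflib.SequenceMatcher(None, a, b).ratio()

seed = [("add", 3), ("mul", 2), ("add", 4)]
variants = cloud(seed)
assert all(run(v, 5) == run(seed, 5) for v in variants)  # spot-check equivalence

# Rank variants by average similarity to the rest of the cloud; the top of the
# list approximates the dense "core" region described above.
ranked = sorted(variants,
                key=lambda v: sum(similarity(v, w) for w in variants),
                reverse=True)
print(ranked[0])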

      ----------

      #7. Again, good catch! Just a few days ago I added an extended reply to Noson Yanofsky in which I did some exploration of the idea that over time, the amount of meaning per message increases. By "time" I should note that I mean not just the past few centuries or even millennia, but the history of the entire universe. The end result for more common message types would be just one bit per message, but even in that case the meaning per bit -- the impact on the physical world -- would continue to increase over time.

      ----------

      #6. I like and agree with your point that it is way past time for the Standard Model to undergo a good discontinuous extended community reorganization of conceptual knowledge, or DECROCK. :) And yes, I just now made up that phrase and acronym because I cannot bring myself to slide two ten-cent coins on a table after decades of hearing that once-noble phrase overused and misused for sales and research funding purposes. And besides, decrock -- let's make it a verb instead of an acronym, so DECROCK has now been officially deprecated after just one sentence of existence; sorry about that, DECROCK, such are modern times! -- sounds like someone tipping over a crockpot to dump out aging bits of this and that that have been simmering for way too long. Dumping is just as much a part of decrocking as creativity, since one of the critical features of such an event is that the explosion of creativity is different from ordinary, individual-level creativity. Decrocking creativity is instead a community-wide crystallization effect in which previously disparate bits of data and isolated concepts suddenly start fitting together smoothly, pushing out and displacing the older, less useful ideas that had been obscuring and blocking the crystallization process, much like water that is too dirty can slow the formation of sugar crystals that otherwise might have formed spontaneously. Such a "sudden fitting of the pieces" happened both conceptually and quite literally in the case of the plate tectonics decrocking that took place in the early 1970s in the US. (In many other countries it happened years earlier.)

      (Belatedly initiating a Google deconfliction search... hmm... oh wow, really?... oh well, good enough, it's a very minor conflict community-wise, and it's not a verb...)

      So: It's way past time for the Standard Model to undergo a deep-dip decrocking! And as an extra benny, you get to keep your two dimes and shifty fingers in your pockets.

      ----------

      #5. I too hope that folks will begin to realize that spin statistics is a very deep and important issue, one that I would judge is playing some hidden and critical role in preventing a deeper consolidation of the Standard Model. This is like literally a Nobel Prize and worldwide fame just waiting to happen for anyone who can find it.

      #4. I hope also that someone can make some progress on that wonderful, beautiful little equation:

      [math]e^{i\pi}+1=0[/math]

      #2. Applying Kolmogorov minimization to histories of theories may be both doable and interesting, since such histories are data with structure. I would hesitate however to characterize the trampoline effect as similar to the slow accumulation of both stale facts and new facts that collectively lead to a new synthesis. The trampoline effect is pathological, creating something more akin to a huge boil full of, uh, we'll euphemistically call it fluid, that contains only expansions and variations of pathogenic tangents that lack the kind of new universe-inspired facts that cause a real Kuhn crisis to decrock the past and crystallize a brand new fabric of deeper comprehension.

      #1. You are saying something interesting there, I think, but I have to confess that I didn't quite get the idea.

      Cheers,

      Terry

      Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

      Essayist's Rating Pledge by Terry Bollinger

      Hello Terry,

      imo your contestant pledge is right on target, makes specific some of the concerns and disappointments i've felt in exploring many of the threads, and particularly the offers to barter good scores. Only point on which i hesitate is your contention that one should avoid rating an essay highly because its conclusions are agreeable to a given reader's perspective. After all the rationalizations are done most folks just do what they feel like doing, and to accept that reality seems to yield a less complex world view.

      Many thanks for the short and clear explanation of Kolmogorov complexity. Agree it is a good metric.

      Your three challenges seem for the most part well chosen and relevant to the present muddle in particle physics theory.

      the first, the Euler identity, seems perhaps the most difficult, as the compression is greatest there. Expressing it in terms of sin and cos suggests amplitude and phase of the wavefunction. And the presence of '1' suggests unitarity. Beyond that there seems to be only our desire to see what connections might 'pop out' in a deeper understanding of the physics, for which we as yet have no clear perspective.

      the second, the quest for a simple explanation of fermion-boson statistics, also remains to be had. My sense again is that we need a deeper understanding of the wavefunction. Point particle quarks and leptons with intrinsic internal properties leave us lost in almost meaningless abstraction.

      and the third, to 'refactor' SM without adding gravity or complexity... Certainly to simplify, to reduce rather than increasing complexity in our models, is an essential aspect. Agree with the hope that such a simplification would have a natural place for gravity, that it would not be necessary to put it in 'by hand' so to speak.

      It seems to me that to meet your challenges will require improved models of the wavefunction.

      Finally i think it is good to keep in mind that the geometric interpretation of Clifford algebra, geometric algebra, has shown the equivalence of GR in curved space with 'gauge theory gravity' in flat space. Introduction of the concept of curved space came not from the physicists, nor from Einstein in particular, but from the math folks. Einstein was looking for math tools to express his physics understanding. Geometric interpretation was lost with the early death of Clifford and the ascendance of the vector formalism of Gibbs, and was not rediscovered until the work of Hestenes in the 1960s. What was available to Einstein was the tensor calculus. History is written by the winners, and Einstein's true perspective has perhaps been distorted by those who most readily embraced the formalism he adopted, the math folks and their acceptance of Riemann's view of his creation.

        Dear Terry (if that's you),

        Thanks for giving so much thought to my essay!

        To begin with: already the comment I left on your site should make clear that I'm not under the impression that there is an ongoing conspiracy, but rather believe that much of science has got "lost in math", to quote Sabine Hossenfelder. However, unlike Hossenfelder, I take the title of her book literally, namely that certain branches of physics have ended up in a blind alley by having moved (in Kantian terms) beyond possible experience. Hence my comment was triggered by your plain assertion that the universe 'indisputably' exists independent of mankind. I was simply shocked to find eclipsed the knowledge and (logically negative) experiences of people I guess we equally admire (Hilbert, Goedel, Tarski, etc., and also Wittgenstein).

        Though I admit that my essay is fairly provocative, and obviously arousing your dissent, you shouldn't claim of the essay what expressis verbis it doesn't say. You say that I interpret Kant's view (of the peculiar fate of human knowledge) as the assertion "that people should stop asking questions", whereas I say that "...for us to be human the scientific-rhetorical question, while it has no answer, is yet the condition sine qua non,...". So, what I say is that the question is very important, but that we should abandon all hope that it can be answered, for the reason of its being made up from incommensurables, i.e. containing a priori elements. Hence the question is the ground from which to think beyond it.

        I happily use my computer for the reason that it is not quantum but wonderfully deterministic. The behavior of electronic components has been derived from Bohr's model of the atom. The foundations of the electronic band structure were developed by Bloch, Bethe, Peierls and A. Herries Wilson between 1928 and 1931, who all were students of Sommerfeld or Heisenberg. So, much quantum, but little mechanics there.

        Last, in my essay I say that modern physics offers explanations and models for instrumental observations deviating from classical physics. And that's absolutely fine with me unless these mathematical devices are being reified (as e.g. space-time or configuration space), for then they begin to 'predict' things beyond possible experience.

        You see, no conspiracy only lost in math...

        Heinrich

        Greetings Peter and Michaele,

        Thank you for this marvelous and extremely interesting set of comments! I did not know of the existence of viXra.org, which seems to have the same free-access goals that arXiv.org originally intended to provide. Once I found it (with some difficulty; Google Scholar does not index it) and your spot there, I downloaded a large sampling of your papers.

        Each number (n) indicates your comment paragraph to which I am responding:

        (1) That's a good catch on my Pledge. The italicized part of my line about not making the conclusion everything shows my intent was what you just said it should be, but my second line sort of contradicted that. I've updated the Pledge to v1.3 to fix the second line; please take a look and see if it works.

        (2) Thanks! To be honest, looking at Kolmogorov more closely for the purposes of this contest helped me understand it better, too. Recognizing that the Kolmogorov minimum model is isomorphic to a formal model for lossless data compression was fun, sort of like a little "aha!" light going off in my head.

        (3) That is encouraging feedback on my three challenges; thanks!

        (4) The Euler equation challenge was in some ways the most interesting to me, in no small part because it is a pure and pristine outcome of the argument in the essay. Unlike the other two, I have absolutely no idea where it might connect into physics. But if I believe my own arguments about Kolmogorov compression, then there is a very good chance that somehow it does, and we just do not see it. Certainly the sin-cos breakdown seems like a hint, I agree. I've always found that equation interesting, but now my curiosity is even higher.
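        (Spelling out the sin-cos breakdown for anyone following along, since it really is the only overt hint the identity gives us:)

        [math]e^{i\theta}=\cos\theta+i\sin\theta,\qquad e^{i\pi}+1=\cos\pi+i\sin\pi+1=0[/math]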

        You do realize that your own impedance reformulation of quantum math may provide a new way to look at Euler's equation, yes? Sometimes something as simple as flipping the numerator and denominator provides a whole new way to look at old problems, as you clearly have noticed by using impedance instead of the more traditional conductance. So who knows, perhaps you and Michaele (I confess I have no idea how to pronounce her name) will nail that one!

        (5) You said "... Point particle[s] ... leave us lost in almost meaningless abstraction."

        Yep, especially since QM assures us that point particles do not exist anywhere in the real universe. So why then do we insist on using them in our math, which unavoidably results in infinity artifacts? (By "artifacts" I mean computational results that are not really part of the problem, but instead are just noise generated by the particular method we are using to model the problem.) I am not in the least surprised that you were able to get rid of renormalization costs in your impedance approach, since by flipping your primary fraction upside down you halted the model-induced generation of point-particle infinitesimal artifacts. If you've written or plan to write any software for your model, I would anticipate that such software will prove to be hugely more computationally efficient for the same reason.

        I think a lot more folks need to hear about your impedance reformulation of QM, and to take its potential computational properties seriously. You do realize that more efficient quantum modeling software can be worth lots of money in areas such as pharmaceuticals and materials research? If your impedance reformulation can increase computer based quantum modeling efficiency by eliminating the costly renormalization steps, you could well be sitting on top of a little gold mine there without even realizing it.

        (6) I too would love to see that simpler Standard Model! Simpler versions of it would almost certainly clarify one way or the other how gravity fits in.

        (7a) You are preaching to the choir! I love the Clifford and (more cryptic) Grassmann works. I too have never quite forgiven Gibbs, but in my case more specifically for his bad-programmer artificial deconstruction of the gorgeous and dimensionally unique symmetries of Hamilton's quaternions to create dot and cross products. The easily dimensionally generalized dot products of vectors I'm sort of OK with, but the 3D-locked cross products, in which he did things like arbitrarily inverting signs, are to me a mess that likely covers up something simpler.

        Maxwell did after all write all of his laws in quaternions, and they worked beautifully. It was Heaviside who massively transformed and compressed them into their current vector form. Remarkably, Heaviside then insisted that the much more compact and massively transformed set of equations still be credited to Maxwell. But despite this selfless act of generosity from Heaviside's soul (which perhaps went up, up, up, up to the Heaviside layer? and yes, the ionosphere really was named for that same Heaviside, but only humans in Cats outfits seem to recall that), the conversion had some issues: the quaternion and vector versions of Maxwell's laws are not quite isomorphic, due to the Gibbs delinking of the dot and cross products, and to his reinterpretation of the cross product. I suspect that this is the source of certain minor anomalies in our modern use of Maxwell's equations.

        It was the subsequent attempts to make Gibbs' cross product just as easily generalizable to higher dimensions as the dot product that resulted in Grassmann and Clifford algebras. I have some trouble with that. Since the very first cross product had already had its original subtle quaternion symmetries mangled by Gibbs, artifacts and some obscuration had already begun before Grassmann and Clifford tried to generalize the concept further. To me that speaks of the likely loss of more subtle symmetries that exist only in the 3+1 space of quaternions, and at least some insertion of artifact noise into Clifford and Grassmann algebras.

        Alas, many a physics PhD student has crashed their thesis into the stubborn wall of figuring out how quaternions may be relevant to more than just Maxwell's equations. But I'm going to give you a bit of a hint here: Given what you are doing and are trying to do, you really need to look a bit more closely at the true origin of this entire generalization mess, which is the quaternions. And by "mess" I am including not just the dot-product vectors, which at least generalized cleanly, but also the cross-product Clifford algebras, which frankly did not come off nearly as well after the Gibbs-induced Great Split. 3-space is quite unique for its vector-spin equivalence, and only quaternions truly capture that. Clifford algebras are nice, but can never recapture that unique set of 3-space relationships at higher dimensionalities, simply because they do not exist in any of the higher (or lower) dimensionalities. I don't think it's an accident that our space is a 3-space.

        (7b) Regarding both your impedance model of matter and your observation that there are viable mathematical alternatives to curved space, you may find this essay of interest:

        A Classical Reconstruction of Relativity by Declan Andrew Traill.

        My comments on his essay may help explain why I suspect Andrew's ideas are relevant to yours.

        BTW, having looked at his early papers closely, Einstein really was, as many have asserted over the years, not that great at math. So as you noted, he tended to use whatever was available and that others could help him with. Oddly, he did not seem to be particularly visual either, since for example it was Minkowski who came up with spacetime. As best I can tell, Einstein was instead sort of like a human physics simulator. That is, he could almost intuitively model and understand how physics would work in a given situation, and then use that understanding to look for and fix problems in the simulation. The hard part for him was the extreme difficulty he tended to have when attempting to convert those insights into words or equations. I cannot help but think of it as a bit like some form of autism, only one focused around physics. A very unique mind, Einstein, which I guess should be no surprise to anyone.

        Cheers,

        Terry

        Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

        Essayist's Rating Pledge by Terry Bollinger

        Dear Terry,

        I enjoyed very much your essay, and I take the opportunity to say that your pledge is great and we should all adopt it. I think the idea "Fundamental as Fewer Bits", using Kolmogorov complexity, is great, and I am also using it to propose to identify the simplest theory in section 5 of this reference. Of course, this is not an absolute measure, because each equation has behind it implicit definitions and meanings. Your examples E=mc2 and Euler's identity reveal this relativity when you try to explain them to someone who doesn't know mathematics and physics. But there is a theorem showing that the Kolmogorov complexity of the same data expressed in two different languages differs only by a constant (which is given by the size of the "dictionary" translating from one language into the other). So modulo that constant, Kolmogorov complexity indicates well which theory is simpler. This is a relativity of simplicity if you want, and of fundamentalness, because it depends on the language. But the difference is irrelevant when the complexity of the theory exceeds considerably the length of the dictionary. One may wonder: what if the most fundamental unified theory is simpler than any such dictionary? Well, in this case the difference becomes relevant, but I think that if the theory is so simple, then we should use the minimal language required to express it. So if the dictionary is too large, it means we are not using the best formulations of the theory. This means that once we find the unified theory, it may be the most compressed theory, but we can optimize further by reformulating the mathematics behind it. For example, Schrödinger's equation is a partial differential equation, but the language is simplified if we use the Hilbert space formulation. Compression by reformulation occurs also by using group representations for particles, fiber bundle formulation of gauge theories, and Clifford algebras.
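        (In symbols, the invariance theorem referred to here says that for any two universal description languages L1 and L2 there is a constant depending only on the pair, not on the data x:)

        [math]\left|K_{L_1}(x)-K_{L_2}(x)\right|\le c_{L_1,L_2}\quad\text{for all }x[/math]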

        Another idea I liked in your essay is the trampoline effect. I would argue here that I see the trampoline as being again relative, in the following sense. Let's see it as an elastic wall, rather than a trampoline, or if you wish, as a potential well. Once you go beyond the wall, or outside the potential well, the trampoline accelerates you instead of rejecting you back. I would take as an example each major breakthrough. Once the wall between Newtonian mechanics and special relativity was left behind, special relativity reached a new compression level, unifying space and time, energy and momentum, the electric and the magnetic fields in a single tensor, etc. Then other trampolines appeared, which separated special relativity from general relativity, and from quantum mechanics. Similarly, when we moved from nonrelativistic quantum mechanics to relativistic QM, the description of particles became simply one in terms of representations of symmetry groups, the Poincaré and gauge groups. It is true that at this time we have been hitting for decades the wall which separates our present theories from quantum gravity, and there's a similar wall separating them from a unified theory of particles. And at least another wall beyond which we expect to find the unified theory. So my hypothesis, based on previous history, is that the trampoline rejects us back as long as we don't break through, in which case it will accelerate both the discovery and the simplification. And until we get to the terminus, more complexity may await us beyond each wall, as usually happened so far, since every time we found the new simplicity, new phenomena were discovered too. It's a rollercoaster. But I believe at the end there will be a really short equation, and underlying it some simple mathematical structure, but initially not so simple to express. We will see.

        Thank you for the great essay, and good luck in the contest!

        Best wishes,

        Cristi Stoica, Indra's net

          Cristi,

          Thank you for such kind remarks, and I'm glad you liked my essay!

          Your first paragraph above is a very good analysis of issues that for reasons both of essay length limits and keeping the focus on a general audience I decided not to put into the essay.

          One way I like to express such issues is that the full Kolmogorov complexity can be found only by treating the functionality of the particular computer, quite literally its microprogramming in some cases, as part of the message. That's really not all that surprising, given that one of the main reasons for creating high-level instructions within a processor is to factor out routines that keep showing up in the operating system, or in the programs themselves.

          I like your analysis of a two-language approach. Another way to standardize and ensure complete comparisons is to define a standardized version of the Turing machine, then count everything built on that as part of the message. That way basic machine functions and higher-level microcode instructions all become part of the full set of factoring opportunities in the message.

          Incidentally, a Turing-based approach also opens up opportunities for very unexpected insights, including at the physics level.

          Why? Because many of the very instructions we have pre-programmed into computers contain deep assumptions about how the universe works. Real numbers are a good example, since their level of precision amounts to an inadvertent invocation of Planck's constant when they are applied to data for length, momentum, energy, or perhaps most importantly, time. If you are trying to be fundamental, a lot more caution is needed on how such issues are represented at the machine level, since there are multiple ways to approach the numeric representation of external data, and the operations on them.

          Here's an example: Have you ever thought about whether a bundle of extremely long binary numbers might be sorted without having to "see" the entire length of each number first?

          Standard computers always treat long numbers as temporally atomic, that is, you always treat them as a whole. This means you have to complete processing of each long unit before moving on to the next one, and it's the main reason why we also use shorter bit lengths to speed processing.

          But as it turns out, you can sort bundles of numbers of any length, even ones infinite in length, by using what are called comparators. I looked into these a long time ago, and they can be blazingly fast. They don't need to see the entire number because our number systems (also parts of the total program, and thus of our assumptions!) require that digits to the right can never add up to more than one unit of the digit we are looking at. That means that once a sort order is found, no number of follow-up bits can ever change what it is.
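          Here is a minimal sketch of what I mean (a toy illustration of my own, not any standard library; the "comparator" is just a bitwise compare that stops at the first differing bit, so the full expansions never need to be seen):

from itertools import count
from functools import cmp_to_key

def bits_of(x):
    """Lazily yield the binary expansion of x in [0, 1), one bit at a time."""
    for _ in count():
        x *= 2
        bit = int(x)
        x -= bit
        yield bit

def comparator(stream_a, stream_b, max_bits=10_000):
    """Return -1/0/+1 while reading only as many bits as needed to decide.
    Later bits can never add up to a full unit of an earlier bit, so the
    first differing bit settles the order permanently."""
    for _, a, b in zip(range(max_bits), stream_a, stream_b):
        if a != b:
            return -1 if a < b else 1
    return 0  # undecided within max_bits; treat as equal

# Sort a small bundle of expansions without materializing them in full.
values = [0.625, 0.3, 0.3125]
ordered = sorted(values, key=cmp_to_key(
    lambda x, y: comparator(bits_of(x), bits_of(y))))
print(ordered)  # [0.3, 0.3125, 0.625]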

          But all of this sounds pretty computer-science-abstract and number-crunchy. Could ideas that deep in computing theory really affect the minimum size of a Kolmogorov message about, say, fundamental physics?

          Sure they could. For example, for any bundle of infinite-length integers there are only so many sorted orders possible, and so only so many states needed in the computing machines that track those numbers and their sorted order. What if those states corresponded to states of the quantum numbers for various fermions and the infinite lengths to their progression along a worldline, or alternatively to the various levels of collapse of a quantum wave function?

          I really am just pulling those examples out of a hat, so anyone reading this should please not take them as hints! But that said, such radically different approaches to implementing and interpreting real numeric values in computers are good examples of the kind of thinking that likely will be needed to drop fundamental physics to smaller message sizes.

          That's because the Planck relationships like length-momentum and time-energy argue powerfully that really long numbers with extreme precision can only exist in the real world at high costs in terms of other resources. Operating systems that do not "assume" infinitely precise locations in space or time to be cost-free likely are closer to reality than are operating systems that inadvertently treat infinitely precise numbers as "givens" or "ideals" that the machine then only approximates. It's really the other way around: Computers that make precision decisions both explicit and cost-based are likely a lot closer to what we see in the quantum model, where quantum mechanics similarly keeps tabs on precision versus costs. Excessive use of real numbers, in contrast, can become a very narrow but very bouncy example of the trampoline effect, causing the computation costs of quantum models that use them to soar by requiring levels of precision that go far beyond those of the natural systems they are intended to model.
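          (For concreteness, the tradeoffs I mean are the usual order-of-magnitude pairings:)

          [math]\Delta x\,\Delta p\gtrsim\hbar,\qquad \Delta E\,\Delta t\gtrsim\hbar[/math]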

          Getting back to your comments about higher-level reformulations in terms of e.g. gauge theories and Clifford algebras: Absolutely! Those very much are examples of the "factoring methods" that, if used properly, often can result in dramatic reductions in size, and thus bring us closer to what is fundamental. The only point of caution is that those methods themselves may need careful examination, both for whether they are the best ones and for whether they, much like the real-number examples I just gave, contain hidden assumptions that drive them away from simpler mappings of messages to physics.

          Regarding your second paragraph: Trampolines as multi-scale potential wells, heh, I like that! I think you have a pretty cool conceptual model there. I'm getting this image of navigating a complex terrain of gravity fields that are constantly driving the ship off course, with only a very narrow path providing fast and accurate navigation. I particularly like your multi-scale (fractal even?) structuring, since it looks at the Kolmogorov minimum path at multiple levels of granularity, treating it like a fractal that only looks like a straight line from a distance. That's pretty accurate, and it's part of why a true minimum is hard to find and impossible to prove.

          Thanks again for some very evocative comments and ideas!

          Cheers,

          Terry

          Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

          Essayist's Rating Pledge by Terry Bollinger

          Heinrich,

          Thank you for such a thoughtful (and cheerful) reply to my critique! Reading your reply also makes me feel better about your essay itself, since it shows a side to your views that perhaps does not come through in your more narrowly focused essay.

          I think the saying that we can agree to disagree works here. But doesn't using a fully classical computer sometimes get a little Bohring?... :)

          Finally, I just have to mention that your first name, Heinrich, stands out for me because it was a very common name in my family eight generations ago, when they first came to the New-To-Europeans-World from Germany. They were from the same area in Europe, near a large lake (can't recall the name), as Bollinger sandstone and Heinrich Bullinger. So, probably some interesting history there. The name was transformed to Henry once they settled in Missouri.

          (Regarding my phrase New-To-Europeans-World: I think it is quite likely that the native Americans had already noticed both that their world was there, and that in terms of generations of ancestors, it was not even particularly new. They were quite observant about such things, often more so than folks who walk around all day with smart phones in front of them... :)

          Cheers,

          Terry

          Terry,

          Did you see my 17.2.18 post above & 100sec video deriving non-integer spins from my essay's mechanism resolving the EPR paradox? (I've just found the 'duplet state' confirmation in the Poincare sphere)

          That all emerged from a 2010 SR model http://fqxi.org/community/forum/topic/1330 finally able to resolve the ecliptic plane & stellar aberration issues and a tranche of others (expanded on in subsequent finalist essays).

          i.e. you'll be aware of George Kaplan's USNO circ (p6) following IAU discussions.

          (Of course all, including editors, dismiss such progress as impossible, so it's still not in a leading journal!)

          Hope you can look & comment

          Peter

            Dear Terry,

            Thank you for your extended reply. I just wanted to acknowledge the following:

            3. I am glad to see that connections between the variational principles and Kolmogorov Complexity have been discovered already. Applied specifically to the path integral, I believe that there is a more fundamental principle at work. In an amended form of Philip Gibbs' phrasing, it could be stated as "Nothing actual means everything potential", and in a formulation that I have called the default specification principle (and which I think is a bit more precise), it can be stated as "The absence of an explicit specification entails all possible default specification outputs". The idea is that when something is not in some way specified or pinned down, then of all the consequences that could result out of performing that specification, all are available as "live possibilities". This has an essentially tautological character, except that it captures a distinction between things which are specified "explicitly" and things which are specified "by default", i.e. they are specified due to background constraints. For instance, if I wait until I throw a regular six-sided die, I know that, say, the number 7 or the King of clubs are not among the possible outcomes of a throw. I don't know enough about information theory to be able to tell, but it seems to me that the realization that there is such a distinction, which is essentially ontological in character, still remains to be made. Possibly, if and when it is made, it could help by shifting some of the complexity of, say, a message to the background constraints.

            7. I find the notion of an increase of meaning per message very interesting; in my mind it seems something analogous to a second-order effect, but possibly it could be conceptualized in terms of just the kind of background constraint I mentioned in my previous point. To give an (albeit rather hokey) analogy with the throw of a die: When I say that I hold a die in my hand which I am about to throw, but nothing further about the nature of the die, the possibilities for outcomes of a throw can be large. Without specifying the number of sides or what is on the sides, even the number 7 and the King of clubs are possibilities. Perhaps over time, you learn somehow that my die only contains numbers on its sides, in which case the King of clubs is no longer a live possibility, but the number 7 still is. When you finally learn that I only ever throw six-sided dice, you can then also eliminate the number 7, and thereby simplify the encoding of the possible outcomes. Conversely, by somehow incorporating these background constraints, you could increase the "meaning per message" by referring (as before) to a "die throw" but shifting the complexity to the background instead of having it contained in the message itself. (Incidentally, I will give a talk on the default specification principle at the APS March meeting and plan on filming it; if you are interested, let me know and I'll notify you when I upload it.)

            6. I honestly did not realize that "paradigm change" has become a dirty phrase, but then I may not have been exposed to its abuse as much as you have. DECROCK is certainly a humorous take, but however it is called, I agree that it will involve discarding ideas which are no longer useful.

            2. I think in Kuhn's model, the "slow accumulation of both stale facts and new facts" is actually still part of normal science. As I understand it, the crisis period refers to the one in which there is a competition between a number of different candidates for a paradigm without a clear favorite.

            1. This was meant simply as a straightforward generalization of the example you gave with the sequence in pi: Instead of just pi, consider a set of irrational numbers; instead of a decimal expansion, consider all expansions (binary, ternary, etc.) up to some base that is deemed useful; and instead of just some arbitrary number of digits in the expansion, consider some standard length. Then use this to create a giant look-up table in which the specification of an address is less complex than the specification of the sequence it points to. Like I said, this may be more naive or trivial than you might have thought.
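
            A toy version of that look-up-table idea in Python (the table here is just the first 60 decimal digits of pi, hard-coded for illustration; whether the address really comes out shorter than the sequence itself is exactly the open question):

            # First 60 decimal digits of pi after the "3.", used as a shared look-up table.
            PI_DIGITS = ("1415926535897932384626433832795028841971"
                         "69399375105820974944")

            def encode(sequence):
                """Return (offset, length) if the sequence occurs in the table, else None."""
                offset = PI_DIGITS.find(sequence)
                return None if offset == -1 else (offset, len(sequence))

            def decode(offset, length):
                return PI_DIGITS[offset:offset + length]

            seq = "433832795"
            address = encode(seq)
            print(address, decode(*address) == seq)   # (22, 9) True
            # The catch: for a typical random digit sequence of length n, the first offset
            # at which it appears is of order 10**n, so naming the address costs about as
            # many digits as simply writing the sequence down.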

            All the best,

            Armin

            Dear Terry,

            Maybe it's in the German genes that we prefer to think it out over trying it out...

            Heinrich

            P.S. From the hints you gave, your family once came from Switzerland. Gruezi!

            Peter Jackson, Eckard Blumschein, Wayne R Lundberg, James Lee Hoover, Marc Séguin, and Jeffrey Michael Schmitz:

            This is to let you know that I am aware that all six of you have unanswered postings on my essay-level posting thread. I will strive mightily today, Wed. 21 Feb, to provide at least short replies to all of your postings. My replies will be in the form of direct subthread replies to your postings. I will also try, but cannot promise, to assess your essays, unless you have requested me not to. If I assess your essay, it will be as a new post under your essay-level posting thread.

            As many of you have likely noticed, my problem is that I tend to do a pretty deep analysis of each posting and essay, including looking up author papers if they exist. So even when I try hard to be "brief", I tend not to be! I also do most posting composition and editing offline to reduce the chances of loss, check spelling more carefully, and make sure my sentences are whole. That too slows the process.

            If you don't believe that I tend to be overly talky... well, take a look at this "brief alert to unanswered authors" that you are reading right now... :)

            Cheers, Terry Bollinger

            "Quantum mechanics is simpler than most people realize. It is no more and no less than the physics of things for which history has not yet been written."

            Dear Terry,

            Just one point I overlooked in my previous response: "I think is quite likely that the native Americans had already noticed both that their world was there, and that in terms of generations of ancestors, it was not even particularly new".

            There still exist almost zero-contact Indian tribes deep in the Amazonian jungle whose languages have words for exactly one generation up and one generation down. So there is no word for, e.g., grandfather. Hence there is good reason to assume that North American Indians, prior to colonization, had similar kinship recognition, i.e. I don't think they had any idea that their world "was not particularly new" (Whorf's analysis of the absence of 'time' in Hopi is famous). What North American Indians indeed had (or at least most of them) were creation myths, that is, principled explanations of how their existence came to be, typically beginning with ravens, eggs, deer, and coyotes... today it's the Big Bang and Inflation...

            Heinrich

            Terry, thanks for a very clear and interesting essay.

            It seems there are two types of information covered in your essay. There is the information required to describe a theory such as the Standard Model, and there is the information in the state space of the theory. Mostly you are talking about the former, but, for example, when you talk about redundancy in symmetry, that is about the latter.

            Do you make any distinction between the roles played by these two types of information?

              Terry,

              I've been mulling this over. If I accept Kolmogorov (Kolmogorov-Chaitin) complexity as the ultimate standard of what is fundamental, let me see if I understand:

              You would have me believe that the world is fundamentally made of information bits that are algorithmically compressible. Okay, I'll entertain that notion.

              Except that you used Einstein's equation, E = mc^2, as an example of minimal Kolmogorov complexity, arguing that mathematical conciseness is the standard.

              The equation, however, is not irreducible. The meaning of the equation is in the expression E = m; the second-degree factor tells us that the relations in the equation are dynamic, that energy and mass may take on an infinite range of values. The binding energy was then discovered through experiment, setting a practical limit.
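
              (For reference, the relation under discussion, written in conventional units, in natural units, and in the momentum-dependent general form of which E = mc^2 is the p = 0 special case:

              E = m c^2                     (rest energy, conventional units)
              E = m                         (natural units, c = 1)
              E^2 = (m c^2)^2 + (p c)^2     (general energy-momentum relation) )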

              So I find myself moving ever closer to Brian Josephson's premise that meaning itself is fundamental. And meaning seems to be that which contains the requisite first degree information to "Be fruitful and multiply" as the Bible has it. So I suspect that meaning precedes construction. Or compression.

              Enjoyed the essay.

              Best,

              Tom

                Jeff,

                I did a boo-boo and replied to you at the essay posting level instead of directly to your above post. So in case you or anyone interested has not seen my reply, you can either mosey down a couple of posts to the next Author posting, or try this direct link. I also posted an essay assessment under your essay, which I assume you have seen. For anyone else interested, my assessment of Jeff's essay is located here.

                Cheers, Terry

                Peter Jackson wrote on Feb. 17, 2018 @ 17:30 GMT

                https://fqxi.org/community/forum/topic/3099#post_144251

                Dear Terry,

                I was most impressed, even inspired. Your ability to find the right questions is leagues above most who can't even recognize correct answers! Lucid, direct, one of the best here.

                I entirely agree on simplicity, as the title of my own essay suggests, but isn't one reason we haven't advanced that our brains can't quite yet decode the complex puzzle (information)?

                But now, more importantly: I'd like you to read my essay, as two of your sought answers are implicit in an apparent classical mechanism reproducing all of QM's predictions and CHSH > 2. Most academics (& editors) fear to read, comment, or falsify it due to cognitive dissonance, but I'm sure you're more curious and honest. It simply follows Bell, tries a new starting assumption about pair QAM using Maxwell's orthogonal states, and analyses momentum transfers.

                Spin 1/2 & 2 etc. emerged early on and are in my last essay (scored 8th but no chocs). Past essays (inc. those scored 1st & 2nd) described a better logic for SR which led to 'test by QM'. Another implication was cosmic redshift without accelerating expansion, closely replicating Euler at a 3D Schrödinger sphere surface and Susskind's seed for strings.

                By design I'm quite incompetent to express most things mathematically. My research uses geometry, trig, observation & logic (though my red/green socks topped the 2015 Wigner essay). But I do need far more qualified help (a consortium is forming).

                On underlying truths & the SM, gravity, etc., have you seen how closed, multiple & opposite helical charge paths give a toroid? ...but let's take things 2 at a time!

                As motion is key, I have a 100-sec video giving spin half (+, QM, etc.), which you may need to watch 3 times, then a longer one touching on Euler but mainly redshift, SR, etc. But maybe read the essay first.

                Sorry that was a preamble to mine but you did ask! I loved it, and thank you for those excellent questions and encouragement on our intellectual evolution.

                Of course I may be a crackpot. Do be honest, but I may also crack a bottle of champers tonight!

                Very best

                Peter

                Peter Jackson replied on Feb. 18, 2018 @ 20:16 GMT

                Terry,

                I omitted the link to the 'Ridiculously Simple' 100-second video glimpse.

                Peter

                Peter Jackson replied on Feb. 18, 2018 @ 20:19 GMT

                ...this time with the first 'h'(ttp): https://youtu.be/WKTXNvbkhhI (100 sec, Classic QM)

                Peter Jackson wrote on Feb. 21, 2018 @ 12:35 GMT

                https://fqxi.org/community/forum/topic/3099#post_144803

                Terry,

                Did you see my 17.2.18 post above & the 100-sec video deriving non-integer spins from my essay's mechanism resolving the EPR paradox? (I've just found the 'duplet state' confirmation in the Poincaré sphere.)

                That all emerged from a 2010 SR model http://fqxi.org/community/forum/topic/1330 finally able to resolve the ecliptic plane & stellar aberration issues and a tranche of others (expanded on in subsequent finalist essays).

                i.e. you'll be aware of George Kaplan's USNO circular (p. 6) following the IAU discussions.

                (Of course everyone, editors included, dismisses such progress as impossible, so it's still not in a leading journal!)

                Hope you can look & comment

                Peter

                Hi Peter,

                Wow, what generous comments! I am very pleased in particular that you said I may have inspired you a bit. That makes me feel better than anything else you could have said, because in the end that was the hidden intent of the essay: to encourage folks to look at themselves as capable of more than they ever imagined. Sometimes nothing more than writing up a new idea in a way that people can understand is the best way to help them realize their own potential. There are just too many distractions sometimes, and that in turn keeps us from realizing that we can focus our minds and efforts to develop powerful new ideas. Take that to a community level and wow, the threads and bundles of possible positive futures open up in ways no one could have anticipated.

                On to other issues! The first is that you accidentally and very innocently stepped on what recently has become a hot button of mine, which is this:

                OMG, how can you even kiddingly call yourself a 'crackpot' for believing and advocating an extremely common-sense-compatible position that Einstein, Bell, and any number of very smart people felt must be correct??

                Entanglement is always an interesting debate, but I don't think even kiddingly using that particular term for self-deprecation is a good idea. It is one of the most overused ad hominem phrases in all of science, and for that reason it is also one of the most damaging mental toxins that limit the overall ability of such communities to increase their collaborative intelligence. Intelligence at the delicate community level simply cannot function well if at the individual level its members can with impunity inject such mental toxins to kill off any cross-community communication that they don't happen to like.

                And that is not even getting into the ethics of using such mental toxins specifically to harm other human beings!

                That said, sigh, I've used that term myself, more than once, although usually with accompanying definitions of the behaviors for which I was using it. Usually it was more out of frustration at the lack of a better word for describing a certain set of strongly self-defeating behaviors and analytical approaches. Physician, heal thyself indeed!

                But let's get back to the issue of entanglement.

                How the heck can your being in the company of no less than Einstein on that point merit anyone calling you names? Forget spin entanglement; Einstein alone had the brilliance back in the early 1900s to see that no quantum probability wave function can be reconciled with classical physics. His very first shot across the bow was a pointed thought experiment that he posed to his audience of fellow quantum physicists: If you have a large wave function, say one a light year across, how do you keep multiple people from finding the same electron as they individually search that wave function?

                The only resolutions Einstein could see were either (a) there was never more than one point-like electron in the wave function to begin with (de Broglie's pilot wave model), or (b) the quantum wave function had to collapse "instantly" across its entire light-year diameter, removing itself at faster-than-light speeds so that no one else could find the same electron. To Einstein, for whom the principle of locality was absolute, that was enough to prove that wave functions as defined then (and now) could not possibly be complete descriptions of physical reality. It was and is an amazingly perceptive argument.
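
                A minimal numerical sketch of the constraint Einstein was pointing at, assuming nothing more than the Born rule on a toy 1-D grid of detectors: however widely the amplitude is spread, exactly one detector fires per electron.

                import numpy as np

                rng = np.random.default_rng(0)

                # Toy 1-D wave function spread across a wide bank of detector bins.
                x = np.linspace(-0.5, 0.5, 1001)        # detector positions (arbitrary units)
                psi = np.exp(-x**2 / (2 * 0.2**2))      # broad Gaussian amplitude
                prob = np.abs(psi)**2
                prob /= prob.sum()                      # discretized Born rule

                # Each trial is one electron: exactly one bin "clicks", and the rest of the
                # (formerly nonzero) distribution is then irrelevant for that electron.
                clicks = rng.choice(len(x), size=5, p=prob)
                print("detector bins that fired, one per electron:", clicks)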

                Having said all that, allow me now to shock a few folks with another disclosure: My position on quantum entanglement is precisely the opposite of those who believe that locality is the primary reality. That is, not only do I accept the reality of quantum entanglement for both experimental and theoretical reasons, I consider space and time as we know them to be secondary to the world of quantum entanglement.

                Our universe emerged from a fully quantum place, and we continue to "mine" what remains of that initially infinite range of undefined futures through the process we call entropy. The two are opposite sides of the same coin: a past that is closed to any further change via accumulation of classical information ("history"), and a future that remains partially open through a sort of mining of the many shreds and fragments of indefinitely broad, undefined futures that existed before the Great Break, and which have not yet been consumed by entropy. We call that two-faced coin space, and it is a place where the original quantum symmetries from before the Great Break now can be seen only within the nooks and crannies of smallness or coldness or indifference (transparency) in which entropy can be held at bay for a while longer. Everywhere else the coin of space displays itself as the eternally changing Hamiltonian of "now". This universe-encompassing Hamiltonian grasps in one hand the statistically irreversible givens of the past, and in the other hand the freedoms of the yet-to-be-defined quantum future, and from them both forges still more pages to add to the ever-expanding annals of entropy.

                And life is there, snatching its opportunity to persist and expand by setting up the givens of the past so as to ensure its own continuity into the future.

                Back to entanglement, again.

                Given all I said earlier about Einstein's amazingly perceptive arguments against entanglement, how can I possibly also believe that entanglement is real?

                Peter, on page 8 of your 2017 FQXi essay you say: "The entanglement experiments of Aspect34 and Weihs et al35 reported unexplained 'rotational inconsistencies' but results followed predictions when ignored, so they were."

                Fact-based turnabout is fair play, I think. So here is my own pro-entanglement anomaly for you and others either to accept or to discount as you see fit:

                ID230 Infrared Single-Photon Detector Hybrid Gated and Free-Running InGaAs/InP Photon Counter with Extremely Low Dark Count

                My main point is that things have, um, moved along quite a bit since the now-ancient days of Aspect. A lot of hard-nosed business folks figured out years ago that arguments against the very existence of such phenomena do not matter much if you can simply build devices that violate Bell's inequality, use them to encrypt critical data transmissions, and last but not least make a lot of bucks by selling them.

                I'll make two additional remarks on your many interesting comments:

                First, spin.

                I like to think of spin as a bit like gearing. The outside gear is the observer turning the entire system around a few times, like a pot on a pottery wheel. The inner gear is the "spin state", or degree of resulting rotation in response to the external manipulations of the observer.

                Spin 1/2 has a half-speed gear inside that doesn't finish one full circle until the outside one spins twice, so the observer would see it lagging noticeably in comparison to her potter's wheel. For spin 1, the inner gear (observed object rotation) and outside gear (rotation of the potter's wheel) are locked together. For spin 2, the object is geared for high speed, circling around twice per observer-induced wheel rotation.

                All of this is very easy to visualize, since we've all seen gears, e.g. on bicycles, that go faster or slower than the driving gear. However, for spin 1/2 I assure you that this simple visualization is not the one that usually gets presented, which is an odd sort of construction that doubles the outer gear instead. I would suggest that this much more continuous view is a better way to understand half spin, and that this continuity could even help provide some insights into why 1/2 is so different. Even in this simple model, for example, it is the only spin that is slower than the driving spin. Spin 1/2 (and also the higher fermion spins of n+1/2, n=1,2,...) also has the very interesting property of causing the object to half-turn in response to one normal turn.

                Think about that in terms of phases. If the opposite sides of the object represent plus and minus phases of some sort, then half-spins have the potential to match positive and negative phases of adjacent identical fermions in ways that integer spins do not. Could this be related to the zero-probability surfaces that form in xyz space between adjacent antisymmetric fermion wave solutions? I honestly do not know, since one has to be careful how one interprets such models. But it certainly smells interesting...
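
                For what it's worth, here is a minimal numpy sketch of the textbook spin-1/2 rotation operator, which behaves exactly like the half-speed inner gear above: one full 2*pi turn of the "wheel" returns the spinor only up to a minus sign (the phase flip just mentioned), and it takes two full turns to come all the way back.

                import numpy as np

                def rotation(theta):
                    """Spin-1/2 rotation about z by angle theta, i.e. exp(-i*theta*sigma_z/2),
                    written out explicitly for the diagonal Pauli matrix sigma_z."""
                    return np.array([[np.exp(-1j * theta / 2), 0],
                                     [0, np.exp(1j * theta / 2)]])

                for turns in (1, 2):                      # one full turn, then two full turns
                    R = rotation(2 * np.pi * turns)
                    print(f"{turns} turn(s) of the wheel:\n{np.round(R, 10)}")
                # 1 turn  -> minus the identity: the spinor has only "half-turned" in phase.
                # 2 turns -> plus the identity: back to where it started.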

                The second topic is your video. I'm sorry, but I watched it several times and never saw even a hint of anything other than classical Bertlmann's socks propagation of correlated spin, which does not violate Bell's inequality and so doesn't explain why customers do not sue the bejeebers out of the makers of the ID230 for false advertising. Oddly, their market instead is expanding.
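
                For concreteness, a small sketch of the distinction at issue, under the standard CHSH conventions: the quantum singlet correlation E(a,b) = -cos(a-b) reaches S = 2*sqrt(2) at the usual settings, while a Bertlmann's-socks-style model (the particular shared-hidden-axis model below is hypothetical and chosen only for illustration) tops out at S = 2.

                import numpy as np

                rng = np.random.default_rng(1)
                a, a2, b, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4   # standard CHSH settings

                def chsh(E):
                    return abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))

                # Quantum singlet-state correlation for analyzer angles x and y.
                E_quantum = lambda x, y: -np.cos(x - y)

                # "Bertlmann's socks": each pair carries a shared hidden axis lam, and each
                # side merely reports a pre-set, anticorrelated +/-1 value for its setting.
                def E_socks(x, y, n=200_000):
                    lam = rng.uniform(0, 2 * np.pi, n)
                    A = np.sign(np.cos(x - lam))
                    B = -np.sign(np.cos(y - lam))
                    return np.mean(A * B)

                print("quantum S:", chsh(E_quantum))   # 2*sqrt(2) ~ 2.83
                print("socks   S:", chsh(E_socks))     # ~2, never above 2 beyond sampling noise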

                Cheers,

                Terry

                Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

                Essayist's Rating Pledge by Terry Bollinger