Dear Terry,

Congratulations on the essay contestant pledge that you introduced (goo.gl/KCCujt) --- I think we should all follow it, and I will certainly attempt to from now on. Congratulations also on the truly constructive comments that you have left so far on the threads of many of the participants in this contest. I thought I would use a similar format and comment on your essay!

What I liked:

- Your essay is well written and interesting to read: at the end, I wanted more of it!

- You introduce vivid/memorable expressions to describe your main points: the principle of binary conciseness, the trampoline effect, foundation messages. I especially like the trampoline effect, defined as the bouncing-off of the near-minimum region of Kolmogorov simplicity by adding new ideas that seem relevant, yet in the end just add more complexity without doing much to solve the original simplification goal. I think you will agree that, when you read some of the essays submitted to your typical FQXi contest, you can observe spectacular examples of the trampoline effect. It seems easy to diagnose a trampoline effect in accepted theories that we find lacking, or in alternative theories that we find even more flawed. True wisdom, of course, would be to become aware of the trampoline effect in our own thinking... which is so hard to do!

- You directly address the specific essay contest question, "What is fundamental?" (at least, in the first half of your essay)

- Nicely worded and accessible introduction to the famous equation E = mc²

- Pedagogical presentation of Kolmogorov complexity for the reader not already familiar with the concept

- Interesting parallel between the increased difficulty in reducing Kolmogorov complexity in an already well-compressed description and the increased difficulty in improving an already well-developed theory

- It was interesting to end with challenges to the physics community, although it fits only tangentially with the essay topic (it would make a great essay topic for a future contest!)

- Your challenges #2 and #3 are profound questions: WHY the spin-statistics theorem? WHY the three generations in the Standard Model? There is certainly much to be learned if we can make progress with these fundamental "Why?" questions --- although the particular physics of our particular universe might just be arbitrary at the most fundamental level, forever frustrating our hopes of ultimate unification and simplicity.

What I liked less / constructive criticism:

- You say that the content of foundation messages (data sets expressing structures and behaviors of the universe that exist independently of human knowledge and actions) must reflect only content from the as-is universe, despite the extensive work that humans must perform to obtain them. But this presupposes that we can have reasonably direct access to the "as-is" universe, which many historians and philosophers of science would deny, saying that observations are always more-or-less theory-dependent (there is no such thing as a pure observation, independent of the previous knowledge of the observer): see for instance the articles "Theory-ladenness" and "Duhem-Quine thesis" on Wikipedia.

- You say that in physics, the sole criterion for whether a theory is correct is whether it accurately reproduces the data in foundation messages. It is true that reproducing data is an important criterion, but is it the sole one? For example, a modern, evolved, computer-assisted Ptolemaic model (with lots and lots of epicycles) could probably reproduce the planetary position data incredibly well, but we could use other criteria (simplicity, meshing with theories explaining other phenomena) to strongly criticize it and ultimately reject it.

- I am not sure that the map analogy and the associated figure help clarify the concept of a Kolmogorov minimum. Maybe it's because I was distracted by the labels: Why pi-r-squared in one of the ovals? Why Euler's identity? Why the zeros and ones along the path? Why is the equation E = mc² written along a path that goes from Newton to Einstein, since it is purely an Einsteinian equation?

- Your short section on the "Spekkens Principle" is very compact and will probably remain obscure to many readers (it was for me). It might have been beneficial to expand it (I understand there was a length limit to the essay...) or to drop it altogether.

- Concerning your challenge no. 1... Like many mathematicians and physicists, I am in awe of Euler's identity, but I am not sure that there is explicit undiscovered physics insight hiding within it. Once you understand that the exponential function is its own derivative, that the exponent i in e to the i*t comes out in front when you differentiate with respect to time, that multiplication by i rotates a vector by 90° in the complex plane, and that the velocity vector in uniform circular motion is perpendicular to the position vector, it becomes "evident" that you can model uniform circular motion (hence, the trigonometric circle) with an exponential function with an imaginary argument: Euler's identity then follows from the fact that pi radians corresponds to half a turn, which is the same as multiplying by -1! If there is anything truly remarkable in all this basic math, it is perhaps that the ratio pi (or, more often, 2 times pi) appears so often in the fundamental equations of physics, even in phenomena that do not seem related in any way to circles or rotations.
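In symbols, the whole chain of reasoning fits in a few lines (this is standard textbook material, not anything from Terry's essay):

```latex
% The exponential with imaginary argument traces uniform circular motion:
\frac{d}{dt}\,e^{it} = i\,e^{it}
\qquad \text{(velocity = position rotated by } 90^\circ \text{)}
\;\Longrightarrow\; e^{it} = \cos t + i\sin t .
% At t = \pi, half a turn, i.e. multiplication by -1:
e^{i\pi} = \cos\pi + i\sin\pi = -1
\;\Longrightarrow\; e^{i\pi} + 1 = 0 .
```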

And finally, a question:

In your expression "principle of binary conciseness", what does the "binary" stand for exactly? The fact that it deals with TWO (or more...) theories that address the same data, or the fact that Kolmogorov complexity is often applied to strings of BINARY digits?

Congratulations once again, and welcome to the FQXi community! I hope you have the time to take a look at my essay and leave comments --- especially constructive criticism, which is unfortunately so hard to get in these contests, because of the fear of rating reprisal.

Marc

    Dear Terry,

    This is a well-written essay for a general science reader (by far the hardest type of essay to write). Looks like you have a good shot at winning. The word "tree" is simple, but a tree is complex. Does a simple equation mean a simple thing? Perhaps a simple equation just fits with how we communicate or think.

    A side note: I thought spin 1/2 is the way it is because of interaction with photons, which have spin 1.

    All the best,

    Jeff Schmitz

      A nice essay. I think you would be interested in my 2012 FQXi essay titled "A Classical Reconstruction of Relativity" located here:

      https://fqxi.org/community/forum/topic/1363

      And my work on modelling the electron/positron wavefunctions as 3D standing waves, located here: http://vixra.org/pdf/1507.0054v6.pdf

      I also have an essay in this year's contest titled "A Fundamental Misunderstanding" about a Classical explanation for QM entanglement (EPR experiment).

      Regards,

      Declan Traill

        Dear Jeff,

        Thank you for your kind comments! I looked up your interesting short essay and added a posting on it.

        Regarding spin, it's definitely the interaction between identical fermions, e.g. a bunch of tightly packed electrons, that makes them unique. What happens is that the antisymmetric nature of the fermion wave functions causes surfaces of zero probability of finding an electron to form between them. This compresses the electrons, which do not like that at all and fight back by trying to expand the space within these zero-probability cells that form around them. The result is a kind of probability foam that we so casually call "volume" in classical physics. Without this effect, Earth would be just a centimeters-ish black hole.

        This Pauli exclusion occurs for any cluster of identical fermions, regardless of electromagnetic or any other kind of charge, and so is completely unrelated to electromagnetism and the spin 1 photons that make electromagnetism possible.
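        A minimal sketch of the math behind this (standard textbook material; the single-particle states phi_a and phi_b here are just illustrative):

```latex
% Two identical fermions must share an antisymmetric total wave function:
\psi(x_1, x_2) = \frac{1}{\sqrt{2}}\left[\, \phi_a(x_1)\,\phi_b(x_2) - \phi_b(x_1)\,\phi_a(x_2) \,\right]
% Swapping x_1 and x_2 flips the sign (antisymmetry), and at x_1 = x_2
% the two terms cancel exactly:
\psi(x, x) = 0
```

        Those psi(x, x) = 0 loci are the zero-probability surfaces described above, and notice that no charge or photon appears anywhere in the expression.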

        By far the best short explanations of antisymmetric (spin ½) and symmetric (spin 1) wave functions that I've encountered on the web are these two teaching notes by Simon Connell, a physics professor in South Africa:

        Symmetric / antisymmetric wave functions

        Pauli's exclusion principle

        Cheers,

        Terry Bollinger, Fundamental as Fewer Bits

        Declan,

        Argh! Dang it! I was all ready to dismiss your 2012 essay out of hand as "obviously and immediately geometrically self-contradictory"... and then realized you've created a genuinely clever and self-consistent world with this idea, even if I'm still not convinced it is the same world we live in.

        If I'm reading your idea rightly, what you have created is a rigid, isotropic 3D universe in which gravity becomes something very much like optical density in a gigantic cube of optical glass. In fact, for photons I'm not seeing much difference at all between the variable-index glass cube model and your model. Light would curve near a star because the optical density of the glass would increase near the star, and so forth for all other gravity fields. That's about as close a match between a model and what is being modeled as you can get.

        But your truly innovative addition to such a model is the idea that since matter has a quantum wavelength, it is also subject to the same velocity and wavelength shifts in higher-optical-density space as are photons. Photon wavelengths shorten as the photons slow in denser glass, and similarly, so do your mass waves. But mass and total energy depend on these wavelengths, so you are using these changes to implement relativistic masses.

        Once again, that sounds like it should be an immediate contradiction with the extremely well-proven results of SR... except that it is not. You have to compare any two frames relative to each other, not to your "primary" frame of the giant optical glass cube, and that should still give you self-consistent and SR-consistent results.

        To make matters worse, even though you have clearly designated one inertial frame as being in some way "special", that does not necessarily mean that your model contradicts the enormous body of experimental observations on the exact equivalence of physics across all inertial frames.

        Alas, the problem is not that simple, since it is most definitely possible to create asymmetric frame models that fully preserve SR. You just have to take more of a computer modeling perspective to understand how it works.

        I think I've already noted elsewhere in these 2017 postings that from a computer modeling perspective it's not even all that difficult to create a model in which one inertial frame becomes the "primary" or "physical" inertial frame in which all causality is determined. All other inertial frames then become virtual frames that move within that primary frame. Causality self-consistency is maintained within such virtual frames via asymmetric early ("it already happened") and late ("the event has not yet occurred") binding of causality along their axes of motion relative to the primary frame. Speed-of-light constraints prevent anyone within such a frame from being aware of any causal asymmetry, since by the time information about both early (past) and late (future) binding events reaches an observer, both events are guaranteed to have occurred.

        Incidentally, one of the most delightful implications of asymmetric causality binding in virtual frames is the answer it produces for the ancient question of whether our futures are predetermined or subject to "free will". The exceedingly unexpected answer is both, depending on what direction you are facing! For us, if one plausibly assumes that the CMB frame is the primary frame, the axis of predestination versus free will is determined by whether the philosopher is facing toward or away from a particular star in the constellation Pisces, though I don't recall offhand which is which. Direction-dependent philosophy for one of the most profound questions of the universe, I love it!

        Even better is the fact that no one in any of the frames, primary or virtual, can tell by any known test whether they are or are not in the primary frame. Special relativity thus is beautifully maintained, yet at the same time having a single physical frame hugely simplifies causality self-consistency.

        Bottom line: I can't even fault your idea for its use of what is clearly just such a singular frame, because I know that having such a singular frame can very beautifully support every detail of SR. Ouch!

        So, ARGH! Your 2012 model is a lot harder to disprove than I was expecting... and please recall that the goal in science is always to try to destroy your own models, to prove that they really, truly can pass muster.

        Well. Wow. I can't rate your 2012 contest model, which I think makes me happy, because it would take a lot of closer examination for me to comment on it and feel confident. You have a lot of equations and equation specificity there.

        But it's late, so I'm calling this a wrap. I won't forget your model. And the key defense you might want to keep in mind, since I'm sure your earlier attempt got tossed out for violating SR, is simply this: Having a primary frame in a physics model is not a sufficient reason to dismiss it, because there exist single-frame models that can be made fully consistent with all known results of special relativity. Given that such models are possible, any attempt to eliminate a model solely on that criterion is a bogus dismissal. You have to find a true contradiction with SR, one that flatly contradicts known results, rather than just offending people philosophically by making SR more like a computer model and less like an absolutely pristine mathematical symmetry. It's not the beauty of the symmetry that counts in the end, it's whether your model matches with and perfectly predicts observed reality, that is, whether it is Kolmogorov in nature (see my essay again).

        Thank you for helping me tear my hair out in frustration!... :)

        (Actually, seriously: Good work! But still... argh!)

        Cheers,

        Terry Bollinger, Fundamental as Fewer Bits by Terry Bollinger

        Dear Terry

        "The universe indisputably possesses a wide range of well-defined structures and behaviors that exist independently of human knowledge and actions." is what you say. I'm not asking for a proof of that naturalistic dogma (none exists), only a minimum level of critical attitude. Hilbert eventually understood that what a point or a line is doesn't fall into the realm of logic/mathematics. And the literature dealing with what a bit is, information-theoretically, is worth multiple gigabytes...

        Heinrich

        Dear Terry,

        You presented your essay from a viewpoint with which I have little familiarity, and as a result I truly enjoyed having familiar ideas examined from a perspective that was novel to me.

        A few comments:

        1. Your example involving the sequence which can be found in the decimal expansion of pi reminded me of the fact that most irrational numbers are still unknown to us. But with the irrational numbers we do know, your example gave me the idea that one might try the following cookie-cutter approach, which requires little creativity, to help compress a sequence more efficiently: take a set of irrationals, take their representations in base 2, 3, etc. up to 10 (if one wants to incorporate the compression of sequences which contain letters, then go higher) and create a table which contains the numbers and their expansions up to some number of digits, say, 50 million (the larger, the better). It seems that one then has a ready-made 3D "coordinate system" in which the three coordinates are: a symbolic representation of the irrational number, its n-ary expansion, and the position of the first digit of the sequence in that expansion. The sequence could then be compressed by just giving its coordinates. Due to my ignorance in these matters, I am not sure if this is too naive, elementary, or unworkable an idea, but I believe one cannot learn if one does not take the risk of occasionally embarrassing oneself.

        2. Your reconceptualization of theory development in physics as data compression strikes me as an abstraction that could be useful for comparing the historical development of different theories. Perhaps it has some unexpected use and application in the history and philosophy of science. Unfortunately, I know too little about data compression to be able to assess the merits of this possibility, but it seems that you might? Another idea you discussed for which I see connections with the philosophy of physics is the trampoline effect applied to the standard model, which reminds me a bit of Kuhn's crisis phase.

        3. Your discussion of the Kolmogorov minimum at times reminded me of the variational principles. Do you know whether such connections exist?

        4. With regard to your first challenge, I am glad that you, as what appears (to me, at least) a hard-nosed physicist, ask the meaning of the mathematics we use to model the phase of the quantum state. All too often I find that people are not even aware of how little we know about its physical origin. Saying that it is the time-dependent solution to Schrödinger's equation is to me little more than a fig leaf for our ignorance. I admit that my perspective is influenced by the fact that I have thought about this question quite a bit.

        5. With regard to your second challenge, I think that there will be a convergence between what the Kolmogorov approach would consider a simple answer and one that might in more qualitative terms be considered philosophically satisfying. I am glad that you called out the all-too-convenient method of "solution by denial that a problem exists".

        6. With regard to your third challenge, I believe that a refactoring of the Standard Model will not happen before a paradigm change occurs. In my view, what is missing for discovering a simpler understanding of the Standard Model is a conceptual framework which redefines its conceptual building blocks, analogous to how what was missing for the ancient Maya for a simpler understanding of astronomy was the concept of planets orbiting the sun. I am receptive to your call to lay off gravity when trying to simplify our understanding of the Standard Model, but that is only because I already hold the view (or bias) that if nature wanted gravity to be quantum, it would have given us more (actually, any) experimental evidence that it is quantum.

        7. Your principle of Kolmogorov simplicity reminds me a little of Zeilinger's principle that the most elementary system carries only one bit of information. Any thoughts on the relationship between these principles?

        Overall, I do agree that the way to advance our understanding of fundamental physics is to find simpler reconceptualizations. My background knowledge of Kolmogorov simplicity is too incomplete to be able to tell whether it is the definitive criterion for simplicity, but it certainly seems promising.
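        As a concrete (and quite possibly naive) sketch of the lookup-table idea in point 1 above, here is what the scheme might look like in Python with a single irrational (pi) and a single base (10). All names here are illustrative, and the digits come from Machin's arctangent formula using integer arithmetic only:

```python
def arctan_inv(x, unity):
    """unity * arctan(1/x) via the Taylor series, using integers only."""
    total = term = unity // x
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        sign = -sign
        n += 2
    return total

def pi_digits(digits):
    """Digits of pi as the string '31415...' via Machin's formula."""
    guard = 10  # guard digits absorb truncation error
    unity = 10 ** (digits + guard)
    pi = 4 * (4 * arctan_inv(5, unity) - arctan_inv(239, unity))
    return str(pi // 10 ** guard)

def compress(sequence, digits=10_000):
    """Return the 'coordinates' (constant, base, offset) of a digit
    sequence inside the table, or None if it is not in the table."""
    expansion = pi_digits(digits)
    pos = expansion.find(sequence)
    if pos >= 0:
        return ("pi", 10, pos)
    return None  # fall back to storing the sequence verbatim
```

        A real version would precompute the table once for many irrationals and many bases. Of course, the scheme only pays off when the coordinate triple is shorter than the sequence itself, which for a typical random sequence is essentially never: the expected offset of an n-digit sequence grows like 10^n, so the offset alone usually costs as many digits as the sequence it replaces.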

          Dear Heinrich Luediger,

          I took the liberty to read your "Context" essay before attempting to respond to your comments, to make sure that I understood fully what you are attempting to say. If you have read enough of my posting comments for this year's (2017) contest, you will surely be aware that I hold philosophy as an approach to life in high regard, and that some of my favorite essays this year were written by philosophers.

          My first warning that your essay might be rather unique was when you quoted a line from Kant that eloquently restates what every mother or father of an enquiring child already knows, which is that we humans like to ask "why" in situations where no one has an answer. Here is the Kant line you quoted:

          "... it is the peculiar fate of human reason to bring forth questions that are equally inescapable and unanswerable."

          From that simple observation you somehow (I do not yet see how) inferred this:

          "... we may read Kant's disturbing assertion as: human knowledge is without false floor, irreducible and hence not tolerating questions."

          I would estimate that well over 95% of readers would instead interpret that line from Kant as a gentle and basically humble reminder of how deeply ingrained curiosity is in most of us, and that the hard questions that such curiosity engenders are a good thing, rather than something to be discouraged. That you instead interpreted his comment as an assertion that people should stop asking questions is very unexpected.

          Thus I was genuinely curious to find out why you interpreted this line in this way, and so read your essay in detail to find out why.

          As best I can understand your worldview from that careful reading, you believe sincerely that special relativity, general relativity, and quantum mechanics are all unreal mathematical fantasies whose complex, incomprehensible mathematical structures are used by a small group of people, positivist scientists mostly, in positions of power and privilege. In contrast, you believe that only the older Newtonian physics that is more accessible to your direct senses is valid. Finally, you believe that the same group that uses these false mathematical frameworks to maintain positions of privilege is also very worried that people such as yourself might join together to ask hard questions that would uncover the falseness of their mathematical fantasies, and so undermine their positions. You believe therefore that this same group works actively to suppress people like you from even speaking about the falseness of their QM, SR, and GR frameworks.

          Let me be specific about which lines in your essay led me to the above inferences:

          Pages 2-3: "Since both SR/GR and QM are not associated with phenomena whatsoever, modern physics, by having taken us into the never-Here and never-Now, has become speechless, i.e. cannot translate logic and mathematics back to meaning other than by fantastic speculation and daring artistic impression."

          Page 3: "Hence it doesn't come as a surprise that mathematically driven physics moves tons of data just to remain void of experience. In other words, much of modern physics stands in false, namely affirmative-logical, relations to the rest of human knowledge."

          Pages 3-4: "So, I'm absolutely convinced that classical physics has not been falsified in the sense of contradicting human experience."

          Page 4: "Of course I'm not denying that there are instrumental observations that don't agree with classical physics, but that is not what theories primarily are about. Rather they are meant to 'make observable' novel domains of experience and in order not to 'sabotage' established domains of experience they are to be incommensurable, i.e. orthogonal, and thus additive."

          Page 4: "Positive, that is, logical knowledge does not permit rhetorical questions for the reason of creating strings or networks of affirmations and precipitating as unscientific whatever is not tractable by its analytical methodology. And by successively entraining us into its network we are becoming ants in an ant colony, fish in a fish school and politically-correct repliants of ever-newer, the less intuitive the better, opinions."

          The next-to-last quote above is to me the most fascinating. I was genuinely scratching my head as to how you were handling instrumental observations that do not agree with classical physics, of which there are, shall we say, quite a few. I see that you do not deny the existence of such observations -- I was actually a bit surprised by that -- but you instead seem to interpret them as ultimately irrelevant data that have very little impact on everyday Newtonian-framework reality and observation, and so do not really mean much... except to the positivists, who jumped on them collectively (additively) to create huge nonsensical mathematical fantasies that make bizarre and incomprehensible predictions that are unrelated to reality.

          However, I think it is the last quote above that is the most evocative of how you feel about what you perceive as the situation, and your level of anger about it. You seem convinced in that quote that this group has dedicated itself to ensuring that even that tiny remaining group of true, reality-believing inquirers such as yourself, the ones who still believe in the readily observable reality of the Newtonian world of physics, will be scooped up relentlessly, utterly isolated, driven to silence, and made into nothing more than mindless, unquestioning ants.

          Such a perspective helps make more comprehensible your unexpected view of the simple observation from Kant, the one about the incessant and unanswerable curiosity of most humans. I suspect (but am not sure) that you are reading Kant's line not as some gently intended general observation on the nature of curiosity in both children and adults, but as some sort of subtle warning from Kant to his followers that there exist people such as yourself who understand what he and his followers are really up to -- creating indecipherable scientific fantasies that they can then use to build up a power base -- and that this group needs to be shut down to keep them from asking unanswerable questions that would expose the unreal nature of their mathematical fantasies.

          I'll end by pointing out that I think you have a serious inconsistency in your beliefs, one that leaves you with two choices.

          You say you do not accept the reality of quantum theory, yet your daily actions powerfully contradict that assertion. Even as you read this text, you are reaping enormous personal benefits from these supposedly imaginary mathematical frameworks.

          Why? Well, are you or are you not using a personal computer, laptop, or cell phone to read this posting, and to post your own ideas?

          The problem is that the semiconductor chips on which all of these devices depend cannot even exist within classical physics. They can only be understood and applied usefully by applying materials and quantum theory. So, if you insist that only objects you can see with your own senses are real, look at what you are doing right now on your electronic devices. Ask anyone you can find with a solid-state electrical engineering background how such devices work. Take the time and effort to let them teach you the basic design of devices that you can see are real and right in front of you, both at the laptop level and by using a Newtonian microscope to look at the complexity of the resulting silicon chips. Let your own senses convince you, with the help of someone you can trust -- and surely you can find at least one electrical engineer whom you know well enough on a personal basis that you trust them to be honest about how those clearly real chips were designed and built?

          There are other examples. Do you have lights that turn on automatically at night? Einstein helped create quantum mechanics when he explained why the light sensors in such devices cannot be understood in terms of classical waves.

          Do you recall the old cathode-ray tubes? Were you aware that the electrons that write images on the screens of such devices travel fast enough that you cannot design such devices without taking special relativity into account?

          But if you insist that none of this is real, I must ask: Shouldn't you then stop buying and using all such devices? Their very existence contradicts your fundamental premise that the mathematics they are based upon is not real and is designed only to perpetuate power. How then can you continue using them?

          The only other alternative I can suggest is that you examine more closely why you feel there is a conspiracy.

          For whatever it's worth: I assure you, as someone whose friends will testify to my honesty and who has worked in high-tech and science areas for decades, that until I read your essay today, I had never before encountered the idea that QM, SR, and GR might be fantasies that some group of people uses to maintain power and suppress questions. The people I have known just found these mathematical constructs to be incredibly useful for building things (QM hugely, but also SR) and for understanding observational data (GR for astronomy). They would have been horrified (and literally unable to do their jobs) if someone had taken those tools away from them.

          Since you seem to be a thought leader for this idea that QM, SR, and GR are part of a large, centuries-old mathematical power conspiracy, I don't seriously expect you to be persuaded to abandon your belief in a conspiracy to promulgate false mathematics as physics. But I can attest, from my decades-long personal experience at many levels of science and applied technology, that I simply have not encountered anything that corresponds to the kind of false math or false intent that you describe. So, I at least want to point out to you the option of changing your mind.

          Sincerely,

          Terry Bollinger

          Fundamental as Fewer Bits (Essay 3099)

          Essayist's Rating Pledge by Terry Bollinger

            Dear Armin,

            Thank you for such a thoughtful and detailed set of comments! I'll take them out of order so I can address #3 first:

            ----------

            #3. Wow, good catch! Not only are variational principles relevant to straightening out the Kolmogorov path, I originally had a section on exactly that, which I had to cut due to length constraints!

            The variation of variational :) that I was originally planning to use began with an explanation of functionals (paths or trajectories) from Feynman's quantum electrodynamics (QED). I then talked about how the way to tell you were close to the optimal path was that nearby paths would have very similar phases, causing the overall bundle of paths to reinforce each other. Finally, that has to be translated into the idea that similar data sets or messages are also mutually reinforcing.

            That last point is where it got too complicated, too diverse, and frankly too new. Data sets can sometimes match up in a fairly direct way, e.g. when comparing two genes by seeing how well their halves combine in solution. But in other cases you would need first to find just the right "space" in which to compare the data sets, an idea that is closely related to data visualization. Finally, in the case of messages in the more conventional sense of known programs, you get into the complicated and historically rather unsatisfying field of evolutionary programming, albeit with an interesting twist that might well be worth exploring. The idea would be to create a set of transformation operators that all guarantee the program will still provide the same outputs (data set), use the operators to create as huge and dense a cloud of such equivalent programs as possible, then look for regions in the cloud of programs in which subsets end up all being very similar. Those regions would nominally represent the "least action" regions, and thus the core of the real message.

            The biggest problem I see with the cloud idea is that unless the variational program generator is designed carefully, it could easily create artifacts -- e.g. areas of varying "program density" -- that could mess up the search. For efficiency you would probably want to start with some kind of sparse Monte Carlo generation to look for "interesting regions", then start increasing the program (message) densities within those regions to see if the trend holds, and to find more details.

            The overall process would not be terribly different from some other forms of evolutionary programming that also create equivalent or slightly different variations. However, here the focus would be on creating functionally identical programs, not variations, and then finding new ways to shorten or optimize them. The quality criterion would also be unusual and more automated, looking for message subsets that are common across messages and thus more likely to represent the key parts of the message.

            ----------

            #7. Again, good catch! Just a few days ago I added an extended reply to Noson Yanofsky in which I did some exploration of the idea that over time, the amount of meaning per message increases. By "time" I should note that I mean not just the past few centuries or even millennia, but the history of the entire universe. The end result for more common message types would be just one bit per message, but even in that case the meaning per bit -- the impact on the physical world -- would continue to increase over time.

            ----------

            #6. I like and agree with your point that it is way past time for the Standard Model to undergo a good discontinuous extended community reorganization of conceptual knowledge, or DECROCK. :) And yes, I just now made up that phrase and acronym because I cannot bring myself to slide two ten-cent coins on a table after decades of hearing that once-noble phrase overused and misused for sales and research funding purposes. And besides, decrock -- let's make it a verb instead of an acronym, so DECROCK has now been officially deprecated after just one sentence of existence; sorry about that DECROCK, such are modern times! -- sounds like someone tipping over a crockpot to dump out aging bits of this and that that have been simmering for way too long. Dumping is just as much a part of decrocking as creativity, since one of the critical features of such an event is that the explosion of creativity is different from ordinary, individual-level creativity. Decrocking creativity is instead a community-wide crystallization effect in which previously disparate bits of data and isolated concepts suddenly start fitting together smoothly, pushing out and displacing the older, less useful ideas that had been obscuring and blocking the crystallization process, much like water that is too dirty can slow the formation of sugar crystals that might otherwise have formed spontaneously. Such a "sudden fitting of the pieces" happened both conceptually and quite literally in the case of the plate tectonics decrocking that took place in the early 1970s in the US. (In many other countries it happened years earlier.)

            (Belatedly initiating a Google deconfliction search... hmm... oh wow, really?... oh well, good enough, it's a very minor conflict community-wise, and it's not a verb...)

            So: It's way past time for the Standard Model to undergo a deep-dip decrocking! And as an extra benny, you get to keep your two dimes and shifty fingers in your pockets.

            ----------

            #5. I too hope that folks will begin to realize that spin statistics is a very deep and important issue, one that I would judge is playing some hidden and critical role in preventing a deeper consolidation of the Standard Model. This is quite literally a Nobel Prize and worldwide fame just waiting to happen for anyone who can find it.

            #4. I hope also that someone can make some progress on that wonderful, beautiful little equation:

            [math]e^{i\pi}+1=0[/math]

            #2. Applying Kolmogorov minimization to histories of theories may be both doable and interesting, since such histories are data with structure. I would hesitate however to characterize the trampoline effect as similar to the slow accumulation of both stale facts and new facts that collectively lead to a new synthesis. The trampoline effect is pathological, creating something more akin to a huge boil full of, uh, we'll euphemistically call it fluid, that contains only expansions and variations of pathogenic tangents that lack the kind of new universe-inspired facts that cause a real Kuhn crisis to decrock the past and crystallize a brand new fabric of deeper comprehension.

            #1. You are saying something interesting there, I think, but I have to confess that I didn't quite get the idea.

            Cheers,

            Terry

            Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

            Essayist's Rating Pledge by Terry Bollinger

            Hello Terry,

            imo your contestant pledge is right on target, makes specific some of the concerns and disappointments i've felt in exploring many of the threads, and particularly the offers to barter good scores. Only point on which i hesitate is your contention that one should avoid rating an essay highly because its conclusions are agreeable to a given reader's perspective. After all the rationalizations are done most folks just do what they feel like doing, and to accept that reality seems to yield a less complex world view.

            Many thanks for the short and clear explanation of Kolmogorov complexity. Agree it is a good metric.

            Your three challenges seem for the most part well chosen and relevant to the present muddle in particle physics theory.

            the first, the Euler identity, seems perhaps the most difficult, as the compression is greatest there. Expressing it in terms of sin and cos suggests amplitude and phase of the wavefunction. And the presence of '1' suggests unitarity. Beyond that there seems to be only our desire to see what connections might 'pop out' in a deeper understanding of the physics, for which we as yet have no clear perspective.

            the second, the quest for a simple explanation of fermion-boson statistics, also remains to be had. My sense again is that we need a deeper understanding of the wavefunction. Point particle quarks and leptons with intrinsic internal properties leave us lost in almost meaningless abstraction.

            and the third, to 'refactor' SM without adding gravity or complexity... Certainly to simplify, to reduce rather than increasing complexity in our models, is an essential aspect. Agree with the hope that such a simplification would have a natural place for gravity, that it would not be necessary to put it in 'by hand' so to speak.

            It seems to me that to meet your challenges will require improved models of the wavefunction.

            Finally i think it is good to keep in mind that the geometric interpretation of Clifford algebra, geometric algebra, has shown the equivalence of GR in curved space with 'gauge theory gravity' in flat space. Introduction of the concept of curved space came not from the physicists, from Einstein in particular, but from the math folks. Einstein was looking for math tools to express his physics understanding. The geometric interpretation was lost with the early death of Clifford and the ascendance of the vector formalism of Gibbs, and was not rediscovered until the work of Hestenes in the 1960s. What was available to Einstein was the tensor calculus. History is written by the winners, and Einstein's true perspective has perhaps been distorted by those who most readily embraced the formalism he adopted, the math folks and their acceptance of Riemann's view of his creation.

              Dear Terry (if that's you),

              Thanks for giving so much thought to my essay!

              To begin with: the comment I already left on your site should make clear that I'm not under the impression of an ongoing conspiracy, but rather believe that much of science has got "lost in math", to quote Sabine Hossenfelder. However, unlike Hossenfelder, I take the title of her book literally, namely that certain branches of physics have ended up in a blind alley by having moved (in Kantian terms) beyond possible experience. Hence my comment was triggered by your plain assertion that the universe 'indisputably' exists independent of mankind. I was simply shocked to find eclipsed the knowledge and (logically negative) experiences of people I guess we equally admire (Hilbert, Goedel, Tarski, etc., and also Wittgenstein).

              Though I admit that my essay is fairly provocative, and obviously arousing your dissent, you shouldn't claim of the essay what expressis verbis it doesn't say. You say that I interpret Kant's view (of the peculiar fate of human knowledge) as the assertion "that people should stop asking questions", whereas I say that "...for us to be human the scientific-rhetorical question, while it has no answer, is yet the condition sine qua non,...". So, what I say is that the question is very important, but that we should let go of all hope that it can be answered, for the reason of its being made up from incommensurables, i.e. containing a priori elements. Hence the question is the ground from which to think beyond it.

              I happily use my computer for the reason that it is not quantum but wonderfully deterministic. The behavior of electronic components has been derived from Bohr's model of the atom. The foundations of the electronic band structure were developed by Bloch, Bethe, Peierls and A. Herries Wilson between 1928 and 1931, who all were students of Sommerfeld or Heisenberg. So, much quantum, but little mechanics there.

              Last, in my essay I say that modern physics offers explanations and models for instrumental observations deviating from classical physics. And that's absolutely fine with me unless these mathematical devices are being reified (as e.g. space-time or configuration space), for then they begin to 'predict' things beyond possible experience.

              You see, no conspiracy only lost in math...

              Heinrich

              Greetings Peter and Michaele,

              Thank you for this marvelous and extremely interesting set of comments! I did not know of the existence of viXra.org, which seems to have the same free-access goals that arXiv.org originally intended to provide. Once I found it (with some difficulty; Google Scholar does not index it) and your spot there, I downloaded a large sampling of your papers.

              Each number (n) indicates your comment paragraph to which I am responding:

              (1) That's a good catch on my Pledge. The italicized part of my line about not making the conclusion everything shows my intent was what you just said it should be, but my second line sort of contradicted that. I've updated the Pledge to v1.3 to fix the second line; please take a look and see if it works.

              (2) Thanks! To be honest, looking at Kolmogorov more closely for the purposes of this contest helped me understand it better, too. Recognizing that the Kolmogorov minimum model is isomorphic to a formal model for lossless data compression was fun, sort of like a little "aha!" light going off in my head.
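
              That "aha!" can be made concrete in a few lines: any lossless compressor gives a computable upper bound on Kolmogorov complexity (a sketch using Python's zlib; the true minimum is uncomputable, and real compressors only ever approach it from above):

```python
import os
import zlib

def complexity_upper_bound(message: bytes) -> int:
    """A computable stand-in for Kolmogorov complexity: the length of a
    lossless compression of the message (plus the fixed size of the
    decompressor, ignored here). True K(x) is uncomputable; any real
    compressor only approaches it from above."""
    return len(zlib.compress(message, level=9))

regular = b"ab" * 500          # highly regular: a short program regenerates it
random_ = os.urandom(1000)     # random bytes: essentially incompressible

assert complexity_upper_bound(regular) < 50
assert complexity_upper_bound(random_) > 900   # zlib adds overhead on random data
```

              Regular messages land far below their raw length while random ones do not, which is exactly the Kolmogorov picture.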

              (3) That is encouraging feedback on my three challenges; thanks!

              (4) The Euler equation challenge was in some ways the most interesting to me, in no small part because it is a pure and pristine outcome of the argument in the essay. Unlike the other two, I have absolutely no idea where it might connect into physics. But if I believe my own arguments about Kolmogorov compression, then there is a very good chance that somehow it does, and we just do not see it. Certainly the sin-cos breakdown seems like a hint, I agree. I've always found that equation interesting, but now my curiosity is even higher.
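
              Spelling out that sin-cos breakdown: the identity is Euler's formula evaluated at [math]\theta=\pi[/math],

              [math]e^{i\theta}=\cos\theta+i\sin\theta \quad\Rightarrow\quad e^{i\pi}=\cos\pi+i\sin\pi=-1,[/math]

              so [math]e^{i\pi}+1=0[/math] compresses the entire amplitude-phase structure of the complex exponential into five symbols.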

              You do realize that your own impedance reformulation of quantum math may provide a new way to look at Euler's equation, yes? Sometimes something as simple as flipping the numerator and denominator provides a whole new way to look at old problems, as you clearly have noticed by using impedance instead of the more traditional conductance. So who knows, perhaps you and Michaele (I confess I have no idea how to pronounce her name) will nail that one!

              (5) You said "... Point particle[s] ... leave us lost in almost meaningless abstraction."

              Yep, especially since QM assures us that point particles do not exist anywhere in the real universe. So why then do we insist on using them in our math, which unavoidably results in infinity artifacts? (By "artifacts" I mean computational results that are not really part of the problem, but instead are just noise generated by the particular method we are using to model the problem.) I am not in the least surprised that you were able to get rid of renormalization costs in your impedance approach, since by flipping your primary fraction upside down you halted the model-induced generation of point-particle infinitesimal artifacts. If you've written or plan to write any software for your model, I would anticipate that such software will prove to be hugely more computationally efficient for the same reason.

              I think a lot more folks need to hear about your impedance reformulation of QM, and to take its potential computational properties seriously. You do realize that more efficient quantum modeling software can be worth lots of money in areas such as pharmaceuticals and materials research? If your impedance reformulation can increase computer based quantum modeling efficiency by eliminating the costly renormalization steps, you could well be sitting on top of a little gold mine there without even realizing it.

              (6) I too would love to see that simpler Standard Model! Simpler versions of it would almost certainly clarify one way or the other how gravity fits in.

              (7a) You are preaching to the choir! I love the Clifford and (more cryptic) Grassmann works. I too have never quite forgiven Gibbs, but in my case more specifically for his bad-programmer artificial deconstruction of the gorgeous and dimensionally unique symmetries of Hamilton's quaternions to create dot and cross products. The easily dimensionally generalized dot products of vectors I'm sort of OK with, but the 3D-locked cross product, which does things like arbitrarily invert signs, is to me a mess that likely covers up something simpler.

              Maxwell did after all write all of his laws in quaternions, and they worked beautifully. It was Heaviside who massively transformed and compressed them into their current vector form. Remarkably, Heaviside then insisted that the much more compact and massively transformed set of equations still be credited to Maxwell. But despite this selfless act of generosity from Heaviside's soul (which perhaps went up, up, up, up to the Heaviside layer? and yes, the ionosphere really was named for that same Heaviside, but only humans in Cats outfits seem to recall that), the conversion had some issues: the quaternion and vector versions of Maxwell's laws are not quite isomorphic, due to the Gibbs delinking of the dot and cross products, and to his reinterpretation of the cross product. I suspect that this is the source of certain minor anomalies in our modern use of Maxwell's equations.

              It was the subsequent attempts to make Gibbs' cross product just as easily generalizable to higher dimensions as the dot product that resulted in Grassmann and Clifford algebras. I have some trouble with that. Since the very first cross product had already had its original subtle quaternion symmetries mangled by Gibbs, artifacts and some obscuration had already begun before Grassmann and Clifford tried to generalize the concept further. To me that speaks of the likely loss of more subtle symmetries that exist only in the 3+1 space of quaternions, and at least some insertion of artifact noise into Clifford and Grassmann algebras.

              Alas, many a physics PhD student has crashed their thesis into the stubborn wall of figuring out how quaternions may be relevant to more than just Maxwell's equations. But I'm going to give you a bit of a hint here: Given what you are doing and are trying to do, you really need to look a bit more closely at the true origin of this entire generalization mess, which is the quaternions. And by "mess" I am including not just the dot-product vectors, which at least generalized cleanly, but also the cross-product Clifford algebras, which frankly did not come off nearly as well after the Gibbs-induced Great Split. 3-space is quite unique for its vector-spin equivalence, and only quaternions truly capture that. Clifford algebras are nice, but can never recapture that unique set of 3-space relationships at higher dimensionalities, simply because those relationships do not exist in any of the higher (or lower) dimensionalities. I don't think it's an accident that our space is a 3-space.

              (7b) Regarding both your impedance model of matter and your observation that there are viable mathematical alternatives to curved space, you may find this essay of interest:

              A Classical Reconstruction of Relativity by Declan Andrew Traill.

              My comments on his essay may help explain why I suspect Andrew's ideas are relevant to yours.

              BTW, having looked at his early papers closely, Einstein really was, as many have asserted over the years, not that great at math. So as you noted, he tended to use whatever was available and that others could help him with. Oddly, he did not seem to be particularly visual either, since for example it was Minkowski who came up with spacetime. As best I can tell, Einstein was instead sort of like a human physics simulator. That is, he could almost intuitively model and understand how physics would work in a given situation, and then use that understanding to look for and fix problems in the simulation. The hard part for him was the extreme difficulty he tended to have when attempting to convert those insights into words or equations. I cannot help but think of it as a bit like some form of autism, only one focused around physics. A very unique mind, Einstein, which I guess should be no surprise to anyone.

              Cheers,

              Terry

              Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

              Essayist's Rating Pledge by Terry Bollinger

              Dear Terry,

              I enjoyed your essay very much, and I take the opportunity to say that your pledge is great and we should all adopt it. I think the idea "Fundamental as Fewer Bits", using Kolmogorov complexity, is great, and I also use it, in section 5 of this reference, to identify the simplest theory. Of course, this is not an absolute measure, because each equation has behind it implicit definitions and meanings. Your examples, E = mc² and Euler's identity, reveal this relativity when you try to explain them to someone who doesn't know mathematics and physics. But there is a theorem showing that the Kolmogorov complexity of the same data expressed in two different languages differs only by a constant (which is given by the size of the "dictionary" translating from one language into the other). So modulo that constant, Kolmogorov complexity indicates well which theory is simpler. This is a relativity of simplicity if you want, and of fundamentalness, because it depends on the language. But the difference is irrelevant when the complexity of the theory considerably exceeds the length of the dictionary. One may wonder: what if the most fundamental unified theory is simpler than any such dictionary? Well, in this case the difference becomes relevant, but I think that if the theory is that simple, then we should use the minimal language required to express it. So if the dictionary is too large, it means we are not using the best formulation of the theory. This means that once we find the unified theory, it may be the most compressed theory, but we can optimize further by reformulating the mathematics behind it. For example, Schrödinger's equation is a partial differential equation, but the language is simplified if we use the Hilbert space formulation. Compression by reformulation also occurs by using group representations for particles, the fiber bundle formulation of gauge theories, and Clifford algebras.
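
              (For reference, the theorem alluded to is the invariance theorem: for any two universal description languages U and V there is a constant [math]c_{U,V}[/math], independent of the data x, such that

              [math]|K_U(x)-K_V(x)|\le c_{U,V},[/math]

              where [math]c_{U,V}[/math] is essentially the length of the translator -- the "dictionary" -- between the two languages.)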

              Another idea I liked in your essay is the trampoline effect. I would argue that the trampoline is again relative, in the following sense. Let's see it as an elastic wall rather than a trampoline, or if you wish, as a potential well. Once you go beyond the wall, or outside the potential well, the trampoline accelerates you instead of bouncing you back. I would take as an example each major breakthrough. Once the wall between Newtonian mechanics and special relativity was left behind, special relativity reached a new compression level, unifying space and time, energy and momentum, the electric and magnetic fields in a single tensor, etc. Then other trampolines appeared, which separated special relativity from general relativity, and from quantum mechanics. Similarly, when we moved from nonrelativistic quantum mechanics to relativistic QM, the description of particles became simply a matter of representations of symmetry groups, the Poincaré and gauge groups. It is true that at this time we have been hitting for decades the wall which separates our present theories from quantum gravity, and there's a similar wall separating them from a unified theory of particles. And there is at least one more wall beyond which we expect to find the unified theory. So my hypothesis, based on previous history, is that the trampoline rejects us only as long as we don't break through, at which point it will accelerate both the discovery and the simplification. And until we get to the terminus, more complexity may await us beyond each wall, as has usually happened so far, since every time we found the new simplicity, new phenomena were discovered too. It's a rollercoaster. But I believe at the end there will be a really short equation, and underlying it some simple mathematical structure that is initially not so simple to express. We will see.

              Thank you for the great essay, and good luck in the contest!

              Best wishes,

              Cristi Stoica, Indra's net

                Cristi,

                Thank you for such kind remarks, and I'm glad you liked my essay!

                Your first paragraph above is a very good analysis of issues that for reasons both of essay length limits and keeping the focus on a general audience I decided not to put into the essay.

                One way I like to express such issues is that the full Kolmogorov complexity can be found only by treating the functionality of the particular computer, quite literally its microprogramming in some cases, as part of the message. That's really not all that surprising, given that one of the main reasons for creating high-level instructions within a processor is to factor out routines that keep showing up in the operating system, or in the programs themselves.

                I like your analysis of a two-language approach. Another way to standardize and ensure complete comparisons is to define a standardized version of the Turing machine, then count everything built on that as part of the message. That way basic machine functions and higher-level microcode instructions all become part of the full set of factoring opportunities in the message.

                Incidentally, a Turing-based approach also opens up opportunities for very unexpected insights, including at the physics level.

                Why? Because many of the very instructions we have pre-programmed into computers contain deep assumptions about how the universe works. Real numbers are a good example, since their level of precision amounts to an inadvertent invocation of Planck's constant when they are applied to data for length, momentum, energy, or perhaps most importantly, time. If you are trying to be fundamental, a lot more caution is needed in how such issues are represented at the machine level, since there are multiple ways to approach the numeric representation of external data, and the operations on them.

                Here's an example: Have you ever thought about whether a bundle of extremely long binary numbers might be sorted without having to "see" the entire lengths of the bundles first?

                Standard computers always treat long numbers as temporally atomic; that is, they always process them as a whole. This means you have to complete the processing of each long number before moving on to the next one, and it's the main reason why we also use shorter bit lengths to speed processing.

                But as it turns out, you can sort bundles of numbers of any length, even ones infinite in length, by using what are called comparators. I looked into these a long time ago, and they can be blazingly fast. They don't need to see the entire number because our number systems (also parts of the total program, and thus of our assumptions!) require that digits to the right can never add up to more than one unit of the digit we are looking at. That means that once a sort order is found, no number of follow-up bits can ever change it.
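
                A toy sketch of that comparator idea (the generator names are mine, purely illustrative): comparing two equal-length binary digit streams most-significant-bit first, the order is locked in at the first differing bit, so even an infinite tail never needs to be read:

```python
def lazy_compare(a, b):
    """Compare two equal-length binary digit streams, most significant
    bit first. Because trailing digits can never outweigh an earlier
    difference, the order is decided at the first differing bit and the
    (possibly unbounded) remainder of each stream is never read."""
    for bit_a, bit_b in zip(a, b):
        if bit_a != bit_b:
            return -1 if bit_a < bit_b else 1
    return 0  # only reachable for finite, identical streams

def smaller():
    yield from "10"        # the order is decided here...
    while True:            # ...so this infinite tail is never consumed
        yield "1"

def larger():
    yield from "11"
    while True:
        yield "0"

assert lazy_compare(smaller(), larger()) == -1
```

                The same early-decision property is what lets hardware comparator networks sort long numbers without buffering them whole.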

                But all of this sounds pretty computer-science-abstract and number-crunchy. Could ideas that deep in computing theory really affect the minimum size of a Kolmogorov message about, say, fundamental physics?

                Sure they could. For example, for any bundle of infinite-length integers there are only so many sorted orders possible, and so only so many states needed in the computing machines that track those numbers and their sorted order. What if those states corresponded to states of the quantum numbers for various fermions and the infinite lengths to their progression along a worldline, or alternatively to the various levels of collapse of a quantum wave function?

                I really am just pulling those examples out of a hat, so anyone reading this should please not take them as hints! But that said, such radically different approaches to implementing and interpreting real numeric values in computers are good examples of the kind of thinking that likely will be needed to drop fundamental physics to smaller message sizes.

                That's because the Planck relationships like length-momentum and time-energy argue powerfully that really long numbers with extreme precision can only exist in the real world at high costs in terms of other resources. Operating systems that do not "assume" that infinitely precise locations in space or time are cost-free are likely closer to reality than operating systems that inadvertently treat infinitely precise numbers as "givens" or "ideals" that the machine then only approximates. It's really the other way around: Computers that make precision decisions both explicit and cost-based are likely a lot closer to what we see in the quantum model, where quantum mechanics similarly keeps tabs on precision versus costs. Excessive use of real numbers can in contrast become a very narrow but very bouncy example of the trampoline effect, causing the computation costs of quantum models that use them to soar by requiring levels of precision that go far beyond those of the natural systems they are intended to model.

                Getting back to your comments about higher-level reformulations in terms of e.g. gauge theories and Clifford algebras: Absolutely! Those very much are examples of the "factoring methods" that, if used properly, often can result in dramatic reductions in size, and thus bring us closer to what is fundamental. The only point of caution is that those methods themselves may need careful examination, both for whether they are the best ones and for whether they, much like the real-number examples I just gave, contain hidden assumptions that drive them away from simpler mappings of messages to physics.

                Regarding your second paragraph: Trampolines as multi-scale potential wells, heh, I like that! I think you have a pretty cool conceptual model there. I'm getting this image of navigating a complex terrain of gravity fields that are constantly driving the ship off course, with only a very narrow path providing fast and accurate navigation. I particularly like your multi-scale (fractal even?) structuring, since it looks at the Kolmogorov minimum path at multiple levels of granularity, treating it like a fractal that only looks like a straight line from a distance. That's pretty accurate, and it's part of why a true minimum is hard to find and impossible to prove.

                Thanks again for some very evocative comments and ideas!

                Cheers,

                Terry

                Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

                Essayist's Rating Pledge by Terry Bollinger

                Heinrich,

                Thank you for such a thoughtful (and cheerful) reply to my critique! Reading your reply also makes me feel better about your essay itself, since it shows a side to your views that perhaps does not come through in your more narrowly focused essay.

                I think the saying that we can agree to disagree works here. But doesn't using a fully classical computer sometimes get a little Bohring?... :)

                Finally, I just have to mention that your first name, Heinrich, stands out for me because it was a very common name in my family eight generations ago, when they first came to the New-To-Europeans World from Germany. They were from the same area in Europe, near a large lake (I can't recall the name), as Bollinger sandstone and Heinrich Bullinger. So, probably some interesting history there. The name was transformed to Henry once they settled in Missouri.

                (Regarding my phrase New-To-Europeans World: I think it is quite likely that the native Americans had already noticed both that their world was there, and that in terms of generations of ancestors, it was not even particularly new. They were quite observant about such things, often more so than folks who walk around all day with smart phones in front of them... :)

                Cheers,

                Terry

                Terry,

                Did you see my 17.2.18 post above & 100-sec video deriving non-integer spins from my essay's mechanism resolving the EPR paradox? (I've just found the 'duplet state' confirmation in the Poincaré sphere.)

                That all emerged from a 2010 SR model http://fqxi.org/community/forum/topic/1330 finally able to resolve the ecliptic plane & stellar aberration issues and a tranche of others (expanded on in subsequent finalist essays).

                i.e. you'll be aware of George Kaplan's USNO circular (p6) following IAU discussions.

                (Of course all including editors dismiss such progress as impossible so it's still not in a leading journal!)

                Hope you can look & comment

                Peter

                  Dear Terry,

                  Thank you for your extended reply. I just wanted to acknowledge the following:

                  3. I am glad to see that connections between variational principles and Kolmogorov complexity have already been discovered. Applied specifically to the path integral, I believe that there is a more fundamental principle at work. In an amended form of Philip Gibbs' phrasing, it could be stated as "Nothing actual means everything potential", and in a formulation that I have called the default specification principle (which I think is a bit more precise), it can be stated as "The absence of an explicit specification entails all possible default specification outputs". The idea is that when something is not in some way specified or pinned down, then of all the consequences that could result out of performing that specification, all are available as "live possibilities". This has an essentially tautological character, except that it captures a distinction between things which are specified "explicitly" and things which are specified "by default", i.e. they are specified due to background constraints. For instance, if I am about to throw a regular six-sided die, I know that, say, the number 7 or the King of clubs is not among the possible outcomes of the throw. I don't know enough about information theory to be able to tell, but it seems to me that the realization that there is such a distinction, which is essentially ontological in character, still remains to be made. Possibly, if and when it is made, it could help by shifting some of the complexity of, say, a message to the background constraints.

                  7. I find the notion of an increase of meaning per message very interesting; to my mind it seems analogous to a second-order effect, but possibly it could be conceptualized in terms of just the kind of background constraint I mentioned in my previous point. To give an (albeit rather hokey) analogy with the throw of a die: if I say that I hold a die in my hand which I am about to throw, but say nothing further about the nature of the die, the possible outcomes of a throw can be numerous. Without specifying the number of sides or what is on them, even the number 7 and the King of clubs are possibilities. Perhaps over time you somehow learn that my die has only numbers on its sides, in which case the King of clubs is no longer a live possibility, but the number 7 still is. When you finally learn that I only ever throw six-sided dice, you can also eliminate the number 7, and thereby simplify the encoding of the possible outcomes. Conversely, by somehow incorporating these background constraints, you could increase the "meaning per message" by referring (as before) to a "die throw" while shifting the complexity to the background instead of having it contained in the message itself. (Incidentally, I will give a talk on the default specification principle at the APS March Meeting and plan on filming it; if you are interested, let me know and I'll notify you when I upload it.)
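To make the die analogy a little more concrete: assuming each live possibility is equally likely, the minimum encoding length for one outcome is log2 of the number of possibilities, so each background constraint you learn shortens the message. The set names and contents below are my own illustration, not anything from the essay:

```python
import math

# Hypothetical outcome sets as the background constraints tighten
# (illustrative only: faces 1-7 plus one playing card).
unconstrained = {f"face-{i}" for i in range(1, 8)} | {"King of clubs"}  # 8 possibilities
numbers_only  = {f"face-{i}" for i in range(1, 8)}                      # 7: card ruled out
six_sided     = {f"face-{i}" for i in range(1, 7)}                      # 6: face-7 ruled out

def bits_needed(outcomes):
    """Minimum bits to encode one outcome drawn uniformly from the set."""
    return math.log2(len(outcomes))

for label, s in [("no constraint", unconstrained),
                 ("numbers only", numbers_only),
                 ("six-sided", six_sided)]:
    print(f"{label:>13}: {len(s)} outcomes, {bits_needed(s):.2f} bits per message")
```

Each added constraint moves information out of the message and into the shared background, which is one way of reading "more meaning per message".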

                  6. I honestly did not realize that "paradigm change" has become a dirty phrase, but then I may not have been exposed to its abuse as much as you have. DECROCk is certainly a humorous take, but whatever it is called, I agree that it will involve discarding ideas which are no longer useful.

                  2. I think that in Kuhn's model, the "slow accumulation of both stale facts and new facts" is actually still part of normal science. As I understand it, the crisis period is the one in which a number of different candidate paradigms compete without a clear favorite.

                  1. This was simply meant as a straightforward generalization of the example you gave with the sequence in pi: instead of just pi, consider a set of irrational numbers; instead of a decimal expansion, consider all expansions (binary, ternary, etc.) up to some base that is deemed useful; and instead of some arbitrary number of digits in the expansion, consider some standard length. Then use these to create a giant look-up table in which the specification of an address is less complex than the specification of the sequence itself. Like I said, this may be more naive or trivial than you might have thought.
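The look-up-table idea can be sketched in a few lines of Python. The hardcoded digit string (the first 50 decimal digits of pi) and the helper name are my own illustrative assumptions:

```python
# First 50 decimal digits of pi, hardcoded so the sketch is self-contained.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

def address_of(sequence: str, table: str = PI_DIGITS) -> int:
    """Return the index at which `sequence` first appears in the table,
    i.e. its 'address' in the look-up scheme (-1 if it is absent)."""
    return table.find(sequence)

# A sequence whose address happens to be shorter than the sequence itself:
seq = "26433"
addr = address_of(seq)
print(f"sequence {seq!r} ({len(seq)} digits) is at address {addr} "
      f"({len(str(addr))} digits)")
```

The caveat, which is presumably where the naivety lies, is that for a typical random sequence the first address in the table grows roughly as fast as the sequence itself, so the scheme offers no net compression on average.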

                  All the best,

                  Armin

                  Dear Terry,

                  Maybe it's in the German genes that we prefer to think it out over trying it out...

                  Heinrich

                  P.S. From the hints you gave, your family once came from Switzerland. Grüezi!