Philip,

Your question was about whether there is a distinction between descriptive (which I interpret as more "English-like") data and data that can be reduced through symmetry groups.

The best answer I can give is that (a) I really don't know, and (b) I nonetheless rather strongly suspect that even the most random-looking descriptive parts of a theory are just finer-scale compositions of symmetry operations. That is because I have difficulty visualizing data compression processes that do not at some level invoke symmetries. Saying that two pieces of data are really one is, after all, just another way of stating a symmetry.

I read your essay and found your approach intriguing and resonant with some of my own perspectives. I was struck in particular by this description:

"In category theory a generalisation can be formulated using coequivalence and epimorphisms. The assimilation of information is an algebraic process of factorisation and morphisms. In algebraic terms then, the ensemble of all possibilities forms a freely generated structure in a universal algebra. Information about the world that forms part of life experience defines substructures and epimorphisms onto further algebraic structures that represent the possible universes that conform to the observed information."

The image that this brought to mind for me was Kolmogorov compression with a focus on free algebras, applied not just to observed data in our universe, but to the definition of all possible universes. Intriguing!

I note from the book chapter below that there seems to have been some coverage of algebraic generalizations of quantum mechanics (or at least of Hilbert space) in some of the side branches of physics, even if they are not dominant topics in the mainstream:

Landsman, N.P. (2009). Algebraic Quantum Mechanics. In: Greenberger, D., Hentschel, K., Weinert, F. (eds), Compendium of Quantum Physics. Springer, Berlin, Heidelberg.

Cheers,

Terry

Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

Essayist's Rating Pledge by Terry Bollinger

You're new here, aren't you? :-)

Being also retired from the DoD, you must have experienced the difficulties making legacy software work with rapidly advancing technology and updates to Windows. You're suffering from PTSD.

The world is catching up with us, Terry.

Oh yes indeedy! The stories either of us could tell... DISA alone...

But in recent years I had the true privilege of working almost entirely with (a) Leading-edge commercial tech (I saw Google's Earth tech before Google owned it, and some amazing drones long before anyone had them at home); and (b) AI and robotics research. In short, I got spoiled!

Fundamental as (Literally) Finding the Cusp of Meaning

Terry Bollinger, 2018-02-25

NOTE: The purpose of a mini-essay is to capture some idea, approach, or even a prototype theory that resulted from idea sharing by FQXi Essay contestants. This mini-essay was inspired primarily by two essays:

The Perception of Order by Noson S Yanofsky

The Laws of Physics by Kevin H Knuth

Relevant quotes:

Yanofsky (in a posting question): "I was wondering about the relationship between Kolmogorov Complexity and Occam's razor? Do simpler things really have lower KC?"

Knuth: "Today many people make a distinction between situations which are determined or derivable versus those which are accidental or contingent. Unfortunately, the distinction is not as obvious as one might expect or hope."

Bollinger: "...the more broadly a pattern is found in diverse types of data, the more likely it is to be attached deeply within the infrastructure behind that data. Thus words in Europe lead back 'only' to Proto-Indo-European, while the spectral element signatures of elements on the other side of the visible universe lead all the way back to the shared particle and space physics of our universe. In many ways, what we really seem to be doing there is (as you note) not so much looking for 'laws' as we are looking for points of shared origins in space and time of such patterns."

Messages, Senders, Receivers, and Meaning

All variations of information theory include not just the concept of a message, but also of a sender who creates that message, and of a receiver who receives that message. The sender and receiver share a very special relationship, which is that they both understand the structure of the message in a way that assigns to it yet another distinct concept, which is that of meaning.

Meaning is the ability to take specific, directed (by the sender) action as the result of receiving the message. Meaning, also called semantics, should never be confused with the message itself, for two reasons. The first is that a message in isolation is nothing more than a meaningless string of bits or other characters. In fact, if the message has been fully optimized -- that is, if it is near its Kolmogorov minimum -- it will look like random noise (the physical incarnation of entropy) to any observer other than the sender and receiver. The second is that the relationship between messages and meaning is highly variable. Depending on how well the sender and receiver "understand" each other, the same meaning can be invoked by messages that vary wildly in length.
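
To make the "looks like random noise" point concrete, here is a minimal Python sketch that uses zlib as a crude stand-in for a Kolmogorov-style compressor (no real compressor reaches the true Kolmogorov minimum, so treat this only as an illustration): a highly structured message shrinks enormously, and the shrunken result then resists further compression just as pure noise does.

import os
import zlib

# A highly structured "message": the same phrase repeated many times.
structured = b"raise one eyebrow slightly " * 1000

# Compress it once; the result approaches the message's redundancy floor.
compressed = zlib.compress(structured, 9)

# Random bytes of the same length, for comparison.
noise = os.urandom(len(compressed))

print(len(structured), len(compressed))   # large -> small
# Re-compressing the already-compressed message gains essentially nothing,
# just as compressing pure noise gains nothing: without the shared protocol
# (here, the zlib format), both look like featureless entropy.
print(len(zlib.compress(compressed, 9)), len(zlib.compress(noise, 9)))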

This message-length variability is a common phenomenon in human relationships. Couples who have lived together for decades often can convey complex meaning by doing nothing more than subtly raising an eyebrow in a particular situation. The very same couple in the distant past might well have argued (exchanged messages) for an hour before reaching the same shared perspective. Meaning and messages are not the same thing!

But the main question here is this: What makes the sender and receiver so special?

That is, how does it come to be that they alone can look at a sequence of what looks like random bits or characters, and from it implement meaning, such as real-world outcomes in which exquisitely coordinated movements by the sender and receiver accomplish joint goals that neither could have accomplished on their own?

In short: How does meaning, that is, the ability to take actions that forever alter the futures of worlds both physical and abstract, come to be attached to a specific subset of all the possible random bit or character strings that could exist?

Information Theory at the Meta Level

The answer to how senders and receivers assign meaning to messages is that at some earlier time they received an earlier set of messages that dealt specifically with how to interpret this much later set of messages. Technologists call such earlier deployments of message-interpretation messages protocols, but that is just one name for them. Linguists for example call such shared protocols languages. Couples who have been together for many years just call their highly custom, unique, and exceptionally powerful set of protocols understanding each other.
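
A toy Python sketch of that meta-level step (the codebook and the actions are invented purely for illustration): the "protocol" is nothing but an earlier-distributed lookup table, and without it the very same bits invoke no action, and hence carry no meaning.

# The "earlier message": a shared protocol (codebook) distributed in advance.
PROTOCOL = {0b01: "raise shields", 0b10: "stand down", 0b11: "send supplies"}

def receive(message_bits, protocol):
    """Meaning = the action the receiver can take, which exists only because
    an earlier message (the protocol) taught it how to interpret this one."""
    return protocol.get(message_bits, "meaningless noise")

print(receive(0b10, PROTOCOL))   # 'stand down' -- two bits carry rich meaning
print(receive(0b10, {}))         # the same two bits with no shared protocol: noise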

But it doesn't stop there. Physicists also uncover and identify shared protocols, protocols that they had no part in creating. They have however slowly learned how to interpret some of them, and so can now read some of the messages that these shared protocols enable. Physicists call such literally universal protocols the "laws" of physics, and use them to receive messages literally from the other side of the universe. These shared protocols enable us to look at the lines in light spectra and, amazingly, discern how the same elements that we see on earth can also be entrained within the star-dismantling heat and power of a quasar polar plasma jet billions of light years distant in both space and time.

Protocols as Meaning Enablers

While the word "protocol" has a mundane connotation as the rules and regulations by which either people or electronic equipment interact in clear, understandable ways (share information), I would like to elevate the stature of this excellent word by asserting that in terms of the meta-level at which all forms of information theory first acquire their senders and receivers, a protocol is a meaning enabler. That is, to create and distribute a protocol is to create meaning. Protocols enable previously isolated components of the universe, at any scale from that of fundamental particles to light from distant quasars, to alter their behaviors and adopt new sets of coordinated, "future selective" behaviors that no longer leave the future entirely open to random chance. This in turn means that the more widely a protocol is distributed and used, the "smarter" the universe as a whole becomes. The enhancements can vary enormously in scale and scope, from the tiny sound-like handshakes that enable electrons to pair up and create superconductive materials, through the meaning exchanged by an aging couple, and up to scales that are quite literally universal, such as the shared properties of electrons. The fact that those shared electron properties define a protocol can be seen by imagining what would happen if electrons on the other side of the universe did not have the same quantum numbers and properties as the electrons we know. The protocol would be broken, and the light that we see would no longer contain a message that we understand.

Historically such protocol deficiencies, that is, a lack or misunderstanding of the protocols that enable us to assign meaning to data, are the norm rather than the exception. Even in the case I mentioned earlier of how the electrons-photons-and-elements protocol enabled us to know what elements are in a quasar on the other side of the universe, there was a time in the 1800s when scientists mourned that we would never be able to know the composition of distant stars, which by that time they had realized were forever unreachable by any means of transportation that they could envision. It was not until the electrons-photons-and-elements protocol was deciphered that the availability of this amazing information became known.

And even then that new information created its own mysteries! The element helium should have and would have been named "helion" had it been known on earth at the time of its discovery in solar spectra. That is because "-ium" indicates a metal (e.g. titanium), while "-on" indicates a gas (e.g. argon). In this case the newly uncovered electron-photon-element protocol sent us a message we did not yet understand!

Many more such messages are still awaiting protocols, with biology, especially at the biochemical level, being a huge and profound area in need of more protocols, of more ways to interpret with meaning the data we see. Thus for example, despite our having successfully unearthed the protocol for how DNA codes amino acids and proteins at the connection level, we remain woefully lacking in protocols for understanding how the non-protein components of DNA really work, or even of how those amino acids, once strung together, almost magically fold themselves into a working protein.

Naturally Occurring Protocols

To understand the full importance of protocols, however, it is vital as Kevin Knuth strongly advocates in his essay that we get away from the human-centric view that calls such discoveries of meaning "laws" in the human sense. In particular, the emergence and expansion of meaning-imbuing protocols is not limited just to relationships between humans (the aging couple) or ending with human-only receivers (we blew it for helion). The largest and most extensive protocols exist entirely independently of humans, in domains that include physics and especially biology.

In the case of physics, the protocols that count most are the shared properties and allowed operations on those properties that enable matter and energy to interact in a huge variety of extremely interesting, and frankly bizarrely unlikely, ways. Kevin Knuth dives into some of these anthropic issues in his essay, primarily to point out how remarkable and, at this time at least, inexplicable they are. But in any case they exist, almost literally like fine-tuned machinery custom made to enable still more protocols, and thus still more meaning, to emerge over time.

The First Open-Ended Protocol: Biochemistry

Chemistry is one such protocol, with carbon-based biochemistry as an example in which the layering of protocols -- the emergence of compounds and processes whose very existence depends on earlier protocols, such as proteins out of amino acids -- is essentially unlimited.

It is flatly incorrect to view computer software and networks as the first example of open-ended protocols that can be layered to create higher and higher levels of meaning. The first example of truly open-ended protocols capable of supporting almost unlimited increases in meaning was the remarkable cluster of basic protocols centered around the element carbon. Those elemental protocols -- their subtleties include far more than just carbon, though carbon is literally the "backbone" upon which the higher-level protocols obtain the stability they require to exist at all -- enabled the emergence of layer upon layer of chemical compounds of increasing complexity and sophistication. As exploited by life in particular, these compounds grow so complex that they qualify as exceptionally powerful machines capable of mechanical action (cutting and splicing DNA), energy conversion (photosynthesis), lens-like quantum calculation (chlorophyll complexes), and information storage and replication (DNA again).

Each of these increasingly complex chemical machines also enables new capabilities, which in turn enable new, more sophisticated protocols, that is, new ways of interpreting other chemicals as messages. This interplay can become quite profound, and has the same ability to "shorten" messages that is seen in human computer networking. Fruit, for example, responds to the gas ethylene by ripening faster, a protocol that evolved to create enticing (at first!) smells to attract seed-spreading animals. The brevity of the message, the smallness of the ethylene molecule, is a pragmatic customization by plants to enable easy spreading of the message.

Humans do this also. When after an extended effort (think of Yoda after lifting Luke Skywalker's space ship out of the swamp) we inhale deeply through our nose, we are self-dosing with the two-atom vasodilator nitric oxide, which our nasal cavities generate slowly over time for just such purposes.

Cones Using Shared Protocols (Cusps)

To understand Kevin Knuth's main message, it's time to take this idea of protocols to the level of physics, where it recursively becomes a fundamental assertion about the nature of fundamental assertions.

Minkowski, the former professor of Albert Einstein who more than anyone else created the geometric interpretation of Einstein's originally algebraic work, invented the four-dimensional concept of the light cone to describe the maximum limits for how mass, energy, and information spread out over time. A 4D light "cone" does not look like a cone to our 3D-limited human senses. Instead, it appears not as a cone but as a ball of included space whose spherical surface expands outward at the speed of light. Everything within that expanding ball has potential access to -- that is, detailed information about -- whatever event created that particular cone. The origin of the light cone becomes the cusp of an expanding region that can share all or some subset of the information first generated at that cusp. Note that the cusp itself has a definite location in both space and time, and so qualifies as a well-defined event in spacetime, to use relativistic terminology.
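
For concreteness, here is a minimal Python sketch of that light-cone test (function and variable names are mine, and units are SI): an event can share in the information generated at a cusp only if it lies on or inside the cusp's future light cone.

from math import dist

C = 299_792_458.0  # speed of light, m/s

def in_future_light_cone(cusp_xyz, cusp_t, event_xyz, event_t):
    """True if the event could have received information (for example, a
    protocol) originating at the cusp: it must occur later than the cusp,
    and no farther away than light can travel in the elapsed time."""
    elapsed = event_t - cusp_t
    if elapsed < 0:
        return False               # events before the cusp are unreachable
    return dist(cusp_xyz, event_xyz) <= C * elapsed

# One light-second away, 1.5 seconds later: inside the cone.
print(in_future_light_cone((0, 0, 0), 0.0, (C, 0, 0), 1.5))   # True
# Same spatial separation, only 0.5 seconds later: not yet reachable.
print(in_future_light_cone((0, 0, 0), 0.0, (C, 0, 0), 0.5))   # False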

Protocols are a form of shared information, and so form a subset of the types of information that can be shared by such light cones. The cusp of the light cone becomes the origin of the protocol, the very first location at which it exists. From there it spreads at speeds limited by the speed of light, though most protocols are far less ambitious and travel only slowly. But regardless of how quickly or ubiquitously a new protocol spreads, it must always have a cusp, an origin, an event in spacetime at which it comes into being, and thereby creates new meaning within the universe. Whether that meaning is trivial, momentous, weak, powerful, inaccurate, or spot-on remains to be determined, but in general it is the protocols that enable better manipulations of the future that will tend to survive. Meaning grows, with stronger meanings competing against and generally overcoming weaker ones, though as in any ecology the final outcomes are never fixed or certain. The competitive multi-scale ecosystem of meaning, the self-selection of protocols as they vie for receivers who will act upon the messages that they enable, is a fascinating topic in itself, but one for some other place and time.

In an intentional double entendre, I call these regions of protocol enablement via the earlier spread of protocols within a light cone "cones using shared protocols", or cusps. (I hate all-cap acronyms, don't you?) A protocol cusp is both the entire region of space over which the protocol applies or is available and the point in spacetime -- the time and location -- at which the protocol originated.

Levels of Fundamentality as Depths of Protocol Cusps

And that is where Kevin Knuth's focus on the locality and contingency of many "fundamental" laws comes into play. What we call "laws" are really just instances where we are speculating, with varying levels of confidence, that certain repeated patterns are messages with a protocol that we hope will give them meaning.

Such speculations can of course be incorrect. However, in some instances they prove to be valid, at least to the degree that we can demonstrate it from the data. Thus the existence of the Indo-European language group was at first just a speculation, but one that proved remarkably effective at interpreting words in many languages. From it the cusp or origin of this truly massive "protocol" for human communications was given a name: Proto-Indo-European. The location of this protocol cusp in space was most likely the Pontic-Caspian steppe of Eastern Europe, and the time was somewhere between 4,500 BCE and 2,500 BCE.

Alphabets have cusps. One of the most amazing and precisely located examples is the Korean phonetic alphabet, the Hangul, which was created in the 1400s by Sejong the Great. It is a truly masterful work, one of the best and most accessible phonetic alphabets ever created.

Life is full of cusps! One of the earliest and most critical cusps was also one of the simplest: the binary choice between the left and right chiral (mirror-image) subsets of amino acids, literally to prevent confusion as proteins are constructed from them. Once this choice was made it became irrevocable for the entire future history of life, since any organism that went against it faced instant starvation. Even predators cooperate in such situations. The time and origin of this cusp remains a deep mystery, one which some (the panspermia hypothesis) would assign to some other part of the galaxy.

The coding of amino acids by DNA is another incredibly important protocol, one whose features are more easily comparable to the modern communications network concept of a protocol. The DNA-amino protocol is shared with minor deviations by all forms of life, and is very sophisticated. It has been shown to perform superbly at preventing the vast majority of DNA mutations from damaging the corresponding proteins. The odds of that property popping up randomly in the DNA to amino acid translation mechanism are roughly one million to one. I recall from as recently as my college years reading works that disdained this encoding as random and an example of the "stupidity" of nature. It is not, though its existence does provide a proof of how easily stupidity can arise, especially when accompanied by arrogance.
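
The kind of analysis behind such estimates is easy to sketch. The Python below is a simplification (it assumes the Biopython package for the standard codon table): it scores the real genetic code by the fraction of single-nucleotide mutations that leave the amino acid unchanged, then compares it against randomly shuffled codon-to-amino-acid assignments. The published one-in-a-million figures use more refined error-cost measures, but the flavor is the same.

import random
from Bio.Data import CodonTable   # assumes Biopython is installed

std = CodonTable.unambiguous_dna_by_name["Standard"]
code = dict(std.forward_table)    # codon -> amino acid, 61 sense codons
BASES = "ACGT"

def synonymous_fraction(code):
    """Fraction of single-nucleotide changes between sense codons that
    leave the encoded amino acid unchanged."""
    same = total = 0
    for codon, aa in code.items():
        for i in range(3):
            for b in BASES:
                if b == codon[i]:
                    continue
                mutant = codon[:i] + b + codon[i + 1:]
                if mutant in code:            # skip mutations to stop codons
                    total += 1
                    same += (code[mutant] == aa)
    return same / total

actual = synonymous_fraction(code)

# Null model: shuffle the amino-acid labels over the same 61 codons.
codons, labels = list(code), list(code.values())
trials, at_least_as_good = 2000, 0
for _ in range(trials):
    random.shuffle(labels)
    if synonymous_fraction(dict(zip(codons, labels))) >= actual:
        at_least_as_good += 1

print(f"standard code: {actual:.3f} of point mutations are synonymous")
print(f"random codes doing at least as well: {at_least_as_good} of {trials}")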

The Bottom Line for Fundamentality

In terms of Kevin Knuth's concepts of contingency and context for "fundamental" laws (protocols) and rules, the bottom line in all of this is surprisingly simple:

The fundamentality of a "law" (protocol for extracting meaning from data) depends on two factors: (1) How far back in time its cusp (origin) resides, and (2) how broadly the protocol is used.

Thus the reason physics gets cited so often as having the most fundamental rules and "laws" is because its cusp coincides with that of the universe itself, presumably in the big bang, and because its protocols are so widely and deeply embedded that they enable us to "read" messages from the other side of the universe.

Nearly all other protocol cusps, including those of life, are of a more recent vintage. But as Kevin Knuth points out in his essay, and as I illustrated through the very existence of physics-enabled open-ended protocols in biochemistry, deeper anthropic mysteries remain afoot, since strictly in terms of what we can see, the nominally "random" laws of physics were in fact direct predecessor steps necessary for life to begin creating its own upward-moving layers of protocols and increased meaning.

It was a huge mistake to think that DNA-to-amino coding was "random."

And even if we haven't a clue why yet, it is likely also a huge mistake to assume that the protocols of physics just happen to lead so directly and perfectly into the protocols of life. We just do not understand yet what is going on there, and we likely need to do a better job of fully acknowledging this deeply mysterious coincidence of continuity before we can make any real progress in resolving it.

    FQXi Essay Contestant Pledge

    Author: Terry Bollinger. Version 1.3, 2018-02-15

    ----------------------------------------

    When evaluating essays from other FQXi Contest participants, I pledge that I will rate and comment on essays based only on the following criteria:

    -- My best, most accurate judgement of the quality of the essay, without regard to how my ratings and comments on that essay could affect my own contest status.

    -- How well the essay makes its argument to back up its answer.

    -- How accurately and reliably an essay uses reference materials.

    -- How focused the essay is on answering the question as posed and intended by FQXi. (This is secondary to criteria above.)

    Furthermore, I will consciously strive to:

    -- Avoid rating an essay low just because it has a novel approach.

    -- Avoid rating an essay low because I disagree with its answer. Instead, I will focus on how well the essay argues for that answer.

    -- Avoid rating an essay high solely because I like its conclusion. Even if I agree, my rating will reflect the overall essay quality.

    -- Avoid ratings inflation. If an essay does very poorly at arguing its conclusion, I pledge to give it the appropriate low rating, versus an inflated "just being nice" number such as a 5 or 6.

    -- Avoid reprisal behavior. I pledge that I will never knowingly assign unfair point ratings or make false comments about another essay as a form of reprisal against another contestant who gave my essay low ratings or negative comments.

    -- Avoid rudeness towards other contestants. If other contestants become abusive, I will appeal to FQXi to intervene, rather than attempt to respond in kind on my own.

    ...btw, the offer to work on a javascript part of the problem still stands.


    Hi Terry,

    I read your mini-essay and like it.

    I consider such mini-essays and / or addenda as very helpful - after one has read dozens of different essays with different ideas and at least I would need a somewhat more compact summary of the main ideas of the many different authors.

    A couple of thoughts about your mini-essay:

    'Protocols' sounds like a rather mechanical term to catch the distinction between message and meaning. It is really a big puzzle how 'meaning' can arise from rather mechanical processes. 'Meaning' traditionally is connected to awareness of the orderedness of the external reality - and additionally the orderedness of the internal reality of a subject that is capable of being aware of something. With this, the circle of meaning is closed. I suspect that 'meaning' is a tautology somewhat similar to the one I describe in my own addendum to my essay: meaning self-confirms itself in the same manner as my purported fundamental truths do.

    I think you are totally on the right track to suspect that 'meaning' has exactly the meaning we ascribe to it: by finding some meaning in nature, we find a certain truth that speaks to us through nature. By finding some meaning that we have facilitated epistemologically through our preference to conclude something emotionally, we may gain some truth or some falsehood about that 'something'. In summary: whereas meaning about external reality is more likely to be stable and to point to objective truths, with more subjective conclusions about very specific circumstances that do not really justify a general rule, we are in greater danger of concluding something that is objectively false or at least incomplete.

    Your example with the aging couple is to the point, since it shows that the problem of subjective conclusions and their real meaning is solved over time by compressing the message as far as possible: raising an eyebrow then has a very precise meaning - regardless of whether or not the couple loves one another or is in permanent confrontation. In either case one's emotions are perfectly understood by the other via the compressed message that has a well-suited meaning for the couple.

    Interestingly this could be a complementary example of 'internalizing some external reality as a model, as a set of symbols', as is done by modelling some of the brain's perceptual abilities in order to understand the brain in information-theoretic terms. The raised eyebrow does the complementary thing: it *externalizes* not a model, but a precise emotional state, by means of a compressed and very specific symbol / action. Together with the model one has of the emotional landscape of the partner, one can even reliably deduce how to further interpret the raising of the eyebrow, since the latter can be interpreted in general as disliking something, and the internal model can further specify what the disliked thing specifically is in the actual situation.

    Another interesting aspect of protocols, for me, seems to be that they limit or exclude other possibilities. This is what we all want to achieve by searching for some more fundamental level of nature. Limiting the options that are left makes it easier to determine the more fundamental level.

    Just a couple of thoughts :-)

    Best wishes,

    Stefan Weckbach

    A reply of mine to this comment was unfortunately edited in a mutilating manner. I merely recall that you mentioned fractional calculus. Maybe I should have a look at this, because half differentiation, for instance, implies boundaries.

    Eckard

    Gordon,

    Thank you for supporting the Pledge!

    Your title is intriguing; look at my signature line and its single-concept definition of QM and you can see why. My queue on this last day is long, but I will follow your link and take a look at your essay.

    Cheers,

    Terry

    Fundamental as Fewer Bits by Terry Bollinger (Essay 3099)

    Essayist's Rating Pledge by Terry Bollinger

    "Quantum mechanics is simpler than most people realize. It is no more and no less than the physics of things for which history has not yet been written."

    Gordon,

    Wow! That is one of the best arguments for locality that I think I've seen. I like your Bell-ish style of writing and focus on specifics. You are of course in very good company, since both Einstein and Bell were localists.

    I can't do a detailed assessment today -- too many equations that would need careful examination to assess your argument meaningfully -- but what I've seen at a quick look seems pretty solid.

    That said, there is an expanding class of pro-entanglement data anomalies that you need somehow to take into account:

    ID230 Infrared Single-Photon Detector Hybrid Gated and Free-Running InGaAs/InP Photon Counter with Extremely Low Dark Count

    This field has moved way beyond the Aspect studies. A lot of hard-nosed business folks figured out years ago that arguments against the existence of entanglement don't matter much if they can simply build devices that violate Bell's inequality. Which they did, and now they sell them to some very smart, physics-savvy customers who use them on a daily basis to encrypt some critical data transmissions. Many of these customers would be, shall we say, upset in interesting ways if some company sold them equipment that did not work.
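
    For anyone who wants to see the quantitative gap those devices exploit, here is a minimal Python sketch (mine, not anything from your essay) of the textbook CHSH prediction for the singlet state, where the correlation at analyzer angles a and b is E(a,b) = -cos(a-b); the standard angle choices give |S| = 2*sqrt(2), comfortably beyond the local-realist bound of 2.

    from math import cos, pi, sqrt

    def E(a, b):
        """Singlet-state correlation for analyzer angles a and b (radians)."""
        return -cos(a - b)

    # One standard set of CHSH measurement angles.
    a1, a2 = 0.0, pi / 2
    b1, b2 = pi / 4, 3 * pi / 4

    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S), 2 * sqrt(2))   # both ~2.828, above the local bound of 2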

    Again, thanks for a well-argued essay! I'll try (no promises though) to take a closer look at your essay at some later (post-commenting-close) date. Again assuming the equations are solid, yours is the kind of in-depth analysis needed to sharpen everyone's thinking about such topics.

    Cheers,

    Terry

    Terry -

    That was wonderfully clear and readable, not to mention vast in scope - an excellent summary of what I think are the key issues here. I agree with pretty much everything, except - there's a basic missing piece to your concept of meaning. Naturally, it happens to be what I've been trying to articulate in my essays.

    You write, "To create and distribute a protocol is to create meaning." This describes the aspect of information-processing that's well-understood: data gets transferred from sender to receiver and decoded through shared protocols - a very good term for the whole range from laws of physics to human philosophies. But this concept of meaning takes it for granted that the underlying data is distinguishable: that there are physical contexts - for both sender and receiver - in which the 1's and 0's (or any of the many different kinds of information that actually constitute the physical world), make an observable difference.

    This is hard not to take for granted, I know - both because such contexts are literally everywhere we look, and because it's very difficult to describe them in general terms. But I've argued both on logical grounds and empirically, from "fine-tuning", that it takes an extremely special kind of universe to make any kind of information physically distinguishable.

    The physical world is essentially a recursive system in which information that's distinguished (measured) in one context gets communicated out to help set up other contexts, to distinguish more information. Quite a number of distinct protocols are apparently needed to make this work, and I've tried to sort some of them out in my current essay, to suggest how they might have emerged. In my 2017 essay I compared the way this system works with the other two basic recursive systems that make up our world, biological evolution and human communication.

    Regarding biological and human systems, you're right that there's "natural selection" for meanings that "enable better manipulations of the future." But while this also applies to the evolution of human theories about the physical world, I don't think it's quite right for the generation of meaning in the physical world itself. Rather, the meanings that get selected are the ones that keep on enabling the future itself - that is, that constantly set up new situations in which the same protocol-system can operate to create new meaning.

    I don't mean to detract at all from your remarkable mini-essay - I give it a 10. But please fix your next-to-last sentence. I think you mean that it's a mistake to suppose the protocols of physics just happen to support the protocols of life. That's a complex issue... that can't become clear, I think, until we have some idea where the protocols of physics come from.

    Thanks for your many eye-opening contributions to this contest - once again, I'm in awe.

    Conrad

    Terry,

    In our (Feb 17th) string above we didn't resolve the non-integer spin video matter; 100 sec video Classic QM. It's just occurred to me that you were after a POLAR spin 1/2, 2 etc! Now that's not quite what the original analysis implied, but, lest it may have been, YES, the 3 degrees of freedom also produce that.

    Just one y axis rotation with each polar rotation gives spin 1/2; Imagine the polar axis horizontal. Now rotate around the vertical axis to switch the poles horizontally. HALF a polar rotation at the same time brings your start point back.

    Now a y axis rotation at HALF that rate means it takes TWO rotations of the polar axis to return to the start point.

    Occam never made a simpler razor! It's a unique quality of a sphere that there's no polar axis momentum loss from y or z axis rotations.

    Was there anything else? (apart from confusing random number distributions explained in Phillips's essay with real 'action at a distance'!) Of course tomography works but within strict distance limits. Just checked through Karen's list again and can't find one the DFM doesn't qualify for apart from a few particle physics bits. Can you check & see if I can stop digging now and leave those to the HEP specialists!?

    Peter

    PS; Not sure if that link hasn't suddenly died!

      Hi,

      This is a wonderful essay, with deep fundamental knowledge. I am impressed.

      Nothing to ask for now.

      Ulla Mattfolk https://fqxi.org/community/forum/topic/3093

        The Illusion of Mathematical Formality

        Terry Bollinger, 2018-02-26

        Abstract. Quick: What is the most fundamental and least changing set of concepts in the universe? If you answered "mathematics," you are not alone. In this mini-essay I argue that far from being eternal, formal statements are actually fragile, prematurely terminated first steps in perturbative sequences that derive ultimately from two unique and defining features of the physics of our universe: multi-scale, multi-domain sparseness and multi-scale, multi-domain clumping. The illusion that formal statements exist independently of physics is enhanced by the clever cognitive designs of our mammalian brains, which latch on quickly to first-order approximations that help us respond quickly and effectively to survival challenges. I conclude by recommending recognition of the probabilistic infrastructure of mathematical formalisms as a way to enhance, rather than reduce, their generality and analytical power. This recognition makes efficiency into a first-order heuristic for uncovering powerful formalisms, and transforms the incorporation of a statistical method such as Monte Carlo into formal systems from being a "cheat" into an integrated concept that helps us understand the limits and implications of the formalism at a deeper level. It is not an accident, for example, that quantum mechanics simulations benefit hugely from probabilistic methods.

        ----------------------------------------

        NOTE: A mini-essay is my attempt to capture and make more readily available an idea, approach, or prototype theory that was inspired by interactions with other FQXi Essay contestants. This mini-essay was inspired by:

        1. When do we stop digging, Conditions on a fundamental theory of physics by Karen Crowther

        2. The Crowther Criteria for Fundamental Theories of Physics

        3. On the Fundamentality of Meaning by Brian D Josephson

        4. What does it take to be physically fundamental by Conrad Dale Johnson

        5. The Laws of Physics by Kevin H Knuth

        Additional non-FQXi references are listed at the end of this mini-essay.

        ----------------------------------------

        Background: Letters from a Sparse and Clumpy Universe

        Sparseness [6] occurs when some space, such as a matrix or the state of Montana, is occupied by only a thin scattering of entities, e.g. non-zero numbers in the matrix or people in Montana. A clump is a compact group of smaller entities (often themselves clumps of some other type) that "stick together" well enough to persist over time. A clump can be abstract, but if it is composed of matter we call it an object. Not surprisingly, sparseness and clumping tend to be closely linked, since clumps often are the entities that occupy positions in some sparse space.
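
        As a mundane but concrete illustration of how sparseness enables compression, here is a minimal Python sketch (it assumes SciPy is available): a matrix that is only 0.1% occupied can be stored at a cost proportional to its occupied entries rather than to its full size.

        from scipy import sparse   # assumes SciPy is installed

        # A 10,000 x 10,000 matrix with only 0.1% of its entries non-zero.
        # Dense storage would need 10^8 floats (~800 MB); the sparse (CSR)
        # form stores only the ~10^5 occupied entries plus their coordinates.
        m = sparse.random(10_000, 10_000, density=0.001, format="csr", random_state=0)

        dense_bytes = m.shape[0] * m.shape[1] * 8
        sparse_bytes = m.data.nbytes + m.indices.nbytes + m.indptr.nbytes
        print(m.nnz, dense_bytes, sparse_bytes)   # ~1e5 entries; megabytes, not ~800 MB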

        Sparseness and clumping occur at multiple size scales in our universe, using a variety of mechanisms, and when life is included, at varying levels of abstraction. Space itself provides a universal basis for creating sparseness at multiple size scales, yet the very existence of large expanses of extremely "flat" space is still considered one of the greatest mysteries in physics, an exquisitely knife-edged balancing act between total collapse and hyper expansion.

        Clumping is strangely complex, involving multiple forces at multiple scales of size. Gravity reigns supreme for cosmic-level clumping, from involvement (not yet understood) in the 10-billion-light-year-diameter Hercules-Corona Borealis Great Wall down to kilometer-scale gravel asteroids that just barely hold together. From there a dramatically weakened form of the electromagnetic force takes over, providing bindings that fall under the bailiwick of chemistry and chemical bonding. (The unbridled electric force is so powerful it would obliterate even large gravitationally bound objects.) Below that level the full electric force reigns, creating the clumps we call atoms. Next down in scale is yet another example of a dramatically weakened force, which is the pion-mediated version of the strong force that holds neutrons and protons together to give us the chemical elements. The protons and neutrons, as well as other more transient particles, are the clumps created by the full, unbridled application of the strong force. At that point known clumping ends... or does it? The quarks themselves notoriously appear to be constructed from still smaller entities, since for example they all use multiples of a mysterious 1/3 electric charge, bound together by unknown means at unknown scales. How exactly the quarks have such clump-like properties remains a mystery.

        Nobel Laureate Brian Josephson [1] speculates that at least for higher-level domains such as biology and sociology, the emergence of a form of stability that is either akin to or leads to clumping is always the result of two or more entities that oppose and cancel each other in ways that create or leave behind a more durable structure. This intriguing concept can be translated in a surprisingly direct way to the physics of clumping and sparseness in our universe. For example, the mutual cancellation of the positive and negative charges of an electron and a proton leaves an enduring and far less reactive result, a hydrogen atom, which in turn supports clumping through a vastly moderated presentation of the electric force that it largely cancels. More generally, the hydrogen atom is an example of incomplete cancellation, that is, cancellation of only a subset of the properties of two similar but non-identical entities. The result qualifies as "scaffolding" in the Josephson sense due to its relative neutrality, which allows it for example to be a part of chemical compounds that would be instantly shredded by the full power of the mostly-cancelled electric force. Physics has many examples of this kind of incomplete cancellation, ranging from quarks that mutually cancel the overwhelming strong force to leave milder protons and neutrons, to protons and electrons that then cancel to leave charge-free hydrogen atoms, to unfilled electron states that combine to create stable chemical bonds, to hydrogen and hydroxyl groups on amino acids that combine to enable the chains known as proteins. At higher levels of complexity, almost any phenomenon that reaches an equilibrium state tends to produce a more stable, enduring outcome. The equilibrium state that compression-resistant matter and ever-pulling gravity reach at the surface of a planet is another more subtle example, one that leads to a relatively stable environment that is conducive to, for example, us.

        Bonus Insert: Space and gravity as emerging from hidden unified-force cancellations

        It is interesting to speculate whether the flatness of space could itself be an outcome of some well-hidden form of partial cancellation.

        If so, it would mean that violent opposing forces of some type of which we are completely unaware (or have completely misunderstood) largely cancelled each other out except for a far milder residual, that being the scaffolding that we call "flat space." This would be a completely different approach to the flat space problem, but one that could have support from existing data if that data were examined from Josephson's perspective of stable infrastructure emerging from the mutual cancellation of far more energetic forces.

        The forces that cancelled would almost certainly still be present in milder forms, however, just as the electric force continues to show up in milder forms in atoms. Thus if the Josephson effect -- ah, sorry, that phrase is already taken -- if the Josephson synthesis model applies to space itself, then the mutually cancelling forces that led to flat space may well already be known to us, just not in their most complete and ferocious forms. Furthermore, if these space-generating forces are related to the known strong and electric forces -- or more likely, to the Standard Model combination of them with the weak force -- then such a synthesis would provide an entirely new approach to unifying gravity with the other three forces.

        Thus the full hypothesis in summary: Via Josephson synthesis, it is speculated that ordinary xyz space is a residual structural remnant, a scaffolding, generated by the nearly complete cancellation of two oppositely signed versions of the unified weak-electric-strong force of the Standard Model. Gravity then becomes not another boson force, but a topological effect applied by matter to the "surface of cancellation" of the unified Standard Model forces.

        Back to Math: Is Fundamental Physics Always Formal?

        In her superb FQXi essay When do we stop digging? Conditions on a fundamental theory of physics, Karen Crowther [2] also created an exceptionally useful product for broader use, The Crowther Criteria for Fundamental Theories of Physics [3]. It is a list of nine succinctly stated criteria that in her assessment need to be met by a physics theory before it can qualify as fundamental.

        There was however one criterion in her list about which I was uncertain, which was the fourth one:

        CC#4. Non-perturbative: Its formalisms should be exactly solvable rather than probabilistic.

        I was ambivalent when I first read that one, but I was also unsure why I felt ambivalent. Was it because one of the most phenomenally accurate predictive theories in all of physics, Feynman's Quantum ElectroDynamics or QED, is also so deeply dependent on perturbative methods? Or was it the difficulty that many fields and methods have in coming up with closed equations? I wanted to understand why, if exactly solvable equations are the "way to go" in physics for truly fundamental results, some of the most successful theories in physics are perturbative. What does that really imply?

        As it turns out, both the multi-scale clumpiness and sparseness of our universe are relevant to this question because they lurk behind such powerful mathematical concepts as renormalization. Renormalization is not really as exotic or even as mathematical as it seems in, say, Feynman's QED theory. What it really amounts to is an assertion that our universe is, at many levels, "clumpy enough" that many objects (and processes) within it can be approximated when viewed from a distance. That "distance" may be real space or some other more abstract space, but the bottom line is that this sort of approximation option is a deep component of whatever is going on. I say that in part because we ourselves, as discrete, independently mobile entities, are very much part of this clumpiness, as are the large, complex molecules that make up our bodies... as are the atoms that enable molecules... as are the nucleons that enable atoms... and as are the fundamental fermions that make up nucleons.

        This approximation-at-a-distance even shows up in everyday life and cognition. For example, let's say you need an AA battery. What do you think first? Probably you think "I need to go to the room where I keep my batteries." But your navigation to that room begins as a room to room navigation. You don't worry yet about exactly where in that room the batteries are, because that has no effect on how you navigate to the room. In short, you will approximate the location of the battery until you navigate closer to it.

        The point is that the room is itself clumpy in a way that enables you to do this, but the process itself is clearly approximate. You could in principle super-optimize your walking path so that it minimizes your total effort to get to the battery, but such a super-optimization would be extremely costly in terms of the thinking and calculations needed, and yet would provide very little benefit. So, when the cost-benefit ratio grows too high, we approximate rather than super-optimize, because the clumpy structure of our universe makes such approximations much more cost-beneficial overall.

        What happens after you reach the room? You change scale!

        That is, you invoke a new model that tells you how to navigate the drawers or containers in which you keep the AA batteries. This scale is physically smaller, and again is approximate, enabling tolerance for example of highly variable locations of the batteries within a drawer or container.

        This works for the same reason that Feynman's QED is incredibly accurate and efficient at modeling an electron probabilistically. The electron-at-a-distance can be safely and very efficiently modeled as a point particle with a well-defined charge, even though that is not really correct. That is the room-to-room level. As you get closer to the electron, that model must be replaced by a far more complex one that involves rapid creation and annihilation of charged virtual particle pairs that "blur" the charge of the electron in strange and peculiar ways. That is the closer, smaller, dig-around-in-the-drawers-for-a-battery level of approximation. In both cases, the overall clumpiness of our universe makes these special forms of approximation both very accurate and computationally efficient.
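
        Here is a minimal numerical sketch (Python, with made-up numbers) of that room-level versus drawer-level idea: a small "clump" of charges is modeled exactly as a sum of Coulomb terms, and again as a single point charge at its center; the point model's relative error shrinks steadily as the observer backs away, which is the everyday face of approximation-at-a-distance.

        import numpy as np

        rng = np.random.default_rng(1)

        # A "clump": 50 unit charges scattered uniformly inside a unit ball.
        directions = rng.normal(size=(50, 3))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        charges_xyz = directions * rng.random((50, 1)) ** (1 / 3)
        center = charges_xyz.mean(axis=0)

        def exact_potential(r):
            """Drawer-level detail: sum 1/distance over every individual charge."""
            return np.sum(1.0 / np.linalg.norm(r - charges_xyz, axis=1))

        def point_potential(r):
            """Room-level model: the whole clump treated as one point charge."""
            return len(charges_xyz) / np.linalg.norm(r - center)

        for d in (2, 5, 20, 100):
            r = np.array([d, 0.0, 0.0])
            err = abs(exact_potential(r) - point_potential(r)) / exact_potential(r)
            print(f"distance {d:>3}: relative error of the point model {err:.1e}")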

        At some deeper level, one could further postulate that this may be more than just a way to model reality. It is at least possible (I personally think it probable) that this is also how the universe actually works, even if we don't quite understand how. I say that because it is always a bit dangerous to assume that just because we like to model space as a given and particles as points within it, reality must be built that way. Those are in the end just models, ones that actually violate quantum mechanics in the sense of postulating points that cannot exist in real space due to the quantum energy cost involved. A real point particle would require infinite energy to isolate, so a model that invokes such particles to estimate reality really should be viewed with a bit of caution as a "final" model.

        So bottom line: While Karen Crowther's Criterion #4 makes excellent sense as a goal, our universe seems weirdly wired for at least some forms of approximation. I find that very counterintuitive, deeply fascinating, and likely important in some way that we flatly do not yet understand.

        Perturbation Versus Formality in Terms of Computation Costs

        Here is a hypothesis:

        In the absence of perturbative opportunities, the computational costs of fully formal methods for complete, end-to-end solutions trend towards infinity.

        The informal proof is that full formalization implies fully parallel combinatorial interaction of all components of a path (functional) in some space, that being XYZ space in the case of approaching an electron. The computational cost of this fully parallel optimization then increases both with decreasing granularity of the path segment sizes used, and with path length. The granularity is the most important parameter, with the cost rapidly escalating towards infinity as the precision (the inverse of the segment length) increases towards the limit of representing the path as a continuum of infinitely precise points.

        Conversely, the ability to use larger segments instead of infinitesimals depends on the scale structure of the problem. If that scale structure enables multiscale renormalization, then the total computational cost remains at least roughly proportional to the level of precision desired. If no such scale structure is available, the cost instead escalates towards infinity.

        But isn't the whole point of closed formal solutions that they remain (roughly) linear in computational cost versus the desired level of precision?

        Yes... but what if the mathematical entities we call "formal solutions" are actually nothing more than the highest-impact granularities of what are really just perturbative solutions made possible by the pre-existing structure of our universe?

        Look for example at gravity equations, which treat stars and planets as point-like masses. However, that approximation completely falls apart at the scale of a planet's surface, and so is only the first and highest-level step in what is really a perturbative solution. It's just that our universe is pre-structured in a way that makes many such first steps so powerful and so broadly applicable that it allows us to pretend they are complete, stand-alone formal solutions.
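
        A minimal numerical check of that claim (the interior formula below assumes a uniform-density Earth, itself only the next perturbative refinement; the real interior is denser toward the core):

        G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
        M = 5.972e24     # Earth mass, kg
        R = 6.371e6      # mean Earth radius, m

        def g_point(r):
            """Earth as a point mass: the 'formal', highest-level step."""
            return G * M / r**2

        def g_refined(r):
            """One refinement below the surface: only the mass enclosed
            within radius r pulls inward (uniform-density idealization)."""
            return G * M * r / R**3 if r < R else g_point(r)

        for label, r in [("Moon's orbit ", 3.844e8),
                         ("ISS altitude ", R + 4.0e5),
                         ("surface      ", R),
                         ("3000 km deep ", R - 3.0e6)]:
            print(f"{label}: point model {g_point(r):7.3f} m/s^2, "
                  f"refined model {g_refined(r):7.3f} m/s^2")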

        A More Radical Physics Hypothesis

        All of this leads to a more radical hypothesis about formalisms in physics, which is this:

        All formal solutions in physics are just the highest, most abstract stages of perturbative solutions that are made possible by the pre-existing clumpy structure of our universe.

        But on closer examination, even the above hypothesis is incomplete. Another factor that needs to be taken into account is the neural structure of human brains, and how they are optimized.

        The Role of Human Cognition

        Human cognition must rely on bio-circuitry that has very limited speed, capacity, and accuracy. It therefore relies very heavily in the mathematical domain on using Kolmogorov programs to represent useful patterns that we see in the physical world, since a Kolmogorov program only needs to be executed to the level of precision actually needed.

        Furthermore, it is easier and more compact to process suites of such human-brain-resident Kolmogorov programs as the primary data components for reasoning about complexity, as opposed to using their full elaborations into voluminous data sets that are more often than not beyond neural capacities. In addition to shrinking data set sizes, reasoning at the Kolmogorov program level has the huge advantage that such programs capture in direct form at least many of the regularities in such data sets, which in turn allows much more insightful comparisons across programs.

        We call this "mathematics."
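
        A minimal Python sketch of what such a Kolmogorov program looks like in practice: these few lines stand in for the entire infinite decimal expansion of the square root of two, which is never stored anywhere and is simply run out to whatever precision the moment actually requires.

        from math import isqrt

        def sqrt2_digits(n):
            """Decimal expansion of sqrt(2) to n digits after the point,
            computed exactly with integer arithmetic."""
            x = isqrt(2 * 10 ** (2 * n))   # floor(sqrt(2) * 10**n)
            s = str(x)
            return s[0] + "." + s[1:]

        print(sqrt2_digits(10))             # 1.4142135623
        print(len(sqrt2_digits(10_000)))    # 10,002 characters on demand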

        The danger in not recognizing mathematics as a form of Kolmogorov program creation, manipulation, and execution is that as biological intelligences, we are by design inclined to accept such programs as representing the full, to-the-limit forms of the represented data sets. Thus the Greeks assumed the Platonic reality of perfect planes, when in fact the physical world is composed of atoms that make such planes flatly impossible. The world of realizable planes is instead emphatically and decisively perturbative, allowing the full concept of "a plane" to exist only as an unobtainable limit of the isolated, highest-level initial calculations. The reality of such planes falls apart completely when the complete, perturbative, multi-step model is renormalized down to the atomic level.

        That is to say, exactly as with physics, the perfect abstractions of mathematics are nothing more than top-level stages of perturbative programs made possible by the pre-existing structure of our universe.

        The proof of this is that whenever you try to compute such a formal solution, you are forced to deal with issues such as scale or precision. This in turn means that the abstract Kolmogorov representations of such concepts never really represent their end limits, but instead translate into huge spectra of precision levels that approach the infinite limit to whatever degree is desired, but only at a cost that increases with the level of precision. The perfection of mathematics is just an illusion, one engendered by the survival-focused priorities of how our limited biological brains deal with complexity.

        Clumpiness and Mathematics

        The bottom line is this even broader hypothesis:

        All formal solutions in both physics and mathematics are just the highest, most abstract stages of perturbative solutions that are made possible by the pre-existing "clumpy" structure of our universe.

        In physics, even equations such as E=mc^2 that are absolutely conserved at large scales cannot be interpreted "as is" at the quantum level, where virtual particle pairs distort the very definition of where mass is located. E=mc^2 is thus more accurately understood as a high-level subset of a multi-scale perturbative process, rather than as a complete, stand-alone solution.

        In mathematics, the very concept of an infinitesimal is a limit that can never be reached by calculation or by physical example. That makes the very foundations of real-number mathematics into a calculus not of real values, but of sets of Kolmogorov programs for which the limits of execution are being intentionally ignored. Given the indifference to, and often even lack of awareness of, the implementation spectra that are necessarily associated with all such formalisms, is it really that much of a surprise how often unexpected infinities plague problems in both physics and math? Explicit awareness of this issue changes the approach and even the understanding of what is being done; math in general becomes a calculus of operators, of programs, rather than of absolute limits and concepts.

        One of the most fascinating implications of the hypothesis that all math equations ultimately trace back to the clumpiness and sparseness of the physical universe is that heuristic methods can become integral parts of such equations. In particular they should be usable in contexts where a "no limits" formal statement overextends computation in directions that have no real impact on the final solution. This makes methods such as Monte Carlo into first-order options for expressing a situation correctly. As one example, papers by Jean Michel Sellier [7] show how carefully structured "signed particle" applications of Monte Carlo methods can dramatically reduce the computation costs of quantum simulation. Such syntheses of theory (signed particles and negative probabilities) with statistical methods (Monte Carlo) promise not only to provide practical algorithmic benefits, but also to provide deeper insights into the nature of quantum wavefunctions themselves.
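
        As a deliberately tiny illustration of a statistical method slotting into an otherwise "formal" quantum calculation (vastly simpler than Sellier's signed-particle machinery, and purely my own toy example), the Python below estimates the expectation value of x^2 in the harmonic-oscillator ground state by sampling the Born density |psi_0|^2 rather than evaluating the closed-form integral; in natural units (hbar = m = omega = 1) the exact answer is 1/2.

        import numpy as np

        rng = np.random.default_rng(42)

        # Ground state of the 1-D harmonic oscillator in natural units:
        # |psi_0(x)|^2 is a Gaussian with variance 1/2, so <x^2> = 1/2 exactly.
        n_samples = 1_000_000
        x = rng.normal(loc=0.0, scale=np.sqrt(0.5), size=n_samples)

        estimate = np.mean(x ** 2)
        stderr = np.std(x ** 2, ddof=1) / np.sqrt(n_samples)

        print(f"Monte Carlo <x^2> = {estimate:.5f} +/- {stderr:.5f} (exact: 0.5)")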

        Possible Future Expansions of this Mini-Essay

        My time for posting this mini-essay is growing short. Most of the above arguments are my original stream-of-thought arguments that led to my overall conclusion. As my abstract shows, I have a great many more thoughts to add, but likely not enough time to add them. I will therefore post the following link to a public Google Drive folder I've set up for FQXi-related postings.

        If this is OK with FQXi -- basically if they do not strip out the URL below, and I'm perfectly fine if they do -- then I may post updated versions of this and other mini-essays in this folder in the future:

        Terry Bollinger's FQXi Updates Folder

        ----------------------------------------

        Non-FQXi References

        6. Lin, H. W., Tegmark, M., and Rolnick, D. Why does deep and cheap learning work so well? Journal of Statistical Physics, 168:1223-1247 (2017).

        7. Jean Michel Sellier. A Signed Particle Formulation of Non-Relativistic Quantum Mechanics. Journal of Computational Physics, 297:254-265 (2015).


        An Exceptionally Simple Space-As-Entanglement Theory

        Terry Bollinger, 2018-02-26 Feb

Abstract. There has been quite a bit of attention in recent years to what has been called the holographic universe. This concept, which originated somehow from string theory (!), postulates that the universe is some kind of holographic image, rather than the 3D space we see. Fundamental to this idea is space as entanglement, that is, the fabric of space is built out of the mysterious "spooky action" links that Einstein so disdained. In keeping with its string theory origins, the holographic universe also dives down to the Planck foam level. The point of this mini-essay is that except for the point about space being composed of entanglements between particles, none of this complexity is needed: there are no holograms, and there is no need for the energetically impossible Planck foam. All you need is group entanglement of the conjugate of particle spin, an overlooked "ghost direction" orthogonal to spin. Particles form a mutually relative consensus on these directions (see Karl Coryat's Pillar #3) that allows them to ensure conservation of angular momentum, and that consensus becomes xyz space. Instead of a complicated hologram, its structure is that of an exceptionally simple direct-link web that interlinks all of the participating particles. It is no more detailed than it needs to be, and its level of detail is determined solely by how many particles participate in the overall direction consensus. Finally, it is rigid in order to protect and preserve angular momentum, since the overriding goal in all forms of quantum entanglement is absolute conservation of some quantum number.

        ----------------------------------------

        NOTE: A mini-essay is my attempt to capture an idea, approach, or prototype theory inspired by interactions with other FQXi Essay contestants. This mini-essay was inspired by:

        1. The Four Pillars of Fundamentality by Karl Coryat

        ----------------------------------------

        Introduction

        For this mini-essay I think the original text gives the thought pretty well "as is," so I am simply quoting it below. My thanks again to Karl Coryat for a fun-to-read and very stimulating essay.

A quote from my assessment of Karl Coryat's Pillar #3

If space is the fabric of relations, if some vast set of relations spread out literally across the cosmos, defining the cosmos, is the true start of reality instead of the deceptive isolation of objects that those relations then make possible, then what are the components of those relations? What are the "bits" of space?

I don't think we know, but I assure you it's not composed of some almost infinite number of 10^-35 meter bubbles of Planck foam. Planck foam is nothing more than an out-of-range, unbelievably extrapolated extremum created by pushing the rules of observation, which have physical meaning only at much lower energies, to an energetically impossible limit. I suspect that the real components of space are much simpler, calmer, quieter, less energetic, and, well, more space-like than that terrifying end-of-all-things violence that is so casually called "Planck foam."

        I'll even venture a guess. You heard it here first... :)

My own guess is that the units of space are nothing more radical than the action (Planck) conjugation complements of the angular momenta of all particles. That is, units of pure direction, which is all that is left after angular momentum scarfs up all of the usual joule-second units of action, leaving only something that at first glance looks like an empty set. On closer examination, though, a given spin must leave something behind to distinguish itself from other particle spins, and that "something" is the orientation of the spin in 3-space, a ghostly orthogonality to the spin plane of the particle. More importantly, that orientation would have to be cooperatively, relationally shared with every other particle in the vicinity and beyond, so that their differences remain valid. Space would become a consensus fabric of directional relationships, one in which all the particles have agreed to share the same mutually relative coordinate system -- that is, to share the same space. This direction consensus would be a group-level form of entanglement, and because entanglement is unbelievably unforgiving about conservation of quantum numbers such as spin, it would also be extraordinarily rigid, as space should be. Only over extreme ranges would it bend much, giving gravity, which thus would not be an ordinary quantum force like photon-mediated electromagnetism. It would also be loosely akin to the "holographic" concept of space as entanglement, but this version is hugely simpler and much more direct, since neither holography, nor higher dimensions, nor Planck-level elaborations are required. The entanglements of the particles just create a simple, easily understood 3-space network linking all nodes (particles).
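
Here is a toy bookkeeping sketch of that picture, and nothing more than bookkeeping (the helpers random_axis and relative_angle are hypothetical stand-ins, not a physical model): the "consensus fabric" is read as the complete graph of pairwise relative orientations among whatever particles exist, so its level of detail is fixed entirely by the particle count.

```python
# Toy sketch: the "fabric" as the complete graph of pairwise relative spin
# orientations. The number of directional links grows as n(n-1)/2, i.e. the
# detail is set by how many particles participate, not by any external grid.

import math
import random

def random_axis(rng):
    """A random unit vector standing in for a particle's spin orientation."""
    v = [rng.gauss(0.0, 1.0) for _ in range(3)]
    norm = math.sqrt(sum(c * c for c in v))
    return tuple(c / norm for c in v)

def relative_angle(u, v):
    """Relative orientation (radians) between two spin axes."""
    dot = max(-1.0, min(1.0, sum(a * b for a, b in zip(u, v))))
    return math.acos(dot)

rng = random.Random(42)
for n in (2, 10, 100):
    axes = [random_axis(rng) for _ in range(n)]
    # One directional relationship per pair of particles: that web is the "space".
    links = {(i, j): relative_angle(axes[i], axes[j])
             for i in range(n) for j in range(i + 1, n)}
    print(f"{n:3d} particles -> {len(links):5d} pairwise directional links")
```

For two particles the web collapses to a single link, which matches the two-particle "Oscillatorland" case discussed below.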

But space cannot possibly be composed of such a sparse, incomplete network, right?

After all, space is also infinitely detailed as well as extremely rigid, so there surely are not enough particles in the universe to define space in sufficient detail! Many would in fact argue that this is precisely why any phenomenon that creates space itself must operate at the Planck scale of 10^-35 meters, so that the incredible detail needed for 3-space can be realized.

        Really? Why?

If only 10 objects existed in the universe, each a meter across, why would you need a level of relational detail that is, say, 20 orders of magnitude finer for them to interact meaningfully and precisely with each other? You would still be able to access much higher levels of relational detail, but only by asking for that detail, specifically by applying a level of energy proportional to the level of detail you desired. Taking things to the absolute limit first is an incredibly wasteful procedure, and incidentally, it is emphatically not what we see in quantum mechanics, where every observation has a cost that depends on the level of detail desired, and even then only at the time of the observation. There are good and deeply fundamental quantum reasons why the Large Hadron Collider (LHC) that found the Higgs boson is 8.6 km in diameter!
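
A back-of-the-envelope sketch of that "energy proportional to detail" point (the constant and length scales below are standard order-of-magnitude values; the function name energy_to_probe is just illustrative): resolving a length scale d costs roughly E ~ hbar*c / d, so every extra order of magnitude of spatial detail demands another order of magnitude of energy.

```python
# Rough order-of-magnitude estimates only: energy needed to probe a length
# scale, using E ~ hbar*c / d with hbar*c ~ 197.3 MeV*fm ~ 1.973e-7 eV*m.

HBAR_C_EV_M = 1.973e-7  # hbar*c in eV*m

def energy_to_probe(length_m):
    """Approximate energy (eV) needed to resolve the given length scale."""
    return HBAR_C_EV_M / length_m

for label, scale_m in [("atomic scale    (1e-10 m)", 1e-10),
                       ("rough LHC reach (1e-19 m)", 1e-19),
                       ("Planck scale  (1.6e-35 m)", 1.6e-35)]:
    print(f"{label}: ~{energy_to_probe(scale_m):.1e} eV")
```

On these numbers the Planck scale sits some fifteen orders of magnitude in energy beyond anything the LHC can reach, which is exactly the "pay only for the detail you ask for" point made above.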

The bottom line is that, in terms of as-needed levels of detail, you can build up a very-low-energy universal "directional condensate" space using nothing more than the spins of the particles that exist in that space. It does not matter how sparse or dense those particles are, since you only need to make space "real" for the relationships that exist between those particles. If, for example, your universe has only two particles in it, you only need one line of space (Oscillatorland!) to define their relationship. Defining more space outside of that line is not necessary, for the simple reason that no other objects with which to relate exist outside of that line.

        So regardless of how space comes to be -- my example above mostly shows what is possible and what kinds of relationships are required -- its very existence makes the concept of relations between entities as fundamental as it gets. You don't end with relations, you start with them.

        Conclusions

Quite a few of the people reading this likely do not even believe in entanglement! So for you I am the cheerful ultimate heretic, the fellow who not only believes fervently in the reality of entanglement, but would make it literally into the very fabric of space itself. Sorry about that, but I hope you can respect that I have my reasons, just as I very much respect localism. Two of my all-time favorite physicists, Einstein and Bell, were both adamant localists!

If you are a holographic universe type, I hope you will at least think about some of what I've said here. I developed these ideas in isolation from your community, and frankly was astonished when I finally realized that it existed. I deeply and sincerely believe that you have a good and important idea there, but history has convoluted it in very unfortunate ways. Take a stab at my much simpler 3D web approach, and I think interesting things could start popping out fairly quickly.

If you are a MOND or dark matter enthusiast, think about the implications of space being a direct function of the presence or absence of matter. One of my very first speculations on this topic was that as this fabric of entanglement thins, you could very well get effects relevant to the anomalies that both MOND and dark matter attempt to explain.

Finally, I gave this fabric a name a long time ago, a name with which I pay respect to a very great physicist who literally did not get respect: Boltzmann. I call this 3D fabric of entanglements the Boltzmann fabric, represented (I can't do it here) by a lower-case beta with a capital F subscript. His entropic concepts of time become cosmic through this fabric.