Hi,
This is a wonderful essay, with deep fundamental knowledge. I am impressed.
Nothing to ask for now.
Ulla Mattfolk https://fqxi.org/community/forum/topic/3093
The Illusion of Mathematical Formality
Terry Bollinger, 2018-02-26
Abstract. Quick: What is the most fundamental and least changing set of concepts in the universe? If you answered "mathematics," you are not alone. In this mini-essay I argue that far from being eternal, formal statements are actually fragile, prematurely terminated first steps in perturbative sequences that derive ultimately from two unique and defining features of the physics of our universe: multi-scale, multi-domain sparseness and multi-scale, multi-domain clumping. The illusion that formal statements exist independently of physics is enhanced by the clever cognitive designs of our mammalian brains, which latch on quickly to first-order approximations that help us respond rapidly and effectively to survival challenges. I conclude by recommending recognition of the probabilistic infrastructure of mathematical formalisms as a way to enhance, rather than reduce, their generality and analytical power. This recognition makes efficiency into a first-order heuristic for uncovering powerful formalisms, and transforms the incorporation of a statistical method such as Monte Carlo into formal systems from being a "cheat" into an integrated concept that helps us understand the limits and implications of the formalism at a deeper level. It is not an accident, for example, that quantum mechanics simulations benefit hugely from probabilistic methods.
----------------------------------------
NOTE: A mini-essay is my attempt to capture and make more readily available an idea, approach, or prototype theory that was inspired by interactions with other FQXi Essay contestants. This mini-essay was inspired by:
1. When do we stop digging? Conditions on a fundamental theory of physics by Karen Crowther
2. The Crowther Criteria for Fundamental Theories of Physics
3. On the Fundamentality of Meaning by Brian D Josephson
4. What does it take to be physically fundamental by Conrad Dale Johnson
5. The Laws of Physics by Kevin H Knuth
Additional non-FQXi references are listed at the end of this mini-essay.
----------------------------------------
Background: Letters from a Sparse and Clumpy Universe
Sparseness [6] occurs when some space, such as a matrix or the state of Montana, is occupied by only a thin scattering of entities, e.g. non-zero numbers in the matrix or people in Montana. A clump is a compact group of smaller entities (often themselves clumps of some other type) that "stick together" well enough to persist over time. A clump can be abstract, but if it is composed of matter we call it an object. Not surprisingly, sparseness and clumping tend to be closely linked, since clumps often are the entities that occupy positions in some sparse space.
Sparseness and clumping occur at multiple size scales in our universe, using a variety of mechanisms, and when life is included, at varying levels of abstraction. Space itself provides a universal basis for creating sparseness at multiple size scales, yet the very existence of large expanses of extremely "flat" space is still considered one of the greatest mysteries in physics, an exquisitely knife-edged balancing act between total collapse and hyper expansion.
Clumping is strangely complex, involving multiple forces at multiple scales of size. Gravity reigns supreme for cosmic-level clumping, from involvement (not yet understood) in the 10 billion light-year diameter Hercules-Corona Borealis Great Wall down to kilometer-scale gravel asteroids that just barely hold together. From there a dramatically weakened form of the electromagnetic force takes over, providing bindings that fall under the bailiwick of chemistry and chemical bonding. (The unbridled electric force is so powerful it would obliterate even large gravitationally bound objects.) Below that level the full electric force reigns, creating the clumps we call atoms. Next down in scale is yet another example of a dramatically weakened force, the pion-mediated version of the strong force that holds neutrons and protons together to give us the chemical elements. The protons and neutrons, as well as other more transient particles, are the clumps created by the full, unbridled application of the strong force. At that point known clumping ends... or does it? The quarks themselves notoriously appear to be constructed from still smaller entities, since for example they all use multiples of a mysterious 1/3 electric charge, bound together by unknown means at unknown scales. How exactly the quarks have such clump-like properties remains a mystery.
Nobel Laureate Brian Josephson [1] speculates that at least for higher-level domains such as biology and sociology, the emergence of a form of stability that is either akin to or leads to clumping is always the result of two or more entities that oppose and cancel each other in ways that create or leave behind a more durable structure. This intriguing concept can be translated in a surprisingly direct way to the physics of clumping and sparseness in our universe. For example, the mutual cancellation of the positive and negative charges of an electron and a proton leaves behind an enduring and far less reactive result, a hydrogen atom, which in turn supports clumping through a vastly moderated presentation of the very electric forces that it largely cancels. More generally, the hydrogen atom is an example of incomplete cancellation, that is, cancellation of only a subset of the properties of two similar but non-identical entities. The result qualifies as "scaffolding" in the Josephson sense due to its relative neutrality, which allows it for example to be part of chemical compounds that would be instantly shredded by the full power of the mostly-cancelled electric force. Physics has many examples of this kind of incomplete cancellation: quarks mutually cancel the overwhelming strong force to leave milder protons and neutrons; protons and electrons then cancel to leave charge-free hydrogen atoms; unfilled electron states combine to create stable chemical bonds; and the hydrogen and hydroxyl groups on amino acids combine to enable the chains known as proteins. At higher levels of complexity, almost any phenomenon that reaches an equilibrium state tends to produce a more stable, enduring outcome. The equilibrium that compression-resistant matter and ever-pulling gravity reach at the surface of a planet is another, more subtle example, one that leads to a relatively stable environment that is conducive to, for example, us.
Bonus Insert: Space and gravity as emerging from hidden unified-force cancellations
It is interesting to speculate whether the flatness of space could itself be an outcome of some well-hidden form of partial cancellation.
If so, it would mean that violent opposing forces of some type of which we are completely unaware (or have completely misunderstood) largely cancelled each other out except for a far milder residual, that residual being the scaffolding we call "flat space." This would be a completely different approach to the flat space problem, but one that could find support in existing data if that data were examined from Josephson's perspective of stable infrastructure emerging from the mutual cancellation of far more energetic forces.
The forces that cancelled would almost certainly still be present in milder forms, however, just as the electric force continues to show up in milder forms in atoms. Thus if the Josephson effect -- ah, sorry, that phrase is already taken -- if the Josephson synthesis model applies to space itself, then the mutually cancelling forces that led to flat space may well already be known to us, just not in their most complete and ferocious forms. Furthermore, if these space-generating forces are related to the known strong and electric forces -- or more likely, to the Standard Model combination of them with the weak force -- then such a synthesis would provide an entirely new approach to unifying gravity with the other three forces.
Thus the full hypothesis in summary: Via Josephson synthesis, it is speculated that ordinary xyz space is a residual structural remnant, a scaffolding, generated by the nearly complete cancellation of two oppositely signed versions of the unified weak-electric-strong force of the Standard Model. Gravity then becomes not another boson-mediated force, but a topological effect applied by matter to the "surface of cancellation" of the unified Standard Model forces.
Back to Math: Is Fundamental Physics Always Formal?
In her superb FQXi essay When do we stop digging? Conditions on a fundamental theory of physics, Karen Crowther [2] also created an exceptionally useful product for broader use, The Crowther Criteria for Fundamental Theories of Physics [3]. It is a list of nine succinctly stated criteria that in her assessment need to be met by a physics theory before it can qualify as fundamental.
There was however one criterion in her list about which I was uncertain, the fourth one:
CC#4. Non-perturbative: Its formalisms should be exactly solvable rather than probabilistic.
I was ambivalent when I first read that one, but I was also unsure why I felt ambivalent. Was it because one of the most phenomenally accurate predictive theories in all of physics, Feynman's quantum electrodynamics (QED), is also deeply dependent on perturbative methods? Or was it the difficulty that many fields and methods have in coming up with closed equations? I wanted to understand why, if exactly solvable equations were the "way to go" in physics for truly fundamental results, some of the most successful theories in physics were nonetheless perturbative. What does that really imply?
As it turns out, both the multi-scale clumpiness and the multi-scale sparseness of our universe are relevant to this question, because they lurk behind such powerful mathematical concepts as renormalization. Renormalization is not really as exotic or even as mathematical as it appears in, say, Feynman's QED theory. What it really amounts to is an assertion that our universe is, at many levels, "clumpy enough" that many objects (and processes) within it can be approximated when viewed from a distance. That "distance" may be real space or some other more abstract space, but the bottom line is that this sort of approximation option is a deep component of whatever is going on. I say that in part because we ourselves, as discrete, independently mobile entities, are very much part of this clumpiness, as are the large, complex molecules that make up our bodies... as are the atoms that enable molecules... as are the nucleons that enable atoms... and as are the fundamental fermions that make up nucleons.
This approximation-at-a-distance even shows up in everyday life and cognition. For example, let's say you need an AA battery. What do you think first? Probably "I need to go to the room where I keep my batteries." But your navigation to that room begins as room-to-room navigation. You don't worry yet about exactly where in that room the batteries are, because that has no effect on how you navigate to the room. In short, you will approximate the location of the battery until you navigate closer to it.
The point is that the room is itself clumpy in a way that enables you to do this, but the process itself is clearly approximate. You could in principle super-optimize your walking path so that it minimizes your total effort to get to the battery, but such a super-optimization would be extremely costly in terms of the thinking and calculations needed, and yet would provide very little benefit. So, when the cost-benefit ratio grows too high, we approximate rather than super-optimize, because the clumpy structure of our universe makes such approximations much more cost-beneficial overall.
What happens after you reach the room? You change scale!
That is, you invoke a new model that tells you how to navigate the drawers or containers in which you keep the AA batteries. This scale is physically smaller, and again is approximate, tolerating for example highly variable locations of the batteries within a drawer or container.
This works for the same reason that Feynman's QED is incredibly accurate and efficient for modeling an electron probabilistically. The electron-at-a-distance can be safely and very efficiently modeled as a point particle with a well-defined charge, even though that is not really correct. That is the room-to-room level. As you get closer to the electron, that model must be replaced by a far more complex one that involves rapid creation and annihilation of charged virtual particle pairs that "blur" the charge of the electron in strange and peculiar ways. That is the closer, smaller, dig-around-in-the-drawers-for-a-battery level of approximation. In both cases, the overall clumpiness of our universe makes these special forms of approximation both very accurate and computationally efficient.
At some deeper level, one could further postulate that this is more than just a way to model reality. It is at least possible (I personally think it probable) that this is also how the universe actually works, even if we don't quite understand how. I say that because it is always a bit dangerous to forget that, however much we like to treat space as a given and particles as points within it, those are in the end just models, ones that actually violate quantum mechanics by postulating points that cannot exist in real space due to the quantum energy cost involved. A real point particle would require infinite energy to isolate, so a model that invokes such particles to estimate reality really should be viewed with a bit of caution as a "final" model.
So, bottom line: While Karen Crowther's Criterion #4 makes excellent sense as a goal, our universe seems weirdly wired for at least some forms of approximation. I find that very counterintuitive, deeply fascinating, and likely important in some way that we flatly do not yet understand.
Perturbation Versus Formality in Terms of Computation Costs
Here is a hypothesis:
In the absence of perturbative opportunities, the computational cost of fully formal methods for complete, end-to-end solutions trends towards infinity.
The informal proof is that full formalization implies fully parallel combinatorial interaction of all components of a path (functional) in some space, that space being XYZ space in the case of approaching an electron. The computational cost of this fully parallel optimization then increases both with decreasing granularity of the path segment sizes used, and with path length. The granularity is the most important parameter, with the cost rapidly escalating towards infinity as the precision (the inverse of segment length) increases towards the limit of representing the path as an infinitely precise continuum of infinitely precise points.
Conversely, the ability to use larger segments instead of infinitesimals depends on the scale structure of the problem. If that scale structure enables multiscale renormalization, then the total computational cost remains at least roughly proportional to the level of precision desired. If no such scale structure is available, the cost instead escalates towards infinity.
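To make the cost-versus-granularity claim concrete, here is a minimal sketch (my own illustration, not part of the original argument): the length of a simple curved path is approximated by chopping it into N straight segments. The work grows in proportion to N, while the error shrinks only as roughly 1/N^2, so pushing the granularity toward the continuum limit pushes the cost toward infinity for ever-smaller gains.

import math

def path_length(n_segments):
    """Polygonal approximation of the arc length of sin(t) on [0, pi]."""
    ts = [math.pi * i / n_segments for i in range(n_segments + 1)]
    pts = [(t, math.sin(t)) for t in ts]
    return sum(math.dist(pts[i], pts[i + 1]) for i in range(n_segments))

EXACT = 3.8201977890  # reference arc length of sin(t) on [0, pi]
for n in (4, 16, 64, 256, 1024):
    print(f"segments={n:5d}   cost ~ {n:5d} evaluations   "
          f"error={abs(path_length(n) - EXACT):.2e}")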
But isn't the whole point of closed formal solutions that they remain (roughly) linear in computational cost versus the desired level of precision?
Yes... but what if the mathematical entities we call "formal solutions" are actually nothing more than the highest-impact granularities of what are really just perturbative solutions made possible by the pre-existing structure of our universe?
Look for example at gravity equations, which treat stars and planets as point-like masses. That approximation completely falls apart at the scale of a planet's surface, and so is only the first and highest-level step in what is really a perturbative solution. It's just that our universe is pre-structured in a way that makes many such first steps so powerful and so broadly applicable that it allows us to pretend they are complete, stand-alone formal solutions.
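As a small worked illustration of that last point (mine, not taken from any of the cited essays), consider an elongated "clump" of 201 equal point masses spread along a line. Far away, collapsing the whole clump to a single mass at its center of mass is essentially exact; near the clump's end the shortcut fails badly, because the detailed arrangement of the pieces starts to matter:

G = 1.0
N = 201
masses_x = [-1.0 + 2.0 * i / (N - 1) for i in range(N)]  # unit masses on a line from -1 to +1

def exact_pull(d):
    """x-component of the pull on a test mass at (d, 0, 0), summed piece by piece."""
    return sum(-G / (d - x) ** 2 for x in masses_x)

def point_mass_pull(d):
    """The 'formal' shortcut: the whole clump collapsed to its center of mass."""
    com = sum(masses_x) / N  # = 0 for this symmetric clump
    return -G * N / (d - com) ** 2

for d in (100.0, 10.0, 2.0, 1.1):
    e, p = exact_pull(d), point_mass_pull(d)
    print(f"distance={d:6.1f}   exact={e:9.3f}   point-mass={p:9.3f}   "
          f"relative error={abs(p - e) / abs(e):6.1%}")

The point-mass formula is the "formal solution" we teach; the piece-by-piece sum is the next perturbative refinement that the clumpy structure of the problem usually lets us skip.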
A More Radical Physics Hypothesis
All of this leads to a more radical hypothesis about formalisms in physics, which is this:
All formal solutions in physics are just the highest, most abstract stages of perturbative solutions that are made possible by the pre-existing clumpy structure of our universe.
But on closer examination, even the above hypothesis is incomplete. Another factor that needs to be taken into account is the neural structure of human brains, and how they are optimized.
The Role of Human Cognition
Human cognition must rely on bio-circuitry that has very limited speed, capacity, and accuracy. It therefore relies very heavily in the mathematical domain on using Kolmogorov programs to represent useful patterns that we see in the physical world, since a Kolmogorov program only needs to be executed to the level of precision actually needed.
Furthermore, it is easier and more compact to process suites of such human-brain-resident Kolmogorov programs as the primary data components for reasoning about complexity, as opposed to using their full elaborations into voluminous data sets that are more often than not beyond neural capacities. In addition to shrinking data set sizes, reasoning at the Kolmogorov program level has the huge advantage that such programs capture in direct form at least many of the regularities in those data sets, which in turn allows much more insightful comparisons across programs.
We call this "mathematics."
The danger in not recognizing mathematics as a form of Kolmogorov program creation, manipulation, and execution is that as biological intelligences, we are by design inclined to accept such programs as representing the full, to-the-limit forms of the represented data sets. Thus the Greeks assumed the Platonic reality of perfect planes, when in fact the physical world is composed of atoms that make such planes flatly impossible. The world of realizable planes is instead emphatically and decisively perturbative, allowing the full concept of "a plane" to exist only as an unobtainable limit of the isolated, highest-level initial calculations. The reality of such planes falls apart completely when the complete, perturbative, multi-step model is renormalized down to the atomic level.
That is to say, exactly as with physics, the perfect abstractions of mathematics are nothing more than top-level stages of perturbative programs made possible by the pre-existing structure of our universe.
The proof of this is that whenever you try to compute such a formal solution, you are forced to deal with issues such as scale or precision. This in turn means that the abstract Kolmogorov representations of such concepts never really represent their end limits, but instead translate into huge spectra of precision levels that approach the infinite limit to whatever degree is desired, but only at a cost that increases with the level of precision. The perfection of mathematics is just an illusion, one engendered by the survival-focused priorities of how our limited biological brains deal with complexity.
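Here is a deliberately tiny sketch of that idea (my own illustration, assuming nothing beyond the standard Python library): the "number" sqrt(2) held not as an infinite decimal but as a short program, Newton's rule x -> (x + 2/x)/2 over exact rationals, executed only as far as the precision actually needed. Each extra step roughly doubles the correct digits, but the exact rational being carried keeps growing, so the limit itself is never materialized, only approached at a cost:

from fractions import Fraction

def sqrt2_program(steps):
    """A Kolmogorov-style generator for sqrt(2), run for a finite number of Newton steps."""
    x = Fraction(3, 2)                # crude starting guess
    for _ in range(steps):
        x = (x + 2 / x) / 2           # Newton step in exact rational arithmetic
    return x

for n in range(1, 6):
    x = sqrt2_program(n)
    error = float(abs(x * x - 2))     # distance of x^2 from the ideal limit 2
    bits = x.numerator.bit_length() + x.denominator.bit_length()
    print(f"steps={n}   error={error:.1e}   bits carried={bits}")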
Clumpiness and Mathematics
The bottom line is this even broader hypothesis:
All formal solutions in both physics and mathematics are just the highest, most abstract stages of perturbative solutions that are made possible by the pre-existing "clumpy" structure of our universe.
In physics, even a relation such as E = mc^2 that holds absolutely at large scales cannot be interpreted "as is" at the quantum level, where virtual particle pairs distort the very definition of where mass is located. E = mc^2 is thus more accurately understood as a high-level subset of a multi-scale perturbative process, rather than as a complete, stand-alone solution.
In mathematics, the very concept of an infinitesimal is a limit that can never be reached by calculation or by physical example. That makes the very foundations of real-number mathematics into a calculus not of real values, but of sets of Kolmogorov programs for which the limits of execution are being intentionally ignored. Given the indifference to, and often outright unawareness of, the implementation spectra that are necessarily associated with all such formalisms, is it really that much of a surprise how often unexpected infinities plague problems in both physics and math? Explicit awareness of this issue changes the approach and even the understanding of what is being done; math in general becomes a calculus of operators, of programs, rather than of absolute limits and concepts.
One of the most fascinating implications of the hypothesis that all math equations ultimately trace back to the clumpiness and sparseness of the physical universe is that heuristic methods can become integral parts of such equations. In particular, they should be usable in contexts where a "no limits" formal statement overextends computation in directions that have no real impact on the final solution. This makes methods such as Monte Carlo into first-order options for expressing a situation correctly. As one example, papers by Jean Michel Sellier [7] show how carefully structured "signed particle" applications of Monte Carlo methods can dramatically reduce the computational costs of quantum simulation. Such syntheses of theory (signed particles and negative probabilities) with statistical methods (Monte Carlo) promise not only to provide practical algorithmic benefits, but also to provide deeper insights into the nature of quantum wavefunctions themselves.
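To make the "first-order option" point concrete, here is the plainest possible example (my own sketch; it is not Sellier's signed-particle scheme, just textbook Monte Carlo): estimating the quantum expectation value <x^2> for the 1D harmonic-oscillator ground state (hbar = m = omega = 1), whose probability density |psi(x)|^2 is a Gaussian with variance 1/2, so the exact answer is 0.5. The cost grows only with the accuracy actually requested, never with a demand for infinite precision:

import math
import random

random.seed(1)
SIGMA = math.sqrt(0.5)  # standard deviation of |psi|^2 for the ground state

def mc_x_squared(samples):
    """Estimate <x^2> by averaging x^2 over draws from |psi(x)|^2."""
    return sum(random.gauss(0.0, SIGMA) ** 2 for _ in range(samples)) / samples

for n in (100, 10_000, 1_000_000):
    print(f"samples={n:9d}   <x^2> estimate={mc_x_squared(n):.4f}   (exact: 0.5000)")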
Possible Future Expansions of this Mini-Essay
My time for posting this mini-essay here is growing short. Most of the above is the original stream-of-thought argument that led to my overall conclusion. As my abstract shows, I have a great many more thoughts to add, but likely not enough time to add them. I will therefore post the following link to a public Google Drive folder I've set up for FQXi-related postings.
If this is OK with FQXi -- basically if they do not strip out the URL below, and I'm perfectly fine if they do -- then I may post updated versions of this and other mini-essays in this folder in the future:
Terry Bollinger's FQXi Updates Folder
----------------------------------------
Non-FQXi References
6. Lin, H. W., Tegmark, M., and Rolnick, D. Why does deep and cheap learning work so well? Journal of Statistical Physics, Springer, 168:1223-1247 (2017).
7. Jean Michel Sellier. A Signed Particle Formulation of Non-Relativistic Quantum Mechanics. Journal of Computational Physics, 297:254-265 (2015).
To link to the above mini-essay, please copy and paste the following link:
[link:fqxi.org/community/forum/topic/3099#post_146091]The Illusion of Mathematical Formality[/link]
An Exceptionally Simple Space-As-Entanglement Theory
Terry Bollinger, 2018-02-26
Abstract. There has been quite a bit of attention in recent years to what has been called the holographic universe. This concept, which originated somehow from string theory (!), postulates that the universe is some kind of holographic image, rather than the 3D space we see. Fundamental to this idea is space as entanglement, that is, that the fabric of space is built out of the mysterious "spooky action" links that Einstein so disdained. In keeping with its string theory origins, the holographic universe also dives down to the Planck foam level. The point of this mini-essay is that except for the point about space being composed of entanglements between particles, none of this complexity is needed: there are no holograms, and there is no need for the energetically impossible Planck foam. All you need is group entanglement of the conjugate of particle spin, which is an overlooked "ghost direction" orthogonal to spin. Particles form a mutually relative consensus on these directions (see Karl Coryat's Pillar #3) that allows them to ensure conservation of angular momentum, and that consensus becomes xyz space. Instead of a complicated hologram, its structure is that of an exceptionally simple direct-link web that interlinks all of the participating particles. It is no more detailed than it needs to be, and that level of detail is determined solely by how many particles participate in the overall direction consensus. Finally, it is rigid in order to protect and preserve angular momentum, since the overriding goal in all forms of quantum entanglement is absolute conservation of some quantum number.
----------------------------------------
NOTE: A mini-essay is my attempt to capture an idea, approach, or prototype theory inspired by interactions with other FQXi Essay contestants. This mini-essay was inspired by:
1. The Four Pillars of Fundamentality by Karl Coryat
----------------------------------------
Introduction
For this mini-essay I think the original text gives the thought pretty well "as is," so I am simply quoting it below. My thanks again to Karl Coryat for a fun-to-read and very stimulating essay.
A quote from my assessment of Karl Coryat's Pillar #3
If space is the fabric of relations, if some vast set of relations spread out literally across the cosmos, defining the cosmos, is the true start of reality instead of the deceptive isolation of the objects that those relations then make possible, what are the components of those relations? What are the "bits" of space?
I don't think we know, but I assure you it's not composed of some almost infinite number of 10^-35 meter bubbles of Planck foam. Planck foam is nothing more than an out-of-range, unbelievably extrapolated extremum created by pushing to an energetically impossible limit the rules of observation that have physical meaning only at much lower energies. I suspect that the real components of space are much simpler, calmer, quieter, less energetic, and well, space-like than that terrifying end-of-all-things violence that is so casually called "Planck foam."
I'll even venture a guess. You heard it here first... :)
My own guess is that the units of space are nothing more radical than the action (Planck) conjugation complements of the angular momenta of all particles. That is, units of pure direction, which is all that is left after angular momentum scarfs up all of the usual joule-second units of action, leaving only something that at first glance looks like an empty set. On closer examination, though, a given spin must leave something behind to distinguish itself from other particle spins, and that "something" is the orientation of the spin in 3-space, a ghostly orthogonality to the spin plane of the particle. But more importantly, it would have to be cooperatively, relationally shared with every other particle in the vicinity and beyond, so that their differences remain valid. Space would become a consensus fabric of directional relationships, one in which all the particles have agreed to share the same mutually relative coordinate system -- that is, to share the same space. This direction consensus would be a group-level form of entanglement, and because entanglement is unbelievably unforgiving about conservation of conserved quantum numbers such as spin, it would also be extraordinarily rigid, as space should be. Only over extreme ranges would it bend much, to give gravity, which thus would not be an ordinary quantum force like photon-mediated electromagnetism. It would also be loosely akin to the "holographic" concept of space as entanglement, but this version is hugely simpler and much more direct, since neither holography, nor higher dimensions, nor Planck-level elaborations are required. The entanglements of the particles just create a simple, easily understood 3-space network linking all nodes (particles).
But space cannot possibly be composed of such a sparse, incomplete network, right?
After all, space is also infinitely detailed as well as extremely rigid, so there surely are not enough particles in the universe to define space in sufficient detail! Many would in fact argue that this is precisely why any phenomenon that creates space itself must operate at the Planck scale of 10^-35 meters, so that the incredible detail needed for 3-space can be realized.
Really? Why?
If only 10 objects existed in the universe, each a meter across, why would you need a level of detail that is, say, 20 orders of magnitude more detailed for them to interact meaningfully and precisely with each other? You would still be able to access much higher levels of relational detail, but only by asking for more detail, specifically by applying a level of energy proportional to the level of detail you desired. Taking things to the absolute limit first is an incredibly wasteful procedure, and incidentally, it is emphatically not what we see in quantum mechanics, where every observation has a cost that depends on the level of detail desired, and even then only at the time of the observation. There are good and deeply fundamental quantum reasons why the Large Hadron Collider (LHC) that found the Higgs boson is 8.6 km in diameter!
The bottom line is that in terms of as-needed levels of detail, you can build up a very-low-energy universal "directional condensate" space using the spins of nothing more than the set of particles that exist in that space. It does not matter how sparse or dense those particles are, since you only need to make space "real" for the relationships that exist between those particles. If for example your universe has only two particles in it, you only need one line of space (Oscillatorland!) to define their relationship. Defining more space outside of that line is not necessary, for the simple reason that no other objects with which to relate exist outside of that line.
So regardless of how space comes to be -- my example above mostly shows what is possible and what kinds of relationships are required -- its very existence makes the concept of relations between entities as fundamental as it gets. You don't end with relations, you start with them.
Conclusions
Quite a few people reading this likely do not even believe in entanglement! So for you I am the cheerful ultimate heretic, the fellow who not only believes fervently in the reality of entanglement, but would make it literally into the very fabric of space itself. Sorry about that, but I hope you can respect that I have my reasons, just as I very much respect localism. Two of my top physicist favorites of all time, Einstein and Bell, were both adamant localists!
If you are a holographic universe type, I hope you will at least think about some of what I've said here. I developed these ideas in isolation from your community, and frankly was astonished when I finally realized its existence. I deeply and sincerely believe that you have a good and important idea there, but history had convoluted it in very unfortunate ways. Take a stab at my much simpler 3D web approach, and I think interesting things could start popping out fairly quickly.
If you are a MOND or dark matter enthusiast, think about the implications of space being a direct function of the presence or absence of matter. One of my very first speculations on this topic was that as this fabric of entanglement thins, you could very well get effects relevant to the anomalies that both MOND and dark matter attempt to explain.
Finally, I gave this fabric a name a long time ago, a name with which I pay respect to a very great physicist who literally did not get respect: Boltzmann. I call this 3D fabric of entanglements the Boltzmann fabric, represented (I can't do it here) by a lower-case beta with a capital F subscript. His entropic concepts of time become cosmic through this fabric.
To link to the above mini-essay, please copy and paste the following link:
[link:fqxi.org/community/forum/topic/3099#post_146100]An Exceptionally Simple Space-As-Entanglement Theory[/link]
It's Time to Get Back to Real String Theory
Terry Bollinger, 2018-02-26
Abstract. There is a real string theory. It is experimentally accessible and verifiable, at scales comparable to ordinary baryons and mesons, as opposed to the energetically impossible Planck foam version of string theory. It most likely has perhaps 16 or so solutions, as opposed to the 10^500 vacua of Planck foam string theory. It was abandoned in 1974. It's time we got back to it.
----------------------------------------
NOTE: A mini-essay is my attempt to capture an idea, approach, or prototype theory inspired by interactions with other FQXi Essay contestants. This mini-essay was inspired by:
A well-founded formulation for quantum chromo- and electro-dynamics by Wayne R Lundberg
----------------------------------------
A Long Quote from my Lundberg Essay Assessment
Most folks aren't aware of it, but nucleons like protons and neutrons have additional spin states that appear as heavier particles built from the same set of quarks. Thus in addition to uud forming a spin-1/2 proton, the same three quarks can also form a heavier particle with spin 3/2 (1 added unit of spin) and spin 5/2 (2 added units of spin). These three variations form a lovely straight line when plotted as squared mass versus spin, which in turn implies a fascinatingly regular relationship between mass and nucleon spin.
These lines are called Regge trajectories, and back in the late 1960s and early 1970s they looked like a promising hint for how to unify the particle zoo. Analyses of Regge trajectories indicated string-like stable resonance states were creating the extreme regularity of the Regge trajectories. These "strings" consisted of something very real, the strong force, and their vibrations were highly constrained by something equally real, the quarks that composed the nucleons (and also mesons, which also have Regge trajectories). These boson-like resonances of a string-like incarnation of the strong force were highly unexpected, extremely interesting, and experimentally accessible. Theorists were optimistic.
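For readers who want the relation these plots trace out: in the standard Chew-Frautschi form (the textbook statement, not something specific to Lundberg's essay), spin and mass are tied together roughly as J ≈ α(0) + α′M², a straight line in the (M², J) plane whose slope α′ is commonly quoted as being on the order of 0.9 GeV⁻² for both meson and baryon trajectories.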
Then it all went to Planck.
Specifically, the following paper caught on like wildfire (slow wildfire!) and ended up obliterating any hope of future funding for understanding the quite real, experimentally accessible, proton-scale, strong-force-based string vibrations behind Regge trajectories. Its authors did this by proposing what I like to call the Deep Dive:
Scherk, J. & Schwarz, J. H., Dual Models for Non-Hadrons, Nuclear Physics B, Elsevier, 1974, 81, 118-144.
So what was the Deep Dive, and why did they do it?
Well, it "went down" like this: Scherk and Schwarz noticed that the overall signature of some of the proton-sized strong-force vibrations behind Regge trajectories was very similar to the spin-2 signature of the (still) hypothetical gravitons that were supposed to unify gravity with the other three forces of the Standard Model. Since the emerging Standard Model was having breathtaking success in that period explaining the particle zoo, quantum gravity and the Planck-scale foam were very popular at the time... and very tempting.
So, based as best I can tell only on the resemblance of these very real vibration modes in baryons and mesons to gravitons, Scherk and Schwarz made their rather astonishing, revelation-like leap: They decided that the strong-force-based vibrations behind Regge trajectories were in fact gravitons, which have nothing to do with the strong force and are most certainly not "composed" of the strong force. The Planck-scale vibrations of string theory are instead composed of... well, I don't know what, maybe intense gravity? I've never been able to get an answer out of a string theorist on that question of "what is a string made of?" This is not an unfair question, since for example the original strings behind Regge trajectories are "composed" of the strong force, and have quite real energies associated with their existences.
I still don't quite get even the logic behind the Deep Dive, since gravity had exactly zero to do with either the substance of the strings (a known force) or the nature of the skip-rope-like, quark-constrained vibrations behind Regge trajectories. Nonetheless they did it. They took the Deep Dive, and it only ended up costing physics the following:
... 20 orders of magnitude of shrinking size, since protons are about 10^-15 meters across, and the gravitons were nominally at the Planck foam scale of 10^-35 meters (!!!), which is a size scale that is inaccessible to any conceivable direct measurement process in the universe; plus:
... 20 orders of magnitude of increased energy costs, which is similarly universally inaccessible to any form of direct measurement; plus:
... a complete liberation from all of those annoying but experimentally validated vibration constraints that were imposed on real nucleons and mesons by the presence of quarks and the strong force. That's a cost, not a benefit, since it explodes the range of options that have to be explored to find a workable theory. Freeing the strings from... well... any appreciable experimental or theoretical constraints... enabled them instead to take on the nearly infinite number of possible vibration modes that a length or loop of rope gyrating wildly in outer space would have; and finally:
... just to add yet a few more gazillion unneeded and previously unavailable degrees of freedom, a huge increase in the number of available spatial dimensions, always at least 9 and often many more.
And they wonder why string theory has 10^500 versions of the vacuum... :)
Oh... did I also mention that the Deep Dive has cost the US (mainly NSF plus matching funds from other institutions) well over half a billion dollars, with literally not a single new experimental outcome, let alone any actual working new process or product, as a consequence?
This was only to be expected, since the Deep Dive plunged all research into real string-like vibrations down into the utterly inaccessible level of the Planck foam. Consequently, the only product of string theory research has been papers. This half a billion dollars' worth of papers has built on itself, layer by layer of backward citations and references, for over 40 years. In many cases, the layers of equations are now so deep that no human mind could possibly verify them. Errors only amplify over time, and if there is no way to stop their propagation by catching them through experiments, it's the same situation as trying to write an entire computer operating system in one shot, without having previously executed and validated its individual components.
In short, what the US really got for its half billion dollars was a really deep stack of very bad programming. Our best hope for some eventual real return on string theory investments is that at least a few researchers were able to get in some real, experimentally meaningful research in all of that, to produce some real products that don't depend on unverifiable non-realities.
To link to the above mini-essay, please copy and paste the following link:
[link:fqxi.org/community/forum/topic/3099#post_146108]It's Time to Get Back to Real String Theory[/link]
Giovanni,
Well... hmm, it's Feb 27 but this is still working, at least for a while.
Thank you for your very kind remarks! I'll be sure to read your essay, as I try to do whenever anyone posts, even though the rating period is over.
(Or can we still post, just not rate? Sigh. I must read the rules again...)
Cheers,
Terry
Peter,
Thank you for the follow-up, but at 12:30 AM I'm not quite sure I followed all of that? I assume you did see my long posting at your site? I'll try to read your posting above again when I'm awake... :/ zzz
Cheers,
Terry
Ulla,
Thank you for your generous and kind remarks! It's past the rating period now, but I'll be sure to take a look at your essay tomorrow (today?)
Cheers,
Terry
Dear Terry,
there is no hurry to read my essay, if you want to do it. The forum remains open until the nomination of the winners (and even beyond), although I fear it will be very little frequented from now on.
Mine is the modest contribution of a non-specialist. Read it without obligation, when you have time.
Regarding the scoring system, I know it well enough, having participated in the last three contests. I feel able to say (and I'm not the only one) that it works pretty badly and is the worst aspect of the contest. The problem is that almost none of us uses a rigorous and correct voting pledge such as yours, and the score is often given out of sympathy, or resentment, or to return a high mark, or because absurd alliances and consortia come about...
As a rule, I have never asked anyone to score my essay, but I have certainly sometimes been influenced by the requests of others, or by a too-high rating that I received, or by the desire not to disappoint someone, and I certainly ended up rating too highly some essays that perhaps did not deserve it, or that I simply could not understand. My mistake, no doubt.
Fortunately, I rarely participate in the scoring and, unfortunately, having difficulties with English, rarely in the discussions either; but others hold back far less, and this way of doing things negatively affects the final community ranking. Thus, some objectively mediocre essays often end up in the upper part of the ranking, while other objectively valid ones end up in undeservedly low positions. Your own essay, in my opinion one of the best, if not the best, deserved to end up in a position much higher than the one it had (after blasts of 1 or 2 given without adding any motivation). But I also think of other contributions, like that of Karl Coryat, which you appreciated and discussed in detail. Or of even more neglected essays, like that of A. Losev, which seemed to me very interesting and original. Or the suggestive one by Joe Becker (founder of the Unicode system!), who may have been penalized both by his very shy and humble attitude and by his clearly holistic and metaphysical perspective (though similar to that of a great visionary scientist and philosopher like Leibniz). Or that of Bastiaansen, which certainly offers food for thought. But there are certainly many others, perhaps scored even lower but certainly valid, which I have forgotten or have not even read, because there are 200 essays and time is lacking...
You will ask me: why are you putting this in my thread, instead of writing it in a more appropriate and general context? In fact, these considerations may be out of context here, and I apologize for that. But they came to me immediately after the closing of the community vote, while I was reading some of your posts. Moreover, I have a little hope that your tireless, qualified, very correct contribution to this year's contest-forum can serve to make the FQXi community better, avoiding the risk of it becoming a confused and scientifically sterile ground of personalism and preconceptions.
Thanks again for all your contributions and, in particular, for the latest precious mini-essays, which will be for me a material for reading and reflection, in the coming days or weeks.
Cheers,
Giovanni
Giovanni,
I have finally figured out how to find posts like yours! I simply search by date, e.g. "Feb. 27" for this one. It has been very hard for my poor brain to find entries when they show up in the middle of the blogs, both in mine and in others.
Thank you for your positive and constructive comments! Also, thanks for that bit of info on how just the ratings close, not the commenting. I for one will be more likely to show up, not less. The ratings part is designed like a Hunger Games incentive program, so having it gone makes me feel like a more unfettered form of synergistic interaction is now possible.
I am particularly appreciative of your quick list of essays worth examining. I plan to look at them, hopefully all of them! I keep finding unexpectedly interesting points in so many of these essays.
Finally, please feel very free to post in my essay thread anytime you want to. It never even occurred to me that it might not be the right "spot" for you to do so. (Come to think of it, considering some of the humongous posts that I've put on other folks' threads, I guess it's sort of a given that I'm not too worried about people cross-posting, isn't it?)
Cheers,
Terry
Terry,
Going back to what spawned string theory and Len Susskind's thoughts, an even simpler interpretation in another direction seems to yield a whole lot more useful stuff without infinite recursion; i.e. here: VIDEO Time Dependent Redshift. Are we locked in a circular one-way street without the exit of helical paths?
My present classic QM derivation emerged from a test of the model and SR components, via the 2015 top scorer: The Red/Green Sock Trick.
Might it not be time to step back and review other routes?
Peter
REPOSTED TO CORRECT FORMATTING ERROR NOT PRESENT IN PREVIEW! Adding: my comments below are to minimize some apparent misunderstandings.
Terry: NB: your time is valuable to me; so no need to rush! Seeking to minimize misunderstandings -- see below -- from the get-go, your comments follow [with some editing for efficiency] -- with some bolding for clarity (and sometimes emphasis).
TB: "Your title is intriguing; look at my signature line and its single-concept definition of QM and you can see why."
GW: Here it is: "(i) Quantum mechanics is simpler than most people realise. (ii) It is no more and no less than the physics of things for which history has not yet been written."
We agree: 'Quantum mechanics is simpler than most people realise.' I would add: It's little more than an advanced [and experimentally-supported] probability/prevalence theory. But please, for me, translate your 2nd sentence (ii) into a few more words: "(ii) It is no more and no less than the physics of things for which history has not yet been written = ..." ??
TB: "My queue on this last day is long."
GW: Rightly so! But (NB) the threads can remain open for years!!
TB: "But I will follow your link and a look at your essay."
GW: Please take your time with the essay and communicate directly by email (it's in the essay) when you have difficulties; especially if you're rusty with delta-functions in ¶13. I am here for critical feedback and questions, etc. And I cannot be offended.
TB: "Wow! That is one of the best arguments for locality that I think I've seen. I like your Bell-ish style of writing and focus on specifics."
GW: Tks.
TB: "You are of course in very good company, since Einstein was a localist."
GW: Yes; without doubt!
TB: "And Bell was a localist."
GW: ??? Not from my readings! For me, a true localist would have reviewed his theorem and spotted the error. Further, here's Bell's dilemma from as late as 1990:
'I cannot say that AAD is required in physics. I can say that you cannot get away with no AAD. You cannot separate off what happens in one place and what happens in another. Somehow they have to be described and explained jointly. That's the fact of the situation; Einstein's program fails ... Maybe we have to learn to accept not so much AAD, but the inadequacy of no AAD. ... That's the dilemma. We are led by analyzing this situation to admit that, somehow, distant things are connected, or at least not disconnected. ... I don't know any conception of locality that works with QM. So I think we're stuck with nonlocality ... I step back from asserting that there is AAD and I say only that you cannot get away with locality. You cannot explain things by events in their neighbourhood. But, I am careful not to assert that there is AAD,' after Bell* (1990:5-13); emphasis added.
*Bell, J. S. (1990). "[link:www.quantumphil.org./Bell-indeterminism-and-nonlocality.pdf]Indeterminism and nonlocality[/link]." Transcript of 22 January 1990, CERN, Geneva. In Driessen, A. & Suarez, A. (eds.) (1997), Mathematical Undecidability, Quantum Nonlocality and the Question of the Existence of God, 83-100.
TB: "I can't do a detailed assessment today -- too many equations that would need careful examination to assess your argument meaningfully -- but what I've seen at a quick look seems pretty solid."
GW: PLEASE: Do not get bogged down; send me emails when you have difficulties. For me, your time is precious!
TB: That said, there is an expanding class of pro-entanglement data anomalies that you need somehow to take into account:
ID230 Infrared Single-Photon Detector Hybrid Gated and Free-Running InGaAs/InP Photon Counter with Extremely Low Dark Count
GW: Terry: My theory expects "entanglement" to be strengthened with better equipment; and you [thankfully] next supply the supporting evidence!
TB: "This field has moved way beyond the Aspect studies. A lot of hard-nosed business folks figured out years ago that arguments against the existence of entanglement don't matter much if they can simply build devices that violate Bell's inequality. Which they did, and now they sell them to some very smart, physics-savvy customers who use them on a daily basis to encrypt some critical data transmissions."
GW: We agree, 100%.
TB: "Many of these customers would be, shall we say, upset in interesting ways if some company sold them equipment that did not work."
GW: NBB Why wouldn't it work? My theory would be kaput if it didn't!
TB: "Again, thanks for a well-argued essay! I'll try (no promises though) to take a closer look at your essay at some later (post-commenting-close) date. Again assuming the equations are solid, yours is the kind of in-depth analysis needed to sharpen everyone's thinking about such topics."
GW: Please take your time; every word of criticism is like a kiss from my wife.
Tingling in anticipation; with my thanks again; Gordon
More realistic fundamentals: quantum theory from one premiss.
Terry, while I liked her essay and criteria a lot, I'm sure that there are more of them than are really necessary. Especially when you consider that the mathematical uniqueness criterion can only be fulfilled by a cosmology with 11 dimensions, one of which is a cyclic variable. I don't know of any other besides my own theta-mass-time.
WRL
"The World's Most Famous Equation" is also one of the most misunderstood. First, it was first derived by Poincaré, not Einstein, and it is better written as
E_0 = mc^2
"Thus the 20 digit sequence could in principle be replaced by a short binary program that generates and indexes pi". Which would consume more memory than simply storing the original 20 digit string.
"In physics the sole criterion for whether a theory is correct is whether it accurately reproduces the data in foundation messages." A theory can be refuted without even running a single experiment. We have other criteria to evaluate data, including internal consistency checks.
"The implication is that a better way to think of physics is not as some form of axiomatic mathematics, but as a type of information theory". It is neither.
Challenge 2. Bosons are in reality virtual combinations of Fermions that arise in the formalism when one switches from a non-local real picture to the approximate local picture of QFT. All the properties of bosons are derived from the properties of fermions, including spin. E.g. for photons the available spin states are
(±1/2) - (±1/2) = 0, +1, -1, 0.
The Standard Model needs to postulate the properties of bosons: mass, spin, charge. I can derive those properties from first principles.
"There are after powerful theoretical reasons for arguing that gravity is not identical in nature to the other forces of the Standard Model. That reason is the very existence of Einstein's General Theory of relativity, which explains gravity using geometric concepts that bear no significant resemblance to the quantum field models used for other forces". Gravity can be formulated non-geometrically. So there is nothing special about it regarding this. On the other hand the gauge theory used in QFT for the other interactions can be given a geometrical treatment with the gauge derivatives playing a role similar to the covariant derivatives in GR, and the field potentials playing a role similar to the Christoffel symbols.
Juan Ramón González Álvarez,
Thank you for your interesting comments! It took me a while to realize that your essay was from back in 2012 (must have been an interesting year!), and that FQXi grants forward commenting access to all prior participants. That's good to know.
Poincaré was amazing! His math was so advanced in comparison to that of Einstein (who had to get his wife's help even to do the somewhat repetitious math of his SR paper) that I wonder how well Einstein could have followed it. Einstein's path to E = mc² was in any case very different and, frankly, a bit strange; he simply did not approach the problem the way Poincaré did.
In sharp contrast, Poincaré's more Maxwell-based argument in "La théorie de Lorentz et le principe de réaction" ("The Theory of Lorentz and the Principle of Reaction") is lucid, mathematically clear, and makes beautiful use of the work of both Maxwell and Lorentz. More than his equation per se, I like Poincaré's straightforward assertion that:
"... if any sort of device produces electromagnetic energy and radiates it in a particular direction, that device must recoil just as a cannon does when it fires a projectile."
----------
Regarding 20 digits from pi: sure, a full array at either end would be huge. If you wanted to be exact about the analogy, though, you would instead take the processing-versus-storage tradeoff and spend huge amounts of processing time to re-generate the pi sequence up to that point. It would be an insane way to compress data for any practical use, but of course that's not the point. The issue is that you have to be very careful about saying "this is the most compressed form of any data." So even if it took a month to generate the 20 digits, the short program for doing it that way still fully qualifies as a more compact way of telling someone at a remote site how to obtain those 20 digits.
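To make that processing-versus-storage tradeoff concrete, here is a minimal Python sketch of my own (the function names are just for illustration): the sender and receiver share only a short program, and a 20-digit run of pi is rebuilt from an (index, length) pair by re-running the well-known Gibbons unbounded spigot algorithm.

import itertools

def pi_digits():
    # Gibbons' unbounded spigot algorithm: yields the decimal digits of pi
    # (3, 1, 4, 1, 5, ...) one at a time, using only integer arithmetic.
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, t, k, n, l = 10 * q, 10 * (r - n * t), t, k, (10 * (3 * q + r)) // t - 10 * n, l
        else:
            q, r, t, k, n, l = q * k, (2 * q + r) * l, t * l, k + 1, (q * (7 * k + 2) + r * l) // (t * l), l + 2

def pi_run(index, length):
    # Regenerate `length` consecutive digits of pi starting at `index`
    # (index 0 is the leading 3). Pure processing cost, no stored table.
    gen = pi_digits()
    for _ in range(index):
        next(gen)
    return "".join(str(d) for d in itertools.islice(gen, length))

# The only thing stored or transmitted is the pair (index, length);
# everything else is recomputed. Slow, but unquestionably compact.
print(pi_run(0, 20))   # -> 31415926535897932384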
----------
I do like and feel there is some real conceptual merit to thinking of a boson as a "combination", in some sense, of two mutually-canceling charged fermions; that is, the photon is in "some sense" a combination state of the positron and electron. But the math reigns in the end, as with any conceptual model.
For example: In your reply to Challenge 2, the spin set created by combining the spin-½ electron and the spin-½ positron is indeed {−1, 0, +1}, but photons are of course always spin magnitude 1, never spin 0. Perhaps you are talking about their measured spins at a detector? In any case, the question is not whether you can express photons as fermion pairs, but how that would induce the symmetric-antisymmetric relationship that so sharply distinguishes fermions from bosons. If you feel that the composite-fermion approach can lead to that, I'd suggest you try to provide a detailed argument for how.
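For reference, the standard bookkeeping for adding two spin-½ constituents (plain angular-momentum addition, independent of any composite-photon model) is:

\begin{align*}
  \tfrac{1}{2} \otimes \tfrac{1}{2} &= 1 \oplus 0
    && \text{(a spin-1 triplet plus a spin-0 singlet)}\\
  S_z &\in \{+1,\ 0,\ 0,\ -1\}
    && \text{(four } S_z \text{ projections in all)}
\end{align*}

A real photon lives entirely in the spin-1 sector, and only the two helicity states S_z = ±1 propagate; there is no spin-0 photon and no longitudinal photon. Any composite-fermion picture therefore has to explain both why those extra states drop out and why the composite obeys Bose rather than Fermi statistics.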
----------
I've downloaded your 2012 essay and briefly scanned it. I have a sneaking suspicion, from that and from your assertions above (some of them unexplained), that your immediate reaction to quite a few ideas in physics is extreme skepticism. I'll try to look at your essay more closely as time permits, with the qualification that I have a long queue of both comments and essays from 2017 that I need to get to first.
Thanks again,
Terry
Gordon,
Good comments, wow. I've had some difficulty (external factors) getting back to my queue, and this is not a complete reply. But two quick items:
-- When I say "QM is the physics of that for which history has not yet been written," probably the best way to explain it is Feynman's integral-over-all-possible-histories QED concept. What that concept says is remarkably simple: If you track every possible way that an event could happen, from its start to all points in the future that could be touched by that event, then add them all together using the particle phase along those paths, you end up (voilà!) with, well, the quantum wave function for the event. The paths whose phases match up reinforce each other, and give the highest probability outcomes (a toy numerical sketch of this phase-summing follows below).
That is, every wave function can also be interpreted as a "bundle" of possible histories, but only if the wave function has not yet been "collapsed". And by "collapse" I really mean only this: the wave function remains such a bundle until you poke it hard enough to force it to say which of those many possible histories has to produce an actual event or particle. Extracting such information, such as by letting an electron wave function hit a photodetector, creates history. There really is no meaningful distinction between the two: information is history.
In most presentations the "history" or path implications of collapsing a wave function are not emphasized, in part, I think, because people are uncomfortable with the idea that some parts of the past have not yet been set. But if you detect a photon whose wave function is a hundred light years in diameter (it happens all the time!), you are inevitably also setting a "history" for that photon that causes it to land on earth and not on some distant star. For pilot wave folks this is flat-out trivial: the "real" particle was always headed to earth! For folks like me who respect but cannot accept the pilot wave model, it gets... complicated, and requires a rather ragged-edge concept of when the past finally gets set.
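Here is the toy sketch mentioned above, my own illustration in deliberately crude units with m = ħ = 1 (a cartoon of the path integral, not a real calculation): paths that stay near the classical straight line contribute phases that cluster and reinforce, while wildly different paths contribute phases that spread around the circle and largely cancel.

import numpy as np

rng = np.random.default_rng(0)

# Toy "sum over histories" for a free particle in 1D.
# Every history is a piecewise-linear path pinned at x=0 (t=0) and x=1 (t=1),
# and contributes a phase exp(i * S), where S is its classical action.

N_STEPS = 20
DT = 1.0 / N_STEPS

def action(path):
    # Free-particle action: S = sum over segments of (1/2) * (dx/dt)**2 * dt
    dx = np.diff(path)
    return 0.5 * np.sum((dx / DT) ** 2) * DT

def random_path(wiggle):
    # Straight-line (classical) path plus random wiggles of a chosen size,
    # with both endpoints pinned.
    straight = np.linspace(0.0, 1.0, N_STEPS + 1)
    noise = wiggle * rng.standard_normal(N_STEPS + 1)
    noise[0] = noise[-1] = 0.0
    return straight + noise

for wiggle in (0.02, 0.2, 1.0):
    phases = np.array([np.exp(1j * action(random_path(wiggle))) for _ in range(5000)])
    # |average phase| near 1: phases cluster, paths reinforce (near-classical bundle).
    # |average phase| near 0: phases spread around the circle and largely cancel.
    print(f"wiggle {wiggle:4.2f}: |average phase| = {abs(phases.mean()):.3f}")

Running it shows the |average phase| dropping from near 1 for tightly constrained paths to near 0 for wildly wiggling ones, which is the whole "phases match up and reinforce" story in miniature.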
-- Bell: Argh, I don't recall the reference, but I can assure you with something like 99% confidence that Bell was trying to disprove entanglement. He was a pilot wave person and proud of it, saying it helped him come up with his theorem (that part at least I think is from Speakable and Unspeakable).
The reason he comes across as the opposite is, I'm pretty sure, that he bent over backwards to not seem biased. He truly did not want to be one of those people who adamantly finds what they want to find; he wanted the data to speak for itself.
Enough, it's late...
Cheers,
Terry
Biomolecular Renormalization: A New Approach to Protein Chemistry
Terry Bollinger, 2018-03-05
Abstract. In every cell in your body, hundreds of proteins with very diverse purposes float in the same cytosol fluid, and yet somehow rapidly and efficiently carry out their equally diverse tasks including synthesis, analysis, demolition, replication, and movement. Based on an earlier 2017 FQXi Essay contest mini-essay on the importance of renormalization and the Nature paper below, I propose here that the many protein chemistry pathways that go on simultaneously in eukaryotic and prokaryotic cells are enabled, made efficient, and kept isolated by a multi-scale biomolecular renormalization process that breaks each interaction into scale-dependent steps. I conclude by discussing ways in which this concept could be applied both to understanding and creating new biomolecules.
----------------------------------------
NOTE: A mini-essay is my attempt to capture an idea, approach, or prototype theory inspired by interactions with other FQXi Essay contestants. This mini-essay was inspired by:
1. What does it take to be physically fundamental by Conrad Dale Johnson
2. What if even the Theory of Everything isn't fundamental by Paul Bastiaansen
3. The Laws of Physics by Kevin H Knuth
4. The Crowther Criteria for Fundamental Theories of Physics
5. The Illusion of Mathematical Formality by Terry Bollinger (mini-essay)
Non-FQXi References
6. Extreme disorder in an ultrahigh-affinity protein complex, March 2018, Nature 555(7694):61-66. Article in ResearchGate project Novel interaction mechanisms of IDPs
----------------------------------------
Background: Scale-Dependent Protein Interactions
In the March 2018 Nature paper Extreme disorder in an ultrahigh-affinity protein complex, the authors provide a fascinating and extremely detailed description of how certain classes of "intrinsically disordered proteins" (IDPs) can bind together based initially on large-scale charge interactions that are then followed by complex and remarkably disorderly bindings at smaller size scales. The purpose of this essay is not to analyze that specific paper in detail -- the excellent paper does that very well for its intended biochemistry audience -- but to show how an external, physics-derived, scale-dependent renormalization framework can be used not only to provide an alternative way to look at the interactions of these proteins, but also to understand a broad range of large-molecule interactions in a new and potentially more unified and analytical fashion. This broader framework could in principle lead to new approaches to both understanding and designing proteins and enzymes for specific objectives, such as how to bind to a wide range of flu viruses.
The Importance of Approximation-At-A-Distance
The initial approach of two IDP proteins via a simple, large-scale difference of electrical charge appears to be an example of biological, multi-scale, physics-style "renormalization." By that I mean that the proteins are interacting in a hierarchical fashion in which large, protein-level charge attractions initiate the process while the proteins are still at some distance from each other and details are irrelevant due to charge blurring. This is the central concept of renormalization in, say, the QED theory of electron charge: at large distances (scales) you can approximate the electron charge as a simple point, much as you are approximating the complex protein charge as a "lump charge" in the first stage.
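The "charge blurring" intuition has a compact standard form. For any bounded charge distribution, the electrostatic potential at distance r is a multipole series, so at long range only the total (monopole) charge matters and each finer level of structure is suppressed by an extra power of 1/r:

\begin{align*}
  V(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\left[
    \frac{Q}{r}
    + \frac{\mathbf{p}\cdot\hat{\mathbf{r}}}{r^{2}}
    + \frac{1}{2}\sum_{ij}\frac{Q_{ij}\,\hat{r}_i\hat{r}_j}{r^{3}}
    + \cdots
  \right]
\end{align*}

where Q is the total charge, p the dipole moment, and Q_ij the (traceless) quadrupole moment. This is exactly why a distant protein can be treated as a single "lump charge", and why that approximation has to be renormalized away as the two molecules approach.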
As the proteins approach, more detailed patterns grow close enough to become visible, and the initial lump-protein-charge model fails. One must at this point "renormalize," that is, drop down to a smaller, more detailed scale that allows analysis in terms of smaller patterns within smaller regions of the protein. In the case of the dynamic and exceptionally disorganized IDP proteins, these later stages result in surprisingly strong bindings between the proteins. More will be said later about this intriguing feature, which I believe can be reinterpreted as a more complicated process that only appears to be random and disorganized from an outside perspective. It is at least possible, based on a renormalization analysis, that this "randomness" is actually a high-density, multi-level transfer of data. This transfer would be enabled by the large number of mobile components of the protein behaving more like cogs and wheels in a complicated machine than as truly random parts. Alternatively, however, if binding truly is the top priority for the proteins, the moving parts could also accomplish that without using the resulting bindings as data.
Broadening the Model: Multi-Level Attraction and Rejection
Even more interesting than detailed binding as the proteins grow closer, however, is the possibility that the interactions at that level reject rather than encourage further interaction. Such cases might also be very common, possibly even dominant. You would have a "dating service" that allows the proteins to spend a small amount of time and mobility resource to check out a potential match, but then quickly (and this is important) realize at low cost that the match will not work. Amplify such low-cost rejections by huge numbers of protein types and individual instances, and the result is a very substantial overall increase in cellular efficiency.
If however the next level of charge-pattern detail does encourage closer attraction, the result would be to head down the path of repeated downward renormalization of scale, as individual sheets and strands move close enough to "see" more detail. If the proteins were exact matches to begin with, then renormalization (which in this context just means "scaling down to see greater levels of charge pattern detail") would proceed all the way down to the atomic charge level. The "dating service" would be a success, and the match accomplished. But more importantly, it would be accomplished with high efficiency, by avoiding getting into too much detail too quickly.
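As a purely illustrative sketch (hypothetical 1-D charge patterns, no real protein data), the coarse-to-fine "dating service" logic can be written in a few lines of Python: compare cheap, blurred summaries of two charge patterns first, and pay for finer comparisons only when the coarser level looks promising.

import numpy as np

def coarse_grain(charges, level):
    # Blur a 1-D charge pattern by summing it into 2**level equal blocks.
    blocks = np.array_split(np.asarray(charges, dtype=float), 2 ** level)
    return np.array([b.sum() for b in blocks])

def multiscale_match(a, b, max_level=4, tolerance=1.0):
    # Compare two charge patterns coarse-to-fine. Complementary patterns
    # (a ~ -b) pass every level; any mismatch rejects immediately, before
    # the finer (more costly) comparisons are ever made.
    for level in range(max_level + 1):
        residual = coarse_grain(a, level) + coarse_grain(b, level)
        if np.max(np.abs(residual)) > tolerance:
            return False, level        # cheap rejection at a still-blurred scale
    return True, max_level             # survived every scale: treat as a match

# Hypothetical 16-site charge patterns (pure invention for illustration):
probe  = np.array([+2., +1., +1., +2., -1., 0., 0., -1.,
                   +1., 0., 0., +1., -2., -1., -1., -2.])
target = -probe                        # exact electrostatic complement
decoy  = np.zeros(16)                  # same total charge, none of the structure

print(multiscale_match(probe, target))  # (True, 4): survives every level of blurring
print(multiscale_match(probe, decoy))   # (False, 1): rejected after one cheap comparison

The point of the sketch is only the control flow: a near-perfect complement survives every level of blurring, while a molecule with the right overall charge but the wrong fine structure is turned away after a single inexpensive comparison.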
Broader Implications of Multi-Scale Protein Interactions
There are a number of very interesting potentials in such a renormalization interpretation of protein-to-protein binding. Importantly, most of these potentials apply to pretty much any form of large-biomolecule binding, emphatically including DNA, and (to me even more interesting) enzymatic creation of novel molecules. These potentials include:
o Efficient, low-time-cost, multi-stage elimination of non-matches.
Proteins (or DNA) would be able to approach at the first scale level based on gross charge, then quickly realize there is no match, and so head off to find the right "machinery" for their tasks. The efficiency issue is huge: Repeated false matches at high levels of detail would be very costly, causing the entire cell to become very inefficient.
o Increased probability of correct protein surface matchups.
Or, conversely: Lower probabilities of protein matchup errors. A huge but somewhat subtle advantage of multi-scale attraction is that it gives each new level of smaller detail a chance to "reorient" its components to find a better local match. One way to think of this advantage is that the earlier larger-scale attractions are much like trip directions that tell you which interstate highway to take. You don't need detail at that level, since there will in general be only one interstate (one "large group area match") that gets you to the general region you need for a more detailed matchup. Only after you "take that interstate" and approach more closely do the detailed "maps" show up and become relevant.
o Complex "switch setting" during the multi-scale matchup process.
Since proteins are not just static structures but nano-scale machines that can have complex levels of local group mobility (more later on the implications of that), such lower-scale matchups can be more than just details showing up at the finer scales. They can also re-orient groups and structures, which in turn can potentially "activate" or "change the mode" of one or both proteins, much like turning a switch once you get close enough to do so. These "switches" would of course themselves be multi-scale, ranging e.g. from early large-scale reorientations of entire beta sheets down to later fine-scale rotations of side groups of individual amino acids. What is particularly interesting about this idea is that you potentially could program remarkably complex sequences in time and space of how such switches would be reset. There is potential in multi-scale, multi-time switch setting for a remarkable degree of relevant information to be passed between proteins.
o Multi-scale adjustment of both specificity and "stickiness".
As with gecko feet, if the goal of the protein is aggressive "grabbing" of some broad class of proteins, this can be programmed in fairly easily via the multi-scale model. It works like this: If the purpose of the protein is to bind an entire class of targets based on overall large-scale charge structure (and please note the relevance of this idea to, e.g., ongoing efforts toward universal flu vaccines), then the next lower level of scale in the protein should be characterized by extreme mobility of the groups that provide matching, so that they can quickly rotate and translate into positions that allow them to match essentially any pattern in the target molecule.
Conversely, if certain patterns at lower scales indicate that the target is wrong, then those parts of the program should present a rigid, immobile charge pattern upon closer approach. Mobility of groups thus becomes a major determinant both of how specific the protein is, and of how tightly it will bind to the target.
o Energetic activation of low-probability chemical reactions.
This is more the enzyme interpretation, but it's relevant because the multi-scale model provides a good way to "lock in" just the right sequence of group repositionings to create highly unlikely binding scenarios. Imagine first large groups, then increasingly smaller and more specific groups, all converging in attraction down to a point where some small group of atoms is forced into an uncomfortable positioning that normally would never occur. (This is a version of the multi-level-switch scenario, actually.)
At that point a good deal of energy is available due to the higher-level matchups that have already occurred; the target atoms are under literal pressure to approach in ways that are not statistically likely. And so you get a reaction that is part of the creation of some very unlikely molecule. This is really quite remarkable given the simplicity and generally low overall energy level of amino-acid-based sequences, yet it comes about fairly easily via the multi-level model.
Another analogy can be used here: Imagine trying to corral wild horses that have a very low probability of walking into your corral spontaneously. Multi-scale protein matchup energetics are then like starting with large-scale events, say helicopters with loudspeakers, as the first and largest-scale way of driving the horses into a certain region. After the horses get within a certain smaller region, the encirclement process is scaled down (renormalized) to use smaller ground vehicles. The process continues until "high energy" processes such as quickly setting up physical barriers come into play, ending with full containment.
o Enablement and isolation of diverse protein reaction systems within the same cytosol medium.
The idea that molecules can both immediately and at low cost reject interactions not relevant to their purpose is another way of saying that even if a huge variety of molecules with very diverse purposes are distributed within the same cytosol, they can behave in effect as if they do not "see" any molecules other than the ones with which they are designed to react. These subnetworks thus can focus on their tasks with efficiency and relative impunity against cross-reactions.
There is a fascinating and I think rather important corollary to this idea of multi-scale enabled isolation of protein chemistry subnetworks, which is this: It only works if the proteins are pre-structured to stay isolated. That is, on average I would guess that high levels of mutual invisibility between protein reaction subnetworks are unlikely to arise by accident, and that the subnetworks must in effect agree in advance on certain "multi-scale protocols" for distinguishing themselves from each other. This distinction would begin, and be most critical, at the largest and most efficient scales of charge blurring, the same scales that the Nature paper's abstract describes.
So, a prediction even: Careful analysis of the charge profiles of the many types of proteins found in eukaryotic (and prokaryotic, likely more accessible though not the Nature paper's main focus) cells will reveal multi-scale isolation of multiple subnetworks of interactions, based first on high-level, "blurred" charge profiles between the proteins, with additional isolations at lower scales. It should be possible to show statistically that the overall level of isolation between the subnetworks is extremely unlikely unless all such reaction paths share the charge-profile equivalent of a registry in which each reaction subgroup has its own multi-scale "charge address".
o Possible insights into the protein folding problem.
Finally, it is worth noting that the hierarchical guidance concept that underlies biomolecular renormalization could well have relevance to the infamous multi-decade protein folding problem, which is this: How does a simple string of amino acids fold itself into a large and precisely functioning protein "machine"? This feat is roughly equivalent to a long chain of about twenty different link types somewhat magically folding itself into some form of complicated machine with moving parts.
Either directly through multi-scale attractions or indirectly through helper molecules, it is at least plausible that biomolecular renormalization may play a role in this folding process. With regard to helper molecules, one intriguing hypothesis (nothing more) is that previously folded proteins of that same type could provide some form of multi-scale guidance for how to fold new proteins.
While an intriguing idea, it is also frankly unlikely, for the following reason: Such assistance would almost certainly require the existence of some class of "form transfer" helper molecules that would read the folding of existing molecules and present that information to the folding process. It is hard to imagine that such a system could exist and not have already been noticed.
Nonetheless, the concept of folding-begets-folding has an intriguing appeal from a simple information transfer perspective. And in one area it would offer a very interesting resolution to a long-standing mystery of large biomolecules: prions. Prions are proteins that have folded or refolded into destructive forms. Once established, these incorrectly folded proteins show a remarkable, even heretical ability to reproduce themselves by somehow "encouraging" correctly folded proteins to instead adopt the deleterious prion folding.
Folding-begets-folding would help to explain this mysterious process by making it a broken version of some inherent mechanism that cells have for reproducing the folding structures of proteins. Whether any of this is possible, and whether if so it is related to biomolecular renormalization, is an entirely open question.
Conclusions and Future Directions
As a concept, biomolecular renormalization appears to have good potential as a framework not only for understanding known and recently uncovered protein behaviors, but also for providing a more theory-based approach to designing proteins and enzymes. It may also provide insights into cell-level biological processes that previously have seemed opaque or mysterious under other forms of analysis.