Mr. Bollinger

In your essay you cherish simplicity (and for good reasons) and I admire the elegance of your style. Let me ask something though: how do you recognize that simplicity in this contest? Should that simplicity be interpreted in just one way?

I am asking this because I noticed a confusing comment of yours regarding an essay that, in terms of the simplicity you invoke, points more or less in the same direction.

Thank you again for the elegant and personal approach to simplicity that emerges from your essay (please don't tell me it is not emergent :) )

Joyfully and respectfully Silviu

Dear Terry Bollinger,

Thanks for a most enjoyable essay. I first fell in love with information theory in 1967 when I encountered Amnon Katz's "Statistical Mechanics: an Information Theory Approach". Later I realized that ET Jaynes had done this circa 1951, and had noted that equating thermodynamic entropy to information entropy led to a lot of nonsense theorems being proved.

Your Kolmogorov approach reminds me somewhat of 'Software Physics', circa 1977, relating the complexity of software to the number of operations and data types. I discovered the literal truth of this when I designed and built a prototype for Intel and Olivetti while they kept adding functionality (more operations) to the original spec.

Anyway, I found your essay easy to read and understand and significant for the question "What is fundamental?" Congratulations.

You say "the defining feature of the message is that it changes the state of the recipient." I have in a number of comments remarked that energy flows through space. If it crosses a threshold and changes the structure (or state) of a physical system, it 'in'-forms that system and the information, or record, or number of bits, comes into being. It has meaning only if a codebook or context is present: "One if by land, two if by sea." The proof of information is in the pudding, in my opinion it is energy that flows across space, there is nothing "on top of" that energy that is 'physical information'. If the frequency or modulation or what have you causes the change of state, that change is information (if it can be interpreted). While messages carrying "information" provide a very useful way to formulate the problem, I believe many physicists project this formulation on to reality and then come to believe in some physical instantiation of information aside and apart from the energy constituting the message. In that sense I enjoyed your section "Physics As Information Theory" in terms of "foundation messages".

I believe your second and third challenges have essentially the same answer, but current theories based on false assumptions get in the way - one reason I am currently working to uncover the false assumptions.

I like that you mention Pauli's treatment of spin. His projection of the 'qubit' formulation onto Stern-Gerlach data was one of the first (after Einstein's projection of a new time dimension onto every moving object) instances of physicists accepting the utility of projecting mathematical structure onto physics, and then coming to believe in the corresponding physical structure. My belief is that until the current belief in the structure of reality imposed by mathematical projections is overcome, your second and third challenges will not be satisfied. For brief comments on related topics, review three or four comments prior to yours on my essay page.

You conclude by suggesting new insights are most likely to come from data. My belief is that the only solution to our current problems (your three challenges for instance) is to unearth the false assumptions that are built into our current theories and have been for generations. That is not a popular proposition. Almost all working physicists would prefer new ideas to add to their toolbox, not ideas that contradict things they have taught and published and that got them where they are -- another reason to value FQXi. I hope your experience is such that you will come back year after year. And I hope you enjoy your newly gained retirement. It's a good time in life.

Best regards,

Edwin Eugene Klingman

    Dear Terry Bollinger,

    Thank you for an interesting essay.

    I was wondering about the relationship between Kolmogorov Complexity and Occam's razor? Do simpler things really have lower KC? Also what about Bennett's logical depth? Why is KC better than logical depth?

    Please take a look at my essay.

    Thank you again for a great read.

    All the best,

    Noson

      Dear Noson,

      Thank you for your excellent and insightful question! The answer is yes. If you translate a solution that has survived Occam's Razor into binary form (that is, into software), then the binary form of that solution will exhibit both the brevity and the high information density of a (near) Kolmogorov minimum.

      One can think of the "side trips" of a non-compact message as the information equivalents of the various components of a Rube Goldberg contraption. Simplifying the message thus becomes the equivalent of redesigning an information-domain Rube Goldberg contraption to get rid of unnecessary steps. The phrase "Occam's Razor" even suggests this kind of redesign, since for both messages and physical Rube Goldberg contraptions the goal is to cut away that which is not really necessary.

      One point that I think can be a bit non-intuitive is that solutions near their Kolmogorov minima are information dense -- that is, they look like long strings of completely random information. The intuitive glitch comes in here: If the goal of Occam's Razor is to find the simplest possible solution, how can a Kolmogorov minimum that is packed to the gills with information be called "simple"?

      The explanation is that to be effective, messages -- strings of bits that change the state of the recipient -- must be at least as complex as the tasks they perform. That means that even an Occam's Razor solution must still encode non-trivial information, and depending on the situation, that in turn can translate into long messages (or lengthy software, or large apps).
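
      Here is a minimal sketch of that point in Python, using the standard zlib module as a crude stand-in for an ideal compressor (my illustration only; true Kolmogorov complexity is uncomputable, so any real compressor merely gives an upper bound):

      import zlib

      redundant = b"the quick brown fox " * 500   # a highly repetitive message
      once = zlib.compress(redundant, 9)          # first pass strips the redundancy
      twice = zlib.compress(once, 9)              # second pass finds almost nothing left

      print(len(redundant), len(once), len(twice))
      # Typically the first pass shrinks the message dramatically, while the second
      # pass barely changes (or slightly grows) it: the output of the first pass
      # already "looks random" to the compressor, just as a near-Kolmogorov-minimum
      # message should.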

      If the desired state change in the recipient is simple in terms of how the recipient has been "pre-programmed" to respond (which is a very interesting issue in its own right), then the Kolmogorov minimum message will also be very short, perhaps as short as just one bit. But even though a single bit "looks" simple, it still qualifies as having maximum information density if the two options (0 or 1) have equal probability.

      The other Occam's Razor extreme occurs when the desired state change in the recipient requires a major restructuring or conversion, one that is completely novel to the recipient. That can take a lot of bits, so in that case Occam's Razor will result in a rather lengthy "simplest possible" solution. Notice however that once this new information has been sent, the message recipient becomes smarter and will in the future no longer need the full message to be sent. A new protocol has been created, and a new Kolmogorov minimum established. It's worth pointing out that downloading a new app for your smart phone is very much an example of this scenario!

      We see this effect all the time in our modern web-linked world. As globally linked machines individually become more "aware" of the transformations they are likely to need in the future -- as they receive updates that provide new, more powerful software capabilities -- the complexity of the messages one needs to send after that first large update shrinks dramatically.
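
      A similarly minimal sketch of that "pre-distributed context" effect, this time assuming zlib's preset-dictionary feature as the shared protocol (again my own illustration; the dictionary plays the role of the previously downloaded app):

      import zlib

      shared_context = b"status report: all subsystems nominal, battery at "
      message = b"status report: all subsystems nominal, battery at 87 percent"

      # Without any shared context on the receiving side:
      no_context = zlib.compress(message, 9)

      # With the same dictionary pre-installed on both sender and receiver:
      comp = zlib.compressobj(level=9, zdict=shared_context)
      with_context = comp.compress(message) + comp.flush()

      decomp = zlib.decompressobj(zdict=shared_context)
      assert decomp.decompress(with_context) == message   # receiver recovers it all

      print(len(no_context), len(with_context))
      # The second number is much smaller: most of the "meaning" now lives in the
      # shared context, so far fewer bits need to cross the wire.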

      This idea that Kolmogorov messaging builds on itself in a way that over time increases the "meaning" or semantic content of each bit sent is a fascinating and surprisingly deep concept. It is also deep in a specific physics sense, which is this: The sharing-based emergence of increasingly higher levels of "meaning" in messages began with the emergence of our specific spacetime and particle physics, and then progressed upwards over time across a spectrum of inorganic, living, sentient, and (particularly in the last century) information-machine based message protocols. After all, how could we identify the elements in a distant quasar if the very electrons of that quasar did not share the design and signatures of the electrons within our detection devices? We assume that to be so, but there is no rule that says it must be so. It is, for example, certainly conceivable that some distant quasar might be made of a completely different particle set from matter in our part of the universe. But if the universe did not provide these literally universally shared examples of "previously distributed" (e.g., by the big bang) information baselines, then such transfers of information would not even be possible.

      So here's an important insight into the future of at least our little part of the universe: Meaning, as measured quantitatively in terms of observable impacts on physical reality per bit of Kolmogorov minimum messages sent, increases over time.

      This idea of constantly expanding meaning is, as best I can tell, the core message of this year's deeply fascinating essay (topic 3088) by Nobel Laureate Brian Josephson, of Josephson junction fame. His essay is written in a very different language, one that is neither physics nor computer science, so it is taking me some time to learn and interpret it properly. But reading his essay has already prompted me to reexamine my thoughts from a year or so ago (on David Brin's blog, I think?) regarding the emergence, over the history of the universe, of information persistence and combinatorics. Specifically, I think focusing on "meaning," which I would define roughly as impact on the physical world per bit of message sent, may provide a better, cleaner way to interpret such expanding combinatoric impacts. When I reach the point where I think I understand Professor Josephson's novel language adequately, I will post comments on it. (I should note that I am already deeply troubled by one of his major reference sources, though Professor Josephson does a good job of filtering and interpreting that extremely unusual source.)

      Please pardon my overly long answer! You brought up a very interesting topic. I'll download your essay shortly and take a look. Thanks again for your comments and question!

      Cheers,

      Terry

      Dear Edwin Eugene Klingman,

      Thank you also for your thoughtful comments and generous spirit. I am pleased to see that I was reasonably on target in understanding several of your key points, since we seem to share a number of views that are definitely not "standard" according to prevalent physics perspectives.

      I'm going to cut to the chase on one point:

      May I suggest that what you seem to be proposing is that our universe is a state-machine computer simulation?

      The "now" (my term) of the post-GR Einstein ether would be the current state of that simulation. But more critically, your essay concept of a single universal time would no longer be time as we measure it within our universe. Instead, it would be the external time driving this universe simulator. That is why it is perfect time, time that is never affected by the local clock variations seen within our universe. It would be a form of time whose source is not even accessible from within our universe! Within our universe, you are instead forced (as Einstein was) to use the physical oscillatory behaviors of matter and energy -- of clocks -- as your only available definition of time. And as physical objects, they are of course fully subject to the rules of special relativity.

      Given your impressive computer background, I suspect that you may already be thinking along these lines, and are just being cautious in how you present such an idea to a physics audience. But even if that is true, there is a very lively subset of physics that likes this idea already, so you would not be alone.

      Also, if you define your universal time as external to our universe, folks who like the beauty and incredibly good experimental validation of every aspect of SR would breathe a lot easier when they read what you are saying. The SR concept of time remains just as Einstein defined it, using only physical clocks, so none of that is impacted. Instead, you would be introducing a new concept, an external, perfect, and truly universal time -- the clock of the simulator in which all these other clocks are running as simulations.

      So, just a thought and an idea for presenting your ideas in a way that might (?) help you get a bit more traction.

      I'm getting to your comments on my own essay thread, BTW, though likely not this evening. You do bring up lots of interesting points!

      Cheers,

      Terry

      Dear Terry,

      Thanks for your response. I think you're beginning to see how valuable FQXi comments are. For example, I learned from your response to Noson Yanofsky, in the comment that follows mine. The essays have a nine page limit, but there is no limit to how much information we can exchange in the comments!

      My dissertation, "The Automatic Theory of Physics" was based on the fact that any axiomatic formulation of physics (including special relativity) can be reformulated as an automaton. So I appreciate your suggestion that "universal time" is the 'external' trigger to the state sequencer that yields the (simulated) universe. However the theme of "Universe as a simulation" shows up every so often in these contests, and I always argue against it. My point, repeated in my comment above, is that "physics", our models of reality, are based on projection of mathematical structure onto physical reality - the more economical the better, whether Kolmogorov or Occam's razor. But I do not believe these structures are actually replicated in reality. Rather, I believe that the root of our current problems is based on structures imposed early, and now accepted as gospel. Clearly GR and QM are correct, so far as they go, but I believe they can be physically reinterpreted (retaining almost all mathematical structure, since it works) and a better theory would result.

      Your major point, if I'm reading you correctly, is that we don't measure time per se. We measure duration, based on imperfect clocks. Einstein imagined perfect clocks, and distributed them profusely, but they don't exist, and his space-time symmetry leads to nonsense that is not supported by reality. You mention experimental validation of "every aspect of SR", but reference 10 in my essay argues that length contraction has never been measured. And the space-time symmetry of SR is asymmetric in the Global Positioning System. So I question this "every aspect".

      Nevertheless, your informative comment gives me more to think about and can only improve my approach.

      Best regards,

      Edwin Eugene Klingman

      Dear Terry

      Thank you for your feedback on my essay.

      I will use a similar marking system to that used by you.

      What I liked:

      Easy to read. Well set out. Your core idea came through well.

      What I thought about as I read it:

      The initial idea seemed quite a lot like Occam's razor, shifted from a philosophical stance to a mathematical stance.

      Einstein's derivation of E=mc^2 was moderately complex at certain points, and each step re-expressed the last using more symbols, yet each says the same thing, so each must express the same level of fundamentality. The difference lies only in the reader. Fundamentality would then be in the eye of the beholder?

      There is an implicit assumption in your example of replacing a gigabyte file. Say I write this program, but before it ends, I pull the plug! The program is time dependent while the gigabyte file is more space dependent. This was a cool idea, but I don't think the file and the system are equivalent, only potentially equivalent. Also, reproducing the original file by a computer, even a perfect computer, would require an expenditure of energy under Landauer's 'information is physical' correlation, possibly more than it would cost simply to keep the original file. I mention this because it hints at hidden aspects that make some things seem more fundamental than they really are.
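
      A rough back-of-the-envelope sketch of that hidden cost (my own numbers, assuming nothing beyond Landauer's bound of k_B T ln 2 joules per irreversibly processed bit at room temperature):

      import math

      k_B = 1.380649e-23        # Boltzmann constant, J/K
      T = 300.0                 # room temperature, K
      bits = 8 * 10**9          # one gigabyte expressed in bits

      landauer_joules = bits * k_B * T * math.log(2)
      print(f"Landauer minimum for 1 GB of irreversible bit operations: {landauer_joules:.2e} J")
      # Roughly 2e-11 J: physically tiny, but strictly nonzero, which is the point.
      # "Just recompute the file" is never entirely free of physical cost.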

      I thought the reference to pi was quite fun. It occurred to me that given pi is irrational, if one had some way of referencing a point in its decimal expansion, then every finite number would be expressed within it and the larger the number, the more efficient it would be to specify the start and end point? I see this would likely have some limit due to a trade-off, but it is fun to think about. I suppose the same would apply to any irrational number.

      You say, 'the role of science is first to identify such pre-existing structures and behaviors, and then to document them in sufficient detail to understand and predict how they work.' For philosophical reasons, there is a well-known (Hume, Kant, Popper) strong epistemic schism between science and reality. We can never identify, from an empirical standpoint, pre-existing structures. We can only guess at them from hints provided by experiment. But you probably know this already. My essays were written exactly to span the gap from a rationalist perspective.

      On the Spekkens Principle. I haven't read that essay, but the terminology suggests his work echoes that of Raphael Sorkin and his Causal Set Theory, only interpreted in an informational universe context (I think). It is exactly the dynamics, the ultimate cause, which my work addresses.

      On your 'Challenge 3'. I am a bit surprised that you didn't mention, in your comments on my essay, my argument for an increasing baryon mass (whether due to intrinsic change of the baryon or extrinsic change due to the 'shape of space', for which my model is not yet sufficiently advanced), which would answer at least one part of this challenge, namely why the ratio of electron to proton mass is what it is.

      'The Standard Model qualifies overall as a remarkably compact...framework.' Oi! Are we reading the same model? That said, I see your point that compared to SUSY and string theory and so forth it is relatively compact, but with all those parameters, might they be hiding a very large set of other theories?

      I hope you take the time to read my previous essay in 'It from Bit'. In it there may be a gemstone of simplicity. My whole model comes from a single principle, and that principle is no more than an expression of our idea of equivalence.

      I am providing a short response to your comments on my paper.

      Best wishes,

      Stephen

        Stephen,

        Thank you for your thoughtful and intriguing comments! I will look at your comments on your essay thread. A few quick responses:

        -- Ironically, I'm not a big fan of E = mc², since I immensely prefer the energy-complete, Pythagorean-triangle-compatible form:

        [math]E^2=(\overrightarrow{p}c)^2+(mc^2)^2[/math]

        E = mc² addresses only the very special case of mass at rest, and so is not very useful in any actual physics problem. I also find it deeply amusing that Einstein's original paper in which he announced his mass-equals-energy insight, "Does the Inertia of a Body Depend on Its Energy Content?", uses L in place of E, and for some odd reason never actually gives the equation! Instead, Einstein uses only a sentence to describe an equivalent equation with c on the other side:

        "If a body gives of the energy L in the form of radiation, its mass decreases by L/c2."

        -- Your points about the subtleties of the gigabyte example are correct, very apt, and interesting! In some of my unpublished work I am careful to distinguish between "potentiality" (your word) and "actuality", and not just in physics, but especially in mathematics. The very ease with which our brains take potentials to their limits and then treat them as existing realities is both an interesting efficiency aspect of how capacity-limited biological cognition works, and a warning of the dangers of being sloppy about the difference. Computer science flatly forces one to face some of these realities in ways that bio-brain mathematical abstractions do not. This is why I think it is very healthy for mathematicians to contemplate what happens if they convert their abstract equations into software.

        In any case, issues such as the ones you mention are where the real fun begins! While a short essay is great for introducing a new landscape to a broader audience, there is a lot more going on underneath the hood, as you have just pointed out with your comments on both Einstein's mass equation and my gigabyte file example. An essay is at its best more like a billboard enticing viewers to visit new land, including at most a broad, glossy, overly simplified view of what that new land looks like. The real fun does not begin until you start chugging your vehicle of choice over all the &%$^# potholes and crevices that didn't show up in that enticing broad overview!

        -- I would note that while in principle it may be possible to find any random sequence of numbers somewhere in pi (wow, that would be an interesting proof or disproof...), there is a representation cost for the indices themselves that must also be taken into account. A full analysis of the potential of pi or other irrational numbers as data compression mechanisms would be mathematically fun, and who knows, might even end up pointing to some subset of methods with actual compression value. The latter seems unlikely to me, though, since the computation cost is likely to get huge pretty quickly.
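
        To make that representation cost concrete, here is a minimal toy estimate (my own sketch, assuming pi behaves like a normal number so that a specific n-digit string is first expected to appear somewhere near position 10^n):

        def naive_pi_pointer_cost(payload_digits: int) -> tuple:
            expected_position = 10 ** payload_digits       # crude order-of-magnitude expectation
            index_digits = len(str(expected_position))     # digits needed to name the position
            length_digits = len(str(payload_digits))       # digits needed to name the length
            return payload_digits, index_digits + length_digits

        for n in (4, 8, 16, 32):
            payload, pointer = naive_pi_pointer_cost(n)
            print(f"payload {payload} digits -> pointer {pointer} digits")
        # The pointer is never meaningfully shorter than the payload it replaces,
        # which is why the scheme is fun to ponder but unlikely to compress anything.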

        -- As an everyday passionate Hume in being, I Kant speak all that knowledgeably about the biological structures behind how our minds perceive reality. But I try hard to avoid the subconscious assumptions of truth that Popper up so often in physics and even math, when all one can really do is prove that at least some of these assumptions are false. Thus when writing for a broad audience, I try hard to make it easier for the reader to follow an argument and stay focused on it by simply explaining any necessary philosophical point "in line" and as succinctly as possible. The less satisfying alternative would be to give them a hyperlink to some huge remote body of literature that would then require a lot of reading on their part before they could even get back to the original argument... which by that point they have likely forgotten... :)

        -- I am also surprised that I did not mention your baryon mass idea in my comment, since I certainly was thinking about it when I wrote the comment! I guess I was just more focused on the experimental constraints on the idea?

        -- On the Standard Model being "relatively compact", that is, say, in comparison to string theory. But oi indeed, my emphasis was very definitely on the word "relatively"! The Standard Model as it stands is a huge, juicy, glaringly obvious target for some serious Occam outtakes and Kolmogorov compressions.

        By the way, as someone who is deeply convinced that space, a very complex entity if you think about it, is emergent from simpler principles, I found your constructive approach to it interesting.

        I will try to read It from Bit soon.

        Cheers,

        Terry

        Edwin Eugene Klingman,

        I am delighted and more than a little amused at how badly I misunderstood your intent! I would have bet that your answer was going to be "yes, I was just being subtle about simulation"... and I was so wrong!

        I'll look more closely to figure out why I got that so wrong. I may even look up your thesis, but no guarantee on that -- theses tend to be long in most cases!

        I downloaded your ref [10] and definitely look forward to looking at that one! I would say immediately that a lot of particle and especially atomic nuclei folks would vehemently disagree, since e.g. things like flattening have to be taken into account when trying to merge nuclei to create new elements. But that's not the same as me having a specific reference at hand, as you do here.

        So: More later on that point. Thanks for an intriguing reference in any case!

        Cheers,

        Terry

        Dear Terry,

        With great interest I read your essay, which of course is worthy of the highest praise.

        In Kadin's forum thread I found your questions, which are much more interesting and relevant than the FQXi questions.

        My opinion on these issues:

        (1) Entanglement is the only remote mechanism in the Universe for forming the force of interaction between the elements of matter; it is realized as a result of the interaction of de Broglie toroidal gravitational waves at common frequencies of parametric resonance.

        This quantum mechanism of gravity is shown in a photo of phenomena observed in outer space (essay 2017) "The reason of self-organization systems of matter is quantum parametric resonance and the formation of solitons" (https://fqxi.org/community/forum/topic/2806).

        For example, a molecule is a state of entanglement (interaction) of atoms at common resonant frequencies of the de Broglie toroidal gravitational wave complex (including tachyon waves) belonging to different levels of matter.

        (2) The full fundamental fermion zoo is described by simple similarity relations of the fractal structure of matter, on the basis of the parameters of the electron and the laws of conservation of angular momentum and energy.

        Fermions of different levels of matter are neutrinos for each other. All this is given in the essay 2018 "Fundamental" means the underlying principles, laws, essence, structure, constants and properties of matter (https://fqxi.org/community/forum/topic/3080).

        Also given are the ratios for the deterministic grids of all the main resonance frequencies of the zoo of toroidal gravitational waves (fundamental fermions), and comparisons are made with known observed resonant frequencies.

        (3) Recreating GR predictive power is possible only after understanding the existence of potential stability pits in all fundamental interactions, including the strong interaction.

        With such an understanding, the paradox of electrodynamics (why the orbital electron does not radiate) is easily and logically resolved.

        Potential stability pits (de Broglie toroidal gravitational waves, orbital solitons) are formed due to quantum parametric resonance in the medium of a physical vacuum.

        With understanding of potential pits comes an understanding of inertia and mass.

        (4) Clarifying waves vs superposed states - This is the result of the interaction of the de Broglie toroidal gravitational waves (fundamental fermions); it can be determined by solving classical quantum parametric resonance problems, for example using the Mathieu equations (as in radio engineering).

        The solutions of these equations can be represented as a Fourier series, which is actually a set of real toroidal gravitational waves interacting (entangled) in a system on deterministic grids of a set of resonance frequencies.

        I'm sorry to say that the idea «how very much like space curvature could create such observed effects» is, in my view, mistaken.

        Instead of curvature of space-time, there is a derivative of spatial coordinates in time. The equivalent of "curvature of space" is the speed of propagation of the gravitational interaction.

        I hope that my modest achievements can provide food for thought for you.

        Vladimir Fedorov

        https://fqxi.org/community/forum/topic/3080

          Vladimir,

          Thank you for your kind remarks and comments. I will take a look at your essay sometime today, Friday Feb 16.

          Cheers,

          Terry

          Hi Edwin Eugene Klingman,

          Let's get to the main point: You surely realize that the Curt Renshaw paper (your ref 10) contains no data whatsoever disproving special relativity? I assume you do, since you worded your description of the paper as "arguing" that SR length contraction does not exist, versus saying that the paper actually provides data contradicting SR.

          The Renshaw paper instead only asserts that when the NASA Space Interferometry Mission (SIM) satellite is launched in 2005 (it is an old paper), it will disprove SR, because the author says it will:

          "The author has demonstrated in several previous papers that the Lorentz length contraction likely does not exist, and, therefore, will not be found by SIM."

          SIM was supposed to be launched in 2005 but was cancelled. Its nominal successor, SIM Lite, was also cancelled. Thus no such data exists, either for or against SR. The title of the paper, "A Direct Test of the Lorentz Length Contraction", is at best misleading, although it could perhaps be generously interpreted as a paper about a proposed test that never happened.

          In sharp contrast to this absence of data, all of the effects of SR, including time dilation, relativistic mass increases, and squashing of nuclei, are unbelievably well documented by hundreds or thousands of people who use particle accelerators around the world. Particle accelerators easily explore velocities very close to the speed of light, and so can produce extremely strong signals regarding the effects of SR. Shoot, even ordinary vacuum tubes prove SR if you crunch the numbers, since the electrons must accumulate energy under the rules of SR. Denying the existence of this gigantic body of work, engineering, and detailed SR-dependent published papers is possible only by saying "I don't like that enormous body of data, so I will just ignore it."
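
          Here is the vacuum-tube arithmetic I had in mind (my own quick sketch, assuming a 25 kV accelerating voltage of the sort once common in CRTs):

          import math

          c = 2.99792458e8          # speed of light, m/s
          m_e = 9.1093837015e-31    # electron rest mass, kg
          e = 1.602176634e-19       # elementary charge, C
          V = 25_000.0              # accelerating voltage, volts

          kinetic = e * V                                 # energy gained by the electron, J

          v_classical = math.sqrt(2 * kinetic / m_e)      # Newtonian prediction
          gamma = 1.0 + kinetic / (m_e * c**2)            # SR: total energy = gamma * m_e * c^2
          v_relativistic = c * math.sqrt(1.0 - 1.0 / gamma**2)

          print(f"classical   : {v_classical / c:.3f} c")     # about 0.31 c
          print(f"relativistic: {v_relativistic / c:.3f} c")  # about 0.30 c, which is what experiments see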

          Bottom line: You gave me a data-free reference with a title that fooled me into thinking it had actual data in it. I like you and your willingness to explore alternatives, but I sincerely wish you had not done that. My advice: Go look at the thousands of papers from the particle accelerator community, and stop focusing on a single deceptively titled non-paper (or author, since Renshaw has other papers).

          Sincerely,

          Terry Bollinger

          Terry,

          There is a lot to like in your essay. It gives good guidance for simplicity-driven discovery, including your 3 challenges, which add a relatively out-of-the-box perception of simplicity-seeking processes of investigation. We all marvel over Einstein's equation and its simple epiphany of the duality of energy and mass. Euler's identity is intriguing to all, and fermion-boson spin baffling. And if we programmed in our careers, we staggered over the mind-numbing immensity of the mishmash of recursive equations that years of coding piled on. I speak of new approaches and discovery as well in my essay. Fundamental does involve fewer bits, but also new discovery in following a simpler thread, as you mention. I rate your essay high on several points. Hope you get a chance to look at mine.

          Jim Hoover

            Thanks Terry,

            There are questions that arise concerning our interests in simplification that are not commonly admitted. For example:

            1. Is simplification 'simply' a means of reducing complexity to a level of understanding that is acceptable (i.e. comfortable) and thereby communicable to others?

            2. Is the search for simplification acknowledgement that the subject under consideration is beyond the capacity of a person to comprehend in its totality?

            3. Is simplification a means by which one can get connected to people operating at a higher (or lower) level of consciousness?

            4. If simplification is assumed to promote a common cause, the purpose of which is to unite one's interests with those of others, at what point does the process of simplification become too simple and thereby confuse rather than clarify issues?

            5. Is the FQXi question so simple that it stimulates multiple lines of enquiry rather than serving to unite people in a common understanding?

            At issue is how many people are reasonably expected to benefit from any process of simplification. If that family is limited to professional physicists, mathematicians, or people that happen to speak a particular 'foreign' language, then is the quest for simplification really justified?

            Does being 'more fundamental in the sense of having the deepest insights' really contribute to understanding, or was Einstein the only person that truly understood what he was saying at the time?

            Thank you Terry for inviting us along your chosen path. You carry my best wishes.

            Gary.

              Jim,

              Thank you for your positive and thoughtful remarks! I look forward to seeing your essay, and will download a copy of it shortly.

              Cheers,

              Terry

              Gary,

              Thank you for your positive remarks! And wow, that is an intriguing set of questions you just asked!

              I like in particular that you are addressing the human and social interaction aspects of communications simplification. These are critical aspects of what I call collaborative or collective intelligence, that is, the "IQ" of an entire group of people, systems, and environments. The idea of a collective IQ addresses, for example, why free market economies, in comparison to authoritarian economies, tend to be hugely more clever, efficient, and adaptable in their use of available resources. The intelligences that emerge from free market economies are examples of intelligences that are beyond detailed human comprehension; that is precisely why the human-in-charge authoritarian structures are so ineffective.

              Intelligence is never fully spatially localized, and that is the source of many deep misunderstandings about its nature. Even when you do something as simple as read a book, you have extended your intelligence beyond the bounds of your own body, since you are now relying on an external memory. I would suggest that the main reason human intelligence can be oddly difficult to distinguish from animal intelligence is that it is not the innate cleverness of any one human that defines human intelligence, but rather the extraordinarily high level of networking in both time (writing) and space (language) that makes us unique. For example, a very clever bonobo can, I think, be individually not that different from a human in terms of innate problem solving and cleverness. But that same bonobo lacks the scaffolding of language, both internally (e.g. for postulating complex imaginary worlds) and externally (for sharing with other bonobos), and so is unable to "build on the shoulders of others," as we like to say.

              (A bit of a physics tangent: I would also suggest that intelligence is deeply intertwined with the emergence of information within our universe, in ways we do not yet fully comprehend. At the very origin of our universe the emergence of "I need my own space!" fermions in flat space enabled the emergence of what we call information, via persistent configurations of fermions within that accommodating flat space. But only obstinately persistent and isolationist fermions can readily create the kinds of unique configurations that we call history. Once the universe made history (information) possible, higher levels of complexity also became possible, including only very recently networked human intelligence.)

              Your particular questions can be answered specifically only by first grappling with the curiously probabilistic issues that underlie all forms of distributed intelligence, but which are particularly conspicuous in human interactions. Pretty much by definition, an intelligent system must deal with issues that cannot be fully anticipated in advance, but which also can be at least partially anticipated. These complex underlying probabilities in turn affect the nature of "simplifications" needed in any one messaging event. Three major simplification options include subsetting (sending only a small but specific subset), generalizing (capturing an overall message, but leaving the recipient to synthesize the details), and complete transfer (e.g., loading a new app onto a smartphone).

              The nature and state of the recipient is of course also critical, and just to confuse everything a bit more, often highly variable over time. The general trend is that due to the accumulation of earlier messages and their implications, meaning-per-message increases over time. That also complicates the idea of summarization, since what previously was an incomplete message may over time become entirely adequate. You can watch that effect in slow-but-real time as your Alexa or Hey Google or whatever grows a little smarter each week about how to interpret exactly the same human sentence.

              I will address your specific questions after I've read your essay. Again, thank you for such excellent questions!

              Cheers,

              Terry

              Hi Terry,

              I read your essay and I loved the last paragraph...

              If you see such a thread and find it intriguing, your first step should be to find and immerse yourself in the details of any high-quality experimental data relevant to that thread. Some obscure detail from that data could become the unexpected clue that helps you break a major conceptual barrier. With hard work and insight, you might just become the person who finds a hidden gemstone of simplicity by unravelling the threads of misunderstanding that for decades have kept it hidden.

              Now even though I am going to say this - I still loved your essay... Your conclusion is completely wrong and this is the reason why...

              I can assure you with utmost confidence that no high-quality experiment with its high-quality data will help in revealing what is hidden from us, which is what is required to figure out the theory of everything. Yes, I know I am making a very bold statement, but I just wanted you to hear this for future reference when physicists start looking into Gordon's Theory of Everything.

              The law of conservation of energy is what is preventing us from realizing what dark energy is... Yes it would actually break the law of physics to solve the theory of everything the way you are proposing. :)

              Anyway - if you have any interest - a very limited exposure to my theory is presented in my essay, "The Day After the Nightmare Scenario"

              All the best to you

              Scott S Gordon, MD/Engr.

              Hi Scott,

              I love it!!

              Yep, you are right: Details of past data are unlikely to do squiddly for such incredibly important issues as "dark matter" and "dark energy". You nailed me royally on that point! I was thinking in particular about overlooked issues in the Standard Model, but hey, even there the whole dark-dark issue has to come in somehow.

              I've added you to my reading list, which is getting a bit long, but I hope to get to it soon.

              Thanks again! Since I am Missourian by upbringing, it is the well-stated critiques that make my day. I've found by hard experience that if I start getting way too confident in my own ideas, I start looking and acting like the rear end of one of those Missouri mules. :)

              Cheers,

              Terry

              Hi Terry,

              I liked that you provided a simple model of what is fundamental. And your essay followed its own premise: "Fundamental as Fewer Bits". I really enjoyed reading it.

              In particular I liked:

              "Because gravity is so weak, principles of quantum mechanics drove the scale of such models into both extremely small length scales and extraordinarily high energies. This in turn helped unleash so many new options for "exploration" that the original Standard Model simply got lost in an almost unimaginably large sea of possibilities.[9]"

              I my essay "The Thing that is Space-Time" I attempt to pull gravity out of the Standard Model.

              I postulate that a graviton is not a boson, and that in general it has very low energy and a very long wavelength that spans all the matter in the universe. Thus it is a very low energy particle. I use three basic equations to produce this theory: 1. the Planck-Einstein relation, 2. E = mc², and 3. the equation for the Planck mass. The general overview is that the graviton is much like a guitar string anchored on opposing Planck masses. This quantum mechanical guitar string (the graviton) has a mass, and instead of supporting musical notes it supports the different frequencies of light (photons).

              Question: Would you take a look at my entry and let me know if this version of gravity has any merit in terms of meeting your criteria of having fewer bits? Any response appreciated!

              Thanks,

              Don Limuti

                Thanks, Terry Bollinger, for your criticism of my essay. I understand that it was written poorly. Its main aim is to attract researchers to continue Descartes' theory of everything, taking into account modern achievements in physics. Descartes' principle of the identity of physical space and matter allows us to recast Heisenberg's uncertainty principle as a principle of definiteness of the points of physical space, according to which an infinitely large momentum would be required to pin down a point of it. Look at my essay, "FQXi Fundamental in New Cartesian Physics" by Dizhechko Boris Semyonovich, where I showed how radically physics can change if it follows this principle. Evaluate it and leave your comment there. Do not allow New Cartesian Physics to fade away into nothingness.

                Sincerely, Dizhechko Boris Semyonovich.