Cristi,
Thank you for such kind remarks, and I'm glad you liked my essay!
Your first paragraph above is a very good analysis of issues that I decided not to put into the essay, both because of its length limits and to keep the focus on a general audience.
One way I like to express such issues is that the full Kolmogorov complexity can be found only by treating the functionality of the particular computer, quite literally its microprogramming in some cases, as part of the message. That's really not all that surprising, given that one of the main reasons for creating high-level instructions within a processor is to factor out routines that keep showing up in the operating system, or in the programs themselves.
I like your analysis of a two-language approach. Another way to standardize and ensure complete comparisons is to define a standardized version of the Turing machine, then count everything built on top of it as part of the message. That way basic machine functions and higher-level microcoded instructions alike become part of the full set of factoring opportunities in the message.
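Put slightly more formally (this is just the standard invariance theorem, not anything new of mine): for any two universal machines U and V, K_U(x) <= K_V(x) + c, where the constant c is the length of a program that lets U emulate V and does not depend on the message x. All of the machine-dependence gets swept into that constant, which is exactly why counting the machine description itself as part of the message keeps the comparison honest.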
Incidentally, a Turing-based approach also opens up opportunities for very unexpected insights, including at the physics level.
Why? Because many of the very instructions we have pre-programmed into computers contain deep assumptions about how the universe works. Real numbers are a good example, since their level of precision amounts to an inadvertent invocation of Planck's constant when they are applied to data for length, momentum, energy, or perhaps most importantly, time. If you are trying to be fundamental, a lot more caution is needed about how such issues are represented at the machine level, since there are multiple ways to approach the numeric representation of external data, and the operations on them.
Here's an example: Have you ever thought about whether a bundle of extremely long binary numbers might be sorted without having to "see" the entire length of each number first?
Standard computers always treat long numbers as temporally atomic, that is, they always process them as wholes. This means you have to complete processing of each long unit before moving on to the next one, and it's the main reason why we also use shorter bit lengths to speed processing.
But as it turns out, you can sort bundles of numbers of any length, even ones infinite in length, by using what are called comparators. I looked into these a long time ago, and they can be blazingly fast. They don't need to see the entire number because our number systems (also parts of the total program, and thus of our assumptions!) guarantee that the digits to the right can never add up to even one unit of the digit we are looking at. That means that once a sort order is found, no number of follow-up bits can ever change it.
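Here is a minimal sketch of that idea in Python, purely as my own illustration (the function name and the bit-at-a-time streams are inventions for this note, not anything standard): the comparison walks the two numbers most-significant-bit first and stops at the first position where they differ, because nothing downstream can overturn that difference. It assumes both streams are aligned to the same bit positions.

    from itertools import zip_longest

    def stream_compare(bits_a, bits_b):
        """Compare two binary numbers delivered most-significant-bit first.

        Returns -1, 0, or +1 the moment the order is decided; every bit
        after the first differing position is never examined, which is
        why the numbers being compared can be arbitrarily long.
        """
        for a, b in zip_longest(bits_a, bits_b, fillvalue=0):
            if a != b:
                return -1 if a < b else 1   # later bits cannot overturn this
        return 0                            # the streams never differed

    # The order here is decided at the third bit; the tails are irrelevant.
    x = iter([1, 0, 1, 1, 1, 0, 1, 0])
    y = iter([1, 0, 0, 1, 1, 1, 1, 1])
    print(stream_compare(x, y))   # prints 1

Plug a comparator like that into any standard sorting routine and the sort over the whole bundle only ever looks at as many leading bits as it actually needs.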
But all of this sounds pretty computer-science-abstract and number-crunchy. Could ideas that deep in computing theory really affect the minimum size of a Kolmogorov message about, say, fundamental physics?
Sure they could. For example, for any bundle of infinite-length integers there are only so many sorted orders possible, and so only so many states needed in the computing machines that track those numbers and their sorted order. What if those states corresponded to states of the quantum numbers for various fermions and the infinite lengths to their progression along a worldline, or alternatively to the various levels of collapse of a quantum wave function?
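To put a rough number on the "only so many" part: a bundle of n such numbers, all distinct, has at most n! possible sort orders, so a machine whose only job is to track the current order needs no more than log2(n!) bits of state; four numbers need at most log2(24), or about 4.6 bits, no matter how long the numbers themselves run.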
I really am just pulling those examples out of a hat, so anyone reading this should please not take them as hints! But that said, such radically different approaches to implementing and interpreting real numeric values in computers are good examples of the kind of thinking that likely will be needed to drop fundamental physics to smaller message sizes.
That's because the Planck relationships, such as length-momentum and time-energy, argue powerfully that really long numbers with extreme precision can only exist in the real world at high costs in terms of other resources. Operating systems that do not "assume" infinitely precise locations in space or time to be cost-free are likely closer to reality than operating systems that inadvertently treat infinitely precise numbers as "givens" or "ideals" that the machine then only approximates. It's really the other way around: Computers that make precision decisions both explicit and cost-based are likely a lot closer to what we see in the quantum model, where quantum mechanics similarly keeps tabs on precision versus costs. Excessive use of real numbers, in contrast, can become very narrow but very bouncy examples of the trampoline effect, causing the computation costs of quantum models that use them to soar by requiring levels of precision that go far beyond those of the natural systems they are intended to model.
Getting back to your comments about higher-level reformulations in terms of e.g. gauge theories and Clifford algebras: Absolutely! Those very much are examples of the "factoring methods" that, if used properly, often can result in dramatic reductions in size, and thus bring us closer to what is fundamental. The only point of caution is that those methods themselves may need careful examination, both for whether they are the best ones available and for whether they, much like the real-number examples I just gave, contain hidden assumptions that drive them away from simpler mappings of messages to physics.
Regarding your second paragraph: Trampolines as multi-scale potential wells, heh, I like that! I think you have a pretty cool conceptual model there. I'm getting this image of navigating a complex terrain of gravity fields that are constantly driving the ship off course, with only a very narrow path providing fast and accurate navigation. I particularly like your multi-scale (fractal even?) structuring, since it looks at the Kolmogorov minimum path at multiple levels of granularity, treating it like a fractal that only looks like a straight line from a distance. That's pretty accurate, and it's part of why a true minimum is hard to find and impossible to prove.
Thanks again for some very evocative comments and ideas!
Cheers,
Terry