"The mind itself is not fundamental. Neither are the biological processes by which it works, but the principles of information by which it functions are"

The central principle of Shannon's Information Theory is that, in order to reduce the length of any transmitted message to the least possible number of encoded bits, the transmitter must never send anything that the receiver already knows. For example, I don't need to keep telling you your name. But everything that you can predict is a subset of the things you know. It follows that everything you can predict is not even considered to be information in Shannon's theory. That fundamental reality is enough to make most physicists apoplectic. They are searching for the truth, but as the movie put it, "You want the truth? You can't handle the truth." Because the truth is, the information content of most physical processes lies almost entirely within the unknown initial conditions required to solve the equations of mathematical physics, not in the long-sought equations themselves. This is what "emergence" emerges from.
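As a rough illustration of that point (a sketch, with made-up probabilities): in Shannon's framework an event the receiver expects with probability p carries log2(1/p) bits of "surprisal", so anything that is perfectly predictable carries none.

    import math

    def surprisal_bits(p):
        """Bits of information carried by an event the receiver
        expects with probability p: log2(1/p)."""
        return math.log2(1 / p)

    print(surprisal_bits(1.0))    # 0.0   - fully predictable, so no information at all
    print(surprisal_bits(0.5))    # 1.0   - a fair coin flip carries one bit
    print(surprisal_bits(0.001))  # ~9.97 - surprising events carry many bits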

Rob McEachern

    I hope you do get round to submitting an essay this year. There is some overlap between our philosophies which helps me find ways to expand my own viewpoint.

    There are two sides to my essay, the philosophical and the mathematical. On the philosophical side it is partly about finding the right words to express ideas in a way that makes them sound reasonable. I think in terms of a high degree of emergence, so fundamentals must take us away from anything we know in conventional physics. This is bound to be ambitious and speculative to a high degree, but structures like space and time and particles have properties that are too specific to be fundamental, in my opinion. Information, events and the relationships between them are much more generic. I think you have a similar view.

    The mathematical side is more important, of course. Without mathematics to interpret the philosophy there is no end point. I have some mathematical ability in problem solving and algorithms, but the more abstract ideas needed to develop these ideas are outside my comfort zone. I feel like an art critic who can appreciate what is good and can talk about how things should be, but who does not have the skills and creativity to do it himself.

    I agree that non-associativity is likely to play a part. I see octonions as just one algebraic structure with nice properties that plays a role in certain possible solutions. The starting point must be something much more general, like free universal algebra or higher category theory. Simple categories are associative, but with higher categories it is more natural to relax and weaken the structure to allow more interesting properties: the identities that define associativity are replaced with isomorphisms. Symmetry always arises as a useful tool in any algebraic structure. For example, if you want to understand the octonions you will certainly want to know their automorphism group, and then there are the exceptional Lie algebras up to E8 that are also related to octonions. I think symmetry has to be generalised to supersymmetry, quantum groups, n-groups etc., so there is a long way to go.

    The free Lie algebra that I discuss in the essay is just a starting point that is simple enough to illustrate my point. It provides the important mapping from algebra to geometric structures using iterated integrals along paths. I suspect that there are generalisations of this where iterated integrals map more general algebras onto branching networks, like Feynman diagrams. I don't know if I will ever get my mind round it well enough to formulate something that works.

    Philip Gibbs

    I thank you for this interesting article. It has provoked my thinking. We should, as you say, regard science as finding better and better approximations. You are also right in regarding information as fundamental in physics. It is dangerous to listen to only one guru, as you say.

    Best regards ___________________ John-Erik Persson

    Good luck.

      Rob, you are right to highlight the principle of redundant information. Imagine you wanted to send some information into space to tell any aliens something about us. You might send a bitmap photo image, for example. To keep the transmission short you could compress the data, but the aliens would not have the decompression algorithm. When data is maximally compressed it becomes a stream of random bits that is impossible to decode without the algorithm. You could send the algorithm in some uncompressed form, but that is adding extra information. The point is that fully compressed data without redundancy is incomprehensible.
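      As a hedged illustration of this (a sketch using Python's standard zlib, with an arbitrary sample text): maximal compression squeezes out the redundancy, leaving a byte stream that is statistically close to random - exactly what makes it undecodable without the algorithm.

          import math, zlib
          from collections import Counter

          def bits_per_byte(data):
              """Empirical entropy of a byte stream; 8.0 would mean it looks fully random."""
              n = len(data)
              return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

          text = ("the quick brown fox jumps over the lazy dog " * 200).encode()
          packed = zlib.compress(text, 9)

          print(len(text), len(packed))          # the redundant text shrinks dramatically
          print(round(bits_per_byte(text), 2))   # well below 8: lots of redundancy
          print(round(bits_per_byte(packed), 2)) # close to 8: near-random without the algorithm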

      The information that describes the state of the universe is holographic, so it can be represented on a surface. This is the compressed form of the data. What we observe in the bulk volume is an uncompressed form with lots of redundancy in the form of gauge symmetry. In this form it is comprehensible to us. We observe and understand the universe in its expanded version, not the compressed holographic form.

      Phillip & Andrew,

      There is an implicit assumption when depending upon mathematics "to guide the way" for new directions in physics. That assumption is that our current mathematics is adequate to the tasks we attempt to use it for. If it is not, then we will find it very difficult to make much progress. Mathematics likely suffers from the same effect as you describe for physics - the pen and corral situation.

      I will suggest that this is actually the problem faced by physics, which tends to lead the other scientific disciplines and so all of science: the mathematical tools we currently have are not adequate to the tasks science has put to them.

      The limitations of our mathematical tools might actually be keeping us from seeing aspects of our universe, which would be even more reason to consider a fundamental review of mathematics and its limitations (especially in how it is applied).

      I believe we will find a guide to a new direction this way.

      Don

      Phillip & Robert,

      There is an interesting assumption in information theory - that there is a limit to what can be compressed or represented by a 'unit' of information. There might be a limit, given today's mathematics, but will that always be the case?

      How efficiently can I represent pi? Using decimal notation, it is an infinite non-repeating sequence. If I use pi as the base of the numeric system, then pi is written as 10 - possibly a tremendous compression of information, although not without its problems for other values. What if a new numeric system were found that used different bases within the same representation of a number - might this supplant our current system?
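      As a hedged sketch of that idea (the function name, digit count and greedy method are just illustrative): the standard greedy "beta-expansion" writes a number out in any base greater than one, including non-integer bases such as pi.

          import math

          def to_base(x, base, frac_digits=8):
              """Greedy beta-expansion of x >= 1 in a (possibly non-integer) base > 1."""
              k = int(math.floor(math.log(x, base)))   # highest power of the base needed
              digits = []
              for p in range(k, -frac_digits - 1, -1):
                  d = int(x // base ** p)              # greedy digit for this position
                  digits.append(d)
                  x -= d * base ** p
              return digits, k                         # digits[0] multiplies base**k

          print(to_base(math.pi, math.pi))  # ([1, 0, 0, ...], 1): pi is written "10" in base pi
          print(to_base(math.pi, 10))       # ([3, 1, 4, 1, 5, 9, ...], 0): the familiar digits of pi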

      If context and perspective can make such a difference in the presentation of information, can we be sure that the limitations of our current representational structures will not be radically altered in the future? Is a positional numeric system the optimal way to present the value of pi? Like optimization concerns in general, there might not always be an optimal solution. This could suggest there is no limit to what can be represented as (a unit of) information.

      This also appears to be the implicit assumption of any final Unification Theory - that there is an optimal way (usually assumed to be mathematical) to characterize all phenomena in the universe. If mathematics cannot present an optimal solution then likely neither can physics.

      Don

      Phillip:

      "The information that described the state of the universe is holographic, so it can be represented on a surface. This is the compressed form of the data. What we observe in the bulk volume is an uncompressed form with lots of redundancy in the form of gauge symmetry."

      The information content of an emission is not the same as the information content of the emitter that produced the emission. If an emission is ever to be received, it must pass through every spherical surface that surrounds the emitter with a radius less than the distance between the emitter and the receiver. Thus, the entire information content of every long-range emission must be observable on those spherical surfaces. This is why the holographic principle exists, and why all long-range forces are inverse-square. It has nothing to do with the information content stored within the emitter or with data compression used to produce the emission. Assuming otherwise is a major misunderstanding of Shannon's Information Theory, within the physics community.
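      For reference, the textbook flux argument behind the inverse-square part of this claim (only a sketch of that one step, not of the information-theoretic reasoning above): if the same total emitted power P must pass through every sphere of radius r centred on the emitter, then the intensity available to a receiver on that sphere is

          I(r) = \frac{P}{4 \pi r^{2}},

      which falls off as the inverse square of the distance.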

      Rob McEachern

      Don,

      "There is an interesting assumption in information theory - that there is a limit to what can be compressed or represented by a 'unit' of information. There might be a limit, given today's mathematics, but will that always be the case?

      How efficiently can I represent pi?"

      There are two branches to information theory:

      (1) Shannon's original theory has to do with how many discrete (quantized) samples are needed to perfectly reconstruct an arbitrary continuous function, such as those which might be solutions to the equations of mathematical physics. Shannon's Capacity Theorem specifies both the number of samples and the number of bits per sample required to achieve perfect reconstruction (a small numerical sketch follows below). Thus, it provides the missing link between the worlds of continuous functions and quantized results. It is easy to show, for example, that setting Shannon's Capacity equal to a single bit of information yields the minimum value of the Heisenberg Uncertainty Principle. In other words, the Heisenberg Uncertainty Principle simply means that every observation must contain one or more bits of information. Otherwise, it is not an observation at all - just noise. That is why you cannot determine the values of two variables like position and momentum - in the limit, they encode only a single bit of information between the two variables! This is also the cause of the so-called "spooky action at a distance" and the correlations observed in experiments attempting to test Bell's inequality.

      (2) Algorithmic Information Theory, which deals with data compression, AFTER the data has been represented in quantized form.

      The physics community has been mostly interested in (2), which is very unfortunate, since it deals only with already quantized observations and thus has little relevance to physics. But (1) addresses the question - why are observations and measurements quantizable in the first place? - which is of direct relevance to the correct interpretation of quantum theory.
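      For concreteness, a minimal numerical sketch of the capacity formula referred to in (1) above (the channel numbers are invented, and the link to the Heisenberg limit is Rob's own contention rather than something this snippet demonstrates):

          import math

          def capacity_bits_per_second(bandwidth_hz, snr):
              """Shannon-Hartley capacity: C = B * log2(1 + S/N) bits per second."""
              return bandwidth_hz * math.log2(1 + snr)

          def bits_per_sample(snr):
              """With ~2B Nyquist samples per second, each sample carries
              roughly 0.5 * log2(1 + S/N) bits."""
              return 0.5 * math.log2(1 + snr)

          print(capacity_bits_per_second(3000, 1000))  # ~29900 bit/s for a 3 kHz channel at 30 dB
          print(bits_per_sample(3))                    # 1.0: one bit per sample when S/N = 3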

      Rob McEachern

      Very interesting Rob. I hope you will be submitting an essay with these ideas.

      Phillip,

      I have submitted a couple of essays in the past that touched upon some of these issues, but as you yourself have observed, the physics community has little interest in ever looking at such things. These days I prefer to put things on vixra. You will find a couple of my posts on these matters there. Thanks for creating the site!

      Rob McEachern

      Robert,

      if you are looking in, and apologies to Philip... your statement that all long-range emissions make their entire information content observable on all the intervening spherical surfaces, as a rationale for the inverse-square law, is rather intriguing. Could you elaborate a bit, please? Thanks, jrc

      John,

      My contention is simple: two things, be they particles, waves, fields or anything else, cannot interact if they cannot even detect each other's existence. Detection requires detection of information - at least one bit. Hence, all interactions are driven by information detection. Since the ability to detect information from a signal is a function of the signal-to-noise ratio, which is in turn a function of the distance squared, long-range interactions are governed by the inverse-square law. At short range, the emitter may appear as an extended source rather than a point source. Consequently, the situation with regard to the signal-to-noise ratio is more complicated than just an expanding sphere.
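      A toy numerical sketch of that contention (the powers, noise level and one-bit threshold below are all invented for illustration): with the received power spread over a sphere, the signal-to-noise ratio, and hence the ability to extract even one bit, falls off as the inverse square of the distance.

          import math

          def snr(emitted_power, distance, noise_power):
              """Received S/N for an isotropic point source: the power spreads over a sphere."""
              received = emitted_power / (4 * math.pi * distance ** 2)
              return received / noise_power

          def detectable(emitted_power, distance, noise_power):
              """At least one bit per sample needs 0.5 * log2(1 + S/N) >= 1, i.e. S/N >= 3."""
              return snr(emitted_power, distance, noise_power) >= 3

          for r in (1.0, 2.0, 4.0, 8.0):
              print(r, round(snr(100.0, r, 0.1), 2), detectable(100.0, r, 0.1))
          # doubling the distance divides the S/N by four; detection eventually fails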

      This is why quantum tunneling occurs - if an entity cannot even detect the existence of a barrier, and the barrier cannot detect the entity, then the barrier, in effect, is not even there.

      It is also why the phenomenon of virtual particles exists. I like the analogy of two submarines, moving through an ocean of noise, trying to detect (and thus interact with) each other. If they make contact, they commence to interact, generating emissions that can be detected by others. But if they quickly lose contact (can no longer even detect each other) they return to running silent and running deep. And the ships on the surface, which themselves detected the subs' initial response, are left to wonder what just happened - was anything really there?

      Obviously, individual particles do not expand with distance, so their detection does not follow an inverse-square law - it is described, statistically, by the quantized behaviors which are the subject of quantum theory.

      Rob McEachern

      Thanks Robert,

      signal-to-noise following the inverse square being the rationale. That clarifies things. I am always hoping for something more than the observed measurement we have had since Newton. I guess I'm still eating Lotus. :-) jrc

      Hi Philip, your essay is a pleasure to read and once I had started I was compelled to read to the end. Your ideas about story telling resonate with my own thinking about how we relate to the world, especially via our senses and by imposing singular perspectives. I wonder why then at the end you say: "If they don't know then they must be prepared to consider different philosophical options, letting the mathematics guide the way until the experimental outlook improves." Why do you say mathematics must guide the way? Why not biology first? (To elucidate the effects of building from a literal human-centered perspective, or to seek and eliminate its effects.) Or why not all of the sciences leading together in a multidisciplinary effort? Well done, kind regards Georgina

        Robert,

        Thank you for your reply.

        As I understand Information theory, it is built upon the use of positional representations of a number in a specific base. The use of logs in its expressions means it presumes a single-base representation of number values (it is locked into one base, even if that base can change).

        This is all that is needed for Real numbers.

        However the suggestion I make is that there could be more powerful methods of representation that are not limited to a single base. This would suggest that Information theory can be expanded, as could its uses. It also means the limits placed by the current theory, using logs, may not be absolute limits.

        Don

        Don,

        "As I understand Information theory, it is built upon the use of positional representations of a number in a specific base. The use of logs..."

        That is not correct. Shannon was interested in determining under what circumstances a receiver would be able to perfectly reconstruct a transmitted message, without producing any errors in the encoded message. For example, suppose you wanted to send some critical information (where even a single error in a number received would have very bad consequences), like a list of bank account numbers, via a radio message. In effect, the "base-2 log" function in his Capacity formula expresses the number of bits per sample needed to digitize the analog signal. Since the Capacity formula represents the maximum number of bits that can be received without error, it is important to use base 2. You could use a different base, but then you would have to do a conversion to express the number in the correct units for counting bits.
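        As a small illustration of that last point (just a sketch with an arbitrary number of messages): the choice of logarithm base only rescales the count by a constant factor, so working in bits simply means using base 2.

            import math

            n_messages = 1024                      # equally likely messages to distinguish

            bits = math.log2(n_messages)           # 10.0 bits
            nats = math.log(n_messages)            # ~6.93 nats (natural log)
            digits = math.log10(n_messages)        # ~3.01 decimal digits

            # Any base converts to bits by a fixed factor; the information content is the same.
            print(bits, nats / math.log(2), digits / math.log10(2))   # all ~10.0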

        Rob McEachern

        Hi Georgina, thanks for your interesting question. I do think that the human side of experience and evolution is relevant to the philosophical side of how we should understand our place in the universe. This came up more in the previous essay. However, I stop short of thinking that biology is in any way fundamental. I think that mathematics is the right place to look for what is fundamental, and is the more powerful tool for developing the harder side of the theory. Perhaps that is where our thinking diverges.

        Nevertheless, there are grey areas in my thinking and I do hope that we get a few essays that argue the case for biology being fundamental, or that there is more to be learnt about foundations from biology. It seems like a more radical idea but perhaps my view can be pushed a little in that direction. We will see.

        Yes, as we gather information our model of reality becomes a better approximation. Sometimes this can lead to a paradigm shift where the underlying principles are suddenly very different even if the predicted measurements don't change by very much.

        Rob,

        You are missing my point. He is still using a single-base number system - that of logs. It doesn't matter which base he uses, as they are inter-convertible.

        We all use a single base number system, be it decimal or binary. We don't really know of any other one as we have been using decimals and logs for a few hundred years.

        Is that system the best one that can be built? Or can a better one, say using multiple bases in the same number representation, be devised? Such a system might be able to represent numbers we cannot represent today. If so, then these calculations MAY need to be revised (as would most other ones).

        A (poor) analogy is the Romans attempting to build space ships using their Roman numeral system. The calculations would be much too hard and many measurements could not be performed (as they could not properly represent Real numbers). They would not be able to make certain measurements we can today, without proper scaling of numbers.

        Might we be in a (somewhat) similar situation, where a more powerful numeric system could be devised that would alter what and how we measure and/or calculate?

        Then a different value, inexpressible today, would change what a bit can represent.

        As I understand things, Shannon is saying this is the absolute limit regardless of mathematical tools. I am suggesting his statement needs to be limited to the tools we are currently using and there might be a different way, since there might be better mathematical tools.

        Don

        Don,

        "We don't really know of any other one as we have been using decimals and logs for a few hundred years." Most modern, communications systems don't encode information with any numerical system - they use "alphabets" of peculiar waveforms. For one example, see the wikipedia article on Quadrature Amplitude Modulation. These strange alphabets get translated into bit-patterns, not numbers, by the receiver. A second translation step might interpret those patterns as sets of numbers. But it might also interpret them as sets of alphabetic characters, like the ones I typed and you are now reading.

        "Then a different value, inexpressible today, would change what a bit can represent." It already does that - it represents both nothing and everything. Bits of information are not like bits of data. It is not like a measurement that has a most and least significant bit or digit. It is like an index number - an index to a look-up table. Consequently, what that index/number "represents", is whatever totally arbitrary stuff you may have placed into the Table, at that index location. In other words, what it "represents", has no relation whatsoever, to its numerical value. You can change what the number represents, by simply changing whatever "stuff" is in the corresponding location of the Table. One such looked-up meaning, of an index/number, might say "compute the square-root of pi. Another table, for the very same index/number, might say "slap your face with your right hand." It is completely arbitrary - having no relationship whatsoever, to the value of the index/number.

        Don't feel bad if you don't understand; few physicists do either. It is related to the "measurement problem" - few physicists seem to realize that physical entities do not have to treat measurements as measurements. They may treat them as indices (symbols), resulting in a total disconnect between the "value" of the supposed "measurement" and the "meaning" of the measurement, in the sense of what behavior a system undertakes (compute sqrt(pi) versus slap your face) as a result of performing that measurement and obtaining some particular value.

        Rob McEachern