Dirk: That's exactly right. The physical theories and their mathematical representations are excellent approximations, but their inadequacy to fully describe nature was revealed by quantum mechanics and its multitude of interpretations. Once we assume that mysteriously created objects obey mysteriously created laws, QM and GR appear to be incompatible, for example. We need to develop a theory based on what actually is fundamental, which I argue is information and not objects/laws.

Alas, our current informational theories are only skeletal at this point, but that's what intrigued me about Bob Coecke's graphical system. By graphing the flow of information, we can at least get an idea of how informational context can gradually evolve from simpler to more complex. KC

4 days later

Karl

I really like the focus of this essay on informational mechanics as a "generalization of quantum mechanics that embeds contextual data into descriptions of subsystem interactions". This is completely in line with the idea of top-down causation associated with contextual effects. You suggest that "all a theory really needs to address is the beautiful world of automorphism-invariant information." That is rather similar to the emphasis Auletta, Jaeger, and I have put on equivalence classes as characterising top-down action. I also applaud your sensible take on quantum measurement.

George Ellis

    George, thank you for your comment. I'm seeing several threads common among many of the essays this time, for example arguments against reductionism and against particles-as-ontological-objects -- ideas that are clearly related.

    I checked out your paper with Auletta and Jaeger. I especially appreciated the sentence, "Mechanical devices, like a thermostat, are able to implement information control without any intervention of biological elements and in purely mechanical terms. This is however an erroneous point of view, since such devices have been built by humans to act in a certain way. Therefore, the functional element (and the goal) is already inbuilt." This rather obvious aspect of technology tends to be overlooked when discussing quantum measurement and measuring devices. Similarly, even the simplest living organism seems not to be just a bottom-up collection of matter particles doing complex things, but rather is an informational entity whose complex interactions derive from a legacy of evolving context. No wonder it's so hard to create a living system by putting together a bunch of inorganic molecules.

    10 days later
    • [deleted]

    Dear Karl

    You might find it very interesting that there is an informational derivation of quantum theory. The assumption of a relational and informational reality seems very interesting, and it would be nice to study all its consequences.

    I have also thought of ways of conceiving the world without the primitive notion of "an object". It is a natural extension of Machian thoughts on the foundations of dynamics, and I have developed this in my essay Absolute or Relative Motion...Or Something Else?, which you might find interesting.

    Good luck in the competition,

    Daniel

      Daniel -- I am interested in reading both the paper and your essay; thank you for the links. As my essay argues, I suspect that the world is only as complex as the system or systems within it that measure its complexity; thus, complexity in the world becomes a relational function of biological and technological evolution. Superficially that may sound like a facile "philosophical" statement, one that escapes falsifiability. However, if the world is in fact fundamentally informational and relational, then there would be no other accurate way to describe complexity except in those purely relational, informational terms. And it should be possible to demonstrate this in experiments of sufficient sophistication.

      19 days later
      • [deleted]

      Hi Karl,

      I really liked your essay a lot. Your writing covers a lot of ideas and possibilities in a way that is interesting to read.

      In particular, I like how your essay draws the distinction between data and information. That string of randomly-generated bits is taken to exist as a single datum, and without the existence of other such data there is no chance for information to be had -- information is an entirely relational concept that involves multiple data. This is no ephemeral concept devoid of realism -- the vast majority of us rely on fully-implemented relational databases every day when we interact with banks, websites, etc.

      I do wonder though if the distinction between data and information is still tripping other people up. For instance, I see phrases like "information compression" in some of the literature. Of course, in actuality, it is the data that is compressed/decompressed, and the information content of the data is what governs the compression limit -- information-poor data is very compressible, information-rich data not so much. Again, this is no ephemeral concept -- it is the core principle used in relational databases and really any lossless dictionary-based compression algorithm (ZIP, RAR).
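
      To make this concrete, here is a minimal Python sketch (my own toy example, using the standard-library zlib codec rather than ZIP or RAR): repetitive, information-poor data collapses under compression, while random, information-rich data barely budges.

      ```python
      # Toy demonstration: information content governs the compression limit.
      import os
      import zlib

      poor = b"01" * 500       # 1000 bytes of a trivially repeating pattern
      rich = os.urandom(1000)  # 1000 bytes from the OS randomness source

      print(len(zlib.compress(poor, 9)))  # tiny (~20 bytes): one pattern, reused
      print(len(zlib.compress(rich, 9)))  # ~1000 bytes or more: nothing to exploit
      ```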

      From this perspective, if a blue photon always imparts X momentum upon absorption, and we repeat that observation over and over, we will never gain information. In fact, the law that we derive (constancy of momentum per photon frequency) springs directly from exactly how information-poor the data is. I take this to mean that even if such an observation leads to one type of information-rich chain of events (i.e., saying "It's blue!" via sound waves and then giving each other a high five) versus another (i.e., aliens making some equivalent statement via pheromones, etc), the root event is still the same no matter what. To me, it is the root data that really counts here, and that leaves little or no room for mysticism. I think this is what you are trying to say too?

      - Shawn

        • [deleted]

        To be fair, not all is lawful, and perhaps this is precisely why quantum mechanics is information-based. I am considering the probability cloud that represents an electron's possible position. If one repeatedly performs an experiment in which one makes a measurement to gain a datum about an electron's actual position, then one will come to find that over time the data are non-repetitious -- the angular components of the position are fully random, and the radial component is partially random (still random, but the probability drops off based on radial distance). I suppose that one could call this the "law of lawlessness" (since it is a constant kind of random), but no one seems to explicitly say it and stick to it, which is why I think that we get varied opinions on what information really is (and isn't) -- none of this gives credence to the phrase "information compression", and I doubt that anything would.
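
        As a toy version of that experiment (my own sketch, assuming a hydrogen-like 1s ground state and working in units of the Bohr radius), one can simulate the repeated position measurements and see this "constant kind of random" directly:

        ```python
        # Simulated position measurements on a 1s electron cloud.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100_000

        # The 1s radial density P(r) = 4 r^2 exp(-2r) is a Gamma(3, 1/2) density.
        r = rng.gamma(shape=3.0, scale=0.5, size=n)
        phi = rng.uniform(0.0, 2.0 * np.pi, size=n)    # azimuth: fully random
        cos_theta = rng.uniform(-1.0, 1.0, size=n)     # polar angle: fully random

        print(r.mean())          # ~1.5 -- the textbook <r> for the 1s state
        print(np.mean(r < 1.0))  # ~0.32 -- fraction found within one Bohr radius
        ```

        Every individual datum is random, yet the frequencies settle into the same fixed distribution -- a constant kind of random, just as described above.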

        Shawn -- Thank you for reading my essay, and for the comment. I hadn't explicitly considered the distinction between data and information, but I think you've hit the nail on the head. Data is an absolute thing, but information is necessarily relational. This was the message I took away from the book "Information and the Nature of Reality," co-edited by Paul Davies. One chapter stuck out for me as problematic -- Seth Lloyd's -- perhaps because he did not make this distinction. He writes, "Quantum mechanics, via decoherence, is always injecting new bits into the world." Those would be bits of data, but are they information? If so, relative to what? The same can be asked regarding a string of random numbers, or highly compressible (low-information) data as you insightfully mentioned. I feel these questions are not only relevant, but perhaps even fundamental.

          Dear Karl,

          I read your essay with great interest. I share a lot of the same general philosophy toward fundamental physics. A few questions and remarks.

          1. Have you thought much about causal sets in this context? I don't completely agree with all the hypotheses of causal set theory, but I admire its viewpoint, and it seems very relevant to your viewpoint as well. You reference Fotini Markopoulou, who has written papers on this, and you also illustrate quantum picturalism and information trees, which are very close to the causal sets formalism. If you have not already read it, you might find Sorkin and Rideout's paper on sequential growth dynamics interesting.

          2. You reference Hermann Weyl's definition of objectivity in terms of group symmetry. Weyl, of course, is the father of group representation theory in physics. In particular, you mention that causal networks survive automorphism due to topological constraints. Since you are advocating a minimalist relationist viewpoint, I thought you might be interested in looking at this from the other direction, in which one redefines covariance to mean preservation of causal structure. For instance, Lorentz transformations in ordinary SR change the order of spacelike-separated events (relativity of simultaneity), but preserve the causal order (see the quick numerical check after point 3 below). If you define a "generalized frame of reference" to mean a refinement of the causal order, then you can redefine the covariance principle in these terms and completely do without the background geometry. I discuss this further in my essay here: On the Foundational Assumptions of Modern Physics.

          3. Have you thought much about quantum computing in this context? I ask this for a couple of reasons. First, it seems to me that a lot of the same formalism can be applied immediately to practical problems in quantum computing itself. Second, if the fundamental structure of spacetime is really informational, it might be possible that we could model fundamental-scale physics at much larger scales using quantum computers. I mention this briefly near the end of my essay.
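
          Regarding point 2, here is the quick numerical check I mentioned (my own sketch, in units with c = 1 and with made-up event separations): a boost can flip the time order of spacelike-separated events, but never the order of timelike-separated, causally related ones.

          ```python
          # Sketch: boosts preserve causal (timelike) order, not spacelike order.
          import numpy as np

          def boost_dt(dt, dx, v):
              """Time separation after a Lorentz boost along x with speed v (c = 1)."""
              gamma = 1.0 / np.sqrt(1.0 - v * v)
              return gamma * (dt - v * dx)

          velocities = np.linspace(-0.95, 0.95, 96)  # step 0.02, all |v| < 1

          # In the original frame, event B occurs after event A (dt > 0).
          for name, dt, dx in [("spacelike", 1.0, 5.0), ("timelike", 5.0, 1.0)]:
              signs = sorted({int(np.sign(boost_dt(dt, dx, v))) for v in velocities})
              print(name, "-> time-order signs seen across frames:", signs)

          # spacelike: [-1, 1] -- relativity of simultaneity reorders the events
          # timelike:  [1]     -- the causal order is invariant under every boost
          ```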

          Thanks for the great read! Take care,

          Ben Dribus

            Ben, Thank you for the thoughtful comment. It got me very interested in learning more about causal sets (I confess ignorance on the topic). I think there are a number of equivalently valid formalisms to look at this problem, some easier to grasp than others. I remember checking out your essay when it was first posted and taking a few notes; I'll be sure to look over it thoroughly and post comments over there. And, you make a great point about quantum computing. It's a good argument for why developing new kinds of information theories should be taken seriously. In fact, it's probably the only way to maximize the potential of quantum computing, and the potential, as you point out, is enormous. Thanks again and best of luck in the competition.

            • [deleted]

            Hi Karl,

            I believe that you're correct in that information is a kind of emergent property that arises only in the context of multiple data. From what I can gather, it's primarily the distinctness of the data that makes for high information content per datum. I'm not sure how deep you have gotten into the math of information theory, so please forgive me for going into further detail: there will be a point.

            Consider receiving a string of symbols (data) consisting of "0123". This string contains 4 distinct symbols that all occur with the same frequency (1/4th of the time), which ends up producing a Shannon entropy (average information content per symbol) of S = ln(4). Converting that to binary entropy, it's S_b = S / ln(2) = 2. The same entropy is produced by the semi-randomized string "0312", which still contains 4 distinct symbols that occur with the same frequency -- the order of the symbols doesn't affect the result, only the distinctness. What the binary entropy effectively means is that if we wish to represent 4 distinct symbols that occur with the same frequency, then we're going to need at least ln(4)/ln(2) = 2 bits per symbol. This notion becomes very clear if we're already familiar with integer data types on the computer, where we know that a 2-bit integer type can represent 2^2 = 4 distinct symbols (0 through 3). Likewise, a 3-bit integer can represent 2^3 = 8 symbols (0 through 7), etc, etc.
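
            In code, the arithmetic looks like this (a minimal sketch; entropy_bits is just my name for it):

            ```python
            # Shannon entropy of a string of symbols, in bits per symbol.
            import math
            from collections import Counter

            def entropy_bits(s):
                """H = sum over symbols of -p * log2(p), with p the frequency."""
                n = len(s)
                return sum(-(c / n) * math.log2(c / n) for c in Counter(s).values())

            print(entropy_bits("0123"))  # 2.0 -- four equiprobable symbols, 2 bits each
            print(entropy_bits("0312"))  # 2.0 -- order is irrelevant, only distinctness
            print(entropy_bits("0"))     # 0.0 -- a lone symbol has nothing to differ from
            ```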

            A symbol by itself does not have information content -- the extremely short string "0" produces an entropy of S = ln(1) = 0. There is no distinctness to be had because there is nothing else in the string to compare our single symbol to.

            To consider randomization further, consider the string "1010010101010110101010101010110101100010100111010001001101011010", which is 64 symbols long. It's apparent that the string cannot be considered to be totally random just at the root level -- there are only two root symbols ("0" and "1"); any repetition of any symbol is a pattern; patterns are anathema to randomness. Secondly, we notice that there are patterns in the adjacency of the symbols as well; the composite symbols "00", "01", "10", "11" appear quite often. Since we clearly cannot avoid patterns, we must then look for balance (equal frequency) of the patterns at all possible levels in order to test for randomness.

            As for actual figures regarding the frequencies of our string, we get...

            For 0th level adjacency (singles), we go through the string one step at a time from the left to the right. This gives us the singles "1", "0", "1", "0", "0", etc:

            The root symbol "0" occurred 32/64 of the time, as did the root symbol "1".

            Balance in frequency occurred.

            As for 1st level adjacency (pairs), we go through the string one step at a time from the left to the right. This gives us the pairs "10", "01", "10", "00", "01", etc:

            The composite symbol "00" occurred 7/63th of the time;

            The composite symbol "01" occurred 24/63th of the time.

            The composite symbol "10" occurred 25/63th of the time.

            The composite symbol "11" occurred 7/63th of the time.

            Imbalance in frequency occurred.

            As for 2nd level adjacency (triplets), we likewise go through the string one step at a time from the left to the right. This gives us the triplets "101", "010", "100", "001", "010", etc:

            The composite symbol "000" occurred 2/62th of the time.

            The composite symbol "001" occurred 5/62th of the time.

            The composite symbol "010" occurred 18/62th of the time.

            The composite symbol "011" occurred 6/62th of the time.

            The composite symbol "100" occurred 5/62th of the time.

            The composite symbol "101" occurred 19/62th of the time.

            The composite symbol "110" occurred 6/62th of the time.

            The composite symbol "111" occurred 1/62th of the time.

            Imbalance in frequency occurred.

            We will skip detailing the 3rd, 4th, etc level adjacency. The point is that at the n-th level of adjacency, there are 2^(n+1) distinct symbols (root symbols for the 0th level, composite symbols for 1st level and higher), and we can consider the whole string to be random only if there is perfect balance in the frequency of the symbols at *all* levels of adjacency. Since there are imbalances in the frequency at the 1st and 2nd level, the string cannot be considered to be random.
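
            Here is that adjacency test as a small Python sketch (my own code; it reproduces the tallies above):

            ```python
            # Count composite symbols at each level of adjacency, check the balance.
            from collections import Counter

            s = "1010010101010110101010101010110101100010100111010001001101011010"

            def level_counts(string, n):
                """Frequencies of length-(n+1) windows: the n-th level of adjacency."""
                width = n + 1
                return Counter(string[i:i + width]
                               for i in range(len(string) - width + 1))

            for n in range(3):
                counts = level_counts(s, n)
                balanced = (len(counts) == 2 ** (n + 1)
                            and len(set(counts.values())) == 1)
                print(f"level {n} ({sum(counts.values())} windows, "
                      f"balanced: {balanced}):", dict(sorted(counts.items())))

            # level 0 is balanced (32/64 each); levels 1 and 2 are not, so the
            # string fails the randomness test.
            ```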

            Of course, if our string was indeed generated by a random process, then we would simply need to keep making the string longer, and eventually a balance in the frequency at all levels of adjacency would naturally arise. This is interesting to note because it means that we simply cannot know whether a binary number generating process is truly random until we can analyze the symbols it puts out at an infinite level of adjacency, which requires that our string be infinite in length. It's literally impossible to tell with perfect certainty that a string of "0"s and "1"s is randomly-generated if the string is finite in length.

            In essence, when someone says "quantum physics is random" and shows some data as evidence, it's still only an assumption, because we do not have an infinite number of measurements to verify that statement. We can become more and more confident as we make more and more measurements (and find that the frequencies of the patterns are balanced at higher and higher levels of adjacency), but we can never be absolutely certain until we take an infinite number of measurements. Consequently, if we were to make billions and billions of measurements and found that there was an unexpectedly large imbalance in frequency at, say, the 23324324th level of adjacency, and this imbalance simply did not go away when we made billions and billions more measurements, then we could say with a high degree of confidence that something deterministic is likely going on there, deep at the heart of things.

            In other words, your essay indirectly puts a spotlight on one of our basic physical assumptions -- "quantum physics is random" -- when you make mention of randomized strings. I think that's a pretty important observation, and some of your readers might pick up on it right away. This is another reason why I liked your essay a lot.

            • [deleted]

            (I'm not necessarily saying that quantum physics is deterministic at some deep level, just that we cannot know for sure either way until we make a ridiculous number of measurements)

            Hi Karl,

            I was very glad to find your essay and enjoyed it very much, since my own (An Observable World) also deals with the assumption that the informational environment we can observe must be based on some kind of underlying reality. I was especially happy with your thoughts on the history of the universe, since I'd wanted to include something similar in the last part of my essay. I had to cut it, since the piece was already too long and too condensed.

            The main difference between our approaches is that you're envisioning a physics based on a new kind of fundamental entity, the bit -- per your response to Georgina Parry above. You told her information exists in the same way we think of objects as existing. To me, the key point about information in physics is that it's observable, i.e. definable in the context of other observable information. Some kinds of physical information do resemble binary bits, but I don't think it's helpful to think of all such information as reducible to simple interchangeable bits. Instead, the question is how all the different kinds of physical information contribute to an environment that's able to define them all, in terms of each other.

            If we open up the question of how information gets physically defined, we get a picture of the informational environment that's anything but elegant or unified. So I don't envision replacing our theories of fields and particles with an all-embracing theory of information. To the extent that it works to describe our world in terms of real, objective entities, I think we should certainly do so. My point is that we shouldn't be surprised that this kind of theory doesn't work at a fundamental level. It's only when we try to understand how our theories fit together, how they constitute a basis for the observable world, that we need to switch over to an informational viewpoint.

            Anyhow, it will be some time before it's clear just how to make this conceptual shift, from the world as a structure of real things-in-themselves -- inferred from phenomena -- to the structure of communications that constitute the phenomena. I appreciate your brave incursion into this unexplored territory.

            Thanks -- Conrad

              Conrad, thank you for the kind words. You raise a good point about being too rigid about the definition of informational entities. I'd just add that if we reduce everything to simple interchangeable bits, we do so with the understanding that those bits are defined relative to something -- in our case, relative to the context in which they have been discovered and observed. Perhaps that has something to do with why information often manifests to us observers as binary bits. I'll certainly check out your essay to get a better idea of your view; thank you for pointing me to it.

              Karl

              Thank you Shawn, that's very intriguing. As I look at some of the other essays I realize I may have bitten off too much, and perhaps I should have gone with something like the idea you suggested. I will probably have about 18 months to think about it before the next essay contest, so thanks for the ideas!

              Karl

              • [deleted]

              Hello Mr Coryat,

              I strongly believe that electromagnetism is very complex. I have several relevant ideas concerning the polarization of information, with a different sense of rotation than m for the hv. The spherical volumes for the series of uniqueness become very relevant. But in fact the complexity of this electromagnetic scale is very important considering the volumes of entangled spheres. In fact, the volumes are the key!!! They permit the classing of the information and the synchronizations and sortings.

              See that gravitation is for the most important volume; electromagnetism is the number x, very important; gravitation, the 1, plus the volumes -- it is very relevant.

              The entanglement is like our universal sphere, with the central sphere the most important volume. It permits a better classing of the forces, fundamental between the mass and the light, from a pure evolution point of view. The spherization is fascinating.

              Regards

              I dunno... I think that your essay is pretty awesome as it is, and I gave it a solid community score accordingly.

              Hi Karl,

              As I mentioned on my thread, I enjoyed your essay immensely - it is well written, well argued and extremely interesting.

              I am in full agreement with you that information, rather than objects, is ontologically fundamental as Wheeler first suggested and as the philosophy of ontic structural realism has fleshed out (Dean Rickles has a superb essay in this competition on this matter). To my mind, theoretical discoveries such as the holographic principle, as you noted, and the various dualities in string theory make an object-based ontology pretty impossible to uphold. (In case you're interested, I wrote a short essay about this last year on the Edge website.) I also totally agree that information is fundamentally relational, as Rovelli boldly expressed.

              Your idea that observations made by one observer place constraints on another is really fascinating -- I have to spend some more time thinking about it to fully wrap my head around it. I'm also intrigued by your claim that multiple individual observers connected by contextual information correlations are topologically equivalent to a single super-observer. As I mentioned in my essay and on my thread, the idea of a single super-observer (an observer who effectively stands outside the universe, independent of a coordinate frame) seems to violate certain laws of physics, as demonstrated in Susskind's horizon complementarity. However, if you can model the super-observer as many individual local observers, that raises some interesting new questions.

              Thanks again for a wonderful read.

              Best regards,

              Amanda

                Dear Karl,

                My own ideas on an algorithmic world are consonant with your information mechanics, at least in some aspects. I also think that the role of the observer is key in modern theories of physics (beyond the current status given to the observer in quantum mechanics). À la Wheeler, you place information at a lower level, underlying the rest of reality. I would have liked you to contrast your ideas with those of so-called digital physics, where this assumption is the founding stone (although some may think I am making a conflation between information and computation, which might be the case). I found your attempt at formalising objectivity by means of invariance under the group of automorphisms interesting; I would have liked to see it developed further formally.

                In biology, the evolution of life is seen as undirected, whereas you seem to suggest that there is an increase of complexity. I think it is risky to do so, because the notion of complexity would need to be made much more precise, and I also think it is not easy to justify such an increase of complexity from the evolutionary point of view (though it is evident that some sort of "complexification" had to happen to get from, for example, unicellular to multicellular organisms).