Shawn -- Thank you for reading my essay, and for the comment. I hadn't explicitly considered the distinction between data and information, but I think you've hit the nail on the head. Data is an absolute thing, but information is necessarily relational. This was the message I took away from the book "Information and the Nature of Reality," co-edited by Paul Davies. One chapter stuck out for me as problematic -- Seth Lloyd's -- perhaps because he did not make this distinction. He writes, "Quantum mechanics, via decoherence, is always injecting new bits into the world." Those would be bits of data, but are they information? If so, relative to what? The same can be asked regarding a string of random numbers, or highly compressible (low-information) data as you insightfully mentioned. I feel these questions are not only relevant, but perhaps even fundamental.
Toward an Informational Mechanics by Karl Coryat
Shawn -- My reply is below.
Dear Karl,
I read your essay with great interest. I share a lot of the same general philosophy toward fundamental physics. A few questions and remarks.
1. Have you thought much about causal sets in this context? I don't completely agree with all the hypotheses of causal set theory, but I admire its viewpoint, and it seems very relevant to your viewpoint as well. You reference Fotini Markopoulou, who has written papers on this, and you also illustrate quantum picturalism and information trees, which are very close to the causal sets formalism. If you have not already read it, you might find Sorkin and Rideout's paper on sequential growth dynamics interesting.
2. You reference Hermann Weyl's definition of objectivity in terms of group symmetry. Weyl, of course, is the father of group representation theory in physics. In particular, you mention that causal networks survive automorphism due to topological constraints. Since you are advocating a minimalist relationist viewpoint, I thought you might be interested in looking at this from the other direction, in which one redefines covariance to mean preservation of causal structure. For instance, Lorentz transformations in ordinary SR change the order of spacelike-separated events (relativity of simultaneity), but preserve the causal order. If you define a "generalized frame of reference" to mean a refinement of the causal order, then you can redefine the covariance principle in these terms and completely do without the background geometry. I discuss this further in my essay here: On the Foundational Assumptions of Modern Physics.
3. Have you thought much about quantum computing in this context? I ask this for a couple of reasons. First, it seems to me that a lot of the same formalism can be applied immediately to practical problems in quantum computing itself. Second, if the fundamental structure of spacetime is really informational, it might be possible that we could model fundamental-scale physics at much larger scales using quantum computers. I mention this briefly near the end of my essay.
Thanks for the great read! Take care,
Ben Dribus
Ben, Thank you for the thoughtful comment. It got me very interested in learning more about causal sets (I confess ignorance on the topic). I think there are a number of equivalently valid formalisms to look at this problem, some easier to grasp than others. I remember checking out your essay when it was first posted and taking a few notes; I'll be sure to look over it thoroughly and post comments over there. And, you make a great point about quantum computing. It's a good argument for why developing new kinds of information theories should be taken seriously. In fact, it's probably the only way to maximize the potential of quantum computing, and the potential, as you point out, is enormous. Thanks again and best of luck in the competition.
Hi Karl,
I believe that you're correct in that information is a kind of emergent property that arises only in the context of multiple data. From what I can gather, it's primarily the distinctness of the data that gives rise to high information content per datum. I'm not sure how deep you have gotten into the math of information theory, so please forgive me for going into further detail: there will be a point.
Consider receiving a string of symbols (data) consisting of "0123". This string contains 4 distinct symbols that all occur with the same frequency (1/4th of the time), which ends up producing a Shannon entropy (average information content per symbol) of S = ln(4). Converting that to binary entropy, it's S_b = S / ln(2) = 2. The same entropy is produced by the semi-randomized string "0312", which still contains 4 distinct symbols that occur with the same frequency -- the order of the symbols doesn't affect the result, only the distinctness. What the binary entropy effectively means is that if we wish to represent 4 distinct symbols that occur with the same frequency, then we're going to need at least ln(4)/ln(2) = 2 bits per symbol. This notion becomes very clear if we're already familiar with integer data types on the computer, where we know that a 2-bit integer type can represent 2^2 = 4 distinct symbols (0 through 3). Likewise, a 3-bit integer can represent 2^3 = 8 symbols (0 through 7), etc, etc.
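As a minimal sketch, this calculation can be reproduced in a few lines of Python (the function name is my own):

```python
from collections import Counter
from math import log

def shannon_entropy_bits(s):
    """Average information content per symbol, in bits (base-2 Shannon entropy)."""
    n = len(s)
    counts = Counter(s)
    # Sum of p * log2(1/p) over the distinct symbols, with p = count/n.
    return sum((c / n) * log(n / c, 2) for c in counts.values())

print(shannon_entropy_bits("0123"))  # 2.0 -- four equally frequent symbols
print(shannon_entropy_bits("0312"))  # 2.0 -- order doesn't matter, only distinctness
print(shannon_entropy_bits("0"))     # 0.0 -- a lone symbol carries no information
```

Note that only the frequency table enters the formula, which is exactly why "0123" and "0312" give the same answer.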
A symbol by itself does not have information content -- the extremely short string "0" produces an entropy of S = ln(1) = 0. There is no distinctness to be had because there is nothing else in the string to compare our single symbol to.
To consider randomization further, consider the string "1010010101010110101010101010110101100010100111010001001101011010", which is 64 symbols long. It's apparent that the string cannot be considered to be totally random just at the root level -- there are only two root symbols ("0" and "1"); any repetition of any symbol is a pattern; patterns are anathema to randomness. Secondly, we notice that there are patterns in the adjacency of the symbols as well; the composite symbols "00", "01", "10", "11" appear quite often. Since we clearly cannot avoid patterns, we must then look for balance (equal frequency) of the patterns at all possible levels in order to test for randomness.
As for actual figures regarding the frequencies of our string, we get...
For 0th level adjacency (singles), we go through the string one step at a time from the left to the right. This gives us the singles "1", "0", "1", "0", "0", etc:
The root symbol "0" occurred 32/64th of the time, as did the root symbol "1".
Balance in frequency occurred.
As for 1st level adjacency (pairs), we go through the string one step at a time from the left to the right. This gives us the pairs "10", "01", "10", "00", "01", etc:
The composite symbol "00" occurred 7/63th of the time.
The composite symbol "01" occurred 24/63th of the time.
The composite symbol "10" occurred 25/63th of the time.
The composite symbol "11" occurred 7/63th of the time.
Imbalance in frequency occurred.
As for 2nd level adjacency (triplets), we likewise go through the string one step at a time from the left to the right. This gives us the triplets "101", "010", "100", "001", "010", etc:
The composite symbol "000" occurred 2/62th of the time.
The composite symbol "001" occurred 5/62th of the time.
The composite symbol "010" occurred 18/62th of the time.
The composite symbol "011" occurred 6/62th of the time.
The composite symbol "100" occurred 5/62th of the time.
The composite symbol "101" occurred 19/62th of the time.
The composite symbol "110" occurred 6/62th of the time.
The composite symbol "111" occurred 1/62th of the time.
Imbalance in frequency occurred.
We will skip detailing the 3rd, 4th, etc level adjacency. The point is that at the n-th level of adjacency, there are 2^(n+1) distinct symbols (root symbols for the 0th level, composite symbols for 1st level and higher), and we can consider the whole string to be random only if there is perfect balance in the frequency of the symbols at *all* levels of adjacency. Since there are imbalances in the frequency at the 1st and 2nd level, the string cannot be considered to be random.
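These frequency counts can be reproduced with a short Python sketch (the helper name `adjacency_counts` is my own):

```python
from collections import Counter

# The 64-symbol string analyzed above.
s = "1010010101010110101010101010110101100010100111010001001101011010"

def adjacency_counts(string, level):
    """Count the sliding windows of (level + 1) adjacent symbols,
    stepping one symbol at a time from left to right."""
    width = level + 1
    return Counter(string[i:i + width] for i in range(len(string) - width + 1))

c0 = adjacency_counts(s, 0)
print(c0["0"], c0["1"])                        # 32 32 -- balance at the 0th level
c1 = adjacency_counts(s, 1)
print(c1["00"], c1["01"], c1["10"], c1["11"])  # 7 24 25 7 -- imbalance at the 1st level
```

At the n-th level there are len(s) - n windows, which is where the denominators 64, 63, 62 above come from.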
Of course, if our string were indeed generated by a random process, then we would simply need to keep making the string longer, and eventually a balance in the frequency at all levels of adjacency would naturally arise. This is interesting to note because it means that we simply cannot know whether a binary number-generating process is truly random until we can analyze the symbols it puts out at an infinite level of adjacency, which requires that our string be infinite in length. It is literally impossible to tell with perfect certainty that a string of "0"s and "1"s is randomly generated if the string is finite in length.
In essence, when someone says "quantum physics is random" and shows some data as evidence, it's still only an assumption, because we do not have an infinite number of measurements to verify that statement. We can become more and more confident as we make more and more measurements (and find that the frequencies of the patterns are balanced at higher and higher levels of adjacency), but we can never be absolutely certain until we take an infinite number of measurements. Consequently, if we were to make billions and billions of measurements and found an unexpectedly large imbalance in frequency at, say, the 23324324th level of adjacency, and this imbalance simply did not go away when we made billions and billions more measurements, then we could say with a high degree of confidence that something deterministic is likely going on there, deep at the heart of things.
In other words, your essay indirectly puts a spotlight on one of our basic physical assumptions: "quantum physics is random" when you make mention of randomized strings. I think that's a pretty important observation, and some of your readers might pick up on it right away. This is another reason why I liked your essay a lot.
(I'm not necessarily saying that quantum physics is deterministic at some deep level, just that we cannot know for sure either way until we make a ridiculous number of measurements)
Hi Karl,
I was very glad to find your essay and enjoyed it very much, since my own (An Observable World) also deals with the assumption that the informational environment we can observe must be based on some kind of underlying reality. I was especially happy with your thoughts on the history of the universe, since I'd wanted to include something similar in the last part of my essay. I had to cut it, since the piece was already too long and too condensed.
The main difference between our approaches is that you're envisioning a physics based on a new kind of fundamental entity, the bit -- per your response to Georgina Parry above. You told her information exists in the same way we think of objects as existing. To me, the key point about information in physics is that it's observable, i.e. definable in the context of other observable information. Some kinds of physical information do resemble binary bits, but I don't think it's helpful to think of all such information as reducible to simple interchangeable bits. Instead, the question is how all the different kinds of physical information contribute to an environment that's able to define them all, in terms of each other.
If we open up the question of how information gets physically defined, we get a picture of the informational environment that's anything but elegant or unified. So I don't envision replacing our theories of fields and particles with an all-embracing theory of information. To the extent that it works to describe our world in terms of real, objective entities, I think we should certainly do so. My point is that we shouldn't be surprised that this kind of theory doesn't work at a fundamental level. It's only when we try to understand how our theories fit together, how they constitute a basis for the observable world, that we need to switch over to an informational viewpoint.
Anyhow, it will be some time before it's clear just how to make this conceptual shift, from the world as a structure of real things-in-themselves -- inferred from phenomena -- to the structure of communications that constitute the phenomena. I appreciate your brave incursion into this unexplored territory.
Thanks -- Conrad
Conrad, thank you for the kind words. You raise a good point about being too rigid about the definition of informational entities. I'd just add that if we reduce everything to simple interchangeable bits, we do so with the understanding that those bits are defined relative to something -- in our case, relative to the context in which they have been discovered and observed. Perhaps that has something to do with why information often manifests to us observers as binary bits. I'll certainly check out your essay to get a better idea of your view; thank you for pointing me to it.
Karl
Thank you Shawn, that's very intriguing. As I look at some of the other essays I realize I may have bitten off too much, and perhaps I should have gone somewhere like the idea you suggested. I will probably have about 18 months to think about it before the next essay contest, so thanks for the ideas!
Karl
Hello Mr Coryat,
I believe strongly that electromagnetism is very complex. I have several relevant points concerning the polarizations of information, with a different sense of rotation than m for the hv. The spherical volumes for the series of uniqueness become very relevant. In fact, the complexity of this electromagnetic scale is very important when considering the volumes of entangled spheres. The volumes are the key! They permit us to classify the information, the synchronizations, and the sortings.
See that gravitation has the most important volume; electromagnetism is the number x, very important; gravitation is the 1 plus the volumes. It is very relevant.
Entanglement is like our universal sphere, with the central sphere having the most important volume. It permits a better classification of the forces, fundamental between mass and light, from a pure evolution point of view. The spherization is fascinating.
Regards
I dunno... I think that your essay is pretty awesome as it is, and I gave it a solid community score accordingly.
Hi Karl,
As I mentioned on my thread, I enjoyed your essay immensely - it is well written, well argued and extremely interesting.
I am in full agreement with you that information, rather than objects, is ontologically fundamental as Wheeler first suggested and as the philosophy of ontic structural realism has fleshed out (Dean Rickles has a superb essay in this competition on this matter). To my mind, theoretical discoveries such as the holographic principle, as you noted, and the various dualities in string theory make an object-based ontology pretty impossible to uphold. (In case you're interested, I wrote a short essay about this last year on the Edge website.) I also totally agree that information is fundamentally relational, as Rovelli boldly expressed.
Your idea that observations made by one observer place constraints on another is really fascinating - I have to spend some more time thinking about it to fully wrap my head around it. I'm also intrigued by your claim that multiple individual observers connected by contextual information correlations are topologically equivalent to a single super-observer. As I mentioned in my essay and on my thread, the idea of a single super-observer (an observer who effectively stands outside the universe, independent of a coordinate frame) seems to violate certain laws of physics, as demonstrated in Susskind's horizon complementarity. However, if you can model the super-observer as many individual local observers, that raises some interesting new questions.
Thanks again for a wonderful read.
Best regards,
Amanda
Dear Karl,
My own ideas on an algorithmic world are somewhat consonant with your information mechanics, at least in some aspects. I also think that the role of the observer is key in modern theories of physics (beyond the current status given to the observer in quantum mechanics). A la Wheeler, you place information at a lower level, underlying the rest of reality. I would have liked you to contrast your ideas with those of so-called digital physics, where this assumption is the founding stone (although some may think I am making a conflation between information and computation, which might be the case). I found your attempt at formalising objectivity by means of invariance with respect to the group of automorphisms interesting; I would have liked to see it developed further formally.
In biology, the evolution of life is seen as undirected, whereas in your view you seem to suggest that there is an increase of complexity. I think it is risky to do so, because the notion of complexity would need to be made much more precise, and I also think it is not easy to justify such an increase of complexity from the evolutionary point of view (though it is evident that some sort of "complexification" had to happen to go from, for example, unicellular to multicellular organisms).
Hector, I probably tried to cover too much ground and do too much in my essay. Perhaps it would have been better to take a core concept of informational physics, such as complexity or objectivity, and exhaustively explore the assumptions involved to reach a specific conclusion. Thank you for your comments -- you make very good points about complexity, and the problems with treating complexification as a trivial or autonomic process.
Amanda, thank you for submitting your fine essay. I sincerely hope you win a prize. And by the way, please have my children....
Hi Karl, I think that you are very right in identifying the diagrammatic language with relationalism. I always find most relational approaches too vague on what they mean by "relation", while the compact closed categories underpinning graphical reasoning are a natural formal way of expressing what relations do when interacting, hence avoiding the need to give an explicit description of the underlying objects. Classicality (of ...) is then an extra ability to manipulate "whatever these relations carry", resulting in additional structure, the dots in the diagrams. These relations aren't really the mathematical relations, but something more flexible and general, closer to what we mean by relation in natural language.
If you do not understand why your rating dropped, here is how ratings in the contest are calculated. Suppose your rating is [math]R_1[/math] and [math]N_1[/math] is the number of people who have rated you. Then you have [math]S_1 = R_1 N_1[/math] points. If someone then gives you [math]dS[/math] points, you have [math]S_2 = S_1 + dS[/math] points, and [math]N_2 = N_1 + 1[/math] is the total number of people who have rated you. Your new rating satisfies [math]S_2 = R_2 N_2[/math]. From here, if you want [math]R_2 > R_1[/math], you need [math]S_2 / N_2 > S_1 / N_1[/math], i.e. [math](S_1 + dS) / (N_1 + 1) > S_1 / N_1[/math], i.e. [math]dS > S_1 / N_1 = R_1[/math]. In other words, to increase someone's rating you must give them more points [math]dS[/math] than their rating [math]R_1[/math] was at the moment you rated them. These are the special rules for ratings in the contest, and they explain why some participants misunderstand what has happened to their ratings. Moreover, since community ratings are hidden, some participants are unsure how to increase others' ratings and so give them the maximum of 10 points. In that case the 1-to-10 scale does not work as intended: some essays are overestimated and some are dragged down. In my opinion this is a serious problem with the contest's rating process. I hope the FQXi community will change it.
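As a sketch, the update rule described above can be written out in a few lines of Python (the function name and the sample numbers are my own, chosen only for illustration):

```python
def new_rating(R1, N1, dS):
    """New rating after one more vote of dS points,
    given a current rating R1 averaged over N1 votes."""
    S1 = R1 * N1            # total points so far
    return (S1 + dS) / (N1 + 1)

# A vote above the current rating raises it; a vote below lowers it.
print(new_rating(6.0, 9, 10))   # 6.4 -- dS > R1, rating goes up
print(new_rating(6.0, 9, 5))    # 5.9 -- dS < R1, rating goes down
print(new_rating(6.0, 9, 6.0))  # 6.0 -- dS = R1, rating unchanged
```

This makes the threshold effect concrete: the rating moves toward the new vote, so only votes exceeding the current average can raise it.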
Hi Karl,
Thanks for the great read! You present a lot of interesting ideas. I am curious, even when looking at living systems, how useful the concept of "contextual information" will turn out to be in the long run. Right now I agree that it is a very constructive way of thinking about these issues (which is why I like to use it as well!), but it is very difficult to formalize. This is one reason I have been leaning toward causal descriptions, which are more rigorously defined. I'd be very interested to hear your thoughts on this.
Again, thanks for the engaging read!
Best,
Sara
Dear Karl,
Once again you wrote a very insightful and beautiful essay. It was interesting to me to see how far one can go without assuming that information is "about something". I think that what we can know are relations, and they can provide a complete description of reality (at least the accessible reality). You may want to take a look at some slides I prepared for a talk named "Global and local aspects of causality", where I take the wavefunction and unitary evolution seriously, and try to explain quantum correlations by global consistency effects (this is not related to my essay, named "Did God Divide by Zero?").
Best wishes,
Thanks Bob, I greatly appreciate that you took the time to read and respond. Rock on!