Information has always been considered a serious contender to be a foundational notion in physics. Several problems, though, have been identified over the years:
1. What is the physical representation of information, and how does it relate to the notion of information as viewed by an observer? What is the role of an observer in an information-driven world? Does it fundamentally have any role at all? How does it relate to Shannon's view of information?
2. What is the mathematical apparatus for expressing physical matter as a container of information?
3. Can a purely information-based model (i.e. a model that excludes the very first principles of physics) achieve something more than It-From-Bit?
Information-based physics (i.e. "Information Physics") can answer the above questions to some extent, not with the goal of explaining all that is unexplained, but with the goal of simplifying the view of reality while providing concrete mathematical and experimentally verifiable results.
The basic notion is that of an information field. It is a field that originates in a physical particle. A particle is a container of information, with the information existing equally on every closed surface around it. The motion of a particle and the motion of the information it contains are indistinguishable; thus, as a particle moves, so does its information field. This field is scalar, consisting of a finite number of facts about the particle. Because we consider each such fact to have an equal chance of being anywhere, these facts must shift their positions periodically.
A particle effectively moves in a space that contains the information fields of all other particles in existence. The collection and usage of this information is what compels a particle to exhibit non-random behavior. This non-random behavior is what we generally ascribe to physical laws. The difference here is that the physical laws are the result of information use within a particle.
A particle is not "made" out of information. For this model, it is not important what it is made of. What is important is that the usage of information is assumed to be the simplest possible.
This simplest possible usage of information has, surprisingly, very little leeway. Even if we have no clue whatsoever about the physical representation of this information field, we can say a lot about its mathematics.
For instance, since the information field is scalar, a particle cannot deduce anything by using this field at a single point in space. A particle has to use at least two points in space. By doing so, it collects a set of facts from each point, and to use these facts, each fact from one set has to be combined with each fact from the other; otherwise some facts would not be used at all, and the question arises why they would exist. So if the two sets contain A and B facts, the number of fact combinations (called "fact interactions") has to be A*B. The actual throughput of information has to be the square root of A*B.
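As a toy numeric illustration of this counting (my own sketch; the sizes A and B are arbitrary and not taken from the paper), a few lines of Python compute the fact interactions and the resulting throughput:

    import math

    A, B = 400, 900                       # arbitrary sizes of the two sets of facts
    interactions = A * B                  # every fact of one set combines with every fact of the other
    throughput = math.sqrt(interactions)  # throughput is the square root of A*B
    print(interactions, throughput)       # 360000 600.0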
The motion of a particle results in more information being collected, similar to collecting more raindrops if you run. The amount of additional information collected in motion is proportional to the speed of the particle relative to all other particles that contribute their information fields.
Since a particle has a finite information capacity, any excess information in the present moment necessitates a loss of information from the previous one, as at least both of them have to be kept (as mentioned, at least two sets of information must be retained at any given time). Thus, the behavior of a particle can never be deterministic, because in motion information must be lost.
In our example with two sets of facts, the gain of information in the present moment and the loss of an equal amount from the previous one can be expressed as (A-loss)*(A+loss) = A^2 - loss^2. The actual information throughput declines, and is proportional to sqrt(A^2 - loss^2). Since the additional information (equal to the loss) is proportional to the speed of the particle relative to other particles, we can write, for the case of a particle moving relative to a massive nearby body (with v expressed as a fraction of its limiting value):

sqrt(1 - (loss/A)^2) = sqrt(1 - v^2)
This expresses how much the information usage of a particle will decline. The decline in information usage corresponds to the ability of the particle to respond to the outside world. This is the "sluggishness" of a particle, otherwise known in Relativity as "time dilation". The simplification here is that we can leave time linear and constant, and instead analyze the throughput of information use within a particle, which can decline or increase.
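As a hedged numeric sketch (my own illustration, not the paper's derivation; A is arbitrary and v is expressed as a fraction of the limiting speed), the throughput decline factor and Relativity's time-dilation factor can be compared directly:

    import math

    A = 1000.0                        # facts held at rest (arbitrary)
    for v in (0.0, 0.1, 0.5, 0.9):    # speed as a fraction of its limiting value
        loss = v * A                  # information gained in motion equals information lost
        decline = math.sqrt(A**2 - loss**2) / A  # relative throughput, i.e. sqrt(1 - (loss/A)^2)
        dilation = math.sqrt(1 - v**2)           # time-dilation factor from Relativity
        print(f"v={v:.1f}  throughput factor={decline:.4f}  sqrt(1-v^2)={dilation:.4f}")

The two columns agree by construction, which is the point of identifying loss/A with v above.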
While the explanation and the math here are more akin to a caricature than a real theory, what is attached to this post, in the paper, is a real theory. Also attached is a free book on the subject. It helps with understanding the basics, since it is written in plain prose.
While the space in this post cannot possibly be enough to show the other consequences of Information Physics, here are some highlights available in the paper:
Relativity and Information Physics produce almost identical results. In situations where a very massive "observer" (such as Earth, and us on it) is present, the two produce exactly the same results. But outside the anchoring effect of a large observer, the equations diverge. We show that an observer affects the observation because it possesses information, and with information directing all physical effects, it clearly has to change the outcome of any experiment. At the same time, a physical process will unfold according to the same information laws even without the presence of the observer - it will just unfold in a different way.
The laws are thus the same in both cases, with or without an observer. We also deduced that all physical laws must be non-deterministic - to imply otherwise would be to imply an unlimited information capacity of physical matter.
When it comes to relative speed, which is an essential quality of an observer, the concept loses its significance in favor of just "speed", which does not depend on any frame of reference and is just a number associated with a particle at a given time and position in space.
This number is essentially a sum of all relative speeds of a particle, where objects that are closer and more massive count more. Basically, the "speed" accounts for speeds relative to all other objects by means of weighted factors associated with those objects. The speed of Earth relative to the Sun counts a great deal, but the speed of Earth relative to a distant galaxy does not count much. This notion eliminates the question of whether all observers are equal - even if we think that they are, such an assumption is no longer necessary.
We show that this weighted number (or just "speed") has a limit equal to what we now call the "speed of light". In special cases, such as near Earth, the "relative speed to Earth" and the "speed" (our weighted number independent of any observer) are one and the same, explaining why Relativity and Information Physics give the same results in most cases.
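Here is a minimal sketch of the weighted "speed" described above, assuming rough ballpark values for a particle at rest on Earth's surface. The weighting scheme (mass divided by distance squared) is my own illustrative assumption, not the formula from the paper:

    # Weighted "speed" toy example. The weighting scheme (mass / distance**2)
    # and the numbers below are illustrative assumptions, not the paper's values.
    objects = [
        # (name, relative speed in km/s, mass in kg, distance in m)
        ("Earth",            0.0, 5.97e24, 6.4e6),
        ("Sun",             30.0, 1.99e30, 1.5e11),
        ("distant galaxy", 600.0, 1.0e42,  2.4e22),
    ]

    weights = [m / d**2 for (_, _, m, d) in objects]
    speed = sum(w * v for (_, v, _, _), w in zip(objects, weights)) / sum(weights)
    print(f"weighted speed = {speed:.4f} km/s")   # close to 0, dominated by the nearby Earth

With the nearby Earth dominating the weights, the result stays close to the particle's speed relative to Earth, matching the claim that near Earth the "relative speed to Earth" and the weighted "speed" coincide.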
While it may be obvious, it should be stated that Information Physics does not use any postulates of modern physics, raising the question of whether it is physics at all. It may not be, in the sense of how physics is constructed today. It is, in the sense that it produces mathematical results that are verified by experiments, and mathematical results that have not yet been verified. For example, no postulate from Special or General Relativity is used. In fact, Galilean relativity is entirely unnecessary, and so are the notions of mass, light, or gravity. If you can imagine yourself in the darkest corner of the Universe, knowing nothing about it, you can deduce the exact equations of modern physics just by using the axiomatic notions presented above.
For more information, please visit the web site for Information Physics.
Attachment #1: relativityout.pdf
Attachment #2: faster-than-light-book.pdf