Thank you Felix for your welcome message. I am really curious about the actual interest that the causal-set based, digital/computational approach that I have described might attract in this context. I expected a few more contributions along those lines, but so far I have not seen any, and I wonder whether I should be happy or worried about it.
Reality Is Ultimately Digital, and Its Program Is Still Undebugged by Tommaso Bolognesi
Dear Tommaso,
I enjoyed reading your essay, which is well written and reveals a deep understanding. Discrete approaches like the one you explore can add much to our understanding of reality. I personally believe that there may be more in causal sets than just the conformal structure, and I strongly encourage their study. And trying to obtain the laws we consider fundamental as emergent phenomena of simpler laws is what science is about.
Am I on the digital or analog side? It's complicated; I just added something about this here and here.
Best wishes,
Dear Tommaso
I have read your interesting essay, on which I would like to make a comment. In your essay you say the following:
Furthermore, sometimes we identify new, unifying laws that allow us to jump one level down: laws that appeared as primitive (e.g. Newton's law of gravitation) are shown to derive from deeper laws (e.g. General Relativity).
I wish this were entirely true, but there is evidence that points in another direction. I will mention the following example about the speed of light and special relativity (SR):
The value of the speed of light in vacuum was conventionally defined by the Bureau International des Poids et Mesures (BIPM) as V_r = 299 792 458 m/s. But this value was taken as a convention; this does not imply that the actual (or measured) speed of light possesses that exact value, only that the actual value is around V_r, with a speed uncertainty: the measured value could be some V_n within the interval 0 < V_n < V_r, and/or some V_i > V_r. Now we ask: are we violating the second postulate of relativity? Is the parameter c really equal to V_r? Why is c not taken to be equal to V_n or V_i? Recall that for SR to make physical sense, the parameter c must be higher than the speed of the inertial frame v, so that the Lorentz transformations do not yield complex numbers. In this sense the selection c = V_r > v is partially justified, but we could have conventionally defined c = 299 792 460 m/s and the physics would not be affected at all, since the theory by itself only demands a constant with units of speed, of any value, but different from v. SR does not give us any clue, even about the order of magnitude, of the value of the parameter c [Ellis, G.F.R., Uzan, J.P.: Am. J. Phys. 73, 240 (2005)]. But why did SR borrow (not borrow, steal) the value from another theory (electrodynamics)? Why is SR not capable of determining the value of its own constants? The theory, then, with no relation to a measurement, cannot determine the value of c by itself. These arguments also apply to any other theory; see for instance the case of general relativity [Narlikar, J.V., Padmanabhan, T.: The Schwarzschild Solution: Some Conceptual Difficulties. Found. Phys. 18, 659 (1988)].
The conclusion here is that if the theory of general relativity is stealing the values of the constants (e.g. G, the gravitational constant) from other theories (e.g. Newtonian mechanics or Maxwell electrodynamics) and is incapable of determining these values on its own, then this suggests that Newtonian mechanics is not really derived from general relativity.
Please feel free to make any comment
Good luck in the contest
Israel
Hello dear Tommaso Bolognesi,
A very beautiful essay, full of rationality. Congratulations.
Here is my humble point of view, in poor English, sorry; I write literally and too quickly, a bad habit.
We see the encodings in the pure finite series... these codes compute our reality. It is a little as if our particles, entangled spheres in my view, knew what they must become, within a real evolutive topology.
It is a little as if we said that they possess different codes of becoming, like an activation: a time code, a space code, rotation codes, polarity codes, and so on. In fact these codes permit light to be transformed into mass, by a kind of fusion between these two essentials: hv with its linearity, and m with its gravitational stability.
Now let us assume that the number of entanglement is the same for m and hv... this number does not change during the fusion, but the rotations do, and thus the mass as well. Only a different sense of rotation between mass and light can explain this difference. The volumes do not change... the codes seem to be in the mass, but they are the same in their pure BEC. The same goes for space, so we see a universal contact between all entangled spheres: mass, space and light. The rotations in time account for the difference.
The topological and spherical system of rotations of spheres, quantum and cosmological, is necessary... thus a center is essential, and a sphere as well.
These sets are finite, for uniqueness, in both senses.
Good luck in this contest. A team of winners, I think: Lev, Moulay, Bolognesi, Stoica, Klingman, ...
Best Regards
Steve
I too found the essay fascinating. It's great to see some truly foundational takes on reality in this essay contest, and your perspective is most enlightening.
Hi Israel. Thanks for the comments.
Suppose one 'borrows' some constants (for example, Planck's constant h, or the universal gravitational constant G) from existing theories, and uses them in a new theory such that:
(1) the predictions of the 'old' theories are confirmed by the new theory, yielding even better agreement with experimental results, and
(2) more phenomena can be explained and predicted with high accuracy by the new theory, phenomena that fall even outside the application scope of the 'old' ones.
What's wrong with that? The idea is that the new theory 'absorbs' the old theories as special cases -- of more limited applicability and lower accuracy. I do not see the inheritance of physical constants from theory to theory as a problem, but as a nice feature of scientific progress.
But perhaps you are addressing the problem of whether a theory is autonomously capable of justifying/determining the value of its constants?
I am indeed fascinated by this problem, although it is a bit outside the scope of this contest. In my opinion, the most ambitious form of ToE (if it exists) should be able to do without any physical constant: all of them should be derivable -- they should emerge from the rules of the game. It is nice to think that those values did not have to be chosen and fine-tuned by Someone before switching on the Universe... And I believe that theories fundamentally based on a discrete substratum, on computation, and on emergence -- the type I discuss in my essay -- have much higher chances of achieving this goal, almost by definition.
Dear Tommaso
Thank you for your reply. I have read my own post and it seems that there are some sentences missing in the argument about special relativity. I am rewriting it so you understand better what I mean.
The value of the speed of light in vacuum was conventionally defined by the Bureau International des Poids et Mesures (BIPM) as Vr = 299 792 458 m/s. But this value was taken as a convention; this does not imply that the actual (or measured) speed of light possesses that exact value, only that the actual value is around Vr, with a speed uncertainty: the measured value could be some Vn within the interval 0 < Vn < Vr, and/or some Vi > Vr. Now we ask: are we violating the second postulate of relativity? Is the parameter c really equal to Vr? Why was c not taken to be equal to Vn or Vi? Recall that for SR to make physical sense, the parameter c must be higher than the speed of the inertial frame v, so that the Lorentz transformations do not yield complex numbers. In this sense the selection c = Vr > v is partially justified, but we could have conventionally defined c = 299 792 460 m/s and the physics would not be affected at all, since the theory by itself only demands a constant with units of speed, of any value, but different from v [Ellis, G.F.R., Uzan, J.P.: Am. J. Phys. 73, 240 (2005)]. But why did SR borrow (not borrow, steal) the value from another theory (electrodynamics)? Why is SR not capable of determining the value of its own constants? The theory, then, with no relation to a measurement, cannot determine the value of c by itself. These arguments also apply to any other theory; see for instance the case of general relativity [Narlikar, J.V., Padmanabhan, T.: The Schwarzschild Solution: Some Conceptual Difficulties. Found. Phys. 18, 659 (1988)].
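To spell out where the 'complex numbers' condition comes from: the Lorentz factor is
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}},
which is real only when v < c, so as far as the formalism is concerned any conventional value assigned to c is admissible, provided it exceeds the speed v of the inertial frame.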
This being said, I totally agree with the two points of your last post. And indeed I am talking about this:
You: But perhaps you are addressing the problem of whether a theory is autonomously capable of justifying/determining the value of its constants?
Certainly, as you say, the TOE should be able to do without any constant. But I believe that if this were the case, that is, if a theory were able to determine the values of its constants and parameters, the theory would likely become independent of experience (measurements), as Max Tegmark argues [Tegmark, M.: Found. Phys. 38, 101 (2008); Tegmark, M.: Ann. Phys. 270, 1-51 (1998)].
Israel
Dear Tommaso
When I see the post preview, everything is OK, but when I submit the post several sentences do not appear. I am attaching the pdf so you can read it completely. You can find these arguments on page 15. I apologize for this inconvenience.
Attachment #1: 2010IPerez_1012.2423v1_PhysicsViewUniverse.pdf
Dear Israel,
the Tegmark paper that you mentioned -- 'The Mathematical Universe' -- is very interesting; thank you for pointing it out to me. I agree with his remarks on p. 12 about physical constants. He mentions that in traditional quantum field theory the Lagrangian contains *real* parameters, whose specification would require an infinite amount of bits. Under his Computable Universe Hypothesis, however, this is not allowed, and two possibilities are left:
(A) either the parameters are 'finite' (effectively computable from a finite amount of information), or
(B) there exists an uncountable infinity of universes, in each of which each parameter takes one value from a corresponding, finitely computable range.
(Case (B) sounds 'maximally offensive to human vanity', borrowing his words; my vanity is actually doing fine, in this respect, but I admit that my preference would go to plan (A)...)
The problem of HOW the ultimate mathematical theory of physics could determine the value of these parameters is not directly addressed in that paper.
But I guess there is not much to say: we have to guess the *right* values. In a computational theory of the type I describe in my essay, rather than multi-digit numeric parameters, one has to figure out the correct algorithm and the correct initial condition (e.g., a 2-node, 3-connected, 3-valent graph --- as you see the involved numbers are pretty small!). How can we know that the values are right, and that the mathematical structure/theory is well tuned? By testing whether the emergent reality corresponds to ours, that is, by computing the 'inside view' (or 'frog view') from the 'outside view' (or 'bird view'). In Tegmark's words, this is one of the most important questions facing theoretical physics. And most exciting, I would add!
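Just to give a flavour of what 'growing' such a structure means, here is a minimal toy sketch in Python -- not the trivalent-graph rewriting model of my essay, just an arbitrary stand-in computation (an elementary cellular automaton, with the rule number and sizes chosen at will) in which every cell update is treated as an event and linked to the events it depends on:

# Toy sketch (illustrative only): grow a causal set from a deterministic
# computation. Each cell update of elementary cellular automaton rule 110
# is an 'event', causally linked to the three events that determined it.

RULE = 110
WIDTH, STEPS = 41, 20

def rule_bit(left, center, right, rule=RULE):
    """Next state of a cell under an elementary CA rule."""
    return (rule >> (left * 4 + center * 2 + right)) & 1

# initial condition: a single active cell in the middle
row = [0] * WIDTH
row[WIDTH // 2] = 1

events = {}          # (t, x) -> cell value
causal_links = []    # (parent_event, child_event) pairs
for x, v in enumerate(row):
    events[(0, x)] = v

for t in range(1, STEPS + 1):
    new_row = []
    for x in range(WIDTH):
        l, c, r = row[(x - 1) % WIDTH], row[x], row[(x + 1) % WIDTH]
        v = rule_bit(l, c, r)
        new_row.append(v)
        events[(t, x)] = v
        # the new event depends on its three neighbours at the previous step
        for dx in (-1, 0, 1):
            causal_links.append(((t - 1, (x + dx) % WIDTH), (t, x)))
    row = new_row

print(f"{len(events)} events, {len(causal_links)} causal links")

The transitive closure of these links is a partial order, that is, a causal set, and it is on structures of this kind (grown, in the essay, by quite different models of computation) that one can then look for emergent properties.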
Dear Tommaso,
Very interesting essay.
You mention in Section 3 first sentence: "No experimental evidence is available today for validating the digital/computational universe conjecture." Let me point you and your readers to one of my papers entitled "On the Algorithmic Nature of the World" (http://arxiv.org/abs/0906.3554) where we compare the kind of distributions one finds in real-world datasets to distributions of simulated--purely algorithmic/digital--worlds.
The results may be marginal -- there is always some correlation (from weak to strong), with varying degrees of confidence, with at least one natural model of computation -- but we think the investigation provides a legitimate statistical test and real experimental evidence indicating the compatibility of the digital hypothesis with the distributions found in empirical data. This is based on the concept of algorithmic probability.
This claim concerning empirical data requires of course great care, since one would first need to show that there is a general joint distribution behind all sorts of empirical data, something that we also tested, the results of which we report in the same paper. The evidence that most empirical data carries an algorithmic signal is that most data is comprehensible to a greater or lesser degree.
People may wonder whether the compressibility of data is an indication at all of the discreteness of the world. The relationship is actually strong: the chances of finding incompressible data in an analog world are much greater (as has been argued by some researchers who think the world is mostly random).
If I have a chance I will elaborate further on all this in a later essay to be submitted to this contest, together with a precise definition of what we mean by an algorithmic world (basically, a world of rules that can be carried out by a digital computer).
Great work. Sincerely.
Hi Hector,
great to see you are here too. I hope indeed that you will bring more water (or, rather, bits) to the mill of the algorithmic universe conjecture! And thanks for the comments.
If I understand correctly, your work provides some estimate of how likely it is that the world we experience (through the statistical analysis of real data sets) is the output of some computation. You write that most empirical data seem to carry an algorithmic signal.
As you may guess, I put the highest expectations on deterministic (algorithmic) chaos, and I believe that whenever a natural phenomenon can be explained in terms of it, recourse to pure randomness (where every bit has to be paid for) should be avoided, for reasons of 'economy'.
But if deterministic chaos could completely replace genuine randomness, in ALL cases and for ALL purposes, including the support of our universe, then we would have a problem: how could we tell the difference between a deterministic and a 'genuinely' random universe?
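Just to make the question concrete with a toy sketch (an illustration, nothing more; the seed and sizes are arbitrary): bits produced by the fully deterministic logistic map and bits drawn from the operating system's randomness source both look incompressible to a general-purpose compressor, so a naive test of this kind cannot tell the two apart.

# Toy illustration: a deterministic chaotic source (logistic map, r = 4)
# versus 'genuine' randomness (os.urandom), judged by a general-purpose
# compressor. Both come out essentially incompressible.
import os
import zlib

def logistic_bits(n, x=0.123456789):
    """n bits (n a multiple of 8) from the map x -> 4x(1-x), thresholded at 1/2."""
    out, byte = bytearray(), 0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)
        byte = (byte << 1) | (1 if x > 0.5 else 0)
        if i % 8 == 7:
            out.append(byte)
            byte = 0
    return bytes(out)

N = 8 * 10_000                       # 10 kB of bits from each source
chaotic = logistic_bits(N)
genuine = os.urandom(N // 8)

for name, data in [("logistic map", chaotic), ("os.urandom ", genuine)]:
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{name}: compressed/original = {ratio:.3f}")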
If I understand correctly (I have not read your papers yet, but the stack is thicker every day here!), you do find some way to differentiate between the two cases. In fact, in the last figure of my essay I show that deterministic chaos has something more to offer than genuine randomness: it induces the emergence of a phenomenon that I call 'causet compartmentation', which appears as a fundamental prerequisite for the occurrence of anything interesting at all in discrete spacetime -- and this cannot happen in a truly random spacetime.
So, perhaps we agree on the fact that deterministic white noise is whiter, or, at least, more 'brilliant' than pure white noise!
Maybe you have a quick comment on this?
Dear Tommaso,
This is a very good essay. I recommend that the informed reader also move on to the bibliography.
I do have a question/suggestion: besides the whole universe, there are other, smaller, man-made universes where this type of computational approach could explain something, like the emergence of patterns in the use of space in a city. I am no architect, but a mathematician. Recently I became aware of a host of research (in architecture) concerning SPACE. Here are some relevant references:
I first learned about the work of Christopher Alexander from this 'secret life of space' link, which I am sure you will enjoy.
Then I learned from Bill Hillier ("Space is the machine") about the existence of "axial maps" (Turner A, Hillier B & Penn A (2005) An algorithmic definition of the axial map, Environment & Planning B 32-3, 425-444), which still escape a rigorous mathematical definition but seem to be highly significant for understanding emergent social behaviour (see Space Syntax).
So I wonder if such a computational approach could be of any help in such a more concrete but mathematically elusive subject.
Best,
Marius
Dear Marius,
thank you for the pointer to the 'secret life of space' by blogger Leithaus.
Having been involved in process algebra (even older than Pi calculus) for quite some time, I cannot but agree that one of the attractive features of those formalisms is their peculiar way of simultaneously handling 'structure' and 'behaviour'. But I also fully share the concern expressed in that blog about the usefulness of modeling the geometry of spacetime in Pi calculus:
"...will it be of any use to encode these notions in the model, or will it just be another formal representation -- potentially with more baggage to push around"?
Who knows! But the idea that formal analogies between Pi calculus specifications of some spatial geometry, on one hand, and of biological processes, on the other, might suggest that 'space itself is alive' does not sound convincing to me, to say the least (although we all know that space is indeed alive!...). One reason is that two specifications with very different structure (syntax) may well share the same semantics/behavior, indicating that the formal structure of a specification is not so important.
One should rather concentrate on the semantics of the specification; and the semantics can be given in several ways, including by a mapping from syntax to ... causal sets -- the structure that I discuss in my essay. It would be interesting to see whether relatively simple process-algebraic specifications could yield causal sets exhibiting the variety of emergent properties that I observe in causets grown by other models of computation.
Hi Tommaso,
In my research the nature of randomness is secondary, so at the lowest level there might be (or not) 'true' randomness, and it would be pretty much irrelevant (from the algorithmic perspective). A consequence of assuming my algorithmic hypothesis is, however, that randomness is the result of deterministic processes and is therefore deterministic itself (which I think is compatible with your model). If something looks random, it does so only in appearance. What I further say is that even if randomness once had a place in the world, it may no longer do. Whether you start a computer with a set of random programs from randomness or from emptiness, there is no difference in the long term. By contrast, if the universe somehow 'injects' randomness at some scale, influencing the world (and our physical reality), empirical datasets should diverge from the algorithmic distribution, which is something we have been measuring (to compare the two, one also has to build the algorithmic distribution, hence to simulate a purely algorithmic world).
In my algorithmic world randomness is, as you say, also the fabric of information, by way of symmetry breaking. You can start either from nothing or from true randomness, but you will end up with an organized, structured world with a very specific distribution (Levin's universal distribution). What I do is measure how far or how close data in the real world is to this purely algorithmic distribution.
Sincerely.
..."One reason is that two specifications with very different structure (syntax) may well share the same semantics/behavior, indicating that the formal structure of a specification is not so important."
Right. But this, I think, is already taken care of by Leithaus (Greg Meredith), with Snyder, in this paper: Knots as processes: a new kind of invariant.
This, to my understanding, seems somehow related to this paper by Louis Kauffman, who was among those who started topological quantum computing (along with Freedman, Kitaev and Larsen), which is just a form of computation with braids.
In fact, like many, you confuse computing a little with reality, but that is all right. Very good knowledge of mathematical computing; we thank you for that. Indeed, computing is not a familiar matter for everyone. After all, it is an application of physics.
An ultimate mathematical theory of physics, you say... I say: the physics first, the maths after. The algorithms invented by humans shall always be far from the ultimate universal algorithm... of course the Universe, this sphere, God if you prefer, does not play dice, to my knowledge.
An important point is this one: can you create mass, lives and consciousness? Evidently never; logic never says that.
But within your line of reasoning, your tools for a computation of evolution are interesting. The mass, however, must be analyzed rationally.
The rotations of the entanglement are proportional to the mass and its rules of evolution and complexification... quantum spheres (finite number, decreasing volume)... H, C, N, O... CH4, NH3, H2O, HCN, COOH... amino acids... DNA, RNA... evolution... lives... planets... stars... black holes... universal sphere.
For a concrete realism, the mass must be properly inserted into the algorithm and its series... everything can be calculated from an evolutive point of view... when the mass polarises light...
Regards
Steve
Hi Hector,
of course I also sympathize with the idea that no pure randomness is continuously injected into physical reality, and that everything that appears random is still the result of a deterministic process.
You write that by the algorithmic universe approach one ends up with 'an organized, structured world with a very specific distribution (Levin's universal distribution)'.
Levin's distribution m(x) provides the a priori probability of the binary string x, and is determined by the programs, of any length, whose computation on a prefix universal Turing machine terminates by outputting x (each weighted by its length). Thus the sum of m(x) over all x is determined by all the programs whose computation on a prefix universal Turing machine terminates (by outputting ANY x), and this is Chaitin's Omega! Nice! I imagine you knew this already, but I didn't!
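In symbols, for a prefix-free universal machine U, with |p| the length in bits of program p:
m(x) = \sum_{p \,:\, U(p) = x} 2^{-|p|}, \qquad \sum_x m(x) = \sum_{p \,:\, U(p)\ \mathrm{halts}} 2^{-|p|} = \Omega.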
So, are you saying that you have been able to measure the extent to which distributions of data sets (binary strings) from our real world vs. from an artificial, algorithmic world approximate the m(x) distribution (which, I read, is 'lower semi-computable', that is, knowable only approximately)? This sounds very challenging. But I am curious about the type of artificial universe that you have experimented with, and the type of data that you analyzed in it.
For example, if I gave you a huge causal set, intended as an instance of discrete spacetime, where would I look for a data set to be tested against Levin's distribution?
By the way, do these distributions refer to an internal or an external view of the universe (Tegmark's frog vs. bird view)? The problem being that in the real universe we collect data as frogs, while with a simulated universe it is much easier to act as birds.
A final question for you. By introducing the a priori probability of string x, one shifts the focus from the space S of strings to which x belongs to the space P of programs that can compute x. But then, why not assume that the elements of space P -- which are themselves strings -- also enjoy an a priori probability? (This is not reflected in the definition of m(x).) How, or why, should one avoid an infinite regression?
And Solomonoff will say... wow, AIXI is possible... but a string is divisible and a sphere is not. Hihihi, I love this platform.
Of course a string in computing is something different. But, but... confusions, hihihi.
Now I insist: for a correct universal Turing machine, the real fractal of the main central sphere, with its pure number, is essential... if not, it is just wind.
Second, a machine will never be intelligent, because we shall never reproduce the first code, at the Planck scale if you prefer.
The system must really be quantized with rotating spheres, and furthermore it must have an evolutive spherical topology.
The algorithms are real, universal or human... and the universal probability does the rest, no?
If I understand correctly, you invent codes, algorithms and series for certain applications.
The Universe is totally different. I thus understand why some people invent time machines and multiverses, or other ironic sciences.
We thus understand why it is important to compute universally. These codes invented by humans are even intriguing.
The conjectures must respect the sphere and its distribution. If not, it is just a superimposition of logical series whose limits can be analyzed and sorted. But in a series of polarity of evolution, if the numbers are respected... that is relevant.
The Universe is a sphere and so are our particles... our computing must respect that; if not, we shall never find where we are inside this beautiful universal sphere at this moment. Evolution is a specific series with its intrinsic codes... computing is a human invention of a very young age... and it is wonderful, but artificial intelligence will never be possible, only by our hands and brains. The automatic series of intelligence and encoding is not possible for a computer; even if Blue Gene and the Cray system or Jaguar fuse and have children in 10000 years, they will always be machines, Terminator too, no... hihihi. Arrogance and humility, universality and computing... hihihi, they are crazy, these scientists.
Cheers... vanity of vanities, all is vanity...
Steve
Tommaso,
Yes, m(x) and Chaitin's Omega are deeply connected; in fact, as you noticed, the former contains the latter. While Chaitin's Omega is the halting probability, m(s) is the output probability (over halting programs running on a prefix-free universal Turing machine), so knowing m(s) gives you Omega.
Yes, we (jointly with Jean-Paul Delahaye) have been able to measure (to some limited extent) the statistical correlations between several real-world datasets (binary strings) and an empirical, purely algorithmically generated m(s).
As you say, m(s) is 'lower semi-computable', which means one can (with a lot of computational resources) approximate it but never entirely compute it (because of the halting problem). But halting runtimes are known for (relatively) large spaces of abstract machines -- for example, for small Turing machines, thanks to the busy beaver game. So one of the routes we took was to calculate an experimental m(s) using the known busy beaver function values.
An important side result is that if you have a way (even a limited one) to calculate m(s), then you have a way to calculate C(s), the Kolmogorov-Chaitin complexity of the string s, by way of the Chaitin-Levin coding theorem! And that is where the applicability of our research goes beyond the statistical comparison between real-world datasets and artificial/digital datasets (made to test the computational hypothesis).
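In symbols, the coding theorem says that, up to an additive constant that depends only on the chosen universal machine,
C(s) = -\log_2 m(s) + O(1),
so any (even partial) evaluation of m(s) immediately yields an evaluation of C(s).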
The calculation we performed yielded enough data to produce an empirical m(s) for relatively short strings (from small Turing machines of up to 4 states), for which we could then evaluate C(s) -- something never done before, given its difficulty: the usual way to approximate C(s) is via compression algorithms, but for short strings this fails for obvious reasons (compression algorithms have a hard time finding patterns in strings that are too short, so the values they return are too unstable).
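To convey just the flavour of the procedure -- this is not our actual setup, which enumerates small Turing machines under the known busy beaver runtime bounds; here all 256 elementary cellular automata, run on tiny seeds, serve as an arbitrary stand-in for a space of small programs -- a toy empirical output distribution and the corresponding coding-theorem complexity estimate look like this:

# Sketch only: an empirical output-frequency distribution over a small space
# of 'programs', and the coding-theorem-style estimate -log2(frequency).
# Stand-in program space: the 256 elementary CA rules, each run on all 3-bit
# seeds embedded in a 9-cell row; the 'output' is the row after 5 steps.
from collections import Counter
from itertools import product
from math import log2

WIDTH, STEPS = 9, 5

def step(row, rule):
    return [(rule >> (row[(i - 1) % WIDTH] * 4
                      + row[i] * 2
                      + row[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)]

counts = Counter()
for rule in range(256):
    for seed in product([0, 1], repeat=3):
        row = [0] * WIDTH
        row[3:6] = seed                      # embed the 3-bit seed
        for _ in range(STEPS):
            row = step(row, rule)
        counts["".join(map(str, row))] += 1

total = sum(counts.values())
print("most frequent outputs and their -log2(frequency):")
for s, c in counts.most_common(5):
    print(f"  {s}   freq = {c/total:.4f}   est. complexity = {-log2(c/total):.2f} bits")

In a run like this the trivial strings (e.g. all zeros) should dominate the top of the ranking, which is exactly the simplicity bias that m(s) and the coding theorem predict; our actual experiments perform the analogous tally over exhaustively enumerated Turing machines.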
You ask where you would look for a dataset to be tested against Levin's distribution. The answer is here: http://arxiv.org/abs/1101.4795, and we can of course provide the full tables.
You also ask whether this is an internal or an external view of the universe. I have trouble placing myself in this dichotomy at the moment; I should think further about it. I think, however, that our view may be a bird's view (even at an upper level of the physical). In fact, m(s) is also called the universal distribution because it assumes nothing but the use of a universal Turing machine, so it dominates (as proven by Levin himself) any other semi-computable distribution (it has also been called a 'miraculous' distribution; see http://www.springerlink.com/index/867P162741726288.pdf).
So if one were, for example, to create candidate universes, one should probably first check whether a given universe is capable of approaching the empirically calculated m(s), which would be an indication that that universe can produce enough complexity, both in terms of structured complexity and in terms of apparent randomness distributed as m(s) prescribes; and then one should check whether the universe fulfils all the other physical properties (such as Lorentz invariance).
You also ask another interesting question, about assuming a prior for the distribution of strings (I take these to be strings acting as initial conditions for the programs). The beauty of m(s) is that it basically does not matter where (or what) you start from; you end up with the same distribution, because what matters is the computational filter, i.e. the random distribution of programs. I think only the distribution of programs would have an impact on m(s) (our experiments also confirm this). An interesting question is indeed whether one can impose restrictions on the distribution of programs, for example restrictions imposed by physical laws that one might model with game theory (something close to Kevin Kelly's critique of Bayesian approaches to learning theory, and connected to the old problem of induction).
But in fact, as a consequence of our research, m(s) is no longer a prior: our experimental m(s) (with the caveat that it has a limited scope) is no longer Bayesian but an empirical (hence posterior) distribution, which, according to Kevin Kelly, would give our approach, and the algorithmic complexity approach in general, greater legitimacy as a theory. One can see complexity emerging from our experimental distributions in just the way m(s) and C(s) were believed to operate.
Sincerely.
Dear Tommaso
Indeed, it is very interesting. I also agree with case (A). On the other hand, I believe that even the TOE must have parameters that have to be determined by experiment. Anyway, I invite you to read my essay, which is in essence a philosophical approach in which I cite a unified theory based on the tenet that space is a material continuum.
Kind Regards
Israel