Dear Efthimios,

Yes, classical and relativistic mechanics are both deterministic, and that is compatible with my algorithmic worldview. On the other hand, certain phenomena can be modeled by assuming that matter and space exist as a continuum, meaning that matter is continuously distributed over an entire region of space. By definition, a continuum is a body that can be continually subdivided into infinitesimal elements. However, matter is composed of molecules and atoms, separated by empty space. If a model like general relativity is believed to describe the world at all scales, then one would also need to think of matter as a continuum, which is incompatible not only with my view but also with another large, and equally important, field of modern physics: quantum mechanics (the view that there are elementary particles and that they constitute all matter).

Modeling an object or a phenomenon as something doesn't mean it is that something. Even if models are highly accurate on length scales greater than atomic distances, they do not necessarily describe the universe at all scales or under all circumstances. This should remind us that models are not always full descriptions of reality, so we should not take them to be the most basic level of physical explanation.

You make a great point, fully compatible with my worldview: if the world is analog, then we would need to live in the best possible analog world. That is what I argue: the chances of finding patterns and structures in an analog world would be very low unless, as you suggest, one assumes that our world is the best possible among all possible worlds. Under the digital view, however, patterns and structures are basically an unavoidable consequence, so there is no need for such a strong assumption.

Sincerely.

  • [deleted]

:) it's cool.

Spherically yours (you see, still a word with sphere, lol)

Héctor:

Hello from another math student from Ciencias (UNAM). Much older than you, anyway; 45 now.

I'm in the contest also.

I read your essay and I liked it a lot, because I am also into computational complexity.

I have been away from academia for years, except for my participation in this year's and last year's FQXi contests.

I would like to know whether you are aware of any computational complexity research groups in Mexico.

I really find your essay quite good; let's wait and see how the voting goes.

Please read my essay and comment.

Hello Juan Enrique,

Nice to meet you. I know of the Centro de Ciencias de la Complejidad (C3) at UNAM, with which I'm also associated. Sure, I will read your essay with interest.

Thanks for your support. Regards.

  • [deleted]

Einstein's dice obey these classical rules: 1 ODD 1 EVEN = 2 ODD.

And 2 ODD 2 EVEN = 4 EVEN.

QM is determined by Einstein's dice, and you can have a model of the universe where everything is determined, at least in the computer world...

This is not OUR UNIVERSE; this is a universe where everything is binary, either zero or one.

Interesting, Joe. I should have a closer look at it. Regards.

  • [deleted]

Dear Dr. Zenil,

I have just read your paper, and thought you might like to know that my essay agrees with your assertion about 'operations that at the lowest scale are very simple'. My paper deals with physics in which I derive 'the Light' and 'Equivalence Identity'.

This raises the question of whether Wolfram's systematic computer search for simple rules with complicated consequences could ever 'accidentally discover' the two foundations revealed in my paper.

In case you haven't already, you may like to read the following article by Chaitin:

http://arxiv.org/PS_cache/math/pdf/0210/0210035v1.pdf

All the best,

Robert

    Dear Robert,

    Yes, I knew about Chaitin's paper; you do well to bring it up in this discussion, especially as it is connected to the content of my essay.

    Wolfram has recently written about his quest to find the universe's rule, which he also thinks should be simple. Here is the link: http://blog.wolfram.com/2007/09/11/my-hobby-hunting-for-our-universe/

    Sincerely.

    • [deleted]

    Dear Dr. Zenil,

    Chaitin's paper is also connected to my essay, viz. Section III, 'What do Working Scientists Think about Simplicity and Complexity?'

    Cheers,

    Robert

    • [deleted]

    The word of the day: Rocksphere. lol

    One per day hihi.

    Hector,

    Thanks for the interesting essay.

    Your example of the 158 characters of C that compress the first 2400 digits of pi seems to overstate the actual degree of algorithmic compression. The 158 characters of C do not produce the 2400 digits on their own; they must be combined with a C compiler, which itself has considerable information content. In other words, throwing the dice in the air would need to produce not only the C program itself, but also the compiler needed to interpret and execute the program. Correct?

    Regards,

    Tom

      Dear Thomas,

      That's a very good point. However, I don't overlook the fact that one has to add the size of the C compiler to the size of the program. When one compares computer programs, one has to do so on the basis of a common language. If the common language is C, as it is in this case, one can ignore the size of the compiler because it is the same for every program. In other words, because the additive constant is common to all programs, one can ignore it.

      The invariance theorem shows that it matters little whether you add the compiler length, or which computer language is used, because between any two computer languages L and L' there exists a constant c, depending only on the languages and not on the string, such that for all binary strings s:

      | K_L(s) - K_L'(s) | < c_L,L'

      Think of this as saying that there is always a translator of fixed length (another compiler between computer languages) which one can use to talk about program lengths without caring too much about additive constants and without any loss of generality.
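
      To spell out where the constant comes from (a standard argument; T below stands for a hypothetical fixed-length translator or compiler from L to L', not something specific to the essay): any program for s written in L can be prefixed with T to obtain a program for s in L', so

          K_L'(s) <= K_L(s) + |T_{L->L'}|

      and symmetrically with a translator in the other direction. Together these give the two-sided bound above, with c_L,L' essentially the length of the longer of the two translators.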

      Good question. Thanks.

      Robert,

      There is common agreement that algorithmic (program-size, a.k.a. Kolmogorov) complexity is the framework for talking about simplicity vs. complexity in science. It is based, as you may know, on the concept of the shortest possible description of an object.

      The idea is that if the shortest program producing a given string on a universal Turing machine is about as long as the string itself, then the string is said to be complex or random, while if the program is considerably shorter than the string, the string is said to be simple. In other words, if a string is compressible it is simple, and if it is not, it is random.
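
      As a concrete toy illustration (my own example, not one from the essay): the short C program below prints a 2000-character string, so that string is compressible and hence simple in the algorithmic sense; no comparably short program exists for a typical random string of the same length.

      #include <stdio.h>

      /* This whole program is far shorter than the 2000-character string it
         prints, so the string has low algorithmic complexity.  The shortest
         program for a typical random string of the same length would have to
         contain the string more or less literally. */
      int main(void) {
          for (int i = 0; i < 1000; i++)
              printf("01");                /* prints "0101...01", 2000 characters */
          printf("\n");
          return 0;
      }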

      Other, finer measures have been proposed based on this same concept of algorithmic complexity, such as Bennett's logical depth. According to this measure, the complexity of an object is given by the decompression time of the near-shortest programs producing the object. It has the virtue of singling out structure (organized complexity), distinguishing it from both the simple and the random, as opposed to the random complexity captured by the original algorithmic measure.

      These measures are, unfortunately, still largely underused, sometimes greatly overlooked or even misunderstood. I am quite surprised, for example, that only a handful of participants in this contest have even mentioned them in addressing the contest question, perhaps because they are relatively new theories. I'm glad to be a participant defending his view with these state-of-the-art tools.

      The main problem is that these measures are not computable, meaning that there is no algorithm that, given a string, returns either complexity value (because of the halting problem, explained in my essay). There are, however, attempts to build tools based on these concepts, and this has been part of my own research program. If you are interested, you can have a look at my recent list of papers on arXiv: http://arxiv.org/find/all/1/all:+zenil/0/1/0/all/0/1
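
      As a rough sketch of how such tools work in practice (assuming the standard zlib compression library; compressed length is only a computable upper-bound proxy, since K itself is uncomputable), the following C snippet shows that a periodic string compresses far more than a typical random-looking string of the same length:

      #include <stdio.h>
      #include <stdlib.h>
      #include <zlib.h>   /* standard zlib; link with -lz */

      /* Compare how well a regular string and a pseudo-random string of the
         same length compress.  The compressed size is used here as a computable
         stand-in for algorithmic complexity, not the uncomputable quantity itself. */
      int main(void) {
          unsigned char periodic[4096], noisy[4096], out[8192];
          uLongf len1 = sizeof out, len2 = sizeof out;

          for (int i = 0; i < 4096; i++) {
              periodic[i] = (i % 2) ? '1' : '0';            /* "0101...": regular   */
              noisy[i]    = (unsigned char)(rand() & 0xFF); /* random-looking noise */
          }

          compress(out, &len1, periodic, sizeof periodic);
          compress(out, &len2, noisy, sizeof noisy);

          printf("periodic: 4096 bytes -> %lu compressed\n", (unsigned long)len1);
          printf("noisy:    4096 bytes -> %lu compressed\n", (unsigned long)len2);
          return 0;
      }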

      Sincerely.

      • [deleted]

      Dear Dr. Hector Zenil,

      I find this essay open to challenge on many points. I will start slowly and see if there is any interest. From page one:

      "Whether the universe began its existence as a single point, or whether its inception was a state of complete randomness, one can think of either the point or the state of randomness as quintessential states of perfect symmetry. Either no part had more or less information because there were no parts or all parts carried no information, like white noise on the screen of an untuned TV. In such a state one would be unable to send a signal, simply because it would be destroyed immediately. But thermal equilibrium in an expanding space was unstable, so asymmetries started to arise and some regions now appeared cooler than others. The universe quickly expanded and began to produce the first structures."

      This reads like the Book of Genesis. Without intelligence behind it, there is a lot of explaining to do. First question: does symmetry breaking of less or no information lead to increased information?

      Moving to the end:

      "An analog world means that one can divide space and/or time into an infinite number of pieces, and that matter and everything else may be capable of following any of these infinitely many paths and convoluted trajectories. ..."

      What is the empirical evidence to support the idea that space and/or time can be divided into pieces? I will leave it at two questions for now. Later I will ask about bits, strings of bits, information, and meaning.

      James

      I should also add that a way to avoid large constants and concerns about shallow comparisons is to stay close to the 'machine language'. Remember that the definition of algorithmic complexity of a string is given in terms of the length in bits of the shortest program that produces the string.

      One can often write subroutines to shortcut a computation. In Mathematica, for example, you can get any number of digits of Pi simply by executing N[Pi, n], with n the desired number of digits. Note, however, that the C program calculating the first 2400 digits of Pi does not use any special C function, only basic arithmetical operations. In any case, the main argument holds: it is easy to generate Pi by throwing bits into the air and interpreting them as instructions in a computer language, regardless of the language (or, if you prefer, the rules), but it is much harder to generate any number of digits of Pi by throwing the digits themselves into the air. This is because programs for Pi will always be short relative to its expansion.
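
      For concreteness, here is a cleaned-up sketch of the kind of spigot program behind such very short Pi generators (the well-known Rabinowitz-Wagon scheme, written out readably with the usual constants for 800 digits; the 158-character, 2400-digit program referred to above is presumably a denser variant of the same idea with larger constants). Note that it uses nothing beyond basic integer arithmetic:

      #include <stdio.h>

      /* Spigot for the decimal digits of Pi (Rabinowitz-Wagon scheme).  With
         these constants it prints the first 800 digits, four at a time; scaling
         the constants (roughly 3.5 array cells per digit) yields more digits.
         Not necessarily the exact program cited in the essay. */
      int main(void) {
          static int f[2801];                 /* mixed-radix working array          */
          int a = 10000;                      /* digits are emitted in blocks of 4  */
          int b, c = 2800, d = 0, e = 0, g;

          for (b = 0; b < c; b++)             /* initialise every cell to 2000      */
              f[b] = a / 5;

          while (c > 0) {
              d = 0;
              g = c * 2;
              for (b = c; ; ) {               /* push carries from right to left    */
                  d += f[b] * a;
                  f[b] = d % --g;             /* remainder stays in the cell        */
                  d /= g--;                   /* quotient is carried onwards        */
                  if (--b == 0) break;
                  d *= b;
              }
              c -= 14;                        /* 14 cells are consumed per block    */
              printf("%.4d", e + d / a);      /* print the next four digits         */
              e = d % a;                      /* keep remainder for the next block  */
          }
          printf("\n");
          return 0;
      }

      Written densely, as in the original one-liner, the same logic fits in well under two hundred characters, which is the whole point: the program is tiny compared with the digits it produces.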

        Your paper is interesting and presents some things to think about. David Tong, here, comes to the opposite conclusion. My sense is that the continuous and discrete aspects of reality are complementary. In my paper http://fqxi.org/community/forum/topic/810 I work out aspects of the algebraic structure of quantum bits in connection with black holes and AdS spacetimes.

        The universe as a set of digital processors has some compelling features. As I see it, these are structures associated with qubits on horizons or AdS boundaries. The exterior world has equivalent quantum information content, but it is the holographic projection from the boundary or horizon. To compare with DNA, it is analogous to the map that takes a single strand and parses it into complex folded polypeptides. We may then say this permits "errors," or mutations, or, in physics, broken symmetries.

        Of course, from an algorithmic perspective we have the halting problem. The universe as a grand computer or quantum computer executes various algorithms, which are quantum bit processors for interacting fields. All of these need to be computable and to have a finite data stack for a standard scattering experiment. So there must be some sort of selection process, a sort of quantum Darwinism, which selects for qubit processors that are computable. The Chaitin halting probability may then be some estimated value which serves as a screening process. Maybe if an algorithm is nonhalting and requires an unbounded amount of energy, it is renormalized out, or absorbed into a cutoff.

        Cheers LC