Oh well,

In the above commentary, I meant to say that Florin spoke about explaining the Standard Model of Particle Physics - stating there was most likely a reason it exists (even if we don't know it). I agree that it has to come from somewhere, and that 'cause' may be connected with certain orderly relations of Math. I think whatever lets us move forward is better than being stuck. Sometimes observation lags theory, and other times it's the reverse. Make progress where you can.

Regards,

Jonathan


Dear Owen,

To reply to your kind response to my original review, let me focus on two excerpts from your paper:

The first:

"Simply declaring that the universe must be a computer because everything in it can be represented as bits completely sidesteps the aspect of that notion that would actually be interesting and insightful, i.e. which portions of the universe are code and which are data."

Alan Turing showed there is no such distinction: you can write data as code and code as data, as long as you can always build a universal Turing machine. Even though your paper may contain some interesting ideas that I haven't followed in every detail, it seems to me that your main point rests on a separation which, from my point of view, is flawed.

Second excerpt:

"Traditional CA approaches to digital physics, such as that offered by Stephen Wolfram [2], still fundamentally assume a single clock governing the interactions of each cell -- the nth state of a given cell may depend on the n-1th state of its neighbors, but all cells march forward from the n-1th state to the nth state in synchrony."

Wrong: Wolfram's proposal for a digital version of the universe is not based on a CA but on a trivalent network, from which he has even been able to derive General Relativity -- hence no single clock governing the whole, yet still digital. You seem to ascribe the CA model to Wolfram, as everybody else does; that is Fredkin's main proposal, though. Hence the reason I say you should first read carefully the ideas you are willing to criticize, before being inspired to write an essay with a title so close to the work of somebody else that you didn't really read.

I might be guilty of not yet fully understanding some parts of your proposal, for lack of time, but it is not I who am writing an essay about your essay; it is you who have written an essay whose proposal is close in spirit to others (like the trivalent network one) and, unlike what you said, has nothing to do with a CA proposal.

Perhaps you could tell us what the differences and advantages of your graph-theoretic approach are compared to Wolfram's graph-theoretic approach, instead of saying that all digital physics is about CAs.

Thanks.

Sincerely,

Evariste.

Hi Evariste,

Thanks for a thoughtful and substantive reply. I have a better understanding now of why you made the criticisms you did in your original posts, and agree that some of them are warranted.

Although my paper cites Wolfram (once) and is named in homage to his book, I don't want anyone to think (a) that I regard his work as the be-all and end-all of digital physics, or (b) that digital physics is only about CAs.

I am the first to admit that my knowledge of digital physics is less thorough than it should be; your point is well taken in that regard. And I am not completely ignorant as to the significance of Turing's work; I agree that, for instance, it is guaranteed that if a computational ToE is ever discovered, it will be some kind of universal Turing machine.

But just because all UTMs are isomorphic doesn't mean that different specific UTMs, different constructions, can't enjoy more or less of the explanatory and/or predictive power that we would expect of a ToE. Rather than falling back on Church-Turing as an excuse to content itself solely with "existence proofs," I would hope that the digital physics community would instead view it as an exhortation to start slinging code, cooking up possible constructions the way that real-world software reverse-engineering efforts do (like the one that produced the Samba fileserver, for instance).

Perhaps that's exactly what the digital physics community is doing, and I'm just not aware of it because I'm too far removed from those academic circles; certainly Causal Dynamical Triangulations seem like a step in the right direction. At least those guys were willing to write some code, run it, and see whether its behavior at all looked like reality.

What I share with Wolfram is a conviction that CAs are the most profitable region of the UTM landscape in which to search for a computational ToE. My paper is an attempt to offer a specific construction that might fit the bill. I think its primary value is that it combines behavioral aspects of two different species of computation: CAs and fractals. Its graph-theoretic character is not a property that I find valuable in its own right; it's valuable because a graph was the simplest possible substrate within which I could contain a computational entity that exhibits both fractal-like behavior and CA-like behavior.

The two papers in this contest whose theses seem most similar to mine (not that I've read them all -- still working on that!) are George Schoenfelder's and Robert Oldershaw's. Schoenfelder proposes a graph of nodes that operate in two modes; so do I. Oldershaw proposes an infinite hierarchy of "scales" within which a fixed set of patterns can manifest themselves; so do I. Neither idea is new; but the marriage of them is, at least as far as I know.

Traditional fractals like the Mandelbrot set and the Menger sponge (indeed, anything listed at http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension) consume a subset of a traditional n-dimensional space; part of the reason a Sierpinski triangle's Hausdorff dimension is less than 2 is that a complete representation of it can fit inside a 2-dimensional plane. The same reasoning ensures that the Menger sponge's Hausdorff dimension is less than 3. No fractal on that Wikipedia page has a Hausdorff dimension greater than 3.
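(For strictly self-similar fractals there is even a one-line sanity check of those numbers -- this is standard textbook material, not anything from my paper: a shape made of N copies of itself, each scaled down by a factor s, has similarity dimension

    D = log(N) / log(s)

so the Sierpinski triangle, being 3 copies of itself at half scale, gets log(3)/log(2) ≈ 1.585 < 2, and the Menger sponge, being 20 copies at one-third scale, gets log(20)/log(3) ≈ 2.727 < 3.)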

I wonder if part of the reason this is the case is that traditional fractals, because they work by "claiming" points within a larger predefined space, can only _consume_ space. The fractal I propose in my paper (the "Object" class) simultaneously consumes _and_generates_ space (by "claiming" points/nodes and then also having a mechanism for creating new points/nodes). This means, if we were to find a suitable generalization of Hausdorff dimension that can take graph-theoretic fractals into account, then the "Object" class's Hausdorff dimension could be something greater than 3.

So, the ability to generate space (or, to use the cosmological vernacular, "eternally inflate") is the first major feature/benefit of the "Object" class. The second is its behavior of "coalescence." As I told Jonathan J. Dickau earlier in this discussion thread, coalescence is cool because it provides a deterministic mechanism by which George Ellis's notion of bidirectional causality between hierarchical layers could be realized. My guess is that planet formation, galaxy formation, atom formation, protein formation, molecule formation, and even tribe formation are all instances of coalescence at different ranks.

In the same post to Jonathan I said: "I think the next logical step for me is to simply write the Object class described in the paper, run it, and see what happens. I know it won't get very far before it crashes (due to OS limits on thread creation and stack space), but I'm sure it would still provide some interesting material for further study. In the back of my mind I've always conceived of my paper as a "functional specification" for software that could actually be built. Time to build it!" I have spent the last two weeks working on that; I hope to have a runnable version of the Object class in the next month or so.

Thanks again for your willingness to spend time on this.

Regards,

Owen


Dear Owen,

I read your essay, and I am stuck at one point. We know that, in general, topology has a very big influence on the possible solutions of differential equations. My first question is: on what grounds are you choosing your computational topology? Suppose, on the other hand, that you do not arbitrarily pick a topology and you consider all possible kinds of topologies. Then you end up having the same problems as the LQG class of models. How do you obtain the continuous event manifold in your model?

Regards,

Florin Moldoveanu

Hi Florin,

I don't have any actual physics training or background, so I don't even understand the question!

Regretfully,

Owen


Hi Owen,

It is not as hard as it sounds.

Let's take an empty cube. Make a sound inside the cube. The sound is made of pressure waves, and the waves bounce off the walls of the cube. When the side of the cube is a multiple of half the wavelength of the sound waves, you get constructive interference; if not, you get destructive interference. The natural modes of oscillation of the sound waves depend on the shape, size, and nature of the resonating cavity. This is why a violin sounds different from a clarinet, for example. Now, sound waves are described by one kind of differential equation. There are other kinds of evolution equations, and in general the type of "resonating cavity," or equivalently the type of topology, constrains the type of solutions, or even the type of equations, which are possible.
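(To make that concrete in the simplest idealized case -- a rigid-walled cube of side L with sound speed c, which is a textbook example rather than anything specific to your essay -- the allowed standing-wave frequencies are

    f(n_x, n_y, n_z) = (c / 2L) * sqrt(n_x^2 + n_y^2 + n_z^2),   with n_x, n_y, n_z = 0, 1, 2, ... (not all zero).

Change the shape or the boundary conditions and you change this whole spectrum; that is all I mean by the cavity constraining the solutions.)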

Connecting each node only with the nearest neighbor is one kind of topology. Suppose I connect each point not only with the nearest neighbor, but also with three more remote nodes. Because of this shortcut, the overall system behavior can be qualitatively different.
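If you want to see this shortcut effect in a few lines of code rather than in words, here is a toy comparison (my own illustration, using the standard networkx package and arbitrary node counts, nothing taken from your essay):

    # Compare a pure nearest-neighbor ring lattice with the same lattice
    # plus a handful of random "shortcut" edges (a Watts-Strogatz graph).
    import networkx as nx

    n, k = 200, 4  # 200 nodes, each initially wired to its 4 nearest neighbors

    ring = nx.watts_strogatz_graph(n, k, p=0.0)                  # no rewiring: plain ring lattice
    shortcuts = nx.connected_watts_strogatz_graph(n, k, p=0.1)   # ~10% of edges rewired as shortcuts

    # Average number of hops between two random nodes: roughly n/(2k) for the
    # ring, but it collapses to a handful of hops once the shortcuts exist.
    print(nx.average_shortest_path_length(ring))
    print(nx.average_shortest_path_length(shortcuts))

Same nodes, almost the same number of edges, yet the global behavior (here, how far apart two typical nodes are) is qualitatively different.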

So my first question was: why are you connecting the nodes the way you do in the picture at the bottom of page 2? Suppose I print your picture and roll the paper into a cylinder, connecting the right-edge nodes with the left-edge nodes. This is another topology. Can you prove that the topology of the picture on page 2 is the most general one, in the sense that all other topologies generate the same kind of evolution behavior? If not, you should consider all possible ways to connect your nodes. And here is my question no. 2. When you consider all kinds of topologies, you are effectively doing a loop-quantum-gravity type of approach, where macroscopic space-time is only an average over the microscopic "Planck foam". But only in very special cases have people been able to prove that you obtain the right macroscopic behavior.

Picking the right topology is critical. When you have a hammer, everything looks like a nail. Picking a topology is like picking a hammer, or a garden hose, or a forklift. The conclusions you derive in the end may be only artifacts of the initial topology, and this is the reason I got stuck reading your essay: I cannot assess the rest of the paper without a clear understanding of the assumptions.

Regards,

Florin

Hi Florin,

That was very helpful, thank you. I think I understand the question a lot better now.

Don't get too hung up on the specific arrangements of edges and vertices in the diagrams... think of those as just sample graph topologies, meant only to illustrate how the recruitment process works.

Computational ontology (C.O.) allows for an axiomatization of the graph topology with precisely one initial condition, and precisely two rules for how that topology can change over time. (There is a third, very special rule, which I regard as much more speculative than the first two -- it's described under the "Baryon Asymmetry" bullet in the "Cosmological Evidence" section at the end.)

The initial condition is that there isn't really a graph -- or, more accurately, there is a graph completely devoid of edges: just a single node/vertex (a single instance of the Object class). Everything in the graph grows and develops out of that one initial node/vertex/Object.

The two rules for topological change are these (a toy sketch of both follows the list):

(1) A new node/vertex/Object can be added to the graph. In this case, the new node/vertex/Object has precisely one edge connecting to anything -- and that "anything" to which the edge is connected is always its "parent," i.e. the node/vertex/Object that created it. (Note that there are two possible mechanisms that can result in the creation of a new node/vertex/Object -- copying and coalescence -- but in either case, the newly created node/vertex/Object is initially assigned only a single edge leading back to its parent.)

(2) A node/vertex/Object that has just been recruited by another node/vertex/Object has a mechanism whereby it can introduce a new edge between itself and another node/vertex/Object, which may or may not be the one that just recruited it. This is slightly more difficult to summarize, but is explained in detail in a passage beginning at the very last paragraph of page 4 and ending with the next section ("These Threads Are Made for Walkin'").
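In case it helps, here is a deliberately dumb little toy -- emphatically not the Object class from the paper (no recruitment logic, no coalescence, and the random choices below are mine, purely for illustration) -- just to make the initial condition and the two rules concrete:

    import random

    class Node:
        def __init__(self, parent=None):
            self.edges = set()
            if parent is not None:          # Rule 1: a brand-new node gets exactly
                self.edges.add(parent)      # one edge, leading back to its parent
                parent.edges.add(self)

    nodes = [Node()]                        # initial condition: one node, zero edges

    for _ in range(20):
        if len(nodes) < 2 or random.random() < 0.7:
            nodes.append(Node(parent=random.choice(nodes)))    # Rule 1: grow a new node
        else:
            a, b = random.sample(nodes, 2)                     # Rule 2: one extra edge
            a.edges.add(b)                                     # between two existing nodes
            b.edges.add(a)

    print(len(nodes), "nodes,", sum(len(n.edges) for n in nodes) // 2, "edges")

The real Object class decides *which* node grows and *where* the extra edge goes via the recruitment mechanism described in the paper; the toy above only shows that the topology is built entirely out of those two moves.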

Please let me know if, despite your best efforts, I have still misunderstood your question. And thanks for your patience in dealing with my gaps in education.

- Owen


Hi Owen,

Thank you for your answers. You did understand my questions, but I am not sure I fully agree with your answers. If I understand correctly, your topology is evolving in time. In this case there are two things to discuss. First, in nature, topology tends not to change, and a lot of modern physics is about so-called topological invariants. (This is the part where I do not agree.)

But there is an exception to every rule. You may want to read the works of Sorkin and Bombelli. (Check out http://en.wikipedia.org/wiki/Causal_sets .) Their approach works to some degree and resonates with your ideas. It would be interesting to see a direct comparison between your essay and their work. In particular, it would be interesting to see whether your essay contains usable insights that could help causal set theory overcome its obstacles. Since I am not a causal set expert myself, I really cannot comment on it in a useful way, but you may want to try to contact people working in this area and solicit their feedback. As a piece of advice from my experience: contacting people cold in physics is almost never successful; you need to find some common acquaintance first to get introduced, or your emails will remain unanswered.

Florin


Hi Florin,

I'm posting a copy of this message in both your and my forum.

I have read and enjoyed your most recent posts and know I owe you a response, but recent personal events have conspired to keep me away from physics for at least another few days. I will reply as soon as I can.

Again, thanks for your time and interest.

- Owen

Hi Florin,

I wrote: "That is, just as yours asks 'What behaviors exist in reality that cannot be accounted for using mathematics?' perhaps this heuristic could ask 'What behaviors cannot be accounted for in mathematics that _can_ be accounted for using computation?'

You replied: "I think the answer to this is more or less settled mathematically by the Church-Turing thesis. While not really proved, no acceptable counter examples were found so far. So unless we either have a proof for this or a valid counterexample, there is nothing NEW to be said here."

First, I need to admit to a poor choice of words. "Accounted for" makes my question sound very stark and either-or, like it pertains to whether there are differences between mathematics' and computation's _technical_ability_ to handle certain behaviors _at_all_. Of course Church-Turing establishes there are no such differences. I should have phrased the question more along these (admittedly more qualitative) lines: "Are there any behaviors that can be symbolically modeled -- naturally/gracefully manipulated -- using computation, that can only be crudely or partially treated by mathematics?" This is more the perspective on conditional branching, if/then/else statements, that I was trying to highlight.

I readily concede that conditional branching _can_ be effectively expressed in mathematics, but in so doing it loses some of the fluidity and ease of access from human thought processes that it would otherwise enjoy if expressed in computational terms (i.e. source code).
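Here is about the smallest example I can think of (mine, picked purely to show the contrast, not anything from the paper): the same rule written once as code and once as a piecewise definition.

    # the "computational" rendering of a rule
    def f(x):
        if x < 0:
            return -x      # reflect negative inputs
        else:
            return x * x   # square everything else

    # the "mathematical" rendering of the same rule:
    #   f(x) = -x    if x < 0
    #   f(x) = x^2   if x >= 0

Both formulations are equivalent, of course; my claim is only about which one sits more naturally in the hand.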

So, that clarification might ring true for you, or it might just sound like me making a mountain out of a molehill; but in any case, the time has come to say some more general things about the Church-Turing thesis.

As you point out, the Church-Turing thesis is a thesis, not a theorem. It has yet to be proved, despite its near-universal acceptance. You say "there is nothing new to be said" about it unless one either provides a proof or a valid counterexample. Just for sake of argument, let's imagine that someone does come along and provide a proof, and after a few years, the peer-review community agrees: it's now the Church-Turing Theorem.

This might be a watershed event for computer science, but I predict it would be of surprisingly little consequence in the digital physics community. This is because, as formally stated, the Church-Turing thesis concerns itself solely with algorithms that terminate. The eternal expansion called for by the Lambda-CDM model of cosmology would nudge one -- if one were already predisposed to thinking of the universe in computational terms -- in the direction of a nonterminating procedure, hence one that owes nothing (or, at least, owes a frankly unknown amount) to Church-Turing. And remember, that's even if Church-Turing is promoted to Theorem; in its current status as a mere thesis, the fealty owed is weaker still.

The other point I'd like to make is best conveyed through a metaphor (that I promise will be less strange than the "computation-as-beast-of-burden" one!). Imagine that the science of biology had an entirely different history from its actual one -- all the same observations and discoveries, but made drastically out of order compared to its real history. In this parallel biologisphere, the existence of DNA was among the very first discoveries made, before such basics as symbiosis, photosynthesis, or even evolution itself. This alternate science of biology is ruled by the Crick-Watson Thesis, which states that all organisms, by virtue of using DNA to encode the totality of their features, are just instances of Universal Watson Machines. Participants in this sort of biology would have trouble understanding why someone like Darwin would want to spend years in godforsaken hellholes studying birds and turtles up close, because they would already "know" -- already have "proved" -- thanks to the Crick-Watson Thesis, that every organism is isomorphic to every other organism. In a sense, there is only one organism -- the UWM -- and what we would normally think of as "species" are simply specific constructions of UWM, perhaps different from other UWMs in some superficial or toylike way, but at bottom, in any deep sense, exactly the same as every other UWM.

Do you see where I am heading with this analogy? To me there is an entire vibrant "biosphere" of computational entities, just as real as any biological species you care to name -- yet those higher-level computational constructions are not being explored with the optimal combination of rigor and enthusiasm, because everybody keeps tripping over the Church-Turing thesis. The Church-Turing thesis should be viewed as a starting point for higher-level exploration of ever more refined forms, not just an excuse to avoid asking deeper questions. This, in a sense, is the _real_ halting problem: the presumed truth of C-T causing computer scientists themselves to halt.

Back in real-world biology, the recent progress made in epigenetics seems to bear this analogy out to a certain extent -- it turned out that even the Crick-Watson Thesis, while correct, was incomplete. DNA-level isomorphism can conceal higher-level heterogeneity that has heretofore remained hidden. Epigenetics is slowly opening our eyes to an entirely new world of higher-level structural refinement in biology. The same thing is urgently needed in computer science; the closest equivalent to such study is "design patterns," but that has problems of its own (mainly a lack of scientific rigor and symbolic manipulability).

If possible computational architectures that could produce an eternally inflating discrete fractal cosmos are to be constructed and studied, the Church-Turing thesis is simply not the most relevant or useful tool for the job. That is not meant to be construed as a full-bore attack against C-T or a statement that it is completely valueless or somehow "wrong"; just that the insights it provides do not appreciably benefit this particular problem domain.

Thanks for your time,

Owen


Hi Owen,

Sorry for the delay, I was really busy at work and I had to put my FQXi activity on hold for a bit.

""Are there any behaviors that can be symbolically modeled -- naturally/gracefully manipulated -- using computation, that can only be crudely or partially treated by mathematics?" This is more the perspective on conditional branching, if/then/else statements, that I was trying to highlight."

I think this goes back to basic logic. In computers the basic electronic gate is NAND, and this is related to the existence of the so-called "stroke" or Sheffer function. All basic logic can be obtained from AND, OR, and NOT, and all of those can in turn be obtained from the stroke function alone. Now there are two kinds of logic. One is propositional logic, where all tautologies are provable and all provable statements are tautologies. The next level is achieved by introducing the FOR-ALL and EXISTS quantifiers (predicate logic). This is a richer logic because one can introduce models, and results have to be valid across all models. What one can do in computers is arithmetic and logic only up to aleph-zero. Transfinite induction, continuum models, or models where the continuum hypothesis is false are much richer domains, where computability and computers have very little to offer.
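If you prefer to see the functional completeness of the stroke function as code rather than as a claim, here is a quick check (my own few lines, nothing deeper than a truth table):

    # NOT, AND, and OR built out of nothing but NAND (the Sheffer stroke).
    def nand(a, b):
        return not (a and b)

    def not_(a):        # NOT a    ==  a NAND a
        return nand(a, a)

    def and_(a, b):     # a AND b  ==  NOT (a NAND b)
        return nand(nand(a, b), nand(a, b))

    def or_(a, b):      # a OR b   ==  (NOT a) NAND (NOT b)
        return nand(nand(a, a), nand(b, b))

    # exhaustive check over all truth-value combinations
    for a in (False, True):
        assert not_(a) == (not a)
        for b in (False, True):
            assert and_(a, b) == (a and b)
            assert or_(a, b) == (a or b)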

"This might be a watershed event for computer science, but I predict it would be of surprisingly little consequence in the digital physics community."

I would agree with that. The same goes for Gödel's incompleteness theorem, which has very little impact on current mathematical research.

"The other point I'd like to make is best conveyed through a metaphor [...]

Do you see where I am heading with this analogy?"

Sorry, but I am lost here and I do not really understand the analogy. I do not really know what epigenetics is, and I will not pretend I am an expert by googling it. But I understand the point of Darwin's evolution. It is basically a search, driven by the environment, through a vast landscape of possibilities for the best fit. The problem with any search is that you can easily get stuck in a local minimum. I know about simulated annealing, quasi-Newton, and conjugate-gradient search methods, as part of my thesis was about them.

I am not sure what "design patterns" are; maybe you can explain them to me and why they are relevant. To me, they look like cookie-cutter solutions to common problems, similar to standard problem-solving techniques for, say, differential equations, but I may be wrong.

Regards,

Florin
