Philip

But that all just turns on what we are labelling as being light, ie is light really the physical state which eyes can, upon receipt, utilise. And the rest of the physical entity is the 'carrier'. But this distinction, as with many others one can identify when going into detail, is irrelevant to the point. Light is a representation of something else, ergo, it is information.

Indeed, there is only information about/knowledge of reality. We cannot 'directly access' it. But this is not the point either, because that is a statement of the obvious, and pointless. In the same way that everything provides us with information, so the concept of information again is pointless, as there is no differentiation from not-information. Incidentally, physically, the book is not information, it is ink & paper, or whatever.

Space and time are not emergent. Distance is an artefact of physically existent entities, it being a difference between them in terms of spatial position. Existence necessitates physical space, but that can only be assigned via entities. So distance can only involve entities which exist at the same time. And they can only exist in one physically existent state at a time. Time is the turnover rate of existent states (ie realities).

Paul

Paul, that is a very reasonable view of information and I can't disagree with it.

I am however confused by your last statement. You say space and time are not emergent but then you express the relational view of space and time which is usually identified with an emergent approach. I am probably misunderstanding what you mean.

  • [deleted]

Okay Phil,

Once again, a thought provoking essay! Of course I'm sympathetic to your position but I feel you open yourself up to a bit of critique so I'll take advantage. Where did you open yourself up? I quote:

"Should we base our theoretical foundation on basic material constructs such as particles and space-time or do these things emerge from the realm of pure information? Wheeler argued for the latter. But no amount of philosophizing can tell us if this is how the universe works. There is no point in asking where the information comes from, or where it is stored."

So, with this in mind, I'm going to actually propose to you a fundamental question!

For the last couple of weeks I've been reading FQXi essays, past and present, together with some of the fascinating articles provided. Julian Barbour's essay [JB] is, of course, relevant to this year's subject and although I'm not sympathetic to his position, it seems to me he does a rather excellent job of analyzing the nature of information. He divides information into three categories:

"In summary, we must distinguish three kinds of information: Shannon's information, the uncertainty as to which message will be selected from a source; factual information, the content of such a message; and intrinsic semantic information, which distinguishes a random message, or configuration, from one that carries meaning and to some extent explains its very genesis."

After establishing the different kinds of information, Mr. Barbour spends a great deal of time talking about probabilities in the context of quantum information theory; his position is that ITs, quantum configurations or fields, create qubits:

"The key point is this. If we are to speak about ontology, as opposed to efficient coding in communication channels, the most important symbol in (1) is not p for probability but i for the thing, or configuration, that has the probability pi. Probabilities are for outcomes: what you find when you open the box. Thus, even if quantum probabilities are an integral and essential part of the world [PBR] (reference mine), they are meaningless in themselves. They are at best secondary essentials, not primary essentials. They must always be probabilities for something."

Now this is the thing that struck me! When speaking of quantum mechanics, whether regarding the Standard Model or Cosmology, the emphasis is always on probabilities and statistical configurations; but what about the Hilbert Space?

Consider the winning FQXi essay by mathematician George Ellis [GE]:

"Causation: The nature of causation is highly contested territory, and I will take a pragmatic view:

Definition 1: Causal Effect - If making a change in a quantity X results in a reliable demonstrable change in a quantity Y in a given context, then X has a causal effect on Y.

Existence: Given this understanding of causation, it implies a view on ontology (existence) as follows: I assume that physical matter (comprised of electrons, protons, etc.) exists. Then the following criterion for existence makes sense:

Definition 2: Existence - If Y is a physical entity made up of ordinary matter, and X is some kind of entity that has a demonstrable causal effect on Y as per Definition 1, then we must acknowledge that X also exists (even if it is not made up of such matter)."

Now, it's generally agreed that Bell's work proves the mutually exclusive relationship between locality and counterfactual definiteness and, of course, Bell's work has been further elaborated upon since, for example by Mateus Araujo in [MA]. The EPR experiments of Aspect et al. and the Mach-Zehnder experiments of Herzog et al. unequivocally support counterfactual definiteness at the expense of locality. Furthermore, the experiments of Aspect et al. would seem to place quantum entanglement on a firm ontological footing. So then, while Hilbert Space is the space of all possible configurations, it's much more than a simple Universe of Discourse; the inseparability of Hilbert Space describes quantum entanglement! Therefore, based on experimental evidence and according to the definitions of George Ellis, the Hilbert Space must have an ontological referent!
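For concreteness, the standard quantitative form of Bell's constraint is the CHSH inequality: any local hidden-variable account of the correlations E measured at detector settings a, a', b, b' must satisfy

```latex
|S| = |E(a,b) - E(a,b') + E(a',b) + E(a',b')| \le 2,
```

whereas quantum mechanics violates this up to Tsirelson's bound of 2\sqrt{2}, which is what the Aspect-type experiments observe.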

So I propose to you a "Hogwartian" and certainly fundamental question:

What ontological entity does the Hilbert Space refer to?

References

[MA] Araujo, M., Bell inequalities refined to Boole inequalities in: Quantum Realism, Quantum Surrealism (http://arxiv.org/abs/1208.6283), accessed 26 April, 2013.

[GE] Ellis, G., An excellent exposition revealing the essence of organicism in: Recognising Top-Down Causality (http://fqxi.org/data/essay-contest-files/Ellis_FQXI_Essay_Ellis_2012.pdf), accessed 26 April, 2013.

[JB] Barbour, J., An informative discourse on information in: Bit from It (http://fqxi.org/data/essay-contest-files/Barbour_Wheeler.pdf), accessed 26 April, 2013.

[PBR] Pusey, M. F. et al., Are wavefunctions ontological entities? in: A boost for quantum reality (http://www.nature.com/news/a-boost-for-quantum-reality-1.10602), accessed 26 April, 2013.

I shall wait for a few more essays to present themselves before I rate your essay but, rest assured, I'll levy an upper range rating.

With regards,

Wes Hansen

    • [deleted]

    Wes, thanks for this interesting question. I don't think I can give a very short answer because the ontology I use is quite elaborate. It could be a whole essay in itself, which is why I don't try to cover such issues in this essay, but I can't really answer the question without telling you how my ontology works, so I will give you the outline.

    Before I do I should make the point that philosophical ontologies are in my opinion just scaffolding that we use to build a house, where the house is the real physical theory. When you are finished you can take the scaffolding away. Ellis and Barbour are offering you a different brand of scaffolding that does not work in the same way. It may work just as well and you may use it to build the same house. The only difference is you might build it faster or slower depending on how good their scaffolding is compared to mine. I am not going to criticize their scaffolding but I will try to sell you mine instead.

    My starting point is basically Pl*t*n*c. I have censored this word because some people find it very offensive. This includes people who don't like maths of course, but also many good physicists will complain about it. I find that the ones with a more mathematical background like myself are more comfortable with it. Anyone who isn't will need to find a different brand of scaffolding.

    The pl*t*n*c principle says that all mathematical systems exist in a realm outside of the physical universe. This realm is essentially unique. Even if you start with different axioms you end up in the same realm because any set of axioms that is consistent and sufficiently general gives the same set of mathematical systems. This even transcends questions of undecidability in my opinion.

    In this realm all possible universes with any self-consistent set of laws exist, and there are relationships such as equivalence, or overlap, between them. The number of complex systems is much larger than the number of simple systems. You can think of it as an outer level of m*lt*v*rs* if you like (sorry, more censorship required there).

    To find the place of our universe in this m*lt*v*rs* we must first of all understand the concept of universality. Universality is the idea from complexity theory where some kind of universal behavior is found in large collections of complex systems. For example, computer languages are defined by complex arbitrary rules, but aside from limitations due to memory size most of them provide equivalent definitions of computability, provided they are not too simple. Another example is the chaotic behavior of non-linear systems, which has a type of universality characterized by Feigenbaum constants that are the same no matter how you construct the original system, provided it is not too simple. More examples appear in critical systems where you approach a critical point and the correlation lengths go to infinity. You renormalise to get a consistent macroscopic limit which does not depend on all the microscopic details, provided they are not too simple.
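    As a concrete illustration of that kind of universality, here is a minimal numerical sketch (an illustration only, not taken from the essays under discussion). It locates the first few period-doubling parameters of the logistic map and shows the ratios of their spacings approaching the Feigenbaum constant delta ~ 4.6692, the same limit any map with a quadratic maximum would give:

    ```python
    def g(r, n):
        # f_r^(2^n)(1/2) - 1/2 for the logistic map f_r(x) = r*x*(1-x).
        # Its roots in r are the "superstable" parameters R_n at which
        # x = 1/2 lies on a cycle of period 2^n.
        x = 0.5
        for _ in range(2 ** n):
            x = r * x * (1.0 - x)
        return x - 0.5

    def bisect(n, lo, hi, iters=80):
        # Plain bisection; each bracket below contains exactly one root of g.
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if g(lo, n) * g(mid, n) <= 0.0:
                hi = mid
            else:
                lo = mid
        return 0.5 * (lo + hi)

    # Superstable parameters of the period-doubling cascade.
    R = [
        bisect(0, 1.9, 2.1),      # R_0 = 2 exactly
        bisect(1, 3.2, 3.25),     # R_1 = 1 + sqrt(5) ~ 3.23607
        bisect(2, 3.44, 3.55),    # R_2 ~ 3.49856
        bisect(3, 3.55, 3.566),   # R_3 ~ 3.55464
        bisect(4, 3.566, 3.569),  # R_4 ~ 3.56667
    ]

    # Ratios of successive spacings converge to delta ~ 4.6692,
    # independent of the map's microscopic details.
    for n in range(1, len(R) - 1):
        print((R[n] - R[n - 1]) / (R[n + 1] - R[n]))
    ```

    Running it prints roughly 4.71, 4.68, 4.66, already closing in on the universal value after four doublings.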

    I think that if you could analyse the pl*t*n*c realm in the right way you would find that there is also a universal behavior at work and this is what forms the laws of physics. In essence this behavior is so dominant that anything living in the pl*t*n*c realm (i.e. everything) would only notice this universal behavior so these would describe their top level laws of physics. I call this idea the theory of theories and first wrote about it twenty years ago.

    So how do these laws work? The first thing we can say is that they are indeterministic. Different mathematical systems can overlap, so from your experience you can never determine exactly what system you are in, even given the universal behavior, so experience must be indeterministic; but there are statistical laws of some kind telling you what is most likely, i.e. quantum mechanics. The second thing is that this is a complex system, even though it has clear fundamental laws. There are many solutions. Probably there are different universes all fulfilling these laws as different solutions. This is the second level of m*lt*v*rs*, and indeterminism within each universe provides a third layer of m*lt*v*rs*. Because of the second layer we can't expect to derive the low energy effective theory of physics. That is part of the solution. It must be there because that is where the *nthr*p*c principle comes in and allows the universe to be fine-tuned for life.

    Trying to derive the top level laws of physics from first principles is also going to be hard. Perhaps if we were hugely intelligent we could do it and would then only need to do experiments to figure out the solution level, but we are not that smart. Some people have tried to model the pl*t*n*c realm as a kind of statistical ensemble with a big path integral over all possible mathematical systems. This captures the ontology of the theory of theories quite well, but in my opinion the actual path integral is based on a more algebraic principle than a statistical one. Even in quantum mechanics you get complex numbers and Grassmann variables rather than plain probabilities, etc.

    However, one useful observation is that however it works, the "path integral" will itself define a mathematical system. That means that it can also be found under the path integral. Forming a path integral is a version of quantisation, so effectively it gets quantised a second time, and then that system is also under the path integral. Continuing recursively ad infinitum, we can argue that multiple quantisation is included in the system. My conjecture is that multiple quantisation actually describes the universal behavior that dominates the path integral. This means that we just need to find the right general construction for quantisation and we are done. We have then built the house and can pull away the scaffolding.

    You asked what is the ontological origin of the Hilbert space. Complex numbers and vector spaces are universal features in mathematics. They were discovered by mathematicians long before they were used in quantum theory. Although real numbers were originally inspired by geometry, even they are needed in mathematics to solve problems defined in purely combinatorial terms. Mathematicians would have discovered them independently of physics if they had needed to. It is no surprise that Hilbert spaces would emerge in the ultimate universality structure that controls the ensemble of all mathematical systems.

    Some people like to think in terms of "simulations". That is to say that we are in a simulation, but there can be lots of simulations. There are even simulations within simulated systems, so you can get a similar idea of multiple quantisation that way. I don't like the simulation idea quite so much because it seems to imply that causality is fundamental, and perhaps time too. Causality is part of thermodynamics, so it is emergent in the lower layers of physical law. Causality, locality, space and time are not fundamental in the top level laws described by multiple quantisation, but they are emergent as features of some solutions. That is why my ontology does not discuss why the big bang happened, for example. That is not a relevant question.

    However, there are some important things that are emergent in the formation of the top level laws as a feature of universality and multiple quantisation. These include the role of information, quantum mechanics, qubits, etc. It also includes symmetry. It is a feature of critical systems that symmetries emerge at critical points, e.g. you can do field theory on a lattice which does not have full spacetime symmetry, but when you move towards the critical point the scale is renormalised and rotational and translational symmetry can emerge. This is one reason why I expect symmetry to be so important. It is emergent at a higher level than space and time and can be hidden by symmetry breaking at lower levels.
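    For reference, the textbook statement of that critical-point universality: as the control parameter approaches its critical value, the correlation length diverges as

    ```latex
    \xi \sim |T - T_c|^{-\nu},
    ```

    with an exponent \nu fixed only by the dimensionality and symmetries of the system, never by its microscopic details. That is why the discreteness of the lattice drops out of the continuum limit.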

    OK, I will stop there because I have probably confused everyone who tried to read this. As I said, it would take a full essay to describe this in comprehensible terms, and even in my mind it is quite vague, but perhaps I have given you an idea of how my scaffolding holds up.

    Oops, did not log in. That was me in case it was not obvious.

    Philip

    I have a suspicion about this word 'emergent': like several other 'buzz' words (information is probably another), what lies behind it probably does not correspond with reality.

    Reality is spatial. My point was that whilst we know that there must be 'space' for there to be physical existence, we only measure it indirectly, ie in terms of not something. That is, this something occupies that space, or these somethings have that much space between them. So space is not an illusion, it is 'physically existent'. Though in those terms it may be nothing, as the space is completely full of somethings, ie there is no space as such, only somethings. So space does not 'emerge', it is 'there', like everything else. What it 'constitutes' differs in that the somethings differ, and at any given time, it has a definitive configuration, because the somethings have a definitive configuration (ie physically existent state). And any given reality is one of those. It is the physically existent state of the somethings (and hence space) at that time. And as with everything else, in order to measure something we invoke a process of comparison to identify difference.

    Time is concerned with the turnover rate of realities. There is no time in a reality. It is a calibration of the speed at which difference occurs. And if there is difference that is another reality. There can only be one at a time. And to be existent it must be definitive. So again, the concept is not an illusion, though it is not properly attributed to the physical event which lies behind it. And again, to calibrate time, ie rate of change, we compare rates of change and identify difference. If you use a quartz timing device, for example, then you are comparing number of oscillations with whatever.

    So spatial configuration alters, realities alter, at a rate. If this is what emergent really refers to then it is a pointless concept. But I know there is reification, and this concept embodies incorrect conceptions. It attributes existence to difference, and presumes more than one reality is co-existent at any time.

    Perhaps you could define the commonly used meaning of this term emergent.

    Paul

    • [deleted]

    Very interesting. Just wondering if there are any loopholes in info theory. Gravitons carry some information about the contents of a black hole, if a gauge theory of quantum gravity is a reality. The black hole must be exchanging gravitons with all other masses in the universe. These gravitons can be viewed as carrying information about the mass in the black hole. Measure the strength of the gravitational field and you have info on how much mass is in the hole. Is this relevant to the definition of "information"?

    Also, is Hawking radiation perfectly random? If Hawking's own heuristic mechanism for his radiation is right (pair production at the event horizon, with one virtual particle falling in and the other escaping to become on-shell and real), surely it will be affected by the electric charge of the black hole? E.g. if mainly positive ions (not light electrons) fall into the black hole, it acquires positive charge, and this polarizes the virtual fermion pairs, so after the pair production it's no longer a random virtual charge that falls in, but mostly a negative one (attracted to the positive matter in the black hole). The Hawking radiation escaping would then be positrons, rather than the gamma rays from annihilation of random charges usually assumed to escape. So is it possible that information may be carried out by Hawking radiation?

      Paul, "Emergent" means the opposite of "fundamental". Any thing we know about that is physically real is either fundamental in the it is written into the underlying laws of physics, or it is emergent in that it appears as a collective behavior or derived property from those laws. Emergence is fairly well understood in the context of complex systems, condensed matter, etc.

      However, I agree that such terms can be confusing if they are applied to physical realms that we are less familiar with. Things that appear fundamental at one time may be emergent in a deeper theory. I have even bigger problems with words like "real", "illusion", "physical", "existence" etc. I know what I mean by these words in my view of things but you may use them differently.

      You keep objecting to the words I use but really you need to look more closely at what I am saying to see how I am using these words. It is the model of reality that I describe that really counts, not the words. What you are saying about space, information etc. is perfectly acceptable to me if I interpret the meaning of your words in a way that makes sense, so it may be that any disagreement you may have with me is purely semantic.

      Hi Philip,

      Your hypothesis is that quantum information is fundamental and all material entities including spacetime are emergent. But do we need something more fundamental than spacetime itself? In my opinion (presented in my essay http://fqxi.org/community/forum/topic/1609) we do not. I propose a simple experiment to prove that particles, fields and information are the same, i.e. they are only spacetime deformations, but perceived and defined by human beings in different ways.

      My prediction is not very exotic or new but is generally a conclusion of general relativity (at least its great discovery that gravity is not a force but only spacetime geometry). I ask why not apply the same rule to the other "forces" and find them somewhere in spacetime geometry? Using your words, ...that would be the amazing power of consistency. That simple question should be followed by a complicated answer. The answer is a universal metric. As we do not have any, I do not want to wait, and instead I propose to start from a prediction of the idea itself and the experiment that could falsify that prediction or confirm it. In the latter case the time will come to look for the metric, being at least sure of its existence.

      We are human beings, so we depend on our perception abilities. Everything we perceive is a creation of our minds. We call it Reality. If we want to be sure Reality exists, and not only in our minds, we desperately need a real experiment. Creating highly sophisticated ideas like the holographic principle does not change that fact. Do the holographic idea or string theories give any prediction?

      We need the experiment outcome, and maybe then we could propose an even stronger equivalence principle, claiming that any interaction is entirely geometrical by nature.

      If you are looking for energy conservation you will find it by assuming that spacetime is the fabric of everything, i.e. particles and field forces. A spacetime deformation needs energy and is the energy itself. That is not my idea, but that is GR. Then any randomly chosen region of spacetime gives you that perfect conservation needed. Every entity, e.g. a particle, is spread out over the entire spacetime (or Universe if you like) according to a Gaussian distribution, that is, a continuous probability distribution with the apparently strongest deformation (spacetime density?) in the center of the observed entity.
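      For reference, the distribution meant here is the familiar normal density, peaked at the entity's apparent center \mu and falling off with width \sigma:

      ```latex
      f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-\mu)^2 / 2\sigma^2}
      ```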

      Thanks

        "Measure the strength of the gravitational field and you have info on how much mass is in the hole. Is this relevant to the definition of "information"?"

        Yes, this is a point I am making in my essay. The gravitational field around a black hole can tell you its mass, momentum, angular momentum and position. That's ten numbers. The electric field can tell you its charge. Other gauge fields would give more information. If there were a huge hidden gauge symmetry it may be that all information could be accounted for in this way. That could be how holography works.

        I think you are right that Hawking radiation is not completely random. If the BH is charged it will at some point radiate away that charge. However, the radiation should be thermal with the energy distribution of a black body. To resolve the information loss problem we may need to accept that it is non-random in other more subtle ways.
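        The black-body spectrum in question has the standard Hawking temperature, inversely proportional to the hole's mass,

        ```latex
        T_H = \frac{\hbar c^3}{8 \pi G M k_B},
        ```

        so larger holes are colder and each radiated quantum carries very little energy.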

        • [deleted]

        Just a few more comments on the black hole Hawking radiation information claims. (Apologies if this wastes space, please delete if it seems off topic.)

        The idea that heavy positive ions may be more likely to fall into a black hole is conventional fractionation of ionized gas in a gravitational field. If a large cloud of gas is falling into a black hole, it accelerates and heats up, becoming ionized. The positive ions in this plasma have less kinetic energy than the lighter electrons, so the heavy ions effectively fall faster in the gas and enter the black hole first. Obviously in a vacuum all masses fall at the same rate, but in a gas the more massive, highly inert molecules move more slowly and so end up at the bottom, while the lighter ones pick up greater velocity in collisions and end up preferentially at higher altitudes, as explained by Maxwell's kinetic theory of gases. E.g., hydrogen gas rises in the atmosphere while heavier molecules concentrate at sea level.

        There are issues with Hawking's claim that all black holes radiate at a rate dependent simply on 1/mass. Schwinger's vacuum polarization calculation for the running coupling in QED shows that there's a 0.5 MeV threshold (IR or low energy cutoff) on polarizable pair production, corresponding to an electric field of ~10^18 V/m. Below that electric field strength there's a sharp exponential fall in spontaneous pair production, which would prevent Hawking's radiation mechanism from working. Hence, if Hawking's heuristic mechanism for Hawking radiation (spontaneous pair production at the event horizon) is true, then Schwinger's experimentally verified QED calculation of the magnetic moment of leptons necessitates the condition that you need >10^18 V/m electric field strength at the event horizon of a black hole, or it won't radiate Hawking radiation. So a black hole must have a massive electric charge to radiate Hawking radiation, a fact that isn't mentioned by Hawking or included in his equation! The only ways around this would be to either forget Hawking's heuristic mechanism (that's easier for mathematicians than physicists), or else for quantum gravity (rather than existing QFT) to provide the energy density for spontaneous pair production in the vacuum.
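        The threshold cited here is the standard Schwinger critical field,

        ```latex
        E_c = \frac{m_e^2 c^3}{e \hbar} \approx 1.3 \times 10^{18}\ \mathrm{V/m},
        ```

        below which the spontaneous pair-production rate is suppressed by a factor of order \exp(-\pi E_c / E).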

        So it seems to me that a particular quantum gravity theory is needed in order to validate the mechanism for Hawking radiation. The energy density of an electric field is the product of half the vacuum permittivity and the square of the electric field strength, and since for protons the gravitational field is about 10^40 times weaker than the electromagnetic field, you can estimate pair production in a quantum gravity field by scaling from electromagnetism. But there is the question of whether the gravity coupling runs with energy, as assumed in supergravity, or not. If couplings do unify at the Planck scale, the difference in EM to gravity force strength for fundamental particles will decrease from 10^40 to 1 as energy (or the inverse of distance) increases. Fundamental particles, if treated as black holes, should radiate intensely due to their small mass. Is the physical basis of gauge theory, the exchange of off-shell gauge bosons between charges, causing fundamental forces?

        Jacek, welcome to the contest, I will be reading the new essays later today.

        • [deleted]

        Hi Philip,

        I have one question and one request. I have been reading your papers and trying to see what you are trying to do exactly. The use of event symmetry is interesting, but I am not clear about multiple quantization and the path integral; could you clarify, please?

        The discussion of this thread and with Jochen is most interesting for me. Now, I don't want to make this thread about my theory, but I would like a line or two worth of feedback; can you see any link to your ideas? My theory is very platonic; it links space, energy and matter all in one concept, and time is a change of state, so it does not appear explicitly. As you can see, the Lagrangian falls out of the system for the Bohr-like model and you get the usual relation between c, h_bar and alpha. I have many other results that I have not shown, like the g-factor, the fine structure constant and the full QM hydrogen 1S (in it, if you change the proton width, which is very close to the measured value, even a little, the energies come out wrong). I hope I can show all that in time for this contest.

        Philip

        No, I do not object to words per se, and I am always searching for the underlying correspondence with reality.

        So space is not emergent then, given that definition, because physical existence involves space. Neither is time emergent, assuming what it relates to is understood, because there definitely is difference in physical existence. And difference involves a rate at which that occurs. Without space nothing could be existent, and without change nothing would differ, which must occur at a rate.

        Certainly we can only 'assign' space via entities, and we can only calibrate time via entities, but surely that is not the point about emergent? So I am now really lost.

        In the meantime I will re-read your essay.

        Paul

        Nige, it seems to me that you might be conflating some unrelated things. Hawking radiation has nothing to do with pair production in an electric field. It is an effect coming from the horizon. The "heuristic" explanation involving virtual pair production that Hawking and others have used in popular explanations does not really represent the detailed calculation that Hawking did. That was based on methods of semi-classical quantum gravity that are hard to describe well in general terms.

        Quantisation is the procedure that takes you from a classical theory to a quantum theory by replacing classical observables with non-commuting observables in a specific way (as formulated by Dirac). An equivalent procedure is to use path integrals as defined by Feynman. Multiple quantisation is the idea that this can be repeated iteratively if you treat the quantum Schrödinger equation as if it were a classical field equation and quantise again. The dream is that if you define quantisation in a very general algebraic way you can derive physics with just this idea.
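        Schematically, Dirac's prescription trades Poisson brackets for commutators, and Feynman's equivalent prescription sums over histories weighted by the classical action:

        ```latex
        \{A, B\} \;\longrightarrow\; \frac{1}{i\hbar}\,[\hat{A}, \hat{B}],
        \qquad
        \langle x_f | e^{-i\hat{H}t/\hbar} | x_i \rangle = \int \mathcal{D}x\, e^{iS[x]/\hbar}.
        ```

        Multiple quantisation then means feeding the output of one such step back in as the "classical" input to the next.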

        I had a quick look at your theory and of course it has a lot in common with other frameworks, including mine. I like the idea of trying to simulate systems because I started out in lattice gauge theories. I can't give you a proper review here, but I think it would be good if you could write an essay and submit it here. You may then get some feedback from other authors.

        • [deleted]

        I think the view that we live in "An Acataleptic Universe" (an ancient Skeptical view that no more than probable knowledge is available to human beings) is a mistake. In the light of the quantum mechanical understanding that there is no certainty, only probability, this is an understandable mistake, but an error nevertheless. What is needed is comprehension of quantum mechanics. Einstein did not actually say it was wrong - he agreed that it's correct as far as it goes. But he thought it was very, very incomplete ... and didn't go far enough. You and I may philosophically (or acataleptically) differ, but I'm sure we agree that modern science's understanding of quantum mechanics cannot be called complete. However, that doesn't mean it will never be complete.

        "Hidden variables" is an interpretation of quantum mechanics which is based on belief that the theory is incomplete (Albert Einstein is the most famous proponent of hidden variables) and it says there is an underlying reality with additional information of the quantum world. I suggest this underlying reality is binary digits generated in 5th-dimensional hyperspace.

        I think this information underlying reality can borrow a few ideas from string theory's ideas of everything being ultimately composed of tiny, one-dimensional strings that vibrate as clockwise, standing, and counterclockwise currents in a four-dimensional looped superstring. We can visualize tiny, one dimensional binary digits of 1 and 0 (base 2 mathematics) forming currents in a 2-dimensional program called a Mobius loop - or in 2 Mobius loops, clockwise currents in one loop combining with counterclockwise currents in the other to form a standing current. Combination of the 2 loops' currents requires connection of the two into a four-dimensional Klein bottle* by the infinitely long irrational and transcendental numbers. Such an infinite connection translates - via bosons being ultimately composed of 1's and 0's depicting pi, e, √2 etc.; and fermions being given mass by Einstein's 1919 proposal of gravitational and electromagnetic bosons interacting in what quantum mechanics calls "wave packets" - into an infinite number of Figure-8 Klein bottles composing the infinite universe (according to the known definition of "infinite"). (Each Klein bottle, whose gaps are deleted by the flexibility of binary digits to form a continuous space-time where surrounding subuniverses fit together perfectly, is a "subuniverse" - and we live in a finite, 13.7 billion year old, subuniverse.)

        * This Klein bottle could possibly be a figure-8 Klein bottle because its similarities to a doughnut's shape suggests an idea of mathematics' "Poincare conjecture". The conjecture has implications for the universe's shape and says you cannot transform a doughnut shape into a sphere without ripping it. One interpretation follows: This can be viewed as subuniverses shaped like Figure-8 Klein Bottles gaining rips called wormholes when extended into the spherical spacetime that goes on forever (forming one infinite superuniverse which is often called the multiverse when subuniverses - which share the same set of physics' laws - are incorrectly called parallel universes which are wrongly claimed to each possess different laws). Picture spacetime existing on the surface of this doughnut which has rips in it. These rips provide shortcuts between points in space and time - and belong in a 5th-dimensional hyperspace. The boundary where subuniverses meet could be called a Cosmic String (they'd be analogous to cracks that form when water freezes into ice i.e. cosmic strings would form as subuniverses cool from their respective Big Bangs).

        "Empty" space (according to Einstein, gravitation is the warping of this) seems to be made up of what is sometimes referred to as virtual particles by physicists since the concept of virtual particles is closely related to the idea of quantum fluctuations (a quantum fluctuation is the temporary change in the amount of energy at a point in space). The production of space by BITS (BInary digiTS) necessarily means there is a change in the amount of energy at a certain point, and the word "temporary" refers to what we know as motion or time (in a bit-universe, motion would actually be a succession of "frames"). Vacuum energy is the zero-point energy (lowest possible energy that a system may have) of all the fields (e.g. gravitational / electromagnetic / nuclear) in space, and is an underlying background energy that exists in space even when the space is devoid of matter. Binary digits might be substituted for the terms zero-point energy (since BITS are the ground state or lowest possible energy level) and vacuum energy (because BITS are the underlying background energy of empty space).

        I call hidden variables (or virtual particles) binary digits generated in a 5th-dimensional hyperspace, which makes them - as explained in the next sentence - a non-local variety, in agreement with the limits imposed by Bell's theorem. (Bell's theorem is a mathematical proof discovered by John Bell in 1964 that says any hidden variables theory whose predictions agree with quantum mechanics must be non-local, i.e. it must allow an influence to pass between two systems or particles instantaneously, so that a cause at one place can produce an immediate effect at some distant location [not only in space, but also in time].) Comparing space-time to an infinite computer screen and the 5th dimension to its relatively small - in this case, so tiny as to be nonexistent in spacetime (at least to observation) - Central Processing Unit, the calculations in the "small" CPU would create and influence everything in infinite space and infinite time. This permits a distant event to instantly affect another (exemplified by the quantum entanglement of particles separated by light years) or permits effects to influence causes (exemplified by the retrocausality or backward causality promoted by Yakir Aharonov and others; see "Five Decades of Physics" by John G. Cramer, Professor of Physics, University of Washington - http://www.physics.ohio-state.edu/~lisa/CramerSymposium/talks/Cramer.pdf). This means quantum processes, in which effects and causes/distant events are not separated, wouldn't be confined to tiny subatomic scales but would also occur on the largest cosmic scales.

        My shortened entry in FQXi's current contest (the complete version is at vixra.org and researchgate.net) does refer to the Big Bang, actually to Big Bangs, but the article is not suggesting these came from nothing. It refers to a nonlinear concept of time where inhabitants of this universe learn about the cosmos then apply that knowledge to produce the Big Bang which did not form the universe but only our infinitesimal local part of it (our subuniverse). Applying the knowledge gained over thousands of years to our local Big Bang requires time travel to 13.7 billion years ago. The method of doing this (via a 5th-dimensional hyperspace) was written in detail in my original entry - but was unfortunately one of the things deleted to meet FQXi's length restrictions.

        The inverse-square law states that the force between two particles becomes infinite if the distance of separation between them goes to zero. Remembering that gravitation partly depends on the distance between the centres of objects, the distance of separation between objects only goes to zero when those centres occupy the same space-time coordinates (not merely when the objects' sides are touching i.e. infinity equals the total elimination of distance - the infinite cosmos could possess this absence of distance in space and time, via the electronic mechanism of binary digits).

        Certainly, zero-separation makes no sense at all if the universe is confined to the laws of physics we're familiar with. It seems to only be possible if, as FQXi's member Professor Max Tegmark states, we live in a mathematical universe. That is, if the sentence "Information has nothing to do with reality" is an incomplete description of reality.

        This known definition (of an infinite universe going on and on forever) is something observation and experiment might confirm. The definition of infinity as "elimination of distance" would depend on the 1's and 0's of the binary digits. Without this "companion" definition, time travel (whether into the future or past) would be impossible if it's an instant effect. With it, intergalactic space travel can become possible. And the companion can explain (by the elimination of distance) quantum entanglement in space, as well as time's retrocausality in which the future affects the past.

        Pardon me for getting carried away and writing too much, Phil. But I'm quite a perfectionist - if I write anything at all, I end up writing a lot because I have this need to explain every detail thoroughly. Right this minute, I'm worrying that I might have forgotten something :(

          Rodney, I was reading your excellent essay while you wrote this long comment.

          I do think that quantum theory requires some revision to deal with emergent spacetime and multiple quantisation, but I don't see any reason to expect indeterminism to disappear. I suppose someone has to try out the possibility though.

          Philip,

          Although I thoroughly appreciate your expansive answer I don't believe you directly addressed my critique! Perhaps the information in the question was garbled so allow me to clarify. In the physical sciences ontology refers to what actually exists whereas epistemology refers to our knowledge of what actually exists. My question utilizes ontology in the physical sciences sense not in the theoretical/computer science sense. The ontology of physical science is the foundation on which your "house" is built and this, of course, can include the Platonic realm (although proving it is a rather thorny problem).

          The definitions of George Ellis concern real-world existence and are not controversial, nor are they contrary to your "scaffolding"; they're really quite simple, elegant, and yet powerful. Clearly they demonstrate that the Hilbert Space has an ontological (real world) referent; what is that referent? And I have a problem with the Hilbert Space, or its real world referent anyway, being emergent. It seems to me that qubits emerge from and live on the Hilbert Space; at the very least they would seem to be co-dependently arising! It would seem clear that the real world referent of the Hilbert Space is the entity which "stores" the information necessary to enable quantum entanglement; elementary particles, photons, etc., would seem to lack the capability of remembering who they had lunch with yesterday, last week, or 14 billion years ago. And the experiments of Aspect et al. (not to mention Herzog et al.) would seem to render attempts to dismiss quantum entanglement with colored-balls-in-an-urn arguments impotent.

          In my FQXi entry (http://www.fqxi.org/community/forum/topic/1608) I reference a paper by William Tiller and Walter Dibble in which they demonstrate spatial "information entanglement" upwards of 6000 miles. In [TD] they demonstrate temporal "information entanglement" as well (see references therein for a plethora of empirical evidence supporting such spatiotemporal entanglement). Messrs. Tiller and Dibble explain these empirical results by expanding our scientific reference frame to a duplex reference frame such that the electromagnetic physical realm, probed by science these last 400 years, has a conjugate magnetoelectric information domain (call it mental). These domains can be coupled by what they call a deltron moiety and conjugate properties are related by deltron modulated Fourier transforms. The deltron moiety is a symmetry breaking mechanism with the amount of hidden symmetry a function of deltron density.

          And this is the heart of my critique. I'm very much sympathetic to the Platonist view and remain quite impressed with your Theory of Theories; I've pretty much read all of your papers. But how can you say "there is no point in asking where information comes from or where it is stored?" The scientific enterprise is built on such inquiries! And for certain there is no reason to censor your Platonic perspective, after all, you're in very good company and the question remains open [AHT].

          References

          [AHT] Alford, M., et al., Three theoretical physicists discuss ontology in: On Math, Matter, and Mind ( http://www.ids.ias.edu/~piet/publ/other/mmm.pdf), Foundations of Physics, Springer Science Media, 2006, accessed 30 April, 2013.

          [TD] Tiller, W. and Dibble, W., A mechanism for quantum entanglement in: What Is Information Entanglement and Why Is It So Important to Psychoenergetic Science? ( http://www.tillerfoundation.com/White%20Paper%20VIII.pdf), the Tiller Foundation, 2009, accessed 30 April, 2013.

          With regards,

          Wes Hansen

          When taken out of context, my statement that "there is no point in asking where information comes from or where it is stored" is too strong. My point was that if you are looking for a physical theory without the philosophical side then these questions are not necessary, in my opinion.

          The Hilbert space, qubits and entanglement are of course part of standard quantum mechanics. They are intrinsically non-local so they extend over the entire system under consideration, or entire universe if you like. I personally don't think they need some extra physical entity to explain how they work, but probably you are not alone in thinking otherwise.