Dear Lawrence,

As I stated on my essay forum: a very good essay (a few misprints), and you got a high vote from me.

This essay was inspiring for me (I'm also looking into EPR=ER at the moment). I'm a fan of Popper and an open world. You are certainly right that our essays are related. In my essay, I also consider networks with an underlying hyperbolic structure, but only for the signals going through the network. You used tensor networks to describe the states themselves. Nevertheless, we both got similar results. There must be a qualitative change to get intention or wandering towards a goal. Topology change is a good ansatz for this.

Best

Torsten


    Lawrence, I believe it is to your credit that you appreciate the fundamental question: in what sort of universe is consciousness possible?

    "It might be that consciousness is also a truncated hyper-Turing machine that approximates the ideal of a completely self-referential system that can jump out of an algorithm, or make a leap of imagination." And "The apparent ability of living systems to make choices and to perform actions far more subtle that computation may stem from the open universe..."

    What you and I are writing about, each in our own way, is expressed in the check required to submit a post, designed to confound a non-conscious spammer.

    That consciousness "can can jump out of an algorithm" and is "far more subtle that computation" is an important insight that seems to be lost on most essayists here. My solution to the question of how such a transcendence (equivalent to your "openness"?) is possible is more prosaic than yours, but maybe more comprehensible. I'd be interested in your evaluation.

      Dammit, it said I was logged in at the bottom of the page....

      I will try to respond tomorrow. I got the flu a few weeks ago and now I have bronchitis that is sort of dragging me down. I do have a question concerning the Uhlenbeck, Freed, Donaldson type of result, but I will have to wait until tomorrow if I am better.

      Cheers LC

      Consciousness, or for that matter even just the goal-oriented behavior of a simple organism, does seem different from mere computation. I indicate how Loeb's theorem enters into this, and the upshot is that if a system can prove its own soundness, inconsistencies must occur. In some sense that is the case: contrary to Immanuel Kant's idea of a rational life, much of our thinking is often a jumble. Underneath it is a tangle of competing subconscious messages and conflicting images, and this can percolate into consciousness that is terribly inconsistent. We have all been there at times, or at least had a stream-of-consciousness moment.
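
      For reference, the standard statement of Loeb's theorem I have in mind, written for a sufficiently strong theory $T$ (say Peano arithmetic) with provability predicate $\mathrm{Prov}_T$, is

      $$\text{If }\; T \vdash \mathrm{Prov}_T(\ulcorner P \urcorner) \rightarrow P, \;\text{ then }\; T \vdash P.$$

      Taking $P$ to be a contradiction recovers Godel's second incompleteness theorem: the only way $T$ can prove its own consistency is if $T$ is inconsistent.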

      Cheers LC

      Dear Lawrence,

      Your essay contains a wealth of detailed material, and I cannot give all the items the attention they deserve. I want to focus on consciousness, which is an important topic in your paper and perhaps the main topic. Consciousness is also of particular interest to me. I think I understand your point that an open universe is a basis for the possibility of self-reference (page 2). I am also familiar with Douglas Hofstadter's belief that consciousness might be a form of self-reference. I am not clear, however, about how the idea of a truncated hyper-Turing machine is related to the idea of self-reference. Is it that a hyper-Turing machine is one way to implement a self-referential system? Or is there some other connection to notice?

      In any case, thanks for a stimulating essay.

      Laurence Hitterdale

        Dear Lawrence,

        As far as I understand it, Loeb's theorem says exactly what you wrote in the post above. This result indicates two things for me. Firstly, there is no TOE which can be proven to be the 'real thing', because if one could prove it, it would be inconsistent, and therefore wouldn't be the real thing, and therefore not the TOE.

        Secondly, if mathematics has these malicious properties, as Loeb's results indicate, then, for the sake of consistency, we must differentiate between provability and truth. This is what, naturally, all authors in the essay contest fail to do: although their lines of reasoning cannot be proven, they nonetheless assume them to be the truth (my essay included).

        Claiming that one's results are the Truth in the absence of a proof, because those results appear so self-evident to the proponent, would mean that the proponent equates self-evidence with formal proof. But these are two different things. Self-evidence refers to the consistency of a certain line of reasoning, but says nothing about the ontological status of its contents.

        Now let's make a more general point. If mathematics were indeed the fundamental layer of reality in a platonic sense, it would obey Loeb's theorem. Since all of mathematics would then reside in the platonic realm, it would have to be complete; every new axiom identified by human beings would not be a human creation, but the discovery of a part of that platonic realm. But in the platonic realm mathematics must be considered complete (and infinitely infinite), and if it is complete, every sentence that can be constructed could be proven. This implies that such a mathematics is inconsistent.

        Taking this scenario at face value, one can then return to the initial assumption and ask where the error lies. Do we find the error within Loeb's theorem or within Gödel's theorems? Or is it really true that mathematics does not encompass all of reality, even if this assumption cannot be proven to be true? I think the latter is the most probable answer: mathematics cannot be the most fundamental level of reality, because otherwise we run into contradictions within our own lines of reasoning.

        If something as rational and calculable as mathematics is not the most fundamental level of reality, what then should this level be? I have argued in my essay that it requires an intelligent entity who at least invented mathematics. Otherwise one would have to conclude that reality is an absurdity, producing or providing a system (mathematics) that mimics rational and consistent behaviour but which nonetheless, at its core, must have arisen out of sheer inconsistency, a kind of absurd nothing. Fortunately the latter can also not be proven, and if one assumes it to be true nonetheless, how can one then be sure that even Loeb's theorem tells us anything meaningful about reality?

        I think from a logical point of view one has to cope with the fact that mathematics has certain limits, limits which are a broad hint that mathematics cannot be the most fundamental level of reality - because in an inconsistent reality, the very notion of 'fundamental' may not carry any sense of ontology with it. If there were a most fundamental level of reality, one could expect it to show up from time to time in a manner that contrasts with the widely held assumption of the omnipotence of mathematics (as I tried to show with the example of near-death experiences). Since we are not able to solve some 'simple' tasks like the 3- or 4-body problem and other physical problems, the assumed omnipotence of mathematics seems not to be fully implemented, at least in our physical universe. And if it nonetheless were, this 'omnipotence' would necessarily lead to inconsistencies due to Loeb's theorem. That brings us back to the initial question of how we can then validate the soundness of all of mathematics itself, including Loeb's and Gödel's theorems. As Loeb indicated, we can't do this, even in a world where the assumed omnipotence of mathematics is physically instantiated. Thus, the omnipotence of mathematics is only assumed and can never be reached, neither in a platonic realm nor in a physical realm, because incompleteness and inconsistency exclude each other. Therefore, neither an incomplete nor an inconsistent system is a good candidate for the most fundamental level of reality. Pondering the alternative, I think one simply has to assume a teleological component behind it all (without ever being able to prove this mathematically).

        What do you think about these lines of reasoning?

        Best wishes,

        Stefan Weckbach

          The part about truncation is that this is a cut-off which prevents what might be called infinite navel gazing. Given a formal system with a countably infinite number of provable predicates, Cantor diagonalization of the Godel numbering of these predicates produces ever more predicates that are not provable. Godel's theorem is really a form of Cantor's diagonalization or "slash" operation on a list of numbers. As a result a formal system has an uncountably infinite number of elements, and of course Godel and Cohen used this machinery to show the continuum hypothesis is unprovable in the Godelian sense, yet consistent with ZF set theory. From a computation perspective we really do not want to go there!
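
          As a toy illustration of that slash operation (a minimal sketch of my own, with an arbitrary example enumeration standing in for a Godel numbering):

def listed(i, n):
    """n-th bit of the i-th listed 0/1 sequence; an arbitrary stand-in
    for a Godel-numbered enumeration of predicates."""
    return (i * n + i + n) % 2

def diagonal(n):
    """Cantor's slash: flip the n-th bit of the n-th listed sequence."""
    return 1 - listed(n, n)

# The diagonal sequence disagrees with the i-th sequence at position i,
# so it appears nowhere in the enumeration, no matter how the list was made.
for i in range(20):
    assert diagonal(i) != listed(i, i)
print("the slashed sequence differs from each of the first 20 listed sequences")

          The same move applied to the provable predicates is what keeps producing statements outside the provable list.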

          For the most elementary hyper-computation, consider the case of a switch flipped on and off according to Zeno's prescription. What will be the final state of the switch? The problem is that as the flip interval approaches zero the switch is moving with so much energy that it becomes a black hole. The answer is not revealed to us. Hyper-computation has some funny connections with black holes. This sort of puts an event horizon over the ability to beat Turing and Godel.
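
          A minimal numerical sketch of that Zeno switch (toy mass and throw distance chosen arbitrarily here): the flip intervals halve each time, the state keeps alternating with no limiting value, and the kinetic energy needed for each flip grows without bound.

# Toy sketch of the Zeno switch (Thomson's lamp): the k-th flip must be
# completed in a time interval 2**-k. The state never settles, and the
# kinetic energy required for a switch of fixed mass and throw distance
# diverges as the intervals shrink.

mass, throw = 1e-3, 1e-2        # toy values: a 1 g switch moving 1 cm per flip

state = 0
for k in range(1, 31):
    interval = 2.0 ** -k        # time available for the k-th flip
    state = 1 - state           # on/off alternates forever
    speed = throw / interval    # required speed doubles each flip
    energy = 0.5 * mass * speed ** 2

print("state after 30 flips:", state, "(the limiting state is undefined)")
print("kinetic energy needed for the 30th flip: %.3g J" % energy)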

          Thinking of this with Turing machines, the universal TM is a sort of Cantor diagonalization slasher on TMs, and it will always have TMs outside its list of halting and non-halting machines. Then enters the Malament-Hogarth (MH) spacetime, which exploits properties of the inner Cauchy horizon of a black hole. It is in principle possible for an observer to cross this horizon and receive information about any possible algorithmic process in the exterior. Such an observer is then in principle a sort of UTM that can make this list, even if it is uncountably infinite, and this is hyper-computation. However, this relies upon the properties of an eternal black hole. Black holes can exist for a long time; the largest that might exist in the future are around a trillion solar masses (from the end point of galactic clusters, say 10^{40} years from now), and these might endure for 10^{110} years. However, this is not eternal, and it cuts off or truncates any possible hyper-computation. In reality I don't suspect much would be entering such a black hole, as the exterior world will be a dark and cold void. The evaporation of a black hole even limits hyper-computation in the interior.

          What I do outline, though, is that this will adjust the Chaitin Ω-number for halting probability. If we had perfect hyper-computation available, the halting probability of any given algorithm would be known to be 1 or 0. Without that we do not know it with any certainty. However, with truncated hyper-computation the Ω-number may be adjusted closer to 1 or 0, and in a quantum mechanical tunneling setting, or with just plain probabilities and loaded dice, this may give outcomes. These outcomes may or may not work, but in a truncated hyper-Turing machine setting they permit more favorable outcomes; in effect you can hedge your bet, or there is some pink noise.
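
          To make the bookkeeping concrete, here is a toy model of mine (not anything from the essay): "programs" are just the integers 1..N run through a Collatz-like loop, weighted by 4^(-bit length) as a stand-in for a prefix-free coding, and the halting-probability estimate is computed with a step cut-off. Raising the cut-off, i.e. allowing more of the truncated hyper-computation, improves the estimate, though in this toy the halting set is of course decidable, unlike the real Ω.

# Toy model of a truncated estimate of a Chaitin-style halting probability.
# A "program" p halts if the iteration x -> x/2 (x even), x -> 3x+3 (x odd)
# starting from x = p ever reaches 1. With a step cut-off T we only credit
# programs seen to halt within T steps, so the estimate is a lower bound
# that rises as the cut-off (the available hyper-computation) is raised.

def halts_within(p, max_steps):
    x = p
    for _ in range(max_steps):
        if x == 1:
            return True
        x = x // 2 if x % 2 == 0 else 3 * x + 3   # some orbits cycle forever
    return False                                   # undecided at this cut-off

N = 2000
for cutoff in (1, 3, 10, 30, 100):
    estimate = sum(4.0 ** -p.bit_length()          # toy "prefix-free" weight
                   for p in range(1, N + 1) if halts_within(p, cutoff))
    print("cut-off %3d steps: halting-probability estimate = %.4f" % (cutoff, estimate))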

          Then if nature is dual, so that what happens at the extreme UV limit with black hole quantum hair is dual to low-energy IR physics such as chemistry or biology, then ultimately this sort of structure is encoded into the nature of reality. The main argument I give, then, is that the emergence of self-directed systems that exhibit intentionality is scripted into the structure of the universe.

          Cheers LC

          I would suggest first reading the response I made to Laurence Hitterdale just one comment above; it connects with the physics of what I am saying.

          I can't comment greatly on what this means for the objective nature of mathematics or Platonic concepts of Truth. Godel thought that his theorem bolstered the idea of Platonia, for the existence of statements that are true yet lack proof seemed a strong argument for mathematical truth independent of human thought. Godel even thought this meant there had to be some sort of ultimate meta-consciousness as well. I will confess that when I first read about Godel's theorem in college the thought occurred to me, "Well, maybe there is a God." This also connects with issues of the continuum, which a number of papers here on the FQXi 2016-17 essay board have discussed, and where it seems the continuum is devoid of direct physical meaning, but is a curious aspect of our modeling of the physical world. We do not want a qubit assigned to every point in a continuum, for this would involve a vast uncountable number of quantum bits or states.

          I tend to stick more with the physical aspects of this. The implication I cite is that teleonomic activity may involve making "favorable bets" on undecidable propositions. As with Loeb's theorem, this does mean that such behavior is not going to be consistent. No human being is perfectly consistent, not even Immanuel Kant. Living beings in general do not behave according to what is consistent, but more according to what is contingent. All one has to do is look at social behavior, and in particular the political trajectory of the last half year, to see how human behavior can be driven by almost anything besides reason. If I am right, this is due to a statistical occurrence of self-contradictory outcomes or processes that connect to Loeb's theorem.

          Cheers LC

          4 days later

          To rephrase Feynman, I need somebody to remind me not to write any more of these FQXi essays.

          LC

            Thank you LC for a nice explanation...

            By the way

            I want you to explore one more model of the Universe, one in which the reproduction of Galaxies in the Universe is described. Dynamic Universe Model is another mathematical model of the Universe. Its mathematics shows that the movement of masses has a purpose or goal; different Galaxies will be born and die (quench), etc. Just have a look at my essay, "Distances, Locations, Ages and Reproduction of Galaxies in our Dynamic Universe", where the UGF (Universal Gravitational Force), acting on each and every mass, creates a direction and purpose of movement.

            I think this is INTUITION, and it is inherited by all biological systems from the Universe itself.

            For your information, Dynamic Universe Model is totally based on experimental results. Here in Dynamic Universe Model, space is space and time is time, at the cosmological level or at any other level. In classical general relativity, space and time are convertible into each other.

            Many papers and books on Dynamic Universe Model were published by the author on unsolved problems of present-day physics, for example 'Absolute Rest frame of reference is not necessary' (1994), 'Multiple bending of light ray can create many images for one Galaxy: in our dynamic universe', about "SITA" simulations, 'Missing mass in Galaxy is NOT required', 'New mathematics tensors without Differential and Integral equations', 'Information, Reality and Relics of Cosmic Microwave Background', 'Dynamic Universe Model explains the Discrepancies of Very-Long-Baseline Interferometry Observations', and, in 2015, 'Explaining Formation of Astronomical Jets Using Dynamic Universe Model', 'Explaining Pioneer anomaly', 'Explaining Near luminal velocities in Astronomical jets', 'Observation of super luminal neutrinos', 'Process of quenching in Galaxies due to formation of hole at the center of Galaxy, as its central densemass dries up', and 'Dynamic Universe Model Predicts the Trajectory of New Horizons Satellite Going to Pluto'. Four books were also published: Book 1 shows that Dynamic Universe Model is singularity-free and free of body-to-body collisions; Book 2 and Book 3 explain the equations of Dynamic Universe Model; Book 4 deals with the prediction and subsequent finding of blue-shifted Galaxies in the universe.

            With axioms like... No isotropy; No homogeneity; No space-time continuum; Non-uniform density of matter (the Universe is lumpy); No singularities; No collisions between bodies; No black holes; No wormholes; No Big Bang; No repulsion between distant Galaxies; Non-empty Universe; No imaginary or negative time axis; No imaginary X, Y, Z axes; No differential and integral equations mathematically; No General Relativity, and the Model does not reduce to General Relativity under any condition; No creation of matter as in Big Bang or steady-state models; No many mini Big Bangs; No missing mass; No dark matter; No dark energy; No Big Bang-generated CMB detected; No multiverses; etc.

            Many predictions of Dynamic Universe Model came true, like blue-shifted Galaxies and no dark matter. Dynamic Universe Model gave many results that are otherwise difficult to explain.

            Have a look at my essay on Dynamic Universe Model, and also at its blog, where all my books and papers are available for free download...

            http://vaksdynamicuniversemodel.blogspot.in/

            Best wishes for your essay.

            For your blessings please................

            =snp. gupta

            It is evident that our ideas are fairly different. As I see it, singularities are quantum mechanical, being in effect topological quantum numbers. Classically they make little sense, but quantum mechanically they may hold deep information. Also, instead of galaxies being generated continuously, whole cosmologies are generated.

            Anyway, one can't disprove a theory with a theory. I can't comment much on the astrophysics of galaxies, for that is not a specialty of mine. I would have a hard time benchmarking your hypotheses against what is standard in astrophysics.

            Best of luck with your essay,

            LC

            Hi Lawrence,

            That is funny, I was thinking the same; I have lost all hope. In that case I won't ask you to evaluate my revolutionary theory :)

            last year's essay

            your essay from this year

            Thanks.

            P.S. I hope you recover from the bronchitis. I had a severe reaction (I thought I was going to die!) to an antibiotic that was given to me for the same thing. That is why my essay was quick and to the point.

            Thanks for your comment on my page. I am very aware of what the Yukawa potential is. It is just that, combined with the Coulomb potential, the system seems to predict the electron and the proton naturally, and I already get this combination from the simulation of my system.

            Thanks again

            Dear Doctor Crowell,

            "Very little of human action really involves reason."

            Probability learning, I would say: for every possibility X_i (i = 1 to n), regret at having chosen X_i when the payoff occurs elsewhere; and in other situations, regret at having NOT chosen X_i when the payoff does occur there.

            This is the signature of a learning algorithm, evident in the Born rule when Bohm and Hiley (in The Undivided Universe) are taken to heart: the two sides of the simplest possible equation for the Born rule represent two different concepts.

            I just adopt/adapt this and instead say that the two sides of the equation are two different ALGORITHMS.

            So of course, I have to resort to game theory. Hence the probability learning game.
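
            To be concrete about the probability learning game, here is a minimal sketch of my own (toy payoff probabilities and learning rate chosen arbitrarily): the payoff lands on option X_i with fixed probability p_i, the learner reinforces whichever option the payoff landed on (the "regret at having NOT chosen it"), and its choice frequencies drift toward the payoff frequencies rather than collapsing onto the single best option.

import random

# Minimal probability-learning (probability-matching) game. Each round the
# payoff occurs at option i with probability p[i]; the learner reinforces
# the option where the payoff occurred and samples its next choice from the
# resulting weights, so its choice frequencies drift toward p.

random.seed(0)
p = [0.6, 0.3, 0.1]              # hidden payoff probabilities for X_1..X_3
weights = [1.0] * len(p)         # learner's weights over the options
rate = 0.02                      # reinforcement step per round
rounds = 20000
counts = [0] * len(p)

for _ in range(rounds):
    choice = random.choices(range(len(p)), weights=weights)[0]
    counts[choice] += 1
    winner = random.choices(range(len(p)), weights=p)[0]
    weights[winner] += rate      # "regret" update toward where the payoff was

print("payoff probabilities:  ", p)
print("learner's choice rates:", [round(c / rounds, 3) for c in counts])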

            The quantum particle doesn't KNOW the laws of physics, so it has to LEARN them.

            (Who is doing the teaching?)

            David Tong, in the notes for the QFT course he teaches at Cambridge, says there is a limit close to the Schwarzschild radius where some physicists believe that QFT will break down. And then there must be a different theory.

            If I understand this correctly, it can't be QFT, because QFT depends on nice results for Lorentz transformations, which would be expected to have exceptions, I guess, in the neighborhood of the Schwarzschild radius.

            Then what would the new theory be, and how would QFT "emerge" from it, to use the popular term?

            More generally, it seems to me there must therefore be an "infomorphism" from this other kind of theory to a field (or fields), if we respect string theory.

            I have been thinking of this in terms of proper time.

            And instead of another kind of field theory at that scale, I've been imagining a different kind of "particle" theory.

            But instead of being an "object," I've been thinking of the particle as a "process," as in a formally specifiable computer process (an algorithm).

            Then to get an infomorphism, the proper time of such a process must map to a SET of all possible (string theoretic) fields, as represented by their coordinate times.

            Hence there should be a game-theoretic selection of fields, and since the non-flat ones give indeterminate readings for the number of particles created, we should expect that the field selected will be flat. Otherwise, any number of particles could be created. But we are looking for a deeper theory from which such fields will emerge, and therefore it is the "particles" (processes) that determine how many of themselves there are, not the fields, which are selected and which, in this idea, do not determine the number of particles.

            Here's the start of another discussion about this in the contest.

            Do you agree with David Tong, as I interpret him, that the usefulness of QFT breaks down in the neighborhood of the Schwarzschild radius?

            If so, how would you see the particle: not as an object, but as a (computer) process?

              When it comes to my statement "Very little of human action really involves reason," I can appeal to the science fiction comedy movie "Men in Black." In it, Tommy Lee Jones says to Will Smith, "A person can be rational, but people are a panicky herd of dangerous animals."

              Quantum mechanics by itself is, as far as I can see, dead as a doornail. As for what might happen near the Schwarzschild radius, there is, I think, a twist on the Langlands S-duality. We have in physics the basic observables length [L], time [T], and mass or momentum [1/L]. Time and length are related to each other by the speed of light c. The intertwiner between momentum and length is the Planck constant ħ. However, we have a curious intertwining between mass and length, which is the Schwarzschild radius r = 2GM/c^2. In contrast with the Planck constant, which gives a reciprocal relationship between length and momentum, or certainly between the uncertainty spreads of the two, here we have a direct relationship.
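
              A quick back-of-envelope contrast between the two relations (standard constants, example masses chosen arbitrarily): the quantum length attached to a mass, the Compton wavelength ħ/(mc), shrinks as the mass grows, while the gravitational length, the Schwarzschild radius 2GM/c^2, grows linearly with it.

# Reciprocal versus direct mass-length relations:
#   Compton wavelength    lambda_C = hbar / (m c)   ~ 1/m
#   Schwarzschild radius  r_s      = 2 G M / c**2   ~ M

hbar = 1.0546e-34   # J s
c = 2.9979e8        # m / s
G = 6.674e-11       # m^3 / (kg s^2)

for name, m in (("electron", 9.109e-31), ("1 kg", 1.0), ("Sun", 1.989e30)):
    lambda_C = hbar / (m * c)
    r_s = 2 * G * m / c ** 2
    print("%-8s  Compton ~ %.3e m   Schwarzschild ~ %.3e m" % (name, lambda_C, r_s))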

              The context whereby complexity enters the world is, I think, due to the existence of quantum hair and its connection to the open entanglement topology of states. The connection between the structure of quantum mechanics and general relativity is through the abelian translation symmetries of the Heisenberg group and the BMS symmetry. This connects with the above linear or direct connection between momentum and position.

              There are a lot of unknowns here. We will have to see how things develop in the future. We may all be surprised by how our understanding evolves.

              Cheers LC

              The holographic principle: you can find a solution for it in my book "THE FRACTAL RAINBOW":

              According to Juan Maldacena (see the Scientific American article, January 2006):

              "HOLOGRAM theory states that a quantum theory of gravity within a space-time anti-De Sitter is equivalent to a theory of ordinary particles at the border."

              "Unfortunately not yet known any theory of boundary that results in an interior theory that includes just the four forces we observe in our universe [...] Since our universe has not a defined boundary (such as having a space of anti-De Sitter and as precise holographic theory), we are not sure how a holographic theory for Our Universe would be defined due that there is no appropriate place to put the hologram."

              One option could be to propose, as the boundary of Our Universe for the HOLOGRAM theory, that it is not situated at the larger scales (the Cosmic Horizon), but at the smaller scales (the Planck Horizon), where we could also have a 2D space boundary.

              This 2D "virtual" surface at Planck scale could be the boundary to be considered for the HOLOGRAM theory: the Planck Horizon (Boundary).

                On a 2-d boundary you would have a simple conformal field theory of the form originally proposed by Zamolodchikov. One can have higher-dimensional CFTs corresponding to SO(8), which has a triality condition in E8. E8, or E8xE8 ~ SO(32), is a supergravity candidate. The AdS/CFT correspondence is one aspect of a more general system of entanglement symmetries on horizons and boundaries.
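
                As a small sanity check on the E8xE8 ~ SO(32) pairing above (just the standard dimension and rank count behind it, nothing deeper):

# Dimension count behind pairing E8xE8 with SO(32): both have 496 generators
# (and both have rank 16), which is why they appear together as the two
# anomaly-free heterotic gauge groups.

dim_E8 = 248                     # dimension of the exceptional group E8
rank_E8 = 8
dim_SO32 = 32 * (32 - 1) // 2    # dim SO(n) = n(n-1)/2
rank_SO32 = 32 // 2              # rank SO(2k) = k

print("dim E8 x E8 =", 2 * dim_E8, " rank =", 2 * rank_E8)    # 496, 16
print("dim SO(32)  =", dim_SO32,  " rank =", rank_SO32)       # 496, 16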

                Cheers LC

                Dear Lawrence B. Crowell,

                My essay takes a completely different point of view from yours, and I am still studying yours to understand it. I gave you a 10 because it seems to attack things from the fundamental standpoint of processing information. I took the point of view of organisms that process information. I define life basically as an ecosystem or an entire biosphere (I don't state that in the paper; it's something from the discussions I've been having with people), which is basically like a chemical clock. As such, life began as a chemical clock reaction that spread like wildfire in the primitive ocean. As it varied due to the different conditions it met in different niches, it evolved in complexity, yielding life as we know it, based on the cell.

                But at all scales, life strives to mimic the entirety of the ecosystem, given the need to transport energy all the time. And one organism, or a chemical cycle within a cell, always needs to be placed within larger and larger scales of ecosystems. So you have multicellular life and colonies as expressions of this expansion in gathering resources. The top of this is the use of mathematics in modern human life to organize societies, though this is a reflex of the primitive instance of chemical clocks, which work by using an inequality as a threshold in order to act as a clock. Note that even the topological shapes of organisms are organized by inequalities given by thresholds of substances.
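
                To make the "chemical clock" picture concrete, here is a minimal numerical sketch (just the textbook Brusselator oscillator with toy parameters of my own, not anything from my essay): two concentrations driven through a threshold-like nonlinearity settle into sustained oscillations.

# Minimal "chemical clock": the Brusselator, a standard toy model of a
# chemical oscillator. With A = 1 and B = 3 the fixed point is unstable and
# the two concentrations x, y settle into sustained oscillations.

A, B = 1.0, 3.0
x, y = 1.0, 1.0                  # initial concentrations
dt, steps = 0.001, 60000         # simple Euler integration over 60 time units

for n in range(steps):
    dx = A + x * x * y - (B + 1.0) * x   # dx/dt
    dy = B * x - x * x * y               # dy/dt
    x += dt * dx
    y += dt * dy
    if n % 10000 == 0:
        print("t = %5.1f   x = %6.3f   y = %6.3f" % (n * dt, x, y))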

                This is my essay:

                http://fqxi.org/community/forum/topic/2846

                  Daniel,

                  Thanks for the positive assessment of my essay. I do propose that the existence of complex adaptive systems is due to fundamental structure, which lies at the level of quantum gravity.

                  You might want to pursue this idea of life being large-scale, or even planet-wide, early on. There are ideas about how the earliest biology, or precursor of biology, was an open system of replicating molecules. It may be that RNA-protein complexes developed within this gemisch, with the RNA stabilized in this form. Ribosomes, which are strange proteins with RNA within them, developed this way.

                  I will take a look at your paper as soon as I can. Unfortunately I have been rather ill the last couple of weeks, so I am moving in a lower gear right now.

                  Cheers LC