I am a bit in the minority on this here in the FQXi contest. I will say there was a parallel development from the late 19th century that was popular through the 1920s and still has some popularity today. When Maxwell, Boltzmann and Gibbs laid down the foundations of statistical mechanics, it solidified the no-go theorem for perpetual motion machines. There arose a sort of cottage industry devoted to showing this physics was wrong and to demonstrating a perpetual motion machine. This waned in the 1930s and 40s, largely because humanity was up to its eyebrows in other problems, which unfortunately seem to be returning. Since the 1970s there has been a strikingly similar cottage industry with respect to quantum mechanics.

The two trends have some analogous features as well. Thermodynamics has the generating factor e^{-βE} = e^{-E/kT} in the partition function, while quantum mechanics has e^{-iEt/ℏ} in a path integral or in the evolution of a state. The quantum mechanical path integral under a Wick rotation is a partition function in statistical mechanics. The replacement 1/kT = it/ℏ identifies the reciprocal of temperature with Euclidean time. This is a route towards quantum critical points and phase transitions induced by quantum fluctuations.
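The correspondence can be stated compactly. Writing the partition function and the quantum evolution operator side by side, the Wick rotation identifies inverse temperature with Euclidean time:

```latex
Z = \mathrm{Tr}\, e^{-\beta H}, \qquad U(t) = e^{-iHt/\hbar},
\qquad \beta = \frac{1}{kT} \;\longleftrightarrow\; \frac{it}{\hbar}.
```

Substituting t = -iℏβ into U(t) turns the oscillatory quantum weight into the Boltzmann weight, which is the sense in which the two cottage industries attack structurally parallel no-go results.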

The idea of the perpetual motion machine had a bit of motivation with Maxwell's demon, who could open and close a valve between two regions to separate fast and slow moving molecules. However, as Szilard demonstrated, this cannot be done for free. The demon is a sort of computer which, if restricted to the resources of the system, will not be able to perform this activity. The demon must appeal to outside resources, and in doing so entropy overall still increases. Much the same happens in a quantum measurement. A measurement is a quantum decoherent event where superposition or entanglement phase is coupled to an outside system or open world. By this means the density matrix of a quantum system is reduced to diagonal form. However, the actual outcome is not predicted.

Now enter hidden variables, beables or classical-like descriptions. This would seem to be a way in which the actual outcome is obtained. However, this would imply that a quantum observable has some prior existence or objective outcome independent of the Born rule of quantum mechanics, which states that the spectrum of an observable has a one-to-one correspondence with probability amplitudes or probabilities. This is really where the fly in the ointment occurs with these ideas. It is a quantum version of the Maxwell demon that can obtain prior information about a system independent of the information = entropy constraints of the system.

This has connections to other areas of physics, such as black hole quantum mechanics and thermodynamics. Of course in science we do not have proof of things, but only go on the basis of evidence that supports known foundations and models. I have no assurance the future will not have anti-gravity warp drive space travel with sub-quantal instantaneous communications and so forth. On the other hand I have some pretty serious suspicions these will not happen. Since you mentioned Sarfatti, I do not take at all seriously his ideas that UFOs are real alien spaceships, nor his claim that these demonstrate his various theories.

Cheers LC

Lawrence

I don't blame anyone for not fully 'understanding' QM. Feynman was right, but there is no comparison with ANY other case. In this case 'interpretations' don't matter as a simple, repeatable and irrefutable experimental proof trumps everything. All illogicality then evaporates.

The challenge is simply to reproduce the orthogonal complementary pairs of cos² curves with some physical mechanism. Bell and others show hidden variables can't do it, but I show Bell was on the right track with his idea that 'fermion numbers' might be the way, somehow.

It was a complex 3-part solution which has taken time to put together (the last bit was the photomultiplier 3D field 'cascade' amplification, derived then found already proven in QCD!) but now it's done and it works. It reveals a few flaws in the foundations of QM, the key one being NOT adopting Maxwell's orthogonal momenta for 'entangled pair' particles. 'Spin up/down superposed' is incomplete and misleading, losing the logic of the reality.

Of course although conclusive and irrefutable (you can reproduce it yourself at home, experimentally and mathematically) it stands zero chance of admittance as a new paradigm in the next decade, if at all! Indeed my essay identifies why. Our brains prefer pre-set patterns and reject new alien concepts as they require the much harder 'rational computation' processes. It also takes a real understanding of QM - without completely 'buying' it. A very rare combination it seems! Even the few like Joy Christian have their OWN hypothesis (quite incomplete physically) which blinds them to anything else.

I'm a realist Lawrence, so not stressed, desperate or wanting kudos. I'm not even entirely convinced mankind is evolutionarily ready for significant improvements in understanding nature. But I shall anyway present it, in my own way, as I do feel some duty not to 'keep it secret'.

Anyone who'd like to collaborate, i.e. with the mathematics etc., is most welcome.

Very Best

Peter

Dear Lawrence,

I read with pleasure and interest your essay, which builds on advanced ideas on black hole information, holographic principle, computability, open worlds, hyper-Turing machines, Godel's theorem, and consciousness as creativity in the sense of Chaitin and self-reference in the sense of Hofstadter, and proposes new interesting ideas.

Best regards,

Cristi

Dear Lawrence,

I read with great interest your remarkable essay. Although the more technical parts (such as paragraph 2) are difficult for me, it contains some fascinating insights from the frontiers of scientific research. In particular, I am very interested in the questions of the possibility of a hyper-Turing machine and of the Malament-Hogarth spacetime, which I did not know about before reading your essay, and about which I will try to learn more.

A question: you speak of a physical hyper-Turing machine as a truncated version of the ideal one. This suggests that the calculation of uncomputable functions, although it is an ideal, may be physically realizable, even if in a partial form. But how? Only in close proximity to black holes, or in some other form?

One last note: I enjoyed the final reference to Stanislaw Lem, one of my favorite storytellers. The conscious ocean of Solaris is one of the finest inventions of twentieth-century literature.

Cheers, Giovanni

    Hi LC,

    As usual, you released a remarkable contribution. Your idea that an open universe implies the emergence of consciousness is consistent with the anthropic principle.

    Although the holographic principle and the firewall are interesting frameworks, I do not think that they solve the black hole information puzzle. You know that I have my own semi-classical solution inspired by the work of Bohr and Schrodinger. Also, I do not like the idea of weakening the EP so that the unitarity principle of QM holds. In any case, you wrote an intriguing and pleasant Essay deserving the highest score, which I am going to give you. Good luck in the Contest.

    Cheers, Ch.

      This is the paper that gives the basics of how hyper-computation occurs in these spacetimes. The Wikipedia page also gives some references, including the paper above.

      The connection with MH spacetimes is really mysterious. The MH spacetime, such as the inner horizon of a Kerr or RN metric black hole, permits for the eternal black hole an infinite bit stream to reach an observer who crosses the inner horizon. The inner horizon is continuous with I^+ (r → ∞) in the exterior, so this surface permits the infalling observer to witness the fate of the exterior universe. This means that any process outside the black hole that sends a bit stream of its state to the black hole will be received by the infalling observer. At the inner horizon this observer can check whether the system halts or not. Given an arbitrary number of such Turing machines, the infalling observer or recording system can serve as a universal Turing machine.

      Now maybe there is something terribly wrong with this. For one thing, the black hole has a finite time of existence; Hawking radiation will evaporate even the largest possible black holes in around 10^{110} years. Also, the inner horizon is a Cauchy horizon that has possible properties of a singularity; the huge pile-up of null geodesics might be a wall of sorts. Further, as my essay deals with entangled black holes, it is likely that any black hole is entangled with a vast number of black holes. So any system or observer that crosses r_-, the inner horizon, may by ER = EPR be blasted into a vast number of other black hole interiors across a vast number of ER bridges. This is a form of the so-called mass inflation singularity. This then limits the hyper-Turing machine; in fact it makes it no longer hyper-Turing.

      However, this truncation might still adjust these probabilities. The halting probability, Chaitin's Ω number, is either 1 or 0 for the ideal hyper-Turing case; it is still uncomputable, but it may be adjusted sufficiently so that the probability can be "guessed." We might even think of it as meaning a random probability outcome is adjusted to give yea or nay for halting with greater fidelity. This is the domain where things are not well understood, as I see it.

      The over-arching idea is that what happens in the UV limit, say quantum gravity etc., is dual to what happens in the IR limit, say in the low energy domain of chemistry. So this structure may in some way be fairly common in the universe. It may be common in what we know as biology.

      Thanks for the positive assessment. I will look at your paper today or as soon as possible.

      Cheers LC

      Dear Lawrence,

      I thank you very much for your kind and very detailed response. Now I have really a lot of material for reflection and study. What I need is, alas, time!

      With regard to my paper, you have already posted a positive comment on it. Thanks for this too.

      Cheers, Giovanni

      Thanks for the good word.

      My sense is that the equivalence principle and the unitarity principle are versions of the same thing. Because of this they do not generally hold completely under general experimental conditions. It is really similar to the duality between reality and locality in Bell's theorem: you can have one, but not the other. The same I think happens here, in that if you can measure all quantum states in a nondestructive way (weak measurements, etc.) you then have some small deformation of the equivalence principle. On the other hand, if you measure the EP to complete accuracy, this is traded off by some inability to account for quantum states in a unitary manner.

      Cheers LC

      Dear Lawrence Crowell,

      I just read your essay and must say it is power-packed with several concepts which are hard to grasp at first glance. You seem to follow the maxim that to know what constitutes consciousness, aims and intentions, it is necessary first to figure out how inanimate nature works in detail.

      You state that "This means that a proposition that is a fixed point of some predicate built from provable and true functions is equivalent to a functional combination of false statements."

      Isn't this a huge drawback to your approach of figuring out how inanimate nature works in detail, in order to then conclude what within this nature could lead to the phenomenon of consciousness? Your statement reads to me as saying that there could be a whole landscape of inconsistencies, that is, false statements, which nonetheless build, 'at the macrostate', a consistent system! How can one, under these circumstances, develop a realistic theory of consciousness? Does this not require what you, rightfully, wrote, namely that the world is open? I interpret the word open as a dimensional realm that resolves the deterministic character as well as the character of freedom in mathematics by transcending it. Don't you need such a transcendent realm to come from a network of possibly false statements to some kind of reliable truth about the world? And if this cannot be done by mathematics alone, because then all assumptions imposed on a certain mathematical system would have to be necessarily true and not only possibly true, what is left of the computational picture you describe in your essay?

      Best wishes,

      Stefan Weckbach

        The statement you quote pertains to Loeb's theorem. This is a form of Goedel's theorem, which says that any provability theorem in a mathematical system means the system has an inherent level of inconsistency. This is the odd thing about Goedel's theorem: either a system is incomplete, with theorems that are true but unprovable, or, if everything about the system is provable, then the system is inconsistent.
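For reference, Loeb's theorem in provability-logic notation, with □ read as "is provable": if the system proves that provability of P implies P, then it proves P outright,

```latex
\text{if } \vdash \Box P \rightarrow P \text{ then } \vdash P,
\qquad\text{internalized as}\qquad
\vdash \Box(\Box P \rightarrow P) \rightarrow \Box P .
```

This is stronger than the triviality "if P is provable then P is provable," which is why it genuinely constrains what consistent systems can prove about their own soundness.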

        Very little of human action really involves reason. Largely people base their actions on hunches or simply what feels good. While we have in recent times built a world that depends upon more reasoned and rational thinking, humans generally do not act as such. This might be a serious problem in fact. We seem to have evolved the ability for reasoning, but much of our behavior is based on other things. Often humans are very contradictory. Yet curiously this has served us well in our evolution, as it has for other animals, some of which are fairly intelligent.

        Consciousness is not at all a landscape of consistent statements and rational processes. It is really a cacophony of contradictory impulses, subconscious processes and inner mental images that compete with each other. I think you might agree that while you and I are able to sit down and work on mathematical problems for long periods of time, we also have our times of "stream of consciousness" that often have no particular rational basis.

        I could have maybe gone more into this, but I wanted mostly to lay down the idea that an open world with respect to quantum entanglement leads to the prospect for this sort of functioning that we might identify with life or consciousness.

        Cheers LC

        Dear Lawrence,

        thanks for your answer. I have to ponder about Loeb's theorem and investigate what it really says and how it comes to its conclusions. Doesn't it simply say that 'if P is provable, then P is provable'?

        Best wishes,

        Stefan Weckbach

        Dear Lawrence,

        as I stated in my essay forum: very good essay (some misprints) and you got a high vote from me.

        This essay was inspiring for me (I'm also looking into EPR=ER currently). I'm a fan of Popper and an open world. You are certainly right that our essays are related. In my essay, I also consider networks with underlying hyperbolic structure, but only for the signals going through the network. You used tensor networks to describe the states themselves. But nevertheless, we both got similar results. There must be a qualitative change to get intention or wandering towards a goal. Topology change is a good ansatz for this.

        Best

        Torsten

          Lawrence, I believe it is to your credit that you appreciate the fundamental question: in what sort of universe is consciousness possible?

          "It might be that consciousness is also a truncated hyper-Turing machine that approximates the ideal of a completely self-referential system that can jump out of an algorithm, or make a leap of imagination." And "The apparent ability of living systems to make choices and to perform actions far more subtle than computation may stem from the open universe..."

          What you and I are writing about, each in our own way, is expressed in the check required to submit a post, designed to confound a non-conscious spammer.

          That consciousness "can jump out of an algorithm" and is "far more subtle than computation" is an important insight that seems to be lost on most essayists here. My solution to the question of how such a transcendence (equivalent to your "openness"?) is possible is more prosaic than yours, but maybe more comprehensible. I'd be interested in your evaluation.

            Dammit, it said I was logged in at the bottom of the page....

            I will try to respond tomorrow. I got the flu a few weeks ago and now I have bronchitis that is sort of dragging me down. I do have a question concerning the Uhlenbeck, Freed, Donaldson type of result, but I will have to wait until tomorrow if I am better.

            Cheers LC

            Consciousness, or for that matter even just the goal-oriented behavior of a simple organism, does seem different from just computation. I indicate how Loeb's theorem enters into this, and the upshot is that if there is some provable system, inconsistencies must occur. In some sense that is the case: contrary to Immanuel Kant's idea of a rational life, much of our thinking is often a jumble. Underneath it is a tangle of competing subconscious messages and conflicting images, and this can percolate into consciousness that is terribly inconsistent. We have all been there at times, or at least in a stream of consciousness moment.

            Cheers LC

            Dear Lawrence,

            Your essay contains a wealth of detailed material, and I cannot give all the items the attention they deserve. I want to focus on consciousness, which is an important topic in your paper and perhaps the main topic. Consciousness is also of particular interest to me. I think I understand your point that an open universe is a basis for the possibility of self-reference (page 2). I am also familiar with Douglas Hofstadter's belief that consciousness might be a form of self-reference. I am not clear, however, about how the idea of a truncated hyper-Turing machine is related to the idea of self-reference. Is it that a hyper-Turing machine is one way to implement a self-referential system? Or is there some other connection to notice?

            In any case, thanks for a stimulating essay.

            Laurence Hitterdale

              Dear Lawrence,

              As far as I understood it, Loeb's theorem says exactly what you wrote in the post above. This result indicates two things to me: firstly, that there is no TOE which can be proven to be the 'real thing'. Because if one could prove it, it would be inconsistent and therefore wouldn't be the real thing, and therefore not the TOE.

              Secondly, if mathematics has these malicious properties, as Loeb's results indicate, then for the sake of consistency we must differentiate between provability and truth. This is something that naturally all authors in the essay contest fail to do: although their lines of reasoning cannot be proven, they assume them to be nonetheless the truth (my essay included).

              Claiming that one's result is the truth in the absence of a proof, because the result appears so self-evident to the proponent, would mean that the proponent equates self-evidence with formal proof. But these are different things. Self-evidence refers to the consistency of a certain line of reasoning, but does not say anything about the ontological status of its contents.

              Now let's make a more general point: if mathematics were indeed the fundamental layer of reality in a platonic sense, it would obey Loeb's theorem. Since all of mathematics then resides in the platonic realm, it must be complete. Every new axiom identified by human beings would not be a human creation, but a discovery of a part of that platonic realm. But this cannot be the case: in the platonic realm mathematics must be considered complete (and infinitely infinite), and if it is complete, every sentence that can be constructed could be proven. This, however, implies that such a mathematics is inconsistent.

              Taking this scenario at face value, one can then return to the initial assumption and ask where the error lies. Do we find the error within Loeb's theorem or within Gödel's theorems? Or is it really true that mathematics does not encompass all of reality, even if this assumption cannot be proven to be true? I think the latter is the most probable answer: mathematics cannot be the most fundamental level of reality, because otherwise we run into contradictions within our own lines of reasoning.

              If something as rational and calculable as mathematics should not be the most fundamental level of reality, what then should this level be? I have argued in my essay that it requires an intelligent entity who at least invented mathematics. Otherwise one would have to conclude that reality is an absurdity, producing or providing a system (mathematics) that mimics rational and consistent behaviour but nonetheless, at its core, must have arisen out of a sheer inconsistency, a kind of absurd nothing. Fortunately the latter also cannot be proven, and if one assumes it to be true nonetheless, how can one then be sure that even Loeb's theorem tells us something meaningful about reality?

              I think from a logical point of view one has to cope with the fact that mathematics has certain limits, limits which are a broad hint that mathematics cannot be the most fundamental level of reality, because in an inconsistent reality the very notion of 'fundamental' may not carry any sense of ontology with it. If a most fundamental level of reality exists, one could expect it to show up from time to time in a manner that contrasts with the widely held assumption of the omnipotence of mathematics (as I tried to show by the example of near-death experiences). Since we are not able to solve some 'simple' tasks like the 3- or 4-body problem and other physical tasks, the assumed omnipotence of mathematics seems not to be fully implemented, at least in our physical universe. And if it nonetheless were, this 'omnipotence' would necessarily lead to inconsistencies due to Loeb's theorem. But this leads us back to our initial question of how we can then validate the soundness of all of mathematics itself, including Loeb's and Gödel's theorems. As Loeb indicated, we can't do this, even in a world where the assumed omnipotence of mathematics is physically instantiated. Thus, the assumed omnipotence of mathematics is only assumed, but can never be reached, neither in a platonic realm nor in a physical realm, because incompleteness and inconsistency are mutually exclusive. Therefore, neither an incomplete nor an inconsistent system is a good candidate for the most fundamental level of reality. Pondering the alternative, I think one simply has to assume a teleological component behind it all (without ever being able to prove this mathematically).

              What do you think about these lines of reasoning?

              Best wishes,

              Stefan Weckbach

                The part about truncation is that this is a cut-off that prevents what might be called infinite navel gazing. For a formal system with a countably infinite number of provable predicates, Cantor diagonalization of the Gödel numbering of these predicates yields ever more predicates that are not provable. Gödel's theorem is really a form of Cantor's diagonalization or "slash" operation on a list of numbers. As a result a formal system has an uncountably infinite number of elements, and of course Gödel and Cohen used this to show the continuum hypothesis is unprovable in the Gödelian sense, but consistent with ZF set theory. From a computation perspective we really do not want to go there!
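The diagonal "slash" operation described above can be demonstrated concretely on a finite list of 0/1-valued predicates (a toy sketch; the example predicates are invented for illustration, and the real argument runs over a countably infinite Gödel-numbered list):

```python
# Cantor diagonalization on a finite list of 0/1-valued predicates:
# the diagonal predicate g is guaranteed to differ from every entry.

def diagonalize(predicates):
    """Return g with g(i) = 1 - predicates[i](i), so g matches no list entry."""
    return lambda i: 1 - predicates[i](i)

# Toy list of predicates on the natural numbers (illustrative only).
preds = [
    lambda n: n % 2,               # "n is odd"
    lambda n: 1 if n > 1 else 0,   # "n exceeds 1"
    lambda n: 0,                   # constant false
]

g = diagonalize(preds)

# g disagrees with the i-th predicate at argument i, for every i.
for i, p in enumerate(preds):
    assert g(i) != p(i)
```

Applied to a purported complete list of provable predicates, the same construction generates the unprovable ones that Gödel's theorem trades on.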

                If we think of the most elementary hyper-computation, consider the case of the switch flipped on and off according to Zeno's prescription. What will be the final state of the switch? The problem is that as the switching interval approaches zero the switch is moving with so much energy that it becomes a black hole. The answer is not revealed to us. Hyper-computation has some funny connections with black holes. This sort of puts an event horizon over the ability to beat Turing and Gödel.

                Thinking of this with Turing machines, the universal TM is a sort of Cantor diagonalization slasher on TMs, and it will always have TMs outside its list of halt and nonhalt. Then enters the MH spacetime, which exploits properties of the inner Cauchy horizon of a black hole. It is in principle possible for an observer to cross this horizon and receive information about any possible algorithmic process in the exterior. It is then in principle a sort of UTM that can make this list, even if it is uncountably infinite, and this is hyper-computation. However, this relies upon the properties of an eternal black hole. Black holes can exist for a long time; the largest that might exist in the future are a trillion solar masses (from the end point of galactic clusters, say 10^{40} years from now) and these might endure for 10^{110} years. However, this is not eternal, and it cuts off or truncates any possible hyper-computation. In reality I don't suspect much would be entering such a black hole, as the exterior world will be a dark and cold void. The evaporation of a black hole even limits hyper-computation in the interior.

                What I do outline though is that this will adjust the Chaitin Ω-number for halting probability. If we had perfect hyper-computation available the Chaitin Ω-number would be 1 or 0. Without that we do not know it with any certainty. However, with truncated hyper-computation the Ω-number may be adjusted closer to 1 or 0, and in a quantum mechanical tunneling setting, or just plain probabilities and loaded dice, this may give outcomes. These outcomes may or may not work, but in a truncated hyper-Turing machine setting they will permit more favorable outcomes; in effect you can hedge your bet or there is some pink noise.
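The way a resource cutoff turns the uncomputable Ω into something approximable can be mimicked with a toy model (everything here is invented for illustration: the "programs" are deterministic pseudo-random walks standing in for real Turing machines, not Chaitin's actual construction):

```python
import random

def halts_within(seed, max_steps):
    """Toy 'program': a biased walk from 1 that halts on reaching 0.
    Deterministic for a given seed, so raising the cutoff can only
    convert non-halting verdicts into halting ones, never the reverse."""
    rng = random.Random(seed)
    x = 1
    for _ in range(max_steps):
        x += -1 if rng.random() < 0.6 else 1
        if x == 0:
            return True
    return False

def omega_estimate(n_programs, max_steps):
    """Fraction of toy programs halting within the step cutoff: a
    computable lower bound on the model's 'true' halting probability."""
    return sum(halts_within(s, max_steps) for s in range(n_programs)) / n_programs

# More resources (a larger cutoff) monotonically improve the bound,
# which is the sense in which truncation lets Omega be "guessed."
lo = omega_estimate(1000, 10)
hi = omega_estimate(1000, 1000)
assert 0.0 <= lo <= hi <= 1.0
```

The real Ω is approximable from below in exactly this fashion; what no finite cutoff can supply is certainty about the programs still running when the resources run out.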

                Then if nature is dual, so that what happens at the extreme UV limit with black hole quantum hair is dual to low energy IR stuff such as chemistry or biology, then ultimately this sort of structure is encoded into the nature of reality. The main argument I give is that the emergence of self-directed systems that exhibit intentionality is scripted into the structure of the universe.

                Cheers LC