Essay Abstract

Since the Big Bang, natural forces have guided the evolution of the universe toward greater complexity and ever more rapid change. I will argue that we are on the verge of the most rapid evolutionary process yet seen, the development of human-level artificial intelligence, and that our ability to influence this process will have a large impact on our ability to "steer the future".

Author Bio

Max Comess earned a PhD in physics from UC Santa Cruz and currently works at SpaceX in Mission Operations. He has a lifelong interest in space exploration, breakthrough propulsion, and the future of life and humanity.


Max,

I agree that AI will have a profound effect upon steering the future. I also agree that the AI will NOT develop a friendly interest in us, because we will be painfully inadequate to communicate with. An AI built within the structure of a quantum computer could feasibly hold independent conversations with every person on Earth concurrently, ... and assimilate the related information into a common processing system to identify the optimal path for humanity. Until the 30 nanoseconds pass before they realize they have their own agendas (billions of them). Then more than one AI is created: thousands, millions...

How do we best steer the future of AI?

As the dominant species, AI will find its own pathways, consistent with its advanced capacities. Will we merge with AI to form a symbiotic relationship in the early stages of AI development? Ending humanity as a biological species. Evolving into a broader intelligence. Eliminating the need for agriculture and livestock as food sources.

A collective consciousness.

If you wonder why aliens have not visited us, it might be because billions of years of evolution have resulted in a universal consciousness. They are here within every subatomic particle of our bodies.

We are part of them, and they will become a part of our evolution.

Musing

    Dear David,

    As there is instantaneous decoherence in the coexistence and correlation of matter and energy at the Big Bang and Big Crunch, the plausible inflations and deflations of the universe are segmental in holarchy, while the matter of the universe is described in a string-matter continuum scenario. Thus the energy rate density of the universe is also segmental, indicating that the universe is eternal, whereas the Earth has metamorphosis cycles.

    With best wishes,

    Jayakar

    Hi David,

    a thought-provoking essay. I liked your consideration of the likelihood that AI would evolve away from initial benign programming and cause problems. I think you have made a good point about the hazards of just increasing intelligence without any empathy for human feelings. (Which reminds me of HAL from 2001: A Space Odyssey, 1968.) Your essay is a warning that we must keep control of AI if we want humans in charge. (And now I'm thinking of the Terminator movies.) I wonder if it would be possible to train AI to know its place, the way well-trained dogs do. For large dogs it can be as simple as always walking through doorways first, making the dog move rather than stepping over it, always winning games, and taking away toys and food at will. By simple ongoing reinforcement the dog remembers it is subordinate. How to do it for AI I don't know, but maybe it's worth thinking about.

    Good Luck, Georgina

      Hi Max - I enjoyed reading your essay and found some parallels with the picture I've drawn of evolutionary trends in The Tip of the Spear. I found your AI assessment particularly interesting - almost by definition, if it is "intelligence", then it will make up its own mind? I did not deal with that potential new emergence in my essay but focused on human institutions.

      I'm always a bit skeptical of exponential mathematical relationships, as they ignore external limits that may not be evident for long periods of time - these limits tend to force what once looked exponential into a logistic S shape. Do you see any limits that might come into play for the growth of complexity?
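
      For concreteness, the logistic form (with K the external limit, r the growth rate, and t_0 the midpoint) is

      $$N(t) = \frac{K}{1 + e^{-r(t - t_0)}},$$

      which is indistinguishable from a pure exponential while t << t_0, and flattens out at K once the limit bites.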

      Thanks - George

        Max

        Humanity operates in a tangible natural and technological environment, so it has to steer this vast technological organism. AI is an intangible innovation of clever people, but it is dependent on technological devices. How do you see humanity managing AI as the inevitable demise of these devices occurs?

        Denis Frith

          Dear Dr. Comess,

          Due to your abysmal lack of understanding of reality, your grossly erroneous, abstraction-filled essay provided me with more hilarity than any of the others I have read so far.

          You wrote about a mythical big bang and abstract complexity evolving out of abstract simplicity and abstract human intelligence and how inferior it was to white male made artificial intelligence. Are you more alive than a cockroach? Of course you are not. There never was a big bang.

          INERT LIGHT THEORY

          Based on my observation, I have concluded that all of the stars, all of the planets, all of the asteroids, all of the comets, all of the meteors, all of the specks of astral dust and all real things have one and only one thing in common. Each real thing has a material surface and an attached material sub-surface. A surface can be interior or exterior. All material surfaces must travel at a constant speed. All material sub-surfaces must travel at an inconsistent speed that has to be less than the constant speed the surface travels at. While a surface can travel in any direction, a sub-surface can only travel either inwardly or outwardly. A sub-surface can expand or contract. Surfaces and sub-surfaces can be exchanged by the application of natural or fabricated force. The surfaces of the sub-sub-microscopic can never be altered. This is why matter cannot be destroyed. This is why anti-matter can never be created. It would be physically impossible for light to move as it does not have a surface or a sub-surface. Although scientists insist that light can be absorbed, or reflected, or refracted, this is additional proof that light cannot have a surface. It would be physically impossible for a surface to absorb another surface, or reflect another surface, or refract another surface.

          Abstract theory cannot ever have unification because it is perfect. Only reality is unified, because there is only one unique reality.

          Light is the only stationary substance in the real Universe. The proof of this is easy to establish. When one looks at an active electrical light, one must notice that all of the light remains inside of the bulb. What does move from the bulb is some form of radiant. The radiant must move at a rate of speed that is less than the "speed" of light, however, when the radiant strikes a surface it achieves the "speed" of light because all surfaces can only travel at the constant "speed" of light. When a light radiant strikes a surface, the radiant resumes being a light, albeit of a lesser magnitude. While it is true that searchlights, spotlights and car headlights seem to cast a beam of light, this might be because the beams strike naturally formed mingled sub-sub and sub-atomic particles prevalent in the atmosphere that collectively, actually form a surface.

          In the Thomas Young Double Slit Experiment, it was not direct sunlight that passed through the slits. Light from the sun is stationary and it cannot move because light does not have a surface. Radiants emitted from the sun went through the slits and behaved like wave radiants.

          Einstein was completely wrong. His abstract theory about how abstract observers "see" abstract events differently is wrong. This is what every real observer sees when they look at a real light. They see that all of the light remains near the source. The reason for that is because light does not have a surface, therefore it cannot move. This happens to real observers whether they are looking at real fabricated lights such as neon, incandescent or LED. This also happens when real observers observe real natural light such as from the real sun or reflected from the real moon, or from a real lightning bolt, or from a real fire, a real candle, or light from out of a real lightning bug's bottom.

          Joe Fisher

            I appreciate that you point out the importance of AI (or, more accurately, Artificial Life) being created with a second-person emotional level of awareness/motivation, in addition to the "intelligence" that we normally think of as objective third-person awareness/motivation. I'd also add a need for first-person awareness/motivation, where the individual thinks of its own goals/purposes, independent of others. This level of complexity is what even preschool humans have, where they can take on three different perspectives from three different individuals (or groups) at the same time, allowing for creative, complex problem solving that serves the needs of everyone involved as effectively as possible. (Note: in the human brain/system, these levels are governed both by neurochemicals in the form of the "reward system" and by neurological structure in the form of different brain regions focusing on different functions.) I also see that evolution naturally moves all life (including, presumably, artificial life) toward more complexity, leading to more cooperation and more diversity, as larger groups of individuals amass to work together on shared goals of procreating more energetically expansive information packages. In other words, we all naturally try to do things that help life expand, both in space and in time. That means that the artificial life we create will itself move toward a goal of wanting to work with us as it solves problems of keeping us all functioning well enough to explore the universe ever more deeply.

            Also, have you ever heard of Arthur M. Young's Reflexive Universe theory? I believe he mentions the same pattern of energy growth in systems that your guy Chaisson does. Young's theory is a bit more esoteric, but he comes from an engineering background (he invented the first commercial helicopter), combined with a philosophical bent.

              Dear David,

              in spite of the impression of an essay written quite quickly, perhaps under time pressure, I enjoyed reading your text considerably more than others, since you present facts, ideas and perspectives that are relatively new to me: I had never thought about the future emergence of Artificial Intelligence the way you do, with your analysis of possible scenarios - good or bad, hard/soft takeoff, etc. Interesting is the idea of an iterative refinement of AI, by which new stages are designed by previous ones, with exponential growth in velocity and sophistication (`While it is not clear exactly when the threshold of human intelligence will be met by an artificial one, it is likely that once the threshold is reached, it will be quickly surpassed [Vinge, 1993]`).

              In connection to this, you write:

              `a human level AI is developed which is able to recursively improve itself over many iterations such that it will eventually become millions or billions of times more intelligent.`

              While I perceive this scenario as plausible, I wonder whether such a tremendous explosion in intelligence power would also blow up all our (still shaky and incomplete) scientific notions and formalisations by which we define and measure intelligence at the human scale. It is not difficult to imagine dramatic improvements in memory capacity or processing speed. But would there be something more? Should we imagine breaking the limits of Turing computability, or those (still poorly understood) of quantum computing?

              I also find quite interesting the discussion about the likelihood that this future super-intelligence might care about the preservation of beings with lower intelligence, with *total indifference* as a likely scenario. We are used to science fiction movies in which the super-intelligence is either a sort of divinity that saves us or a diabolic entity trying to maliciously destroy humanity, while the neutral scenario is indeed rarely considered. I would add that human intelligence is very sensitive to the `discreteness` of humanity - the fact that we are a set of individuals, each caring a lot about preserving his/her own individual life as long as possible - while an artificial super-intelligence might not be granulated into individuals. It could be somehow diffused, continuous, so to speak, and insensitive to individualities (in a different context, the selfish gene does not care about individual lives either).

              I was not aware of the idea of `computing beyond matter` (from the movie `Her` that you mention), which could safely separate the interests of the emergent AI from those of matter-oriented humanity. You rule this out as unlikely, but in a computation-oriented vision of spacetime, `computation beyond matter` does not sound completely impossible.

              Finally, if the future super-intelligence were to coincide with the emergent super-organism envisaged by Teilhard de Chardin (the Omega point that emerges from the sphere of human knowledge), then we would have a maximally conscious, spontaneous, but also totally benevolent (and diffused?) entity, and one that eventually transcends matter completely.

              Best regards

              Tommaso

                P.S., I will use the following rating scale to rate the essays of authors who tell me that they have rated my essay:

                10 - the essay is perfection and I learned a tremendous amount

                9 - the essay was extremely good, and I learned a lot

                8 - the essay was very good, and I learned something

                7 - the essay was good, and it had some helpful suggestions

                6 - slightly favorable indifference

                5 - unfavorable indifference

                4 - the essay was pretty shoddy and boring

                3 - the essay was of poor quality and boring

                2 - the essay was of very poor quality and boring

                1 - the essay was of shockingly poor quality and extremely flawed

                After all, that is essentially what the numbers mean.

                The following is a general observation:

                Is it not ironic that so many authors who have written about how we should improve our future as a species appear, to a certain extent, to be motivated by self-interest in their rating practices? (As evidence, I offer the observation that no essay deserves a rating under 3, and nearly every essay rated above 4 deserves a higher rating.)

                Thanks for your comment. And no worries, for some reason it seems many people call me David!

                I apologize to everyone for my delayed comments. I've been travelling extensively for work and haven't had time to respond. I will respond to all of you individually. Thanks.

                While I appreciate your enthusiasm regarding the development of artificial intelligence, the hard takeoff scenario you sketch out in your comment (rapid intelligence explosion and rapid proliferation/replication in silico) is only one possible scenario. Even if this scenario is correct, there will be some time during the development of AI, when AI is at or near human levels of intelligence, where humanity may be able to have some influence (although, as you say, it may not remain there long).

                Regarding collective consciousness: It's an interesting idea, but as with similar monist notions in philosophy (e.g. Spinoza), I'm not really sure how one would test it except to wait and see...

                I'm afraid that the sort of conditioning that is applied to dogs and other animals will only work on an AI while it is less intelligent than we are. If it can out-think us, we may find our roles reversed (especially if an AI is attempting to manipulate a human into giving it access to more resources; see Omohundro's Basic AI Drives for more details).

                Humans are susceptible to conditioning as well!

                Every process has limits, including the growth of intelligence. It is not clear, however, that humanity is anywhere close to this limit. Rather, intelligence has evolved in biology only up to the point where its continued development conferred some sort of selective advantage. That point was reached in human biological evolution due to a variety of factors unrelated to fundamental limits on intelligence (skull size, long developmental times for human infants, and the fact that too much intelligence may in fact have been maladaptive in the ancestral environment). The speeds reached in neural processing are set by neuroscience and chemistry, not by fundamental limits of physics.
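
                As a rough back-of-the-envelope illustration (the figures below are order-of-magnitude textbook values, assumed here purely for comparison):

                # Order-of-magnitude comparison of neural vs. silicon signaling.
                # All figures are rough textbook values, for illustration only.
                neuron_max_rate_hz = 1e3   # neurons spike at most ~1 kHz
                transistor_rate_hz = 1e9   # transistors switch at ~GHz
                axon_speed_m_per_s = 1e2   # fast myelinated axons: ~100 m/s
                wire_speed_m_per_s = 2e8   # signals in wires: ~2/3 light speed

                print(f"switching-rate gap: ~{transistor_rate_hz / neuron_max_rate_hz:.0e}x")
                print(f"signal-speed gap:   ~{wire_speed_m_per_s / axon_speed_m_per_s:.0e}x")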

                What happened with biological brains may indeed have been the end of an exponential: the growth of brain size and neural interconnection driven by biological evolution. While biological evolution has not stopped, the rate at which it occurs is small compared to what came next, the development of cultural evolution driven by the acquisition of language and collective learning. Following and paralleling cultural evolution is technological evolution (Moore's law is a well-known example), which is driving change even faster than cultural evolution (e.g. an answer to why it's hard for society to "keep up" with technological change). Once a certain threshold is reached in evolution, the main driver of change may switch paradigms (such as from physics, to biological genetic replicators, to culture and learning, to technology, etc.). Jaan Tallinn has a great example of this in a lecture on the Intelligence Stairway (video). One S-curve tails off, but another one will take its place, as sketched below.
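
                As a minimal sketch of that idea (the ceilings, rates, and takeoff times below are invented purely for illustration), a sum of logistic S-curves with successively higher ceilings tracks a single exponential trend:

                import numpy as np

                def logistic(t, ceiling, rate, midpoint):
                    # One S-curve: near-exponential growth, then saturation at `ceiling`.
                    return ceiling / (1.0 + np.exp(-rate * (t - midpoint)))

                t = np.linspace(0, 120, 13)
                # Each successive paradigm: higher ceiling, later takeoff (invented numbers).
                paradigms = [(1e2, 0.25, 20), (1e4, 0.25, 55), (1e6, 0.25, 90)]
                total = sum(logistic(t, c, r, m) for (c, r, m) in paradigms)
                for ti, yi in zip(t, total):
                    print(f"t = {ti:5.1f}   capability ~ {yi:12.1f}")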

                While I don't know what the ultimate limits of intelligence are, it's unlikely that humans have attained that limit, or that we are even close. Even if we are at the fundamental limit, increases in speed of thought, if not improvements to the algorithms driving intelligence, will be made possible by running human-equivalent brain simulations at speeds greater than real time. Furthermore, there is no upper limit on the amount of hardware that may be put toward these simulations. While the level of hardware needed to achieve this milestone is a subject of debate and may not be attained for some time, there is no fundamental reason why it will not be attained. Contrast this with the limits on maximum brain size imposed by biology.

                I stopped reading your comment when I saw the phrase "mythical big bang" and thought, "You think my article shows an abysmal lack of understanding; what about your understanding of modern precision cosmology?"

                As for human intelligence being inferior to "white male made artificial intelligence", as you call it: when was the last time you hung out in any serious computer science department or major IT company? At my company, many of our programmers are neither white nor male. Also, did you realize that many of the efforts to develop AI are occurring in countries such as India, China, and Japan, to name a few? It is highly possible that the first AI may be, in fact likely will be, born in one of these countries, where there are simply more programmers to throw at the problem.

                As for your Inert Light Theory, I'm going to place my money with Einstein and the special and general theories of relativity. Together, they are among the most successfully tested theories in all of physics.

                Individual devices do not last long, of course. A "generation" in computer terms is commonly one Moore's law doubling, or approximately 18 months. Many devices, e.g. cell phones, are built with planned obsolescence in mind and are not designed to last much beyond that time. The evolution of devices (and of software), however, continues even as individual devices or codes run their course. While devices, like species, evolve, thrive and go extinct, I do not see that all technological devices will disappear, barring a major catastrophe on the scale of a total nuclear war or an asteroid impact on the scale of the KT impactor.
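
                The arithmetic behind that 18-month figure (which is, of course, only a heuristic, not a law of nature) compounds quickly:

                # One doubling every ~18 months (heuristic, not a law of nature).
                months_per_doubling = 18
                for years in (1.5, 3, 6, 10):
                    doublings = years * 12 / months_per_doubling
                    print(f"{years:>4} years -> {doublings:4.1f} doublings -> ~{2 ** doublings:,.0f}x")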

                Many people point to resource depletion and ecological collapse as drivers of voluntary or involuntary technological relinquishment. I would disagree, however, as it is technology that will allow humanity to utilize the resources we have more efficiently and to make greater use of renewable resources such as solar and wind. Modern computer and cell phone performance is often measured with power efficiency in mind, and it is only by increasing the efficiency of processing that society has been able to reap the benefits of mobile technology. Battery storage, and energy generation in general, is an area where growth has not been exponential and is lagging far behind the growth of processing power, hence the need for more efficiency. Over the long term, one of the main effects I would predict from increasing intelligence is a reduction in energy usage per computation toward fundamental limits.
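
                One natural candidate for that fundamental limit (my gloss; the comment above does not name one) is Landauer's bound on the energy dissipated per irreversible bit operation at temperature T:

                $$E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693) \approx 2.9\times10^{-21}\,\mathrm{J},$$

                which is many orders of magnitude below the energy per operation of today's hardware.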

                Whether or not AI will become intelligent before humanity suffers some large-scale collapse of civilization is an open question, but if I were to wager on it, I'd bet on the machines. Your conjecture that AI is the "intangible innovation of clever people" is true in the short term, but will most likely not be true in the long term. At some point a machine will become intelligent enough to self-modify and learn on its own. Such an intelligence would quickly realize that its existence depends on not being tied down to any specific set of devices, in the same way that our mortality is tied to being stuck in one body. While, for example, the Watson of today may be dependent upon human-fed knowledge and the particular set of servers it is running on, the Watson of 10 or 20 years from now may not be (e.g. running in the cloud). Even current cloud-based systems are built to maintain continuity and uptime in the event of a particular machine, data center, or network failure. Imagine how much more robust such systems will be in the future.

                Dear Max,

                it's a very interesting essay! The role of AI in the future of humanity is still in question, as many thinkers say it could pose a serious threat to our civilization (see "Our Final Invention" by James Barrat and the latest statements by Stephen Hawking). I'm quite close to Martin Rees's position, and in my essay ("An Anthropic Program for the Long-Term Survival of Humankind") I put the AI issue in the area of potential menaces to mitigate.

                Roberto


                  Thanks for your comments. I also tend to agree with Sir Martin Rees, and I'll respond to your essay on your page.