Essay Abstract

In order to steer the future, humanity must overcome the Babel problem, that is, the failure of communication and the scattering and divisions of community. I discuss why I believe this is the most fundamental problem of humanity, and possibly the only real one, others being challenges which we could meet easily enough if we could unite the human community and integrate human knowledge. Finally, I ask whether information technology and artificial intelligence offer us hope of a way to move beyond Babel.

Author Bio

The author is an experimental physicist and writer on issues of emerging technology and global and human security. He has taught physics at UNC and was a postdoc in Princeton's Program on Science and Global Security. He is a member of the International Committee for Robot Arms Control, and was one of the earliest proponents of a ban on autonomous weapons (1988). He was also the original coiner of the term Artificial General Intelligence (1997). Regarding the use of biblical references in his essay, Gubrud is an atheist.


Mark,

That is a well-thought-out and passionately argued manifesto for coming to rational agreement on the overwhelming issues of the day, but it doesn't lay out a clear strategy to achieve it.

Hopefully you won't take it personally that I chose to debate you, given the premise of your paper, but you are obviously an intelligent individual and apparently willing to engage, as I've noticed your comments elsewhere.

To begin with, you make an elemental error that seems foundational to many of the religions and ideologies of the day: you mistake the ideal for the absolute. If you reached that universal state of the absolute, where all is balanced out in perfect equilibrium, there would be no reality as we experience it. No bad, but also no good. A big flatline on the universal heart monitor. In politics, when the center does prevail, it invariably goes too far and becomes some totalitarian black hole at the center of the vortex it has created.

If you really want to understand the human condition, accept that it is an extension of the convection cycle which explains and describes all other phenomena on the surface of this planet on which we reside. Those wars and tribal conflicts are fracture zones between different plates. The rich invariably rise up on their columns and waves of energy and momentum, until they cool off and fall back to earth. Yes, it is nice when the surface is smooth and we all seem to bob gently in those proverbial markets, but storms of accumulated energy often blow through and upset some boats while carrying along others.

My view, as I conclude in my own entry, is that the most pressing problem, and the one likely to need a solution when the current debt bubble pops again, is that we treat money as a commodity rather than the contract it effectively is, and this facilitates massive wealth extraction from the normal financial processes lubricating the world economy. That is not to say there are not other major issues, many of which you list, but as you note, they seem intractable because the various sides balance each other out, through means fair and foul. Now, obviously it would seem farfetched to even consider taking on the world's banking system, but its power is also its weakness, as it overwhelms all restraint and metastasizes throughout the economy. Three hundred years ago, who ever thought the monarchies would be replaced by 'mob rule'? Yet it was the very power of the system which proved its undoing, as those kings lost sight of their role in serving society and came to think society served only them. Now the banks view the economy similarly, and the eventual result will be similar.

Regards,

John Merryman

    I was impressed by your discussion about the range of challenges we humans face, and our inability to arrive at any sort of consensus on what to do about the challenges, despite the internet and structured fora.

    But I wasn't convinced by your solution, your "tentatively hopeful conclusion". Despite the hopeful phrase "artificial intelligence", there is no such thing, and there never will be, so we are stuck with human limitations. So-called "artificial intelligence" is merely a human tool, nothing more. But real intelligence is a creative thing. It's disappointing that you and many other people, including famous scientists, propagate exaggerated nonsense about "artificial intelligence".

    Machines can NEVER be "sentient or self-willed" or intelligent. So why are machines important to you, but nonhuman living things (which really ARE sentient, self-willed and intelligent) don't even rate a mention in your essay?

      Thanks for your reply, Lorraine. I share your respect for non-human life, both higher animals, on whom we should not be inflicting mass suffering as is happening today in factory farms and slaughterhouses, and nature in general, which is under tremendous pressure from human modification of this planet.

      I disagree with your assessment that there is no such thing as artificial intelligence, although I worry a lot about artificial stupidity.

      Also, I must admit that I am not entirely convinced by what you call my "solution" either, which is why I called it tentative. I do believe that AI systems will be potentially powerful instruments of persuasion, but what the outcome of that will be is less clear. They will also be powerful instruments of problem solving and inquiry, so they have at least the potential to do good.

      John, you are right that I don't have a master plan; I've also not seen a master plan by any other author that I find credible.

      I do suggest that AI, depending on how it is used, has the potential to move us beyond the Babel problem, which I argue is the mother of all our problems. So, consider that a modest contribution to our collectively developing a plan.

      I share your view of humanity as a dynamical system -- at least this is one valid view, and one that can even be useful if we can make it more concrete, i.e. describe the structures and dynamics mathematically. Such "social physics" is a burgeoning field.

      Mark,

      You make excellent points in your essay, slicing and dicing issues based on cultures and beliefs. You speak to both the complexity of the issues and the problem that the answers are unknown, or at least not universally agreed upon. I very much agree that

      "Neither professorships, nor peer-reviewed papers, nor practical experience, nor official status, nor any other credentials are consistently reliable guides to who is right and who is wrong."

      and

      "Closure is almost certainly an illusion, or a sign that one has become a partisan, now to be discounted by the other side."

      For example, you state that global warming, if not quickly addressed, will cause such havoc that "it is questionable whether humanity will be able to adapt", thus apparently committing yourself fully to one of the beliefs that may or may not be right.

      Babel is here. You show it goes way deeper than language. And since there is no universally agreed-upon answer, many believe that we make the best of it with local autonomy, many experiments conducted in parallel, with the Internet and other communications counted on to spread news of both local successes and local failures.

      You mentioned "globalized intellectual elites", as if this culture is somehow different from others. But the reality of their more or less closed communications, where political correctness forbids even discussion of taboo topics, and of their special interests, given that most such elites are funded by the tax dollars of "mere" citizens, suggests that these elites deserve no greater consideration than other special interests.

      It's a complex situation. You discuss structured fora and crowdsourcing (Wikipedia, etc.) but note that "it is not making decisions for humanity." Yet you also note that scientists and academics often opine outside their expertise, disagree, and often disrespect each other's views.

      I live on a ranch one half hour from Stanford, and I associate with both "globalized intellectual elites" and with farmers and ranchers and local folk, and find plenty of "low information voters" in both groups.

      Your discussion of AI as a possible solution is qualified by 1.) It probably won't live up to technology cultists' expectations, and 2.) It might become a target of distrust, fear and hatred. As I agree with Lorraine Ford, above, and don't believe such systems will become "sentient", I don't overly worry about the consequences if they did. Eliza has not come very far in fifty years. I certainly agree with you that autonomous weapon systems are bad.

      Your question about "who will control the technology" is a good one. Do I want Google or an IRS-wielding government to control it? The devil or the deep blue sea?

      You end up, as far as I can see, agreeing with me and with Sabine Hossenfelder that, hopefully

      "Ordinary citizens can use personal AI systems to find answers to the questions they ask..." [Emphasis on they.]

      So, in the end, I think we see much the same problems and hope for much the same answer.

      Thanks for outlining the problems and suggesting a hopeful solution.

      Best regards,

      Edwin Eugene Klingman

        Edwin, Thanks for your evidently careful and thoughtful reading!

        Also, I didn't necessarily mean to assign special above-it-all status to "globalized intellectual elites," I just assumed that's who I was addressing!

        Mark,

        I think the explosion of information serves to both unify and divide. It unifies to the extent that we now have much the same platforms (PCs, smartphones, etc.) and we look at a range of similar sources. It divides in that people also engage groups they find interesting and which confirm their beliefs or biases about things. This has largely been the case for much of the late modern world, and the explosion of information serves mostly to magnify this condition.

        Information is a neutral physical structure, which is associated with entropy according to Shannon's formula. This information can of course just be a random set of bits, or quantum bits, which communicates no message. With the Paul Revere ride it was "one if by land and two if by sea," a two-state, one-bit system. The surprisal of each state was I_1 = I_2 = log_2(1/2) = -1. The entropy is then S = -k(-1/2 - 1/2) = k. This entropy is of course a tiny amount when compared to the heating of the environment by the flames in the lanterns. However, this message only had meaning because of a previous agreement, in a sense a key. A stream of information then has some message or meaning if the sender and recipient have some rules which determine behavior upon sending and receiving the message. This is much more difficult to quantify mathematically, and so far it is not on the chart of information physics. It probably has some physical measure, as a pink-noise effect that deviates from white noise.
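
        As a sanity check on the numbers above, the entropy of that lantern signal can be computed directly. A minimal sketch in Python (log base 2 gives entropy in bits; scaling by k ln 2 would convert bits to thermodynamic units):

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H = -sum(p * log2(p)) in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# "One if by land, two if by sea": two equally likely signals -> 1 bit.
print(shannon_entropy([0.5, 0.5]))  # 1.0

# A biased source carries less entropy per symbol (~0.469 bits here).
print(shannon_entropy([0.9, 0.1]))

# A certain outcome carries no information at all.
print(shannon_entropy([1.0]))  # 0.0
```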

        These semantic structures in information, memes or whatever we might call them, come in many forms. They can range from a mathematical proof, the BICEP2 result, or an economic report to a sermon at a Baptist church or the latest words of "wisdom" from Rush Limbaugh. There is no physical means by which we can separate any of these into categories such as truth, proof, evidence, nonsense, and pure garbage. The brain, however, is a sort of information junkie, and it tends to respond best to patterns of information it is conditioned to. It is the rare person who enjoys confronting something that is baffling and requires work to solve. Largely, people consume the information streams or memes that give the most pleasure response in the brain.

        cheers LC

          I agree that there would be huge gains from finding ways for us to better aggregate the info we all have and are capable of collecting. Before there was a world wide web, I worked with the Xanadu project because I found hope in their vision that back-links would cure bad debate by making it easy to find good rebuttals of bad arguments. When I came to doubt that vision, I pursued prediction markets as a more reliable mechanism by which people of good will could come together to form a reliable consensus that non-experts could trust. I've been pursuing that for twenty-five years.

          But I've come to realize that there is actually relatively little demand for institutions or mechanisms that can cheaply produce accurate estimates. Beliefs often serve many other functions for most people, functions that conflict with accuracy. So they aren't very interested in following back-links to good rebuttals of bad arguments, and they aren't very interested in supporting prediction markets. They also would not be very interested in buying or listening to AIs that provide accurate beliefs. The key problem here is the demand, not the supply. We have Babel because that's mostly what people want.

          I have hopes that prediction markets could someday become like accounting. In a world where no firm does accounting, it would be problematic to propose it, as you'd be accusing someone of stealing. But in a world where all firms do accounting, it would be problematic to propose not doing it, as you seem to be asking for permission to steal. Similarly, if we can ever make prediction markets the norm, it would be embarrassing to not have a prediction market on a topic, as that would suggest you don't really want to know the truth. Of course people don't usually want the truth, but they also don't like to admit that publicly.
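
          For readers unfamiliar with how such markets turn bets into a consensus estimate, here is a minimal sketch of the logarithmic market scoring rule (LMSR), a common prediction-market mechanism; the liquidity parameter b = 100 and the trade size are illustrative, not prescriptive:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function C(q) = b * ln(sum(exp(q_i / b)))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Current market probability of each outcome (softmax of q/b)."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def buy(q, outcome, shares, b=100.0):
    """Cost a trader pays for `shares` of `outcome`; the trade moves the price."""
    before = lmsr_cost(q, b)
    q = list(q) 
    q[outcome] += shares
    return q, lmsr_cost(q, b) - before

# A two-outcome market starts at 50/50.
q = [0.0, 0.0]
q, paid = buy(q, 0, 50)  # a trader bets 50 shares on outcome 0
print(lmsr_prices(q))    # outcome 0's price rises above 0.5 (~0.62)
```

The market's current prices can be read off as probabilities, which is what makes the mechanism an aggregator of dispersed beliefs.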

            The problem with debate via hypertext, Crit, etc. is that it's clumsy, clunky and divergent. I'm not sure about prediction markets, but I think a lot of people are not convinced, in a society with so great a maldistribution of money, where so many do not have disposable income to place bets, that a betting pool is the best way to divine the future.

            You are right though that people are likely to balk at having AIs tell them what they don't want to hear, which is one reason I said it will be important for the technology to be controlled by its users. Some people, and some institutions, do (or at least think they) want to know The Truth, whatever that might turn out to be. But also, I am convinced that AIs will be powerful agents of persuasion. They will probably be most effective when used through networks of existing connections rather than trying to persuade directly across confrontations. But it will turn out (social big data will show) that networks exist through which you can ultimately reach everyone, or big fractions.

            I expect that as advanced AI systems are deployed as agents of persuasion, they will look for ways to make their messages deliver pleasure response or whatever makes them effective - and I note that some people are hooked on fear, hate, anger, etc.

            Hi Mark,

            My 2012 FQXi essay contest entry "The Universe is not like a Computer" and also my 2013 essay contest entry "Information, Numbers, Time, Life, Ethics" explain why computers represent information ONLY from the point of view of human beings: computers/robots can NEVER understand or experience the information that they represent. Worrying that computers/robots understand what is going on, or that they are going to take on a life of their own, is a terrible waste of time and energy.

            However, we cannot deny that there are current and potential future problems (and benefits) from governments and corporations accessing and manipulating data sets of information about people, and problems (and benefits) from drones and other unmanned programmed vehicles/machines/robots.

            But it's HUMAN intelligence we are talking about here: there is no machine intelligence. The truth of the matter is that it's the humans and corporations behind the machine software (i.e. "artificial intelligence") that we should be worrying about: it's not really the machines/robots per se that we should worry about. The robots are just "doing what they are told," so to speak.

            Also there seems to be a real problem of humans interacting constantly with the dead (i.e. machines) instead of with living reality. I believe that people like Kurzweil, Tallinn, and even scientists Hawking, Russell, Tegmark and Wilczek (see their Huffington Post article "Transcending Complacency on Superintelligent Machines"), have become unhinged from the physical reality that we actually live in!!

            Cheers,

            Lorraine

            Lorraine, I don't share your conviction that machine intelligence is impossible, and I'll tell you why if I may. I approach this from two directions.

            First, it seems to me that machines already are doing things that I would call intelligent, and I think you should too. When a machine is able to receive a complex signal, classify it, determine an appropriate response in the context of its intended purpose, and respond, that to me is the paradigm of intelligence. I don't see any limit to the complexity of such behavior by machines. Consider animal intelligence, in general. An animal seeks food, avoids predators, seeks a mate, and sometimes protects and raises its young. Although we don't know how to make self-replicating machines (yet), it seems clear to me that computer systems could effect all these kinds of behaviors. If you think not, I wonder which of these behaviors you feel would be impossible, and why the effort would fail.

            Second, with regard to human intelligence and consciousness, we know that the input and output pathways to the brain are through the propagation of action potentials in axons, and in between there is some mixture of electrical and chemical signaling between neurons. Each neuron must be implementing some kind of automaton, so that its outputs are a function of its inputs over time. I see no reason why artificial systems can't implement functions which are isomorphic to those of neurons. These functions might not be as simple as some people (and their models) assume, but they must be functions which we could describe, and we could then make something that behaves isomorphically. If we did, we would necessarily have a machine that behaves the same as a person. You could have a conversation with it about its consciousness, what it is like to be it, whether there is a God, etc. It seems to me that this must mean that it would be actually conscious, if we actually are. I don't think the idea of a zombie makes sense.
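
            The idea that a neuron implements an automaton whose outputs are a function of its inputs over time can be illustrated with a toy state machine. A leaky integrate-and-fire model is one standard simplification; real neurons are far richer, and the leak and threshold values here are arbitrary:

```python
class LeakyIntegrateFireNeuron:
    """Toy neuron: a deterministic automaton whose output (spike or no spike)
    depends on its input history via an internal state. Parameters illustrative."""

    def __init__(self, leak=0.9, threshold=1.0):
        self.v = 0.0           # membrane potential (internal state)
        self.leak = leak       # fraction of potential retained each step
        self.threshold = threshold

    def step(self, inp):
        self.v = self.leak * self.v + inp
        if self.v >= self.threshold:
            self.v = 0.0       # reset after firing
            return 1           # spike
        return 0

neuron = LeakyIntegrateFireNeuron()
inputs = [0.3, 0.3, 0.3, 0.3, 0.0, 0.0]
print([neuron.step(x) for x in inputs])  # [0, 0, 0, 1, 0, 0]
```

Isomorphism in the sense meant above would require reproducing each real neuron's actual input-output function, which is vastly more complex than this caricature; the sketch only shows that such input-history-dependent functions are implementable in principle.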

            So, I am fairly convinced that a machine can be intelligent, and can be conscious. However, if it is not human, then its consciousness is not human consciousness, and my preference would be that we never make such things. But I think we can make very intelligent systems which would be very useful to us as tools, without having to make systems that would have legitimate claims to personhood or any inclination to assert such claims.

            Hello Mark,

            I read your nicely composed essay with great pleasure. It is lovely to meet somebody with broad erudition and a fine writing style. I like your saying that "people do not want the truth" (presumably, there is such a thing).

            Sorry, I did not give a high mark. Your essay is really good, but I am looking for something ingenious.

            You may look at my entry about imagining the future. I hope my essay will encourage you to learn more about quorum sensing as a means of knowing and to apply analogous imagining in your field of interests.

            Please disregard any typo mistakes you may encounter.

            Cheers,

            Margarita Iudin

              Prediction markets have consistently shown their accuracy in head-to-head contests with other mechanisms. Given this track record, I don't see how your opinion that money is maldistributed is relevant. All these empirical tests were done in this same society where you think money is maldistributed.

              Dear Mark,

              That was an interesting essay -- but as Robin Hanson pointed out (and you acknowledged), people don't want to hear the truth. Like you, I hold much hope for structured debate forums, especially if the machine could ground each node using RDF/OWL (that is one of my proposals in my essay Three Crucial Technologies).

              However, don't forget that we must worry about more than just intelligent machines. With the advent of the Internet of Things, these machines will control not only your thermostat (as smart meters do now) but your credit card, and probably even your pacemaker. These will be powerful constructs.

              So I worry. The biggest problem, which unfortunately didn't quite make it into my essay as well as I intended, is that I don't think the problem is one of communication, or of knowledge, or even wisdom, but one of will (though with its moral aspect, maybe wisdom *is* involved). We can't force AGIs to love us, and we do a bad job of loving each other, so how could we teach them anyway?

              To consider the depth of the problem, consider another Biblical story; God created the universe, and everything was chaos. Ten seconds later, He said, "Let there be light!" Who do you think brought the light? Obviously, the strongest, best, most beautiful, and most intelligent angel, the Light Bringer. For those who don't know the rest of the story, when the Light Bringer (Lucifer) found out about God's plans to create a race of self-replicating bags of shit--to whom he would have to kneel and obey--he got majorly ticked off. Understandably, I suppose, but the problem was not one of intelligence, because Lucifer was super-smart. Deep down, I think he understood how amazing it would be for mere matter to do the things to the universe that he had done (it would be even better than a 1-year-old winning the Olympic marathon), but his pride just wouldn't let him appreciate it. So he became determined to destroy humanity.

              How do we encourage AGIs to seek to be virtuous when we can't even agree among ourselves on what that is, exactly? And we can't just encourage them; we must force them, because if they aren't friendly, then we're screwed. Extinct, actually. I suppose Mass Effect's Synthesis ending might be better, but then we're stuck with the same flaws we have now. And similar results--war, injustice, and ignorance, only on a much greater scale.

                Well, if you don't think money is maldistributed, that's your opinion, but it is widely disagreed with. Which was my point; I was just suggesting that could be one reason prediction markets haven't been so popular, despite their not-bad track record of accuracy. People just don't think one-disposable-dollar-one-vote is a good formula. Especially given how easy it would be for an interested and well-heeled party to tip the scale.

                Hi Ti,

                The Lucifer story is profound, but it's about human minds. I know Omohundro and others have argued that any sufficiently advanced intelligence will share certain "basic drives" or characteristics that we might summarize as egoism. I'm not sure that's true. I think we might be able to build very powerful AI tools that aren't modeled after humans (or any animals), don't think or experience as we do, and don't have any tendencies to want to take over. I think the creation of egoistic or human-like AI should be absolutely forbidden, precisely because we would never be able to predict or control what it would do. So it isn't a matter of cajoling AGIs to be virtuous, it's a matter of maintaining human control and using AI as a tool only. If that makes sense.

                best reasonable wishes,

                Mark