I expect that as advanced AI systems are deployed as agents of persuasion, they will look for ways to make their messages deliver a pleasure response, or whatever else makes them effective - and I note that some people are hooked on fear, hate, anger, etc.
Babel and Beyond: Can humanity unite? by Mark Avrum Gubrud
Hi Mark,
My 2012 FQXi essay contest entry "The Universe is not like a Computer" and my 2013 essay contest entry "Information, Numbers, Time, Life, Ethics" explain why computers represent information ONLY from the point of view of human beings: computers/robots can NEVER understand or experience the information that they represent. Worrying that computers/robots understand what is going on, or that they are going to take on a life of their own, is a terrible waste of time and energy.
However, we cannot deny that there are current and potential future problems (and benefits) from governments and corporations accessing and manipulating data sets of information about people, and problems (and benefits) from drones and other unmanned programmed vehicles/machines/robots.
But it's HUMAN intelligence we are talking about here: there is no machine intelligence. The truth of the matter is that it's the humans and corporations behind the machine software (i.e. "artificial intelligence") that we should be worrying about: it's not really the machines/robots per se that we should worry about. The robots are just "doing what they are told", so to speak.
Also there seems to be a real problem of humans interacting constantly with the dead (i.e. machines) instead of with living reality. I believe that people like Kurzweil, Tallinn, and even scientists Hawking, Russell, Tegmark and Wilczek (see their Huffington Post article "Transcending Complacency on Superintelligent Machines"), have become unhinged from the physical reality that we actually live in!!
Cheers,
Lorraine
Lorraine, I don't share your conviction that machine intelligence is impossible, and I'll tell you why if I may. I approach this from two directions.
First, it seems to me that machines already are doing things that I would call intelligent, and I think you should too. When a machine is able to receive a complex signal, classify it, determine an appropriate response in the context of its intended purpose, and respond, that to me is the paradigm of intelligence. I don't see any limit to the complexity of such behavior by machines. Consider animal intelligence, in general. An animal seeks food, avoids predators, seeks a mate, and sometimes protects and raises its young. Although we don't know how to make self-replicating machines (yet), it seems clear to me that computer systems could effect all these kinds of behaviors. If you think not, I wonder which of these behaviors you feel would be impossible, and why the effort would fail.
Second, with regard to human intelligence and consciousness, we know that the input and output pathways to the brain are through the propagation of action potentials in axons, and in between there is some mixture of electrical and chemical signaling between neurons. Each neuron must be implementing some kind of automaton so that its outputs are a function of its inputs over time. I see no reason why artificial systems can't implement functions which are isomorphic to those of neurons. These functions might not be as simple as some people (and their models) assume, but they must be some functions which we could describe and make something that behaves isomorphically. If we did, we would necessarily have a machine that behaves the same as a person. You could have a conversation with it about its consciousness, what it is like to be it, whether there is a God, etc. It seems to me that this must mean that it would be actually conscious, if we actually are. I don't think the idea of a zombie makes sense.
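The claim above, that each neuron implements some input-output function over time, can be made concrete with a toy model. The leaky integrate-and-fire neuron below is a standard textbook abstraction, chosen here purely as a sketch; the parameter values are arbitrary illustrative assumptions, not a claim about real neurons or about what an isomorphic artificial unit would actually require.

```python
# A minimal leaky integrate-and-fire neuron: a toy illustration of the
# idea that a neuron's behavior is some function of its inputs over time.
# Threshold and leak values are arbitrary assumptions for illustration.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.v = 0.0               # membrane potential
        self.threshold = threshold
        self.leak = leak           # fraction of potential retained per step

    def step(self, input_current):
        """Integrate input; emit a spike (1) when threshold is crossed."""
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0           # reset after spiking
            return 1
        return 0

neuron = LIFNeuron()
spikes = [neuron.step(0.4) for _ in range(10)]
print(spikes)  # → [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

Given a constant input, the unit settles into a regular spiking rhythm; the point is only that its output history is a well-defined function of its input history, which is the property Mark's argument needs.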
So, I am fairly convinced that a machine can be intelligent, and can be conscious. However, if it is not human, then its consciousness is not human consciousness, and my preference would be that we never make such things. But I think we can make very intelligent systems which would be very useful to us as tools, without having to make systems that would have legitimate claims to personhood nor any inclination to assert such claims.
Hello Mark,
I read your nicely composed essay with great pleasure. It is lovely to meet somebody with broad erudition and a real writing style. I like your saying that "people do not want the truth" (presumably there is such a thing).
Sorry, I did not give a high mark. Your essay is really good, but I am looking for something ingenious.
You may look at my entry about imagining the future. I hope my essay will encourage you to learn more about quorum sensing as a means of knowing, and to apply analogous imagining in your field of interest.
Please disregard any typos you may encounter.
Cheers,
Margarita Iudin
Prediction markets have consistently shown their accuracy in head to head contests with other mechanisms. Given this track record, I don't see how your opinion that money is maldistributed is relevant. All these empirical tests were done in this same society where you think money is maldistributed.
Dear Mark,
That was an interesting essay -- but as Robin Hanson pointed out (and you acknowledged), people don't want to hear the truth. Like you, I hold much hope for structured debate forums, especially if the machine could ground each node using RDF/OWL (that is one of my proposals in my essay "Three Crucial Technologies").
However, don't forget that we must worry about more than just intelligent machines. With the advent of the Internet of Things, these machines will not only control your thermostat (as smart meters do now), but your credit card, and probably even your pacemaker. These will be powerful constructs.
So I worry. The biggest problem, which unfortunately didn't quite make it into my essay as well as I intended, is that I don't think the problem is one of communication, or of knowledge, or even wisdom -- but one of will (though with its moral aspect, maybe wisdom *is* involved). We can't force AGIs to love us, and we do a bad job of loving each other, so how could we teach them anyway?
To consider the depth of the problem, consider another Biblical story: God created the universe, and everything was chaos. Ten seconds later, He said, "Let there be light!" Who do you think brought the light? Obviously, the strongest, best, most beautiful, and most intelligent angel, the Light Bringer. For those who don't know the rest of the story, when the Light Bringer (Lucifer) found out about God's plans to create a race of self-replicating bags of shit--to whom he would have to kneel and obey--he got majorly ticked off. Understandably, I suppose, but the problem was not one of intelligence, because Lucifer was super-smart. Deep down, I think he understood how amazing it would be for mere matter to do the things to the universe that he had done (it would be even better than a 1-year-old winning the Olympic marathon), but his pride just wouldn't let him appreciate it. So he became determined to destroy humanity.
How do we encourage AGIs to seek to be virtuous when we can't even agree among ourselves on what that is, exactly? And we can't just encourage them--we must force them, because if they aren't friendly, then we're screwed. Extinct, actually. I suppose Mass Effect's Synthesis ending might be better, but then we're stuck with the same flaws we have now. And similar results--war, injustice, and ignorance, only on a much greater scale.
Gee whiz, thanks. Or something.
Well, if you don't think money is maldistributed, that's your opinion, but it is widely disagreed with. Which was my point; I was just suggesting that could be one reason prediction markets haven't been so popular, despite their not-bad track record of accuracy. People just don't think one-disposable-dollar-one-vote is a good formula. Especially given how easy it would be for an interested and well-heeled party to tip the scale.
Hi Ti,
The Lucifer story is profound, but it's about human minds. I know Omohundro and others have argued that any sufficiently advanced intelligence will share certain "basic drives" or characteristics that we might summarize as egoism. I'm not sure that's true. I think we might be able to build very powerful AI tools that aren't modeled after humans (or any animals), don't think or experience as we do, and don't have any tendencies to want to take over. I think the creation of egoistic or human-like AI should be absolutely forbidden, precisely because we would never be able to predict or control what it would do. So it isn't a matter of cajoling AGIs to be virtuous, it's a matter of maintaining human control and using AI as a tool only. If that makes sense.
best reasonable wishes,
Mark
Hello Mark, May I offer a short, but sincere critique of your essay? I would ask you to return the favour. Here's my policy on that. - Mike
Hello Michael,
Yes, you may offer, and I'll reciprocate, but I've learned the hard way that people really do not want sincere criticism, even if they think they do. Perhaps you are an exception. As for myself, I find it hard to imagine that anyone would be more critical of my own essay than I am. They might be more dismissive, but that is not the same thing.
best reasonable wishes,
Mark
Mark,
In response to your first point, I can't understand why you are so bewitched by mere superficial behaviours and external appearances of machines: you are talking about mere simulacra of intelligence and consciousness. Despite any seeming complexity, a computer/robot is a deterministic mechanism that does what it's told (so to speak). But a micro-organism, an animal or other living thing is self-directing, non-deterministic, and creates unpredictable, but non-random, outcomes for itself in response to circumstances.
In response to your second point, I think you should look more closely at the nature of information. I'm using the word "information" in the sense of "knowledge communicated" or "knowledge gained" (i.e. perception/awareness/experience/consciousness). Something that was once in a person's conscious experience (e.g. some words in the French language) might, after typing at a keyboard, end up represented as a string of zeroes and ones in a computer. But the particular string depends on the character encoding system used. So this string of zeroes and ones is NOT information to a computer/robot UNLESS it happens to have a code book handy. But actually there are no strings of zeroes and ones in a computer - there is only physical hardware with appropriate properties that can REPRESENT zeroes and ones. So for the physical hardware of a robot/computer to correspond to information (consciousness), the robot/computer would need not only a code book, it would need to know that this physical hardware state represents "0" and that physical hardware state represents "1", and it would need to speak the French language!
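The point that the particular bit string depends on the character encoding system is easy to demonstrate. The sketch below (my own illustration, using a French word of my choosing) shows the same text producing different bytes under two standard encodings, and nonsense when decoded with the wrong "code book":

```python
# The same French word yields different byte strings under different
# character encodings; the bits alone, without the code book, do not
# determine the text.

word = "été"  # a French word containing accented characters

utf8_bytes = word.encode("utf-8")
latin1_bytes = word.encode("latin-1")

print(utf8_bytes)    # b'\xc3\xa9t\xc3\xa9'
print(latin1_bytes)  # b'\xe9t\xe9'

# Decoding with the wrong code book garbles the text:
print(utf8_bytes.decode("latin-1"))  # Ã©tÃ©
```

Whether this supports the stronger philosophical conclusion is of course exactly what the two correspondents dispute; the example only illustrates the uncontroversial encoding-dependence.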
Computer circuits have voltage levels; neurons have spiking action potentials. Either can be considered an encoding of information, in the engineering or physics sense rather than the philosophical one.
It is not true that computers are necessarily deterministic; one can easily introduce randomness from environmental noise or quantum sources.
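As a small illustration of that point (my own sketch, not part of Mark's original remark): every mainstream operating system exposes an entropy pool fed by environmental events, and on some hardware by physical noise sources, so a program's behaviour need not be a deterministic function of its text.

```python
# Computers need not behave deterministically: os.urandom draws from the
# operating system's entropy pool, which is seeded by environmental
# (and on some hardware, physical) noise sources.

import os

draw1 = os.urandom(16)  # 16 bytes of OS-supplied randomness
draw2 = os.urandom(16)

# Two successive draws are, with overwhelming probability, different,
# and neither is reproducible from the program text alone.
print(draw1 != draw2)
```

A program that branches on such bytes is not predictable from its source code, which is all the narrow anti-determinism point requires.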
You did not engage my point that if we could (and I do not see the reason why in principle we could not) create a machine composed of units which have input-output functions isomorphic to those of neurons in a human brain, it would behave like a brain, and we would be able to have philosophical conversations with it about the nature of consciousness. Do you think that such a machine would be a zombie, that says it is conscious but actually is not? Or do you think such a machine is impossible? If the latter, what hitherto unknown principle of physics would intervene to prevent it from being realized?
Mark,
You make a rather large assumption that we "could ... create a machine composed of units which have input-output functions isomorphic to those of neurons in a human brain". Though humans are extremely capable and clever and knowledgeable etc., we cannot make such a machine, because of the nature of reality.
Clearly, I have a totally different view of the fundamental nature of reality from yours (described in my essay, in which I identify 3 invalid assumptions of physics).
Cheers,
Lorraine
Lorraine, you may wish to leave the conversation at that, but if so, I will just point out that you have not explained what aspect of the nature of reality would prevent the realization of a machine composed of units which have input-output functions isomorphic to those of neurons in a human brain. It seems to me that, whatever that aspect of reality might be, it would not be consistent with known physics and would therefore entail a modification of physics. While that is logically possible, I do not expect that it will turn out to be true. best, Mark
translate.google.com attempts to consolidate language. But two different people speaking the same language understand things differently, even though they are reading identical written works.
If everyone shares the same perspectives, then there are very much limited opportunities to discover new relationships.
Diverse interactions spawn greater numbers of Moments of Inspiration.
Ideally, we would each speak a different language and our brains would be able to correlate the vast systems of relationships of all languages and related written works; correction, all works. Living from moment to moment immersed in moments of inspiration.
Perhaps parallel processing of quantum computers will make this possible.
Thanks for your comments, James. As a practical matter, in order to steer the future at all, let alone intelligently, we need to be able to understand one another and overcome differences. No doubt there are many ways to approach this problem in addition to the ones discussed in my essay.
Mark, a most excellent essay. You have a facility for penetrating to the rotting roots of our human condition; there are few essays that engage me from beginning to end like yours. Kudos.
We have a very similar worldview, though my own contribution focuses pretty narrowly on the content of your endnote (10).
I look forward to more dialogue.
Best,
Tom
Mark,
your views, and views like yours about "artificial intelligence", are of concern to me. This is because, according to my estimation, you see reality in an invalid light. And as I try to explain in my essay this year, the invalid assumptions underlying (the equations of) physics (though I'm not in any way implying that there is anything invalid about the equations themselves) are major contributors to the attitudes that are destroying our planet home.
You ask me to explain my view about "what aspect of the nature of reality would prevent the realization of a machine composed of units which have input-output functions isomorphic to those of neurons in a human brain". As I have tried to explain in my essay, and in my posts to you above, reality is NOT 100% mechanism, and where reality is not mechanism, it is not random. Also, reality is inherently subjective and experiential.
So the subjects that comprise reality (e.g. particles, molecules, plant cells and other living things) are totally unlike deterministic machines. As I contend in my essay "the subjects that comprise the universe are wild and free, within the context of a mechanism that gives the necessary structure to the freedom".
As I said, I was impressed by your essay, but I disagreed with the bit that started on page 7 about "artificial intelligence". The essence of where our views differ is in our views about the underlying fundamental nature of reality.
Cheers,
Lorraine
Dear Mark,
You are certainly right that our inability to talk with and understand each other is a key to resolving most of our problems as a human race, and that the Tower of Babel story signifies just that. Your use of the Bible, as an atheist, as at least on occasion good literature, is admirable, and right to your point of engaging with "the other side".
I take the other view, that the Bible tells us the truth about both worldview and good news. I do not think the Bible is infallible, it just gets worldview and good news right - in a way that nothing else does. And the Biblical story is in its main lines historically correct. I think that can be shown by the evidence of reason and fact to be the case. Being in the ranks of this group of scholars, etc., to present the Biblical case is a roller coaster experience, and lots of fun.
You say on page 2 that "it does not seem that the bulk of evil is the result of purposeless malevolence." You might take a look at www.hawaii.edu on the purposeFUL malevolence all through the 20th century, most of which occurred not in wartime but under viciously secular governments destroying their own people (Hitler, Stalin, Pol Pot, Mao, et al.).
I argue in my paper (How Shall We Then Live?) that it was specifically the Judeo-Christian West that began to turn around the grinding tyranny and poverty almost everywhere, and to give honor to every human being. Three things in Biblical religion did that: 1. the theme that all men are created equally in the image of God; 2. that God is sovereign over all things, including kings and other potentates (they no longer get a pass because they are big and powerful); and 3. that all human beings are bound by the law of God, to love God and to love each other just like we love ourselves. Seems to me a way of life hard to beat, and one which we will never do on our own, only with the help of God.
I trace some of the broad steps through Western history leading to the growing freedom of the lower classes, the development of a free-market of ideas leading to universities, and then to science itself. Add to that a limited government for a free people. None of that, I think, could have happened apart from the Biblical culture as it slowly found its way into modernity. It indeed produced modernity.
Then Christians betrayed themselves and God, rejecting the very science they had founded - for fear that science might disprove the Christian faith. Many Christians opposed reason to revelation, and so made Christianity irrelevant to modern culture which, then under the auspices of secularism was seeking to operate scientifically.
You point to the collapse of unity among us all, an effect of the Tower of Babel. The essential unity among the Hebrews and Christians was the moral unity generated by the law of God - Decalogue and the Two Great Commandments. People agreed on the difference between right and wrong. When moral consensus collapses, the culture collapses. No more consensus on "how things are done", or on "where are we going?" Precisely the problem of our topic.
This is a long-winded way of saying that perhaps, as the Tower of Babel story points to the problem of the human race, so God's answer to that problem might just be the real one, the restoration of His law and grace among us.
Computers can give us tons of information, but I think they are not capable, as you believe they are, of giving us wisdom. There is a break in kind between information and wisdom.
Best wishes, Earle