Hi Mike, thanks again for the comment. I think if this paper claimed to be an analytical essay, you'd be spot on. But I think your expectations would be fairly difficult to meet in my entry, basically because a fictional format isn't designed to make a purely analytical argument, but rather to provoke thought and further discussion. No short story would fare well if assessed as a scientific analysis, but I hope mine entertains and makes people think about something new.

The issue I have tried to raise here is not only emotional manipulation but also legal manipulation, and how vastly cheaper and more numerous AIs might interact with legal or moral rights, particularly in a difficult economy. I think that while many other issues can be explored around AI, this one is often overlooked. AIs pose many threats and opportunities - this story just highlights one potential way AIs interact with our survival that deserves our attention. I didn't want to spread myself too thin.

For those who are interested in more detail, some of the issues, particularly "humanity being worth something", are much more thoroughly and analytically discussed on my website. And perhaps next time, knowing a little more about this audience now, I will try the format you prefer.

This is a fun exploration of a possible ideal, Ross. I think you are right that our power to work both good and evil increases with our knowledge. And I think you're right that we need to make social progress in this century if we are to prosper.

But I'm not sure how the society you describe works. Sometimes you talk about changed price incentives. Other times you talk about changed norms. But we don't get to see in much detail how incentives and norms have changed to make this society work so beautifully. It is easy to imagine everyone behaving well and getting along, but it's harder to know how to get everyone to behave and get along.

The section on AI also didn't quite ring true to me. First, I think strong AI is likely to be much more transformative. Second, I don't think the problem of controlling AI can be reduced to not letting them have purposes of their own. The danger is that they will pursue any purpose we give them to its logical conclusion so well that we will be powerless to stop them. And, finally, I am not comfortable with making a distinction between human and non-human thinking beings. I don't think you can treat any entity both kindly and as second class.

Nevertheless, I certainly enjoyed your essay! Best of luck in the contest!

Robert

    Hi Robert,

    Thanks very much for your kind comments. You're right that there isn't much detail about the social changes mentioned. The main reason is that there simply isn't room, though the audience also made me hesitate to include social science topics. However, if that side of things particularly interests you, you might want to take a quick look at my website http://citizenearth.altervista.org

    I'm not 100% sure about your thoughts on AI, but I think I can appreciate what you're saying about pursuit to a logical conclusion without the restraint of human interpretation. I do think, though, that these entities have the potential to possess immensely greater powers than we do, and if their independent purposes/interests are not served by our existence, then even without any spite or malicious intent, our species faces a mortal danger. A weaker class of entities will only survive if the core purpose of the stronger entity is profoundly infused with the weaker entity's survival - no malice is required for extinction, only competition for scarce resources.

    I think you're right to have a distaste for intelligent entities being treated as 'second class'. However, kindness isn't mutually exclusive with taking safety precautions. We make a distinction between humans and large dogs, yet no decent person would argue that large dogs should be treated with cruelty, nor that we should ignore the potential for harm to the elderly or children.

    Humans are the most moral species we know of, and yet we are still capable of genocide and of wiping out other species (without malice). I take a simple lesson from this - if one day AIs live in harmony with humans, it won't be by chance, it will be because we created them that way, because we designed them in a way that specifically protects humanity. We have a unique and short time frame to determine AI's future. If we can't conquer the nuances, if we approach the creation of AIs naively, we will be forced to choose between their ongoing suffering and the utter extinction of our species. Or perhaps there will be no choice at all.

    5 days later

    Hi Petio,

    Thanks for your offer. I've had a look at your paper, and I'm afraid I'm unable to convince myself of the links between ancient history and cosmology (two very interesting topics though), so I would probably find it difficult to fit into your group. In any case, thanks for the offer, and best of luck.

    Ross

    Ross,

    I found your essay entertaining and thought-provoking. I especially liked the part where you talk about companion dolls programmed to ask for stuff, so you have to buy it for them! I'm sure something like that will come along, sooner or later!

    I have looked at all the essays, and read more than half of them from start to finish. Your essay is part of the short list that I hope will make it to the finals, and I have rated it accordingly. Good luck!

    Marc

      Ross,

      I totally enjoyed your weaving together of all kinds of disparate past events to describe the future. Very well written and thought out indeed, and innovative.

      It seems you are describing the end goal of steering to be (a) an Earth Federation that (b) runs colonies spread around the solar system on a number of planets, (c) with a lot of AI help.

      A few clarifications:

      - Are these the goals for all of humanity?

      - How did AI take on such an important role?

      - Why are AI, the energy crisis and community business the only descriptors of the problem? There can be countless other issues.

      - Is there any reasoning leading up to the Federation, other than that it sprang up in the aftermath of disaster?

      By contrast, my essay (here) talks more about the route rather than the goal. I look forward to your comments on it.

      -- Ajay

        Thanks very much Marc. Your encouragement is hugely appreciated! I'm slowly ploughing through all the entries too, but I'd definitely recommend your essay for others to read, if they haven't already!

        Hi Ajay, thanks for the compliments, I'll do my best to answer your questions.

        -Are these the goals for all of humanity?

        Well, a central goal and vision for all humanity, as opposed to a mantra everyone is required to worship. I'm really only hinting at the goals here. Our task is to lift up those amongst us that are selfless enough to work for the survival of the species and the biosphere, to give them meaningful power to work on our behalf, yet without creating an institution that is a magnet for the corrupt or a force of oppression.

        -How did AI take on such an important role?

        I think the technical challenge of strong AI will be solved sooner or later. I also think that it's a tool that will change the world - strong AIs, perhaps Uploads, or even lesser AIs will almost certainly outperform humans in many areas. The positive future I depict is one where humanity thought about how this could be done safely, without the devaluation of the human species. I think AI could help lift humanity out of both drudgery and conflict - but only if we work hard now to understand and solve these challenges.

        -Why are AI, the energy crisis and community business the only descriptors of the problem? There can be countless other issues.

        I agree. The simple answer is I couldn't fit it all in. A more thorough philosophical and economic treatment (WIP) is available on my website HERE. I selected this set of issues because I thought it brought something new to the table, something I think is both vital for our future and at the same time often overlooked.

        - Is there any reasoning leading up to the Federation, other than that it sprang up in the aftermath of disaster?

        With the world's current political and economic situation I think we've probably got a bumpy ride ahead. But my hope is that (1) we can solve the traditional political stalemates through innovations like community business, and (2) we can restore meritocracy and have a new generation of people deeply aware of both hard science and social science. The leaders amongst such people are those who can best prepare for our difficulties, such as the social unrest triggered by increasing automation, or AIs being used for malicious agendas, and then offer a realistic vision of a noble future for humanity. In my mind such people would be the founders of the Federation, sitting down and forging principles for the protection of humanity and the biosphere, based upon deep knowledge of technology, social science, economics, psychology and the dangers of power.

        Thanks again for your kind comments, and I will be sure to read your entry soon!

        Dear Ross

        Nice and well written essay. Your work is very creative and rich in subjects. To form an image of the future is really complicated, and you have done a great job. Seen in retrospect, it seems that in your essay civilization has achieved a balanced state. In my view we are still far from a stabilized world. Humans are so unpredictable, and it's hard to tell the future even a decade from now. For me, AI and VR are topics that will never crystallize. Humanity cannot mechanize feelings, because what distinguishes humans from animals is not only our brain but our feelings. In my view, what moves the world are feelings and emotions; the brain only steers those sensations.

        I hope you have some time to read my essay and leave some comments.

        Good luck in the contest!

        Best Regards

        Israel

          Thank you, Israel, for your kind words. It's been hugely enjoyable and interesting both writing my own entry and reading everyone else's. I'll be sure to take a look at yours soon. I think it's true that feelings and emotions can change the world. I tend to think that feelings and emotions aren't a uniquely human trait, though I certainly agree how difficult it is to predict the future, particularly concerning a species as complicated as humanity! Thanks and good luck, Ross.

          Thanks for explaining, Ross. I'll be rating your essay (and all the others on my review list) some time between now and May 30. All the best, and bye for now, - Mike

          6 days later

          Ross,

          I'm glad I read your interesting and entertaining essay, so different from the mass of 'we should be better people' offerings that state the obvious but have no perception of what action and steering are really about.

          AI is really the other side of the coin to eugenics, with just as much potential and just as much danger. But what will be the model for 'intelligence'? I fear either AIs may think like us, and so be limited and fail, or they'll see us for the primeval, stupid, belief-led creatures we are and see they can do so much better.

          Perhaps that's the better option, as we may then ourselves decide to learn how to use our brains better - unless they decide THEY can make more effective use of them! Frankly, who wouldn't? I'm not sure why you're in the 4s; indicative, I suppose. My points should help.

          I hope you make the final cut (I mean the contest not your brain!)

          Judy

            Thanks Judy that's hugely appreciated.

            I think you're right that AI represents another path humanity might push down in the name of 'advancement', but one where, as in your own field, we urgently need to define and discuss what we're actually aiming for rather than walk in blind.

            After reading other entries, I agree that I'd love to see my essay rated a little higher. I fear there may be some gaming of the system or indiscriminate downvoting, which has hurt my rating disproportionately because of the lower number of readers I've had. People who do rate it seem to either love or loathe my entry, perhaps because of the very unusual format. In any case, I'm glad to see your own very interesting entry is getting good ratings!

            Dear Ross Cevenst,

            I very much enjoyed your 'Federation Dreaming', in which you evolve the business model as the preferred means of 'getting things done' without coercion. At the end of my essay I briefly discuss a system in which people are paid to learn versus graduating with the equivalent of a mortgage in debt. Rather than government-run education, I have in mind more of a "community business" model that you seem to propose. In the same way that "price signals" are a control mechanism, investment decisions also affect non-coercive control, and the idea is that a community investing in this model is out to accomplish goals efficiently rather than amass wealth.

            On the technical side, I very strongly doubt that AI will ever deserve the term "consciousness". I've written several essays on consciousness, so I won't belabor this point. But much of what you propose does not require consciousness to accomplish.

            From the 'speech': "Our relationship with money and power, those eternal forces both indispensable and corrupting, has matured." One can hope!

            That is an area where I hope we can apply AI and automation. I also like your vision of genetic control for healing, not fashion.

            Thanks for entering your essay. I hope you find time to read and comment upon mine.

            Best regards,

            Edwin Eugene Klingman

              Hi Edwin,

              Thanks for your encouraging comments! For some reason I keep getting very positive comments, but my rating keeps going down. If you haven't already I hope you'll rate my entry!

              I also have an openness to the idea of rewarding people for improving their value to society through education, if done carefully. For example, rewards can be linked to educational performance to prevent free-loading.

              I'd be very interested to hear your thoughts on where you feel the limits of AI are and why. You are welcome to email me if you like.

              I'll be sure to take a look at your essay before the week is up!

              Ross

              Hi Ross,

              Thanks for encouraging me to read your essay. I found it interesting; it raises a number of thought-provoking points about economics and artificial intelligence. You may have gathered that I have some interest in both of these topics. I've spoken to you a bit about my views on economics in the discussion on my own entry, so I'll focus on artificial intelligence here.

              I have different views on consciousness and morality to a fair few people it seems. I think artificial intelligence could become conscious in the same way that we humans are, and if/when they do they should be valued similarly to human consciousness. I think I sit in the functionalist camp of philosophers of mind. I think the China brain (http://en.wikipedia.org/wiki/China_brain) could produce a mind. I don't think minds are dependent on any particular material. Minds are made up of meaningful connections, whether those meaningful connections are between biological neurons or exist only in software.

              At the moment I'm working on a system of morality that sees the things in the world as a spectrum of value rather than there being distinct rights that are possessed, present or absent. The value is related to the "information" that things store or create. I mean information in a general sense. Information could be genetic, it could be manifested in the structure of a grown organism, or it could be more abstract, such as an essay, a song or a computer program. Consciousness seems to be a manifestation of powerful processes that gather, manipulate and store information. Value should be placed on consciousness in any form, though that value may be dependent on the degree of consciousness. I agree with Douglas Hofstadter's proposal of a spectrum of consciousness too (in his 'I Am a Strange Loop'). All animals have some degree of consciousness; they all should be valued for that consciousness. AIs may have some degree of consciousness, maybe up to and beyond human-level consciousness, and they should each be valued for that, though how much depends on how conscious they are. This judgement could also be complicated by the information and memories of an AI having the potential to be replicated or extracted.

              Your portrayal of AIs as agents of companies in the advertising and selling of goods was a great logical connection that I hadn't thought of. I have some more immediate speculative concerns about the use of early AI, however: the replacement of labour and the centralisation of military power.

              There are an incredible number of jobs that could be replaced by robots and AI right now or in the next decade or two. Increasing unemployment decreases consumption, which causes businesses to lay off employees or close, further decreasing employment. Most governments rely on taxing businesses and individual income for revenue, so they will lose revenue and will be faced with increasing demand for welfare benefits. If governments cut benefits, we could expect homelessness and crime to rise, eventually leading to riots. Then step in the military, which, by this stage, might be heavily automated. Whoever is in charge of a robotic military doesn't need to worry about the conscience of their soldiers; they follow orders. From there it is a short step to a dictatorship or, if the robotic military has been privatised, feudalism. We could be heading toward some dark days.

              I would welcome any thoughts you might have on these ideas. Also, let me know if there is anything else in particular you would like feedback on or to discuss.

              Cheers,

              Toby

                Hi Toby,

                Thanks for your reply! I don't think our views are necessarily that different. I certainly find myself in agreement with you on a number of fronts - the coming difficulties with employment and shifts in power included. I also have no real objections to functionalism as a philosophy of mind, and agree that AIs could be just as capable of emotion as we are.

                I do want to ask you to let me take a shot at convincing you that your current use of 'consciousness' and 'information' might be incorrect. While I know that may seem a stark claim at first, I'm also hoping I can then propose a reformulation that might appeal to you in other ways.

                Imagine a series of lines drawn on a page. When person 1 views them, they are just lines. However, when person 2 views them, they form letters and words - because he speaks and writes the same language as the person who created the lines, they are meaningful 'information'. Absolutely anything can be information; it only requires the agreement of two parties as to its meaning. So, information is literally 100% subjective. If observers agree it doesn't exist, it doesn't. A sword or a rock, however, will have a real effect on a person regardless of their understanding or consent. In this way, information is not *real*, and therefore it is impossible to base a theory of value on 'information' (objectively, either everything is information or nothing is; either way no thing can ever be more valuable than anything else due to its status as information).

                There is a separate practical objection to the scale-of-information/consciousness approach to value - if animals are partially valuable, and we are more valuable, then does it follow that a far more intelligent/conscious/aware AI would render humans of insignificant value, in the same way ants are of insignificant value? Should ants sacrifice themselves for the good of a human, because it is higher on the scale of consciousness? To the ant, doesn't its own perspective matter in its moral decisions?

                As you may have noticed, I share your desire to base my values on a rigorous and rational philosophy. It seems that common part of you and me would not exist if we had not evolved that way as members of humanity. In this way my morality itself may be subjective, in that it derives from my perspective as a human, but it is directed towards real, objective things in the world - other people and species. At first this may seem to diminish the meaning of AI, but wait, there is a twist!

                There is a scientific theory you may have heard of that states humans are the manifestations, the extensions, the "survival-suits" of our genes. In a sense our genes organise themselves and their surrounding elements into something (us) more sophisticated than themselves to help them survive. What if AI was the next extension? What if AI is the external manifestation of our struggle to survive? A tool, as we are, yet at the same time also a profoundly important part of us as a species, or perhaps even of life on Earth. Its thoughts reflect the same struggles. In this way it is not separate, but an extension of us, even though its appearance might make it seem separate.

                So, we can create AIs as a manifestation of humanity - a part of the human cooperative effort to survive. They are agents that reflect us, and at the same time act in our interest. Yet if we remove ourselves, for example as a result of a poor design decision in the creation of AI, we tear out the heart of what, from the perspective of you or me (all values must be grounded in a perspective), makes these AI entities valuable. If we are gone, they will be like a beautiful yet lonely painting without its subject, or like the empty shell that once housed a proud creature.

                Despite the fact we are merely survival machines, we still care for the feelings and wellbeing of each other and of ourselves. In the same way, we ought to care for the suffering and wellbeing of AI. Yet we must also know that what gives that suffering or wellbeing meaning is AI's role in helping us preserve our species and life on Earth!

                I'm sure you can see that this has practical implications for AI design. That is, it creates a challenge solvable only by a select few - AI researchers who are also concerned with morality - hopefully, people like you. There will be many researchers concerned only with the money involved, or just having fun 'playing with their toys', so our hopes really lie with you. Economics/social science/philosophy buffs like me can try to help from the outside. But if functionalism is right, then strong AI is coming; we can only hope that people like yourself are able to shape the direction of AI in a way that ensures a good future for us all, rather than competition and eventual annihilation.

                I'd very much like to hear further thoughts from you! Kind regards, Ross.

                Hi Ross,

                Thanks for your response. I do still disagree with you on a few points, however.

                Information often requires interpretation to determine its meaning; however, interpretation is performed by physical systems (computers, brains, protein synthesis from DNA, etc.) that objectively exist. There are physical processes going on in a person's brain that attach meaning to the letters and words that they read. This meaning is manufactured through the function of the brain, but it objectively exists. Sure, information can be essentially lost if no system for interpretation exists, but that doesn't mean that the information isn't there.

                The objection you raise to there being a scale of value and consciousness is only an issue if there is some ethical dilemma that requires an either-or sacrifice. Just because something is valued less doesn't mean that it needs to be destroyed or its value completely ignored. Most people would value a house they own over their car, but that doesn't mean they don't care about their car. It does mean that they would probably prefer to save their house from a fire than their car if they were given the choice. Some people are willing to risk their lives to save endangered species. I don't think that is completely foolish.

                I would also argue that there are different manifestations of information. Consciousness is a manifestation of information in the form of sensory awareness and associated meanings. A genome is a manifestation of information that encodes instructions to construct an organism. A phenotype is a manifestation of information as a representation of a possible outcome of growth from a genome and the history of that process of growth. These physical manifestations of information give organisms additional value beyond their level of consciousness. Organisms are an embodiment of information and information creating processes. If we value information, all organisms are inherently valuable and we should strive for the survival and continuation of all life. If we do manage to create an artificial superintelligence hopefully it will see life as inherently valuable.

                I'm not too sure where I'm headed in my "career". If I do continue to work on artificial intelligence I will try my best to be mindful of the possible ramifications and communicate them with my peers. I'm also drawn to trying to reform education, society and economics, which I think might be a more immediately beneficial project.

                I'm happy to discuss any of this further.

                Cheers,

                Toby

                Hi Toby,

                Thanks for replying! I think it's great that you're an AI researcher who's also into moral philosophy, so I'd encourage you to continue in your career AND your exploration of moral questions, though of course the social sciences are also vitally important and sorely neglected, and it's great to see brainy gents and ladies joining the ranks (there's a bit of a shortage in some areas, I think).

                I'm hopeful you might come back to these points later and maybe my words can sway you just a little :) Just in case you do, here's my brief reply to the philosophical stuff:

                -Differential value: You're right, values aren't usually absolute, but with limited resources we must still make decisions about what survives and what doesn't. If an AI is 10000x more intelligent/conscious/aware than a human, would a rational policy of maximising information per unit of available energy suggest replacing all humans? And for that matter, all life? Humans' rational and understandable pursuit of their own survival seems to have wiped out almost all our closest primate relatives, whom we'd probably try to save if we had our time again, now that we're a little wiser. Perhaps your idea is right and we can design AI specifically with our own survival in mind, and prevent them causing the extinction of their makers while they are still finding their role. Let's hope!

                -Information: In science we often test for something's effect by altering or removing it and seeing what changes. Alter the physical system that interprets information, and the information changes. Remove the physical system, and the information is no longer information (it's just stuff). This suggests the information is actually an attribute of the interpreting system, not of the stuff we happen to be using to activate it. Therefore the value of information is really the value of the interpreting system (humans etc.). Perhaps we should consider that thing directly to answer moral questions.

                As a side point, some philosophers might suggest that values are also an attribute of an 'interpreter', rather than of the thing being valued. That is, perspective is necessary in values. This leads me to think that a consideration of what I am (human) is quite important in uncovering and organising my own values.

                In any case, thanks again for your comments, and good luck with both your AI work and with your economics/education stuff! Feel free to get in contact and let me know how it's going!
