Many thanks for the kind words, and your sense of humour, Peter! I get the impression your own entry is much favoured, so I considered myself well-complimented. I'll definitely engage my bio-processing of your entry very soon, when my CPU is at full efficiency!!

I like how you make your point using humour. Actually, it's a great point: how can I solve this philosophical anxiety?

I find that when I become worried about possibly being an AI, I can stick myself in an MRI and see my own brain function, or I can get myself genetically tested for humanness. Of course there's always the chance our entire galaxy/universe etc. could be simulated, but then the AI/human distinction is still equally valid - we'd just be talking about a simulated simulation vs a simulated person - and so long as all my knowledge is of this simulated world, it still seems valid to value things in it. And if good, bad, real, and illusion all get a bit too difficult, there is always reductionism!

Thanks again Peter, looking forward to reading your essay soon!

Ross,

It's also quite serious for some; many mathematicians and physicists are genuinely convinced we're just a computer simulation and that the universe is entirely mathematical! Where does morality fit in all that? Do look out for the Gluck essay - my findings agree entirely with the 'infinite recession' logic he cites, where truths are never absolute but are self-apparent, and our biggest problem is stupidity, i.e. our limited ability to see them.

My 'much favoured' essay has just been hit with a hail of 1s, so it is sinking fast and needs all the help it can get.

Best wishes

Peter

Thanks Ross, that helps me to understand your intention. Your main argument ("the heart of the story", p. 3) is that artificially intelligent beings are liable to manipulate people through emotional appeals (a looming problem) unless we make them dispassionate, mortal and tightly controlled (solution). Immediately I feel the need for context: Why focus on the problem of emotional manipulation? What other problems are posed by the fabrication of such beings? What other solutions are available?

Without a methodical exploration of the problem-solution space, I cannot weigh your argument. A more conventional form of essay would probably serve you better here than a story. Stories can be powerful, I agree, and the reader is usually prepared to suspend disbelief in small details, but not in the heart of the story. - Mike

P.S., I will use the following rating scale to rate the essays of authors who tell me that they have rated my essay:

10 - the essay is perfection and I learned a tremendous amount

9 - the essay was extremely good, and I learned a lot

8 - the essay was very good, and I learned something

7 - the essay was good, and it had some helpful suggestions

6 - slightly favorable indifference

5 - unfavorable indifference

4 - the essay was pretty shoddy and boring

3 - the essay was of poor quality and boring

2 - the essay was of very poor quality and boring

1 - the essay was of shockingly poor quality and extremely flawed

After all, that is essentially what the numbers mean.

The following is a general observation:

Is it not ironic that so many authors who have written about how we should improve our future as a species appear, to a certain extent, to be motivated by self-interest in their rating practices? (As evidence, I offer the observation that no article rated under 3 deserves such a rating, and nearly every article rated above 4 deserves a higher one.)

Hi Mike, thanks again for the comment. I think if this paper claimed to be an analytical essay, you'd be spot on. But I think your expectations would be fairly difficult ever to meet in my entry, basically because a fictional format isn't designed to make a purely analytical argument, but rather to provoke thought and further discussion. No short story would fare well if assessed as a scientific analysis, but I hope mine entertains and makes people think about something new.

The issue I have tried to raise here is not only emotional manipulation but also legal manipulation, and how vastly cheaper and more numerous AIs might interact with legal or moral rights, particularly in a difficult economy. I think that while many other issues can be explored around AI, this one is often overlooked. AIs pose many threats and opportunities - this story just highlights one potential way AIs interact with our survival that deserves our attention. I didn't want to spread myself too thin.

For those who are interested in more detail, some of the issues, particularly "humanity being worth something", are discussed much more thoroughly and analytically on my website. And perhaps next time, knowing a little more about this audience now, I will try the format you prefer.

This is a fun exploration of a possible ideal, Ross. I think you are right that our power to work both good and evil increases with our knowledge. And I think you're right that we need to make social progress in this century if we are to prosper.

But I'm not sure how the society you describe works. Sometimes you talk about changed price incentives. Other times you talk about changed norms. But we don't get to see in much detail how incentives and norms have changed to make this society work so beautifully. It is easy to imagine everyone behaving well and getting along, but it's harder to know how to get everyone to behave and get along.

The section on AI also didn't quite ring true to me. First, I think strong AI is likely to be much more transformative. Second, I don't think the problem of controlling AI can be reduced to not letting them have purposes of their own. The danger is that they will pursue any purpose we give them to its logical conclusion so well that we will be powerless to stop them. And, finally, I am not comfortable with making a distinction between human and non-human thinking beings. I don't think you can treat any entity both kindly and as second class.

Nevertheless, I certainly enjoyed your essay! Best of luck in the contest!

Robert

    Hi Robert,

    Thanks very much for your kind comments. You're right that there isn't much detail about the social changes mentioned. The main reason is that there simply isn't room, though the audience also made me hesitate to include social science topics. However, if that side of things particularly interests you, you might want to take a quick look at my website http://citizenearth.altervista.org

    I'm not 100% sure about your thoughts on AI, but I think I can appreciate what you're saying about pursuit to a logical conclusion without the restraint of human interpretation. I do think, though, that these entities have the potential to possess immensely greater powers than we do, and if their independent purposes/interests are not served by our existence, then even without any spite or malicious intent, our species faces a mortal danger. A weaker class of entities will only survive if the core purpose of the stronger entity is profoundly infused with the weaker entity's survival - no malice is required for extinction, only competition for scarce resources.

    I think you're right to have a distaste for intelligent entities being treated as 'second class'. However, kindness isn't mutually exclusive with taking safety precautions. We make a distinction between humans and large dogs, yet no decent person would argue either that large dogs should be treated with cruelty, or that we should ignore the potential for harm to the elderly or children.

    Humans are the most moral species we know of, and yet we are still capable of genocide and of wiping out other species (without malice). I take a simple lesson from this - if one day AIs live in harmony with humans, it won't be by chance; it will be because we created them that way, because we designed them specifically in a way that protects humanity. We have a unique and short time frame to determine AI's future. If we can't conquer the nuances, if we approach the creation of AIs naively, we will be forced to choose between their ongoing suffering and the utter extinction of our species. Or perhaps there will be no choice at all.

    5 days later

    Hi Petio,

    Thanks for your offer. I've had a look at your paper, and I'm afraid I'm unable to convince myself of the links between ancient history and cosmology (two very interesting topics though), so I would probably find it difficult to fit into your group. In any case, thanks for the offer, and best of luck.

    Ross

    Ross,

    I found your essay entertaining and thought-provoking. I especially liked the part where you talk about companion dolls programmed to ask for stuff, so you have to buy it for them! I'm sure something like that will come along, sooner or later!

    I have looked at all the essays, and read more than half of them from start to finish. Your essay is part of the short list that I hope will make it to the finals, and I have rated it accordingly. Good luck!

    Marc

      Ross,

      I totally enjoyed your weaving together of all kinds of disparate past events to describe the future. Very well written and thought out, and indeed innovative.

      It seems you are describing the end goal of steering to be (a) an Earth Federation that (b) runs colonies spread around the solar system on a number of planets, (c) with a lot of AI help.

      A few clarifications:

      - Are these the goals for all of humanity?

      - How did AI take on such an important role?

      - Why are AI, the energy crisis and community business the only descriptors of the problem? There can be countless other issues.

      - Is there any reasoning leading up to the Federation, other than that it sprang up in the aftermath of disaster?

      By contrast, my essay (here) talks more about the route rather than the goal. I look forward to your comments on it.

      -- Ajay

        Thanks very much Marc. Your encouragement is hugely appreciated! I'm slowly ploughing through all the entries too, but I'd definitely recommend your essay for others to read, if they haven't already!


        Hi Ajay, thanks for the compliments, I'll do my best to answer your questions.

        -Are these the goals for all of humanity?

        Well, a central goal and vision for all humanity, as opposed to a mantra everyone is required to worship. I'm really only hinting at the goals here. Our task is to lift up those amongst us that are selfless enough to work for the survival of the species and the biosphere, to give them meaningful power to work on our behalf, yet without creating an institution that is a magnet for the corrupt or a force of oppression.

        -How did AI take on such an important role?

        I think the technical challenge of strong AI will be solved sooner or later. I also think that it's a tool that will change the world - strong AIs, perhaps Uploads, or even lesser AIs will almost certainly outperform humans in many areas. The positive future I depict is one where humanity thought about how this could be done safely, without the devaluation of the human species. If we work hard enough to do this, then I think in the future AI could help lift humanity out of both drudgery and conflict - but only if we work hard now to understand and solve these challenges.

        -Why are AI, the energy crisis and community business the only descriptors of the problem? There can be countless other issues.

        I agree. The simple answer is that I couldn't fit it all in. A more thorough philosophical and economic approach (work in progress) is available on my website HERE. I selected this set of issues because I thought it brought something new to the table, something I think is both vital for our future and at the same time often overlooked.

        - Is there any reasoning leading up to the Federation, other than that it sprang up in the aftermath of disaster?

        With the world's current political and economic situation, I think we've probably got a bumpy ride ahead. But my hope is that (1) we can solve the traditional political stalemates through innovations like community business, and (2) we can restore meritocracy and have a new generation of people deeply aware of both hard science and social science. The leaders amongst such people are those who can best prepare for our difficulties, such as the social unrest triggered by increasing automation, or AIs being used for malicious agendas, and then offer a realistic vision of a noble future for humanity. In my mind such people would be the founders of the Federation, sitting down and forging principles for the protection of humanity and the biosphere, but based upon deep knowledge of technology, social science, economics, psychology and the dangers of power.

        Thanks again for your kind comments, and I will be sure to read your entry soon!

        Dear Ross

        A nice and well-written essay. Your work is very creative and rich in subjects. To form an image of the future is really complicated, and you have done a great job. Seen in retrospect, it seems that in your essay civilization has achieved a balanced state. In my view we are still far from a stabilized world. Humans are so unpredictable, and it's hard to tell the future even a decade from now. For me AI and VR are topics that will never crystallize. Humanity cannot mechanize feelings, because what distinguishes humans from animals is not only our brain but our feelings. In my view, feelings and emotions are what move the world; the brain only steers those sensations.

        I hope you have some time to read my essay and leave some comments.

        Good luck in the contest!

        Best Regards

        Israel

          Thank you, Israel, for your kind words. It's been hugely enjoyable and interesting both writing my own entry and reading everyone else's. I'll be sure to take a look at your entry soon. I think it's true that feelings and emotions can change the world. I tend to think that feelings and emotions aren't a uniquely human trait, though I certainly agree how difficult it is to predict the future, particularly concerning a species as complicated as humanity! Thanks and good luck, Ross.

          Thanks for explaining, Ross. I'll be rating your essay (and all the others on my review list) some time between now and May 30. All the best, and bye for now, - Mike

          6 days later

          Ross,

          I'm glad I read your interesting and entertaining essay, so different from the mass of 'we should be better people' offerings that state the obvious but have no perception of what action and steering are really about.

          AI is really the other side of the coin to eugenics: just as much potential and just as much danger. But what will be the model for 'intelligence'? I fear either AIs may think like us, and so be limited and fail, or they'll see us for the primeval, stupid, belief-led creatures we are and see that they can do so much better.

          Perhaps that's the better option, as we may then ourselves decide to learn how to use our brains better, unless they decide THEY can make more effective use of them! Frankly, who wouldn't? I'm not sure why you're in the 4s; indicative, I suppose. My points should help.

          I hope you make the final cut (I mean the contest not your brain!)

          Judy

            Thanks Judy, that's hugely appreciated.

            I think you're right that AI represents another path that humanity might push for 'advancement', but one where, as in your own field, we urgently need to define and discuss what we're actually aiming for rather than walking in blind.

            After reading other entries, I agree that I'd love to see my essay rated a little higher. I fear there is some gaming of the system or indiscriminate downvoting, which has hugely hurt my rating because of the lower number of readers I've had. People who do rate it seem to either love or loathe my entry, perhaps because of the very unusual format. In any case, I'm glad to see your own very interesting entry is getting good ratings!

            Dear Ross Cevenst,

            I very much enjoyed your 'Federation Dreaming', in which you evolve the business model as the preferred means of 'getting things done' without coercion. At the end of my essay I briefly discuss a system in which people are paid to learn, versus graduating with the equivalent of a mortgage in debt. Rather than government-run education, I have in mind more of the "community business" model that you seem to propose. In the same way that "price signals" are a control mechanism, investment decisions also effect non-coercive control, and the idea is that a community investing in this model is out to accomplish goals efficiently rather than amass wealth.

            On the technical side, I very strongly doubt that AI will ever deserve the term "consciousness". I've written several essays on consciousness, so I won't belabor this point. But much of what you propose does not require consciousness to accomplish.

            From the 'speech': "Our relationship with money and power, those eternal forces both indispensable and corrupting, has matured." One can hope!

            That is an area where I hope we can apply AI and automation. I also like your vision of genetic control for healing, not fashion.

            Thanks for entering your essay. I hope you find time to read and comment upon mine.

            Best regards,

            Edwin Eugene Klingman

              Hi Edwin,

              Thanks for your encouraging comments! For some reason I keep getting very positive comments, but my rating keeps going down. If you haven't already, I hope you'll rate my entry!

              I am also open to the idea of rewarding people for improving their value to society through education, if done carefully. For example, rewards can be linked to educational performance to prevent freeloading.

              I'd be very interested to hear your thoughts on where you feel the limits of AI are and why. You are welcome to email me if you like.

              I'll be sure to take a look at your essay before the week is up!

              Ross