Thank you indeed. I think it's a common mistake for us technologically-minded people to assume that mainstream technology development is primarily driven by utility. Utility certainly helps, but marketing considerations play a huge part in which technologies are pushed out to the masses (consider the advanced nature of Unix as developed by computer scientists decades ago, and then the rubbish that has in many cases been pushed out to the masses over the years instead). I have no doubt that everyday AI, VR and the like will be shaped by similar forces - much to the annoyance of technology enthusiasts everywhere. So that little part of the story reflects that.

Hey, that was nice, Ross! An entertaining blend of science fiction and fact, with plausible outcomes, given our present state of development. It makes one better appreciate the critical state of history we inhabit.

Thanks for commenting in my own forum, and all the best in the essay competition --

Tom

Don't forget to rate my entry! If you're new to FQXi you received your voting code in an email from FQXi.

I tried to read the essay portion after the dialogue, but the prose was too purple; it was a real headache to read because, instead of talking about the idea, it kept invoking imagery that didn't really contribute anything to the argument... What were you trying to get at, anyway? I didn't manage to figure out the point before my migraine drove me to something else.

    Others seem to like it, but you are well within your rights to dislike the style I chose. I'm not sure what the purpose of your comment is, though; it seems deliberately pointless. Perhaps you dislike the message of my writing. I hope your headache clears up.

    Hi Ross, I must re-read your essay because I didn't finish my first reading. It wasn't that I had a headache at the time (unlike Alan, I hope he's feeling better now), but your interviewer didn't inspire trust in me. He seemed too uncritical of the society, which therefore came across as inauthentic. I'll give you a chance to reply, then re-read. - Mike

    Hi Mike,

    I think that's a good (and scientific) reaction to have. Anyone who isn't skeptical of visions of new societies doesn't have their thinking-cap fastened properly.

    On the other hand, in the context of the story, the interviewer is a journalist now actually living IN the future society, so in a sense, for him, there is empirical proof that the society does in fact work. I assume that for any of us, arriving in the future and speaking to one of its key figures would probably result in a fairly uncritical reaction, and so that character reflects that.

    Rather than aiming for a dry proposal, I've chosen to use the story as a vehicle to convey a couple of what I think are original and important ideas, while also attempting to mix in an inspirational tone. Why? Because I want people to be aware of the challenges our future poses, but also excited about the possibilities. As you can probably tell I am a fan of how science-fiction does just that.

    On the other hand, the philosophical analysis the story is based upon is a result of careful and, I hope, unsentimental investigation into some of the problems facing humanity. If you're interested in the philosophy and thought behind the story, there is more information on my website. You can also present me with any questions you might have wished the interviewer had asked, and I can do my best to answer!

      Ross,

      I'm seriously concerned by your essay. Is there a test to find out if someone's an AI or an old bio model? It's just that I think I may be an AI! I don't have fear or self-interest, yet don't mind the idea of a perfect sex-bot at all. I also seem to have a natural lifetime and to be emotionally and socially distinct from most others. I also appear to have few human rights and to be far more intelligent than most! Should I not have been told if I wasn't human? Do I have any recourse? I now start to see a lot of things far more clearly, such as why courgettes leave a horrid slimy taste in my mouth.

      No but really. Great essay, mate. Up among the pros, or rather it should be. Lots of stuff that needs thinking about now, not too late as is our propensity. I see you have a BA (Hons). That should uniquely qualify you (as undoctrinated) to understand my own essay, a romantic trip into the future.... no but honest guv! Bob and Alice end up finding a simple classical geometric (up your street?) derivation of Quantum Mechanics' "predictions" to get rid of the "Chasm" (Penrose) between classical and quantum physics, giving the biggest quantum leap in scienti...etc etc.. I targeted it at the average bio model, so do hope you grasp it.

      Well done. Really enjoyed it. Must go, it's time to plug in.

      Best wishes

      Peter

        Bizarrely, on the day my entry went live here on FQXi, the IP address of my site was apparently listed by someone as a 'spammer' on at least one non-public blacklist. Needless to say, this is false. Since then I've had at least one person unable to connect to my site, though it is still working for most people. If anyone has any trouble viewing my website, could you please email me at the-citizen att safe-mail dott net! Thanks!

        Many thanks for the kind words, and your sense of humour, Peter! I get the impression your own entry is much favoured, so I consider myself well-complimented. I'll definitely engage my bio-processing of your entry very soon when my CPU is at full efficiency!!

        I like how you make your point using humour. Actually, it's a great point - how can I resolve this philosophical anxiety?

        I find that when I become worried about possibly being an AI, I can stick myself in an MRI and see my own brain function, or I can get myself genetically tested for humanness. Of course there's always the chance our entire galaxy/universe etc. could be simulated, but then the AI/human distinction is still equally valid - we'd just be talking about a simulated simulation vs a simulated person - and so long as all my knowledge is of this simulated world, it still seems valid to value things in it. And if good, bad, real, and illusion all get a bit too difficult, there is always reductionism!

        Thanks again Peter, looking forward to reading your essay soon!

        Ross,

        It's also quite serious for some; many mathematicians and physicists are genuinely convinced we're just a computer simulation and the universe is entirely mathematical! Where does morality fit into all that? Do look up the Gluck essay; my findings agree entirely with the 'infinite recession' logic he cites, where truths are never absolute but are self-apparent, and our biggest problem is stupidity, which limits our ability to see them.

        My 'much favoured' essay has just been hit with a hail of 1's so is sinking fast and needs all the help it can get.

        Best wishes

        Peter

        Thanks Ross, that helps me to understand your intention. Your main argument ("the heart of the story", p. 3) is that artificially intelligent beings are liable to manipulate people through emotional appeals (a looming problem) unless we make them dispassionate, mortal and tightly controlled (the solution). Immediately I feel the need for context: Why focus on the problem of emotional manipulation? What other problems are posed by the fabrication of such beings? What other solutions are available?

        Without a methodical exploration of the problem-solution space, I cannot weigh your argument. A more conventional form of essay would probably serve you better here than a story. Stories can be powerful, I agree, and the reader is usually prepared to suspend disbelief in small details, but not in the heart of the story. - Mike

        P.S., I will use the following rating scale to rate the essays of authors who tell me that they have rated my essay:

        10 - the essay is perfection and I learned a tremendous amount

        9 - the essay was extremely good, and I learned a lot

        8 - the essay was very good, and I learned something

        7 - the essay was good, and it had some helpful suggestions

        6 - slightly favorable indifference

        5 - unfavorable indifference

        4 - the essay was pretty shoddy and boring

        3 - the essay was of poor quality and boring

        2 - the essay was of very poor quality and boring

        1 - the essay was of shockingly poor quality and extremely flawed

        After all, that is essentially what the numbers mean.

        The following is a general observation:

        Is it not ironic that so many authors who have written about how we should improve our future as a species appear, to a certain extent, to be motivated by self-interest in their rating practices? (As evidence, I offer the observation that no article rated under 3 deserves such a rating, and nearly every article rated above 4 deserves a higher one.)

        Hi Mike, thanks again for the comment. I think if this paper claimed to be an analytical essay, you'd be spot on. But I think your expectations would be fairly difficult to ever meet in my entry, basically because a fictional format isn't designed to make a purely analytical argument, but rather to provoke thought and further discussion. No short story would fare well if assessed as a scientific analysis, but I hope mine entertains and makes people think about something new.

        The issue I have tried to raise here is not only emotional manipulation but also legal manipulation, and how vastly cheaper and more numerous AIs might interact with legal or moral rights, particularly in a difficult economy. I think that while many other issues can be explored around AI, this one is often overlooked. AIs pose many threats and opportunities - this story just highlights one potential way AIs interact with our survival that deserves our attention. I didn't want to spread myself too thin.

        For those that are interested in more detail, some of the issues, particularly "humanity being worth something" are much more thoroughly and analytically discussed on my website. And perhaps next time, knowing a little more about this audience now, I will try the format you prefer.

        This is a fun exploration of a possible ideal, Ross. I think you are right that our power to work both good and evil increases with our knowledge. And I think you're right that we need to make social progress in this century if we are to prosper.

        But I'm not sure how the society you describe works. Sometimes you talk about changed price incentives. Other times you talk about changed norms. But we don't get to see in much detail how incentives and norms have changed to make this society work so beautifully. It is easy to imagine everyone behaving well and getting along, but it's harder to know how to get everyone to behave and get along.

        The section on AI also didn't quite ring true to me. First, I think strong AI is likely to be much more transformative. Second, I don't think the problem of controlling AI can be reduced to not letting them have purposes of their own. The danger is that they will pursue any purpose we give them to its logical conclusion so well that we will be powerless to stop them. And, finally, I am not comfortable with making a distinction between human and non-human thinking beings. I don't think you can treat any entity both kindly and as second class.

        Nevertheless, I certainly enjoyed your essay! Best of luck in the contest!

        Robert

          Hi Robert,

          Thanks very much for your kind comments. You're right that there isn't much detail about the social changes mentioned. The main reason is that there simply isn't room, though the audience also made me hesitate to include social science topics. However, if that side of things particularly interests you, you might want to take a quick look at my website http://citizenearth.altervista.org

          I'm not 100% sure about your thoughts on AI, but I think I can appreciate what you're saying about pursuit to a logical conclusion without the restraint of human interpretation. I do think, though, that these entities have the potential to possess immensely greater powers than we do, and if their independent purposes/interests are not served by our existence, then even without any spite or malicious intent, our species faces a mortal danger. A weaker class of entities will only survive if the core purpose of the stronger entity is profoundly infused with the weaker entity's survival - no malice is required for extinction, only competition for scarce resources.

          I think you're right to have a distaste for intelligent entities being treated as 'second class'. However, kindness isn't mutually exclusive with taking safety precautions. We make a distinction between humans and large dogs, yet no decent person would argue that large dogs should be treated with cruelty, or that we should ignore the potential for harm to the elderly or children.

          Humans are the most moral species we know of, and yet we are still capable of genocide and of wiping out other species (without malice). I take a simple lesson from this - if one day AIs live in harmony with humans, it won't be by chance; it will be because we created them that way, because we designed them specifically in a way that protects humanity. We have a unique and short time frame in which to determine AI's future. If we can't conquer the nuances, if we approach the creation of AIs naively, we will be forced to choose between their ongoing suffering and the utter extinction of our species. Or perhaps there will be no choice at all.

          5 days later

          Hi Petio,

          Thanks for your offer. I've had a look at your paper, and I'm afraid I'm unable to convince myself of the links between ancient history and cosmology (two very interesting topics, though), so I would probably find it difficult to fit into your group. In any case, thanks for the offer, and best of luck.

          Ross

          Ross,

          I found your essay entertaining and thought-provoking. I especially liked the part where you talk about companion dolls programmed to ask for stuff, so you have to buy it for them! I'm sure something like that will come along, sooner or later!

          I have looked at all the essays, and read more than half of them from start to finish. Your essay is part of the short list that I hope will make it to the finals, and I have rated it accordingly. Good luck!

          Marc

            Ross,

            I totally enjoyed your weaving together of all kinds of disparate past events to describe the future. Very well written and thought out, and indeed innovative.

            It seems you are describing the end goal of steering to be (a) an Earth Federation that (b) runs colonies spread around the solar system on a number of planets, (c) with a lot of AI help.

            A few clarifications:

            - Are these the goals for all of humanity?

            - How did AI take on such an important role?

            - Why are AI, the energy crisis and community business the only descriptors of the problem? There can be countless other issues.

            - Is there any reasoning leading up to the Federation, other than that it sprang up in the aftermath of disaster?

            By contrast, my essay (here) talks more about the route rather than the goal. I look forward to your comments on it.

            -- Ajay

              Thanks very much Marc. Your encouragement is hugely appreciated! I'm slowly ploughing through all the entries too, but I'd definitely recommend your essay for others to read, if they haven't already!