Dear Ross Cevenst,

I very much enjoyed your 'Federation Dreaming', in which you evolve the business model as the preferred means of 'getting things done' without coercion. At the end of my essay I briefly discuss a system in which people are paid to learn rather than graduating with the equivalent of a mortgage in debt. Rather than government-run education, I have in mind something closer to the "community business" model that you seem to propose. In the same way that "price signals" are a control mechanism, investment decisions also exert non-coercive control, and the idea is that a community investing in this model is out to accomplish goals efficiently rather than amass wealth.

On the technical side, I very strongly doubt that AI will ever deserve the term "consciousness". I've written several essays on consciousness, so I won't belabor this point. But much of what you propose does not require consciousness to accomplish.

From the 'speech': "Our relationship with money and power, those eternal forces both indispensable and corrupting, has matured." One can hope!

That is an area where I hope we can apply AI and automation. I also like your vision of genetic control for healing, not fashion.

Thanks for entering your essay. I hope you find time to read and comment upon mine.

Best regards,

Edwin Eugene Klingman

    Hi Edwin,

    Thanks for your encouraging comments! For some reason I keep getting very positive comments, but my rating keeps going down. If you haven't already, I hope you'll rate my entry!

    I also have an openness to the idea of rewarding people for improving their value to society through education, if done carefully. For example, rewards can be linked to educational performance to prevent free-loading.

    I'd be very interested to hear your thoughts on where you feel the limits of AI are and why. You are welcome to email me if you like.

    I'll be sure to take a look at your essay before the week is up!

    Ross

    Hi Ross,

    Thanks for encouraging me to read your essay. I found it interesting; it raises a number of thought-provoking points about economics and artificial intelligence. You may have gathered that I have some interest in both of these topics. I've spoken to you a bit about my views on economics in the discussion on my own entry, so I'll focus on artificial intelligence here.

    It seems I have different views on consciousness and morality from a fair few people. I think artificial intelligences could become conscious in the same way that we humans are, and if/when they do, they should be valued similarly to human consciousness. I think I sit in the functionalist camp of philosophers of mind. I think the China brain (http://en.wikipedia.org/wiki/China_brain) could produce a mind. I don't think minds are dependent on any particular material. Minds are made up of meaningful connections, whether those meaningful connections are between biological neurons or exist only in software.

    At the moment I'm working on a system of morality that sees the things in the world as a spectrum of value rather than there being distinct rights that are possessed, present or absent. The value is related to the "information" that things store or create. I mean information in a general sense. Information could be genetic, it could be manifested in the structure of a grown organism, or it could be more abstract, such as an essay, a song or a computer program. Consciousness seems to be a manifestation of powerful processes that gather, manipulate and store information. Value should be placed on consciousness in any form, though that value may be dependent on the degree of consciousness. I agree with Douglas Hofstadter's proposal of a spectrum of consciousness too (in his 'I Am a Strange Loop'). All animals have some degree of consciousness; they all should be valued for that consciousness. AIs may have some degree of consciousness, maybe up to and beyond human-level consciousness, and they should each be valued for that, though how much depends on how conscious they are. This judgement could possibly also be complicated by the information and memories of AI having the potential to be replicated or extracted.

    Your portrayal of AIs as agents of companies in the advertising and selling of goods was a great logical connection that I hadn't thought of. I have some more immediate speculative concerns about the use of early AI, however: the replacement of labour and the centralisation of military power.

    There is an incredible number of jobs that could be replaced by robots and AI right now or in the next decade or two. Increasing unemployment decreases consumption, which causes businesses to lay off employees or close, further decreasing employment. Most governments rely on taxing businesses and individual income for revenue, so they will lose revenue and be faced with increasing demand for welfare benefits. If governments cut benefits, we could expect homelessness and crime to rise, eventually leading to riots. Then in steps the military, which, by this stage, might be heavily automated. Whoever is in charge of a robotic military doesn't need to worry about the conscience of their soldiers; they follow orders. From there it is a short step to a dictatorship or, if the robotic military has been privatised, feudalism. We could be heading toward some dark days.

    I would welcome any thoughts you might have on these ideas. Also, let me know if there is anything else in particular you would like feedback on or to discuss.

    Cheers,

    Toby

      Hi Toby,

      Thanks for your reply! I don't think your views are necessarily that different from mine. I certainly find myself in agreement with you on a number of fronts - the coming difficulties with employment and shifts in power included. I also have no real objections to functionalism as a philosophy of mind and agree that AIs could be just as capable of emotion as we are.

      I do want to ask you to let me take a shot at convincing you that your current use of 'consciousness' and 'information' might be incorrect. While I know that may seem a stark proposition at first, I'm also hoping I can then propose a reformulation that might appeal to you in other ways.

      Imagine a series of lines drawn on a page. When person 1 views them, they are just lines. However, when person 2 views them, they form letters and words - because he speaks and writes the same language as the person who created the lines, they are meaningful 'information'. Absolutely anything can be information; it only requires the agreement of two parties as to its meaning. So, information is literally 100% subjective. If observers agree it doesn't exist, it doesn't. A sword or a rock, however, will have a real effect on a person regardless of their understanding or consent. In this way, information is not *real*, and therefore it is impossible to base a theory of value on 'information' (objectively, either everything is information or nothing is; either way, no thing can ever be more valuable than anything else due to its status as information).

      There is a separate practical objection to the scale-of-information/consciousness approach to value - if animals are partially valuable, and we are more valuable, then does it follow that a far more intelligent/conscious/aware AI would render humans of insignificant value in the same way ants are of insignificant value? Should ants sacrifice themselves for the good of a human, because it is higher on the scale of consciousness? To the ant, doesn't its own perspective matter in its moral decisions?

      As you may have noticed, I share your desire to base my values on a rigorous and rational philosophy. It seems that common part of you and me would not exist if we had not evolved that way as members of humanity. In this way my morality itself may be subjective, in that it derives from my perspective as a human, but it is directed towards a real, objective thing in the world - other people and species. At first this may seem to diminish the meaning of AI, but wait, there is a twist!

      There is a scientific theory you may have heard of that states humans are the manifestations, the extensions, the "survival-suits" of our genes. In a sense our genes organise themselves and their surrounding elements into something (us) more sophisticated than themselves to help them survive. What if AI was the next extension? What if AI is the external manifestation of our struggle to survive? A tool, as we are, yet at the same time also a profoundly important part of us as a species, or perhaps even of life on Earth. Its thoughts reflect the same struggles. In this way it is not separate, but an extension of us, even though its appearance might make it seem separate.

      So, we can create AIs as a manifestation of humanity - a part of the human cooperative effort to survive. They are agents that reflect us, and at the same time act in our interest. Yet if we remove ourselves, for example as a result of a poor design decision in the creation of AI, we tear out the heart of what, from the perspective of you or me (all values must be grounded in a perspective), makes these AI entities valuable. If we are gone, they will be like a beautiful yet lonely painting without its subject, or like the empty shell that once housed a proud creature.

      Despite the fact we are merely survival machines, we still care for the feelings and wellbeing of each other and of ourselves. In the same way, we ought to care for the suffering and wellbeing of AI. Yet we must also know that what gives that suffering or wellbeing meaning is AI's role in helping us preserve our species and life on Earth!

      I'm sure you can see that this has practical implications for AI design. That is, it creates a challenge uniquely solvable only by a select few - AI researchers who are also concerned with morality - hopefully, people like you. There will be many researchers concerned only with the money involved, or just having fun 'playing with their toys', so our hopes really lie with you. Economics/social science/philosophy buffs like me can try to help from the outside. But if functionalism is right, then strong AI is coming, and we can only hope that people like yourself are able to shape the direction of AI in a way that ensures a good future for us all, rather than competition and eventual annihilation.

      I'd very much like to hear further thoughts from you! Kind regards, Ross.

      Hi Ross,

      Thanks for your response. I do still disagree with you on a few points, however.

      Information often requires interpretation to determine its meaning; however, interpretation is performed by physical systems (computers, brains, protein synthesis from DNA, etc.) that objectively exist. There are physical processes going on in a person's brain that attach meaning to the letters and words that they read. This meaning is manufactured through the function of the brain, but it objectively exists. Sure, information can be essentially lost if no system for interpretation exists, but that doesn't mean that there isn't information there.

      The objection you raise to there being a scale of value and consciousness is only an issue if there is some ethical dilemma that requires an either-or sacrifice. Just because something is valued less doesn't mean that it needs to be destroyed or its value completely ignored. Most people would value a house they own over their car, but that doesn't mean they don't care about their car. It does mean that they would probably prefer to save their house from a fire than their car if they were given the choice. Some people are willing to risk their lives to save endangered species. I don't think that is completely foolish.

      I would also argue that there are different manifestations of information. Consciousness is a manifestation of information in the form of sensory awareness and associated meanings. A genome is a manifestation of information that encodes instructions to construct an organism. A phenotype is a manifestation of information as a representation of a possible outcome of growth from a genome and the history of that process of growth. These physical manifestations of information give organisms additional value beyond their level of consciousness. Organisms are an embodiment of information and information-creating processes. If we value information, all organisms are inherently valuable and we should strive for the survival and continuation of all life. If we do manage to create an artificial superintelligence, hopefully it will see life as inherently valuable.

      I'm not too sure where I'm headed in my "career". If I do continue to work on artificial intelligence I will try my best to be mindful of the possible ramifications and communicate them to my peers. I'm also drawn to trying to reform education, society and economics, which I think might be a more immediately beneficial project.

      I'm happy to discuss any of this further.

      Cheers,

      Toby

      Hi Toby,

      Thanks for replying! I think it's great that you're an AI researcher who's also into moral philosophy, so I'd encourage you to continue in your career AND your exploration of moral questions. Of course the social sciences are also vitally important and sorely neglected, and it's great to see brainy gents and ladies joining the ranks (there's a bit of a shortage in some areas, I think).

      I'm hopeful you might come back to these points later and maybe my words can sway you just a little :) Just in case you do, here's my brief reply to the philosophical stuff:

      -Differential value: You're right, values aren't usually absolute, but with limited resources we must still make decisions about what survives and what doesn't. If an AI is 10000x more intelligent/conscious/aware than a human, would a rational policy maximising information per unit of energy available suggest replacing all humans? And, for that matter, all life? Humans' rational and understandable pursuit of their own survival seems to have wiped out almost all our closest primate relatives, whom we'd probably try to save if we had our time again, now that we're a little wiser. Perhaps your idea is right and we can design AI specifically with our own survival in mind and prevent them causing the extinction of their makers while they are still finding their role. Let's hope!

      -Information: In science we often test for something's effect by altering or removing it and seeing what changes. Alter the physical system that interprets information, and the information changes. Remove the physical system, and the information is no longer information (it's just stuff). This suggests the information is actually an attribute of the interpreting system, not of the stuff we happen to be using to activate it. Therefore the value of information is really the value of the interpreting system (humans etc.). Perhaps we should consider that thing directly to answer moral questions.

      As a side point, some philosophers might suggest that values are also an attribute of an 'interpreter', rather than an attribute of the thing being valued. That is, perspective is necessary in values. This leads me to think that a consideration of what I am (human) is quite important in uncovering and organising my own values.

      In any case, thanks again for your comments, and good luck with both your AI work and with your economics/education stuff! Feel free to get in contact and let me know how it's going!
