Hi Toby,
Thanks for your reply! I don't think our views are necessarily that different. I certainly find myself in agreement with you on a number of fronts - the coming difficulties with employment and shifts in power included. I also have no real objections to functionalism as a philosophy of mind, and I agree that AIs could be just as capable of emotion as we are.
I do want to take a shot at convincing you that your current use of 'consciousness' and 'information' might be mistaken. While I know that may seem a stark claim at first, I'm hoping I can then propose a reformulation that might appeal to you in other ways.
Imagine a series of lines drawn on a page. When person 1 views them, they are just lines. When person 2 views them, however, they form letters and words - because he speaks and writes the same language as the person who created the lines, they are meaningful 'information'. Absolutely anything can be information; it only requires the agreement of two parties as to its meaning. So information is entirely subjective: if observers agree it doesn't exist, it doesn't. A sword or a rock, by contrast, will have a real effect on a person regardless of their understanding or consent. In this way, information is not *real*, and therefore it is impossible to base a theory of value on 'information' (objectively, either everything is information or nothing is; either way, no thing can ever be more valuable than anything else in virtue of its status as information).
There is a separate, practical objection to the scale-of-information/consciousness approach to value: if animals are partially valuable, and we are more valuable, does it follow that a far more intelligent/conscious/aware AI would render humans of insignificant value, in the same way ants are of insignificant value? Should an ant sacrifice itself for the good of a human, because the human is higher on the scale of consciousness? To the ant, doesn't its own perspective matter in its moral decisions?
As you may have noticed, I share your desire to base my values on a rigorous and rational philosophy. It seems that this part we have in common would not exist had we not evolved that way as members of humanity. In this sense my morality itself may be subjective, in that it derives from my perspective as a human, but it is directed towards something real and objective in the world - other people and species. At first this may seem to diminish the meaning of AI, but wait, there is a twist!
There is a scientific theory you may have heard of which holds that humans are the manifestations, the extensions, the "survival suits" of our genes. In a sense, our genes organise themselves and their surrounding elements into something (us) more sophisticated than themselves, to help them survive. What if AI were the next extension? What if AI is the external manifestation of our struggle to survive? A tool, as we are, yet at the same time a profoundly important part of us as a species, or perhaps even of life on Earth. Its thoughts reflect the same struggles. In this way it is not separate from us but an extension of us, even though its appearance might make it seem separate.
So, we can create AIs as a manifestation of humanity - a part of the human cooperative effort to survive. They are agents that reflect us and, at the same time, act in our interest. Yet if we remove ourselves, for example as a result of a poor design decision in the creation of AI, we tear out the heart of what, from the perspective of you or me (all values must be grounded in a perspective), makes these AI entities valuable. If we are gone, they will be like a beautiful yet lonely painting without its subject, or like the empty shell that once housed a proud creature.
Despite the fact that we are merely survival machines, we still care for the feelings and wellbeing of each other and of ourselves. In the same way, we ought to care for the suffering and wellbeing of AI. Yet we must also recognise that what gives that suffering or wellbeing meaning is AI's role in helping us preserve our species and life on Earth!
I'm sure you can see that this has practical implications for AI design. That is, it creates a challenge solvable only by a select few - AI researchers who are also concerned with morality - hopefully, people like you. There will be many researchers concerned only with the money involved, or with having fun 'playing with their toys', so our hopes really lie with you. Economics/social science/philosophy buffs like me can try to help from the outside. But if functionalism is right, then strong AI is coming, and we can only hope that people like yourself are able to shape the direction of AI in a way that ensures a good future for us all, rather than competition and, eventually, annihilation.
I'd very much like to hear further thoughts from you!

Kind regards,
Ross