Dear David,
In spite of giving the impression of an essay written quite quickly, perhaps under time pressure, I enjoyed reading your text considerably more than others, since you present facts, ideas, and perspectives that are relatively new to me: I had never thought about the future emergence of Artificial Intelligence the way you do, with your analysis of possible scenarios - good or bad, hard/soft takeoff, etc. Particularly interesting is the idea of an iterative refinement of AI, by which new stages are designed by previous ones, with exponential growth in speed and sophistication (`While it is not clear exactly when the threshold of human intelligence will be met by an artificial one, it is likely that once the threshold is reached, it will be quickly surpassed [Vinge, 1993]`).
In connection with this, you write:
`a human level AI is developed which is able to recursively improve itself over many iterations such that it will eventually become millions or billions of times more intelligent.`
While I perceive this scenario as plausible, I wonder whether such a tremendous explosion of intelligence would also blow up all our (still shaky and incomplete) scientific notions and formalisations by which we define and measure intelligence at the human scale. It is not difficult to imagine dramatic improvements in memory capacity or processing speed. But would there be something more? Should we imagine breaking the limits of Turing computability, or those (still poorly understood) of quantum computing?
I also find quite interesting the discussion of the likelihood that this future super-intelligence might care about the preservation of beings of lower intelligence, with *total indifference* as a likely scenario. We are used to science fiction movies in which the super-intelligence is either a sort of divinity that saves us or a diabolic entity trying to maliciously destroy humanity, while the neutral scenario is indeed rarely considered. I would add that human intelligence is highly sensitive to the `discreteness` of humanity - the fact that we are a set of individuals, each caring a great deal about preserving his/her own individual life as long as possible - while an artificial super-intelligence might not be granulated into individuals. It could be somehow diffused, continuous, so to speak, and insensitive to individualities (in a different context, the selfish gene does not care about individual lives either).
I was not aware of the idea of `computing beyond matter` (from the movie `Her` that you mention), which could safely separate the interests of the emergent AI from those of matter-oriented humanity. You rule this out as unlikely, but in a computation-oriented vision of spacetime, `computing beyond matter` does not sound completely impossible.
Finally, if the future super-intelligence were to coincide with the emergent super-organism envisaged by Teilhard de Chardin (the Omega Point that emerges from the sphere of human knowledge), then we would have a maximally conscious, spontaneous, but also totally benevolent (and diffused?) entity - one that eventually transcends matter completely.
Best regards,
Tommaso