Okay, page 1 is all about arguing that attempts to steer the future are futile. We just have to surrender to whatever technology, that autonomous force, is bringing us.
Page 2 introduces the idea that actually, we can do a reasonable job of foreseeing what technology, like it or not, is bringing us.
Then we get to the fun part: ems. They're like Fisher-Price Little People, only smarter and less physical. There's going to be this whole economy and society of ems, or uploads in more conventional parlance. Why? Because AI built by any other route is going to fail, but uploading will succeed.
The essay, and apparently the book, is full of confident assertions about ems: what they will be like, how they will fit into our world. And I mean confident. There's not the slightest doubt: ems are going to do all the work, and they're going to have a class hierarchy, money, retirement....
Call me skeptical. I don't see how you get to whole-brain scanning at the level of detail necessary to make an upload work, even assuming you have the hardware to run it on. And if you did have that level of technology, I don't see why you wouldn't also have an understanding of the brain's circuits and learning mechanisms, and more generally how to make AI of human and superhuman capability that is tailored to what you want it to do, rather than being inherently egoistic and potentially dangerous. Claims of expertise in AI research from more than a decade ago don't impress me much, I'm afraid. Anyone who says there hasn't been much progress since then isn't paying attention.
I may agree that we're heading into the rapids, but that suggests we'd best paddle for shore, or look for a rock to wash up on or a branch to grab, i.e., STOP. At least until we can figure out what we do and don't want to do. I don't think the smart move would be to start planning for the age of the elves, I mean, ems.