Hello Daniel,
I enjoyed your essay, and I agree with its central thesis: it is essential that we deal with the existential risks facing humanity. Some of your intermediate points, however, fall apart for me. Premise 2 on page 2 is almost too easy to discredit and appears to add no value, while abandoning that premise reveals a host of phenomena to be breadth-transformative, all because of the context dependencies that follow from premise 1, which I think is universal.
If we took a nuclear physicist and dropped him back in ancient times, even giving him a few samples from his laboratory to carry along, what could he do? He might manage a few parlor tricks, like turning a sample of lead into gold, and create the legend of a magical 'Philosopher's Stone,' but he (or she) could not convey enough knowledge to establish an enduring understanding of radioactivity, so only tales of 'alchemical fire' would remain. Paul Pilzer goes further, basing his theory of economics on the assumption that premise 2 is false: the value of any commodity is determined by the available technology and by the other factors governing its usability and the efficiency of its use. So premise 2 does not hold up. Still, I think your conclusion is valid, and that we should be aiming for a Large future if we want to have a future at all.
I agree with your conclusion that we must take seriously the need to address existential risks, and with your assessment that engineered biohazards and the AI singularity are two of our most pressing looming problems; left unaddressed, they could lead to humanity's extinction, or relegate us to a future that is both Small and unpleasant. I will leave the first aside, except to say that GMO food crops could pose such a problem, and that the burden should be on the creators of modified seeds to demonstrate their long-term safety through scientific studies conducted in isolation, rather than making the whole world their laboratory and guinea pig and leaving the burden of proving the unforeseen risks to us. If complications arise, a large part of our food supply will already have been contaminated, and Nature will spread the 'contagion' further, so this may be a pressing issue.
The problem of existential risk from the AI singularity is one I have given considerable thought to, and I have definite ideas about how we must act to head problems off. Specifically, we have a window of opportunity to develop machines and software capable of qualitative analysis (subjective databases and search engines) before machines reach intelligence or self-awareness through the brute force of massively parallel processing. An intelligence that arose by brute force alone would be formidable, but it would lack any subtlety or finesse; it would be brutish and tyrannical, and that makes for a very dismal future for humans.
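To make that proposal a bit more concrete, here is a minimal sketch, in Python, of what I mean by a subjective database: assertions are stored with percentage truth-values aggregated from multiple weighted sources, and a query ranks matches by that value. The names, the schema, and the simple weighted-average scoring rule are only illustrative assumptions on my part, not a finished design.

from dataclasses import dataclass, field

@dataclass
class Assertion:
    text: str
    opinions: list = field(default_factory=list)  # (source, weight, truth)

    def add_opinion(self, source, weight, truth):
        self.opinions.append((source, weight, truth))

    def truth_value(self):
        # Weighted average of source judgments, as a percentage.
        total = sum(w for _, w, _ in self.opinions)
        if total == 0:
            return 0.0
        return round(100.0 * sum(w * t for _, w, t in self.opinions) / total, 1)

class SubjectiveDatabase:
    def __init__(self):
        self.assertions = {}

    def record(self, text, source, weight, truth):
        entry = self.assertions.setdefault(text, Assertion(text))
        entry.add_opinion(source, weight, truth)

    def query(self, keyword):
        # Return matching assertions ranked by aggregate truth-value.
        hits = [(a.text, a.truth_value())
                for a in self.assertions.values()
                if keyword.lower() in a.text.lower()]
        return sorted(hits, key=lambda pair: pair[1], reverse=True)

db = SubjectiveDatabase()
db.record("Honesty builds trust", source="Text A", weight=0.9, truth=0.95)
db.record("Honesty builds trust", source="Text B", weight=0.6, truth=0.80)
print(db.query("honesty"))  # -> [('Honesty builds trust', 89.0)]

Of course, in practice the source weights themselves would be contested, which is part of why such a system must be handled with the sensitivity I describe below.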
I will conclude by copying some comments I made on the essay page of Leo KoGuan, as they also apply here. "I have been working for a number of years now to create a framework for qualitative or subjective search engines and databases, and I have included some of the fruits of that research in my FQXi essays, so it will be clear to all that this model follows from my prior work. Personally, I would rather work with R2-D2 and C-3PO than work for a Terminator-style robot, and this is a necessary step in that direction. However, if we did create this technology and fed into the computer the works of the great philosophers, religious texts, legal documents, and so on, it would calculate percentage truth-values for the various assertions contained therein.
Of course, it will cause the worst scandal in history when people realize that a computer is being made the arbiter of their religion, which is why such things must be handled with some sensitivity. It is also why I think Jens Niemeyer's proposal for a repository of knowledge is important to humanity's survival, and why it deserves the development and use of such technology. This goes well beyond the Dewey Decimal System (no pun intended), and could be a way to achieve a scientific level of fair representation, which is a necessary step in your plan. But will ordinary humans be willing to set cherished beliefs aside in order to realize a bright future instead of a dystopia?"
How would you deal with that issue?
Regards,
Jonathan