Hi Judy,
Thanks for your response; I think I understand your feedback better now.
I'm glad you figured out which Daniel the comment was from. I must not have been logged in!
Best,
Daniel
Hi Laurence,
Thanks for reading, and your comments! I agree that existential risk should be a top priority. I'm honestly not sure how existentially risky the next few decades are relative to later times this century or next, but I'd welcome more information about those facts.
Best,
Daniel
Hi Tommasso,
Thanks for your feedback. It does seem that many people would have been helped by more concrete examples, whether in crucial phenomena, in ideas like breadth and "size" of the future, or in assertions like the one about cosmology.
I'm glad you liked the ending :)
Best of luck to you as well!
Thanks,
Daniel
Hello Daniel,
I enjoyed your essay, and I agree with its central thesis to the point of thinking it is essential that we do deal with the existential risks that face humanity, but some of your intermediate points fall apart for me. Premise 2 on page 2 is almost too easy to disprove or discredit, and appears to be of no value, while abandoning that premise reveals a host of phenomena to be breadth-transformative - all because of context dependencies which follow from premise 1, which I think is universal.
If we took a nuclear physicist and dropped him back in ancient times, and even gave him a few samples from his laboratory to carry along, what could he do? He might manage a few parlor tricks like turning a sample of lead into gold, and create the legend of a magical 'Philosopher's stone,' but he (or she) could not convey enough knowledge to lead to an enduring understanding of radioactivity - so we would only hear tales of 'alchemical fire,' and that's about all that would remain. Paul Pilzer goes further, basing his theory of economics on the assumption that premise 2 is false, and that the value of any commodity is determined by available technology and other factors that determine its usability and the efficiency thereof. So premise 2 is disproved. Still, I think your conclusion is valid, and that we should be aiming for a Large future, if we want to have a future at all.
I agree with your conclusion that we must take seriously the need to address existential risks, and your assessment that engineered biohazards and the AI singularity are two of our most pressing looming problems, which, if left unaddressed, could certainly lead to humanity's extinction or relegate us to a future that is both Small and unpleasant. I will leave aside the first, except to say that GMO food crops could be such a problem, and that the burden should be on the creators of modified seeds to show their long-term safety - through scientific studies conducted in isolation - rather than making the whole world their lab or guinea pig and leaving us the burden of proving that there are unforeseen risks. If there are complications, a large part of our food supply has already been contaminated, and Nature will further spread the 'contagion' around, so this might be a pressing issue.
The problem of existential risk from the AI singularity is one I've given considerable thought to, and I have definite ideas about how we must act to head problems off. Specifically, we have a window of opportunity to develop machines and software capable of qualitative analysis - subjective databases and search engines - before the machines reach intelligence or self-awareness through the brute force of massively parallel processing. Such an intelligence would be formidable, but it would lack any subtlety or finesse, and would be both brutish and tyrannical. That makes for a very dismal future for humans.
I will conclude by copying some comments I made on the essay page of Leo KoGuan, as they also apply here. "I have been working for a number of years now to create a framework for qualitative or subjective search engines and databases, and I've even included some of the fruits of my research in that area in my FQXi essays, so it will be clear to all that this model follows from my prior work. Personally, I'd rather work with R2-D2 and C3PO than work for a Terminator-style robot, and this is a necessary step in that direction. However, if we did create this technology, and fed into the computer the works of the great philosophers, religious texts, legal documents, and so on, it would calculate percentage truth-values for various assertions contained therein.
Of course, it will cause the worst scandal in history when people realize that a computer is being made the arbiter of their religion. This is why such things must be handled with some sensitivity. It is also why I think the proposal of Jens Niemeyer for a repository of knowledge is important to humanity's survival, and deserves the development and use of such technology. This goes way beyond the Dewey decimal system (no pun intended - ed), and could be a way to achieve a scientific level of fair representation - which is a necessary step in your plan - but will ordinary humans be willing to set cherished beliefs aside, in order to realize a bright future instead of dystopia?"
How would you deal with that issue?
Regards,
Jonathan
Hi Jonathan,
Thanks! I'm glad you enjoyed it.
Re: your first point: I think I can clear this up. As your example points out, the extrinsic or instrumental value of things is very time sensitive; this is quite right. What I meant was that *intrinsic* value is time-insensitive. For example, if you think that suffering is of intrinsic disvalue, then it doesn't make much sense to think that that intrinsic value is more or less depending on what day, year, or millennium that suffering takes place in. That's all I meant to say by premise 2.
I'm glad we're in agreement about existential risk from AI (though I don't think "self-awareness" is relevant; it seems to me that un-"self-aware" AI could probably have all of the effects I'm worried about).
I'll have to go take a look at your essay to learn more about the issue you point out! Unfortunately, I can't promise I'll get to it before the end of the month.
Best of luck,
Daniel
Thanks Daniel,
I especially resonate with one statement in your essay: "given the knowledge of how Nature sets its phenomena, Humanity could act to maximize the value of their play." Since my essay is focused on the value of play as a learning tool, I find that idea especially appealing.
Regardless of how soon you get to my essay, I think you will find it of value to your efforts, and I hope to stay in contact to discuss the issues you raise, even after the contest has concluded.
All the Best,
Jonathan
Hi Daniel,
Thanks for the really interesting essay. I agree that the two phenomena you identified are crucial, and propose a third: research on processes and systems that lead humans to interact "productively" (free from bias and destructive conflict, while sharing information freely and making effective decisions).
Some support for this suggestion is provided in my essay on computationally intelligent personal dialogic agents. I've developed a prototype of such a system as part of a US National Science Foundation CAREER award.
I'd appreciate a rating on my essay, if you can do that, since I am a bit short on ratings. Also, I'm interested in collaborators in furthering the development of the dialogic system, if you know of anyone who might be interested. Have them contact me at my gmail address; my username is my first name, then a period, then my last name.
Thanks,
Ray Luechtefeld, PhD
Hello Daniel, May I post a short, but sincere critique of your essay? I'd ask you to return the favour. Here's my policy on that. - Mike
Hello Daniel
I am impressed by your analysis. I am impressed by much of the work of the Future of Humanity Institute. However, I wonder if your suggestions will work, and I wonder if they really lower risk. Your examples, biological engineering and AI, are good examples to illustrate my concerns too.
1) It seems difficult to stop either biological engineering or AI research. Note that the biological containment labs whose escapes you cite as being "shockingly common" were the result of one of science's few attempts to restrain dangerous experiments.
2) I agree that biological engineering and AI present existential problems, but they also might solve others. If Willard Wells and Martin Rees are right about our prospects, we may have to take some risks to lower the background level of risk. As an example, your colleague Stuart Armstrong warns persuasively about AI risk in his booklet "Smarter Than Us." However, AI is a critical part of his proposal to settle the universe, a proposal that, if workable, would give us a broad and safe future. [Stuart Armstrong & Anders Sandberg, "Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox," Acta Astronautica, Aug-Sept 2013.] AI seems to be a component of many projects that would reduce risk.
Of course, really good science and good use of that science would take care of these concerns. Perhaps I am too cynical about scientists.
My solution is an attempt to crowdsource work in the area of what I call management of positive and negative singularities. I wonder if it will really work either.
The split of phenomena into limiting and transformative is interesting. (Instead of "transformative phenomenon", which suggests, to the uninitiated reader, "phenomenon that rearranges some existing part of the world into a better part organized along different principles", we'd suggest a term like "mediating phenomenon" or "controlling phenomenon", which suggests "phenomenon that, if it exists, will translate a difference in human action into a difference in the value of the future".) We wonder if there are other fruitful ways to draw distinctions between different kinds of crucial phenomena: for example, phenomena that determine a default expectation for what will happen in the absence of coordinated action, or phenomena for which research must be initiated far in advance (as opposed to on-demand).
Although your exposition of the concept "crucial" stands on its own, an expanded version could relate it to the standard decision-theoretic concept of "value of information", which applies whenever an agent faces the problem of allocating resources between (1) gathering additional information about alternatives and (2) improving the execution of the alternatives themselves. This would help to draw connections for readers primarily familiar with economics, statistics, or psychology, as well as for readers entirely unfamiliar with decision theory or its relation to moral philosophy arguments.
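(As a toy illustration of that tradeoff, with made-up numbers and a sketch of our own rather than anything from the essay: suppose an agent can fund one of two interventions, A and B, and could instead first spend resources learning whether A works. In Python, the expected value of perfect information comes out as follows.)

    # Toy value-of-perfect-information calculation; all numbers are invented.
    p_a_works = 0.5                  # prior probability that intervention A works
    payoff = {
        "A_works": {"fund_A": 10, "fund_B": 2},  # payoffs if A turns out to work
        "A_fails": {"fund_A": 1,  "fund_B": 6},  # payoffs if A turns out to fail
    }

    # Best expected payoff if the agent must act on the prior alone
    ev = {a: p_a_works * payoff["A_works"][a] + (1 - p_a_works) * payoff["A_fails"][a]
          for a in ("fund_A", "fund_B")}
    best_without_info = max(ev.values())

    # Best expected payoff if the agent first learns whether A works, then chooses
    best_with_info = (p_a_works * max(payoff["A_works"].values())
                      + (1 - p_a_works) * max(payoff["A_fails"].values()))

    evpi = best_with_info - best_without_info
    print(best_without_info, best_with_info, evpi)  # 5.5 8.0 2.5

(With these numbers, learning the answer raises the expected payoff from 5.5 to 8.0, so the agent should be willing to give up as much as 2.5 units of value for the information before acting; relating "crucial" to cases where this quantity is large is roughly the connection we have in mind.)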
Your general conclusion seems hard to dispute: there are aspects of the world that could have major effects on long-term outcomes, but that we understand incompletely in a way that would benefit substantially from the investment of far greater resources. We agree particularly about the need for improved understanding of the problem of AI safety (in your essay, "the difficulty of designing tasks for [anticipated] superintelligent AIs that would result in non-Disastrous outcomes"). We also agree about the related problem of formally identifying what kinds of potentially unrecognized value the future stands to win or lose (touched on in your essay in the scenario of "unconsummated realization"). Our essay is a brief note about the latter problem. On the surface, it describes a potential approach to mitigating dangers from well-meaning top-down activism (such as we expected many of the entries in the contest to be). However, it tries to be general enough to apply to any entity that potentially "steers who steers", so it could fit just as well in the context of the "indirect normativity"/"ideal preference" approach to AI safety.
Steven Kaas & Steve Rayhawk
Sorry, I had meant to post that from my account, but apparently it logged me out.
Steven & Steve,
Hey! I didn't realize you were in the contest; there are so many essays that I missed yours. Thanks for commenting!
I agree with your note that "transformative" is confusing, but I'm not sure what would best replace it; I'd like to represent the possibility of huge flips and swings in the way actions are mapped to values. I'll have to think about that. "Mediating" might be best.
Thanks also for the link to value of information; that makes sense.
Your essay sounds quite interesting; I'll give it a read and go comment over there.
Best,
Daniel