Hi Judith,

It's clear that you found something unsatisfactory about the essay, but I am having trouble understanding exactly what it is. I'd appreciate your help in figuring it out.

I have tried to do three things with my essay. In order from most abstract to most concrete: first, I point out that Bostrom and Beckstead's views imply that we should steer the future primarily by trying to achieve Large futures and avoid Small ones; second, I argue that this gives societies and their governments a reason to support and invest in scientific research on crucial phenomena; and third, I suggest that extinction risks from biological engineering and AI are concrete crucial phenomena that merit such investment. This does seem to me to "identify key areas" and "give a direction to steer". What element seemed missing, to you? If you wanted proposed solutions, I'm afraid I don't have good ones; it seems to me that we know little enough about the problems that more study is needed before solutions can be found. I have more ideas about what topics can be studied in AI risk here, in case you're interested.

You say "I needed the commitment of a short list of suggested focuses at the end". I had intended the bio and AI risks to be that short list of suggested focuses. I can't really see the list getting shorter; did you want my recommendations to be more specific?

Thanks for pointing me back at Peter's question; I see he's posted again, and I'll be going back and trying to explain myself more clearly to him; hopefully you'll find my reply to him useful.

Thanks for commenting, and I'll go check your essay out; your topic sounds very interesting!

Best,

Daniel

Thanks for the comment, Lawrence! I'll check out your essay.

Best,

Daniel

Hi Tom,

Thanks! I agree, working with folks like Bostrom and Beckstead has given me a healthy respect for using theory to guide us towards really high-stakes issues.

Thanks for the link to your essay; I'm quite interested in science policy and governance. I look forward to reading it.

Best of luck,

Daniel

Hi Eckard,

Hamming window, nice :)

As to your questions: by crucial phenomenon, I mean an empirical regularity or relationship that holds between sets of real-world conditions, that is especially important in determining how our choices affect the size of humanity's long-term future. Crucial phenomena could be properties of natural systems, like cells or black holes, or they could be properties of man-made systems, like the LHC or computers with particular programs on them. Does that help?

I do think that biological "instability" or "robustness" might not be the ideal phrases, and I'll be on the lookout for better ones.

A friend of mine who proofread the essay also found the closing quote questionable, so you're not alone there :) "Humility" means "having a modest or low view of one's own importance", that is, thinking that we can't do much of significance. The quote isn't from anything; I put it in quotation marks to figuratively indicate that when we encounter the conflict between our modesty and our duty to humanity's future, humanity ought to "respond" by denying its humility and embracing its duty. Thanks for the feedback; that part will need some work in a future draft.

I look forward to reading your essay!

Best,

Daniel

Hi Peter,

If I understand right, you're saying that your proposed unification of quantum mechanics and relativity will also advance understanding in ecology? That's pretty unintuitive to me. Would you like to explain more?

In response to your previous post: "Most eminent physicists seem to agree unification is the key. Do you not?" I assume that they think it's key to the mission of physics, that is, to a mathematical understanding of the fundamental laws that govern the universe. I was just asking whether you had a concrete idea of how that affects humanity's future, and how you'd rank it against other kinds of science we could do if our priority was to steer humanity's future. For example, given the choice between accelerating progress in theoretical physics and accelerating progress in epidemiology, I would choose epidemiology, on the grounds that pandemics are becoming an increasingly large risk, whereas theoretical physics seems to have little to no urgency. Given the choice, how would you prioritize theoretical physics like the kind you propose relative to the other investments available?

Side note: I would never advocate cutting off one field of inquiry entirely in favour of another (except in the most dire of emergencies), so I hope I'm not coming across as disliking physics. I love physics, and it's a very deep, beautiful, and significant field; however, that doesn't mean that I think it's particularly relevant to how humanity should steer the future.

"I thought your essay showed you understood the importance of identifying and focussing on the right things. Was I wrong?" Well, I hope not, but I have been known to make mistakes! ;)

Best,

Daniel

I've replied to your later post, just to keep things tidy.

Best,

Daniel

Hi Daniel,

Thank you for your comments on my essay. I appreciate also this opportunity to read and think about what you have to say. I think you have succeeded in identifying crucial phenomena, and your approach seems sensible and insightful. I also like the fact that you connect your proposals to significant recent work. Where my outlook might differ from yours is that I would judge the next few decades to be a time of serious existential risk (in Bostrom's sense). It might be hard enough just to avoid the dangers, so maybe we can't be guided by much more than Bostrom's maxipok. Aim for Large might be too ambitious for the rest of this century. In other words, if disaster is avoided, then there will be time to work on maximizing the probability of a large future. At present, though, steering past the dangers will take the resources available. However that may be, your long-range vision can motivate people to face the tasks immediately before us.

I have looked at your Web site. I intend to keep in touch with your future research and writing.

Laurence Hitterdale

    Dear Daniel,

    I read your essay with interest, and I agree with the essence of your proposal. In particular, I fully subscribe to the idea that producing and disseminating technical, publicly understandable knowledge of critical phenomena is . . . critical.

    I only have a minor remark about your style of presentation.

    In Section 2 the exposition is kept to a high level of abstraction, with advantages in terms of elegance and conciseness; but I feel it would have benefitted from some concrete, coloured examples of the future scenarios that you classify, just to please the reader's imagination, and perhaps to better match the nature of this contest. (It is clear from your bio that you work full time on these topics: you probably take for granted a number of ideas that normal readers would need to see expanded, and you prefer to focus on the overall logical architecture of your exposition.)

    In presenting the four categories of crucial phenomena in Section 3 you do provide some more examples - but again I had the impression that the exposition is a bit more concerned with the logical symmetry and elegance of the classification than with the effectiveness (and appropriateness) of the examples.

    The notion of 'breadth' of humanity's future, measured by the controlled resources, also sounds a bit vague (to this reader).

    In some cases it may be hard to quantify the duration of humanity, i.e. to decide where to place the mark of termination, in light of the hybrid solutions (cyborgs etc.) mentioned in several other essays in this contest.

    Talking about abstraction, this is another passage which I would have loved to see expanded: 'humanity's future size could be dramatically impacted by the cosmological rate of expansion, which determines how much matter is ultimately reachable by humans.' This sounds abstractly reasonable, but, to some readers, it may be hard to attach a definite meaning to the concept.

    Ah! The ending of your text (on collective humility) is cute!

    Good luck for your important work, and best regards.

    Tommaso

      Hi Daniel,

      Thank you for the explanations. Peter Jackson has perhaps anything but a modest view of his own importance. I don't deny it; I have to humbly admit that I am not in a position to understand and embrace what he claims.

      You wrote: "phenomena could be properties of natural systems". A phenomenon is something that is observed to happen or exist. The properties of a substance or an object are the ways in which it behaves in particular conditions.

      Still trying to understand what you meant by "the size of humanity's future", I think you meant the desirable property of the future being a bright, that is, great one. I know that "great" means nearly the same as "big" only in German. A big woman is a fat one.

      I hope my current essay does not contain too many such embarrassing mistakes. Please don't hesitate to ask me if something seems strange.

      Your topic, susceptibility to misuse, is the same one that motivated Alfred Nobel.

      Best,

      Eckard

      Daniel,

      Tricky without listing them, but I'll start. I'm pointing out that intuition is commonly wrong, as initially assumed 'effects' are invariably not the actual effects. We fail to 'think through' consequences carefully enough. So here are a few at random from the interminable list, all interrelated and all influencing others:

      1. Logic. Famously, all logical systems are 'ultimately beset by paradox', except the 'mundane' truth function logic (TFL). Unification exposes the application of TFL's hierarchical structure to ALL nature, and even maths ('brackets'). See my 2012 essay. The value of Gödel's n-value 'fuzzy' logic emerges. Logical QM = logical nature and the universe. Man's whole view and comprehension of life is affected.

      2. Epidemiology. At the most fundamental level we need to anticipate how nature and viruses will behave in advance and stop just playing 'catch up'. The recursive quantum gauge structure to matter inherent in the unification model is a tool to allow that, translating much of the 'noise' in the Shannon channel (see last year's essay). For instance, in setting any empirical parameters (i.e. sexual transmittance, etc.) and modelling the fractal 'chance' distributions, we could have anticipated and understood AIDS long in advance of its arrival.

      3. Genetics. Judy has recognised the need for far better understanding of the dangerous areas we're getting into, and that removing the 'impasse' of the division of nature into two inconsistent halves will bring it all coherently together; i.e. classical mechanistic descriptions are essential to our ability to properly understand biology.

      4. Space Travel. Astrophysics is riddled with anomalies, not least the "superluminal" quasar jets found at up to 46c. The unification mechanism, re-scattering at c in the local rest frame, resolves the problem within the SR postulates. We CAN then film bullets fired within a passing train travelling at greater than muzzle velocity in the camera without the present paradox. Suddenly 'inhospitable' space is our back yard.

      5. Energy. The same unified mechanism points to exactly how nuclear tokamaks and AGNs really work, combining much known but unlinked physics: accelerating and re-ionizing matter in the helical toroidal magnetic field to a plasma 'z-pinch' venturi, ejected by the Lorentz force perpendicular to the rotation. Fusion is shown as potentially very dangerous AND potentially providing almost limitless energy if harnessed correctly - a real 'kill or cure!'. (A paper also deriving a cyclic galaxy evolution sequence has now passed review, been accepted, and is in print - but not in a major journal.)

      That's about 3%, but the list seems endless. All sciences are informed. We use the term 'physics', but the unification is of 'nature', which always WAS unified; it's only our understanding that's been poor. Pick a topic and, if we've explored it, I'll explain the relevance.

      So yes. Having studied consequences, I suggest there is NOTHING of more prime importance than escaping the current fundamental theoretical rut in our understanding of nature. The problem is that it's 'unfamiliar', and unfamiliar always means 'wrong' to those who judge against prior beliefs rather than evidence and consequences (shockingly many, it seems). The problem may be that we've forgotten what 'big steps' are.

      I see Eckard suggests this is about 'my importance', but nothing could be further from the truth. Anyone can take what they will and have all the credit so I can get back to sailing! Not only is it a joint venture, not just me, but we agree truth belongs to nature, not to any man.

      Best wishes

      Peter

      Daniel,

      You mistook my essay, perhaps without even reading it carefully. Since you seem to speak for a "Future of Humanity Institute" in Oxford, and the wording of your essay did not meet my quality standards while I still consider Oxford's colleges renowned, I tried to learn a bit about Beckstead and Bostrom, who seem to be rather young fellows. I also searched for the term "large future", which is still strange to me; Yahoo only returned links to "Big future", with one exception: "Large Future - Image Results", a glittering perspective that makes it understandable to me why you mistook my essay.

      As an engineer, I see "large" and "big" as reasonable only in connection with something a size refers to. Worse, I see the future as something to which one cannot even ascribe a size. Since I was forced to rate your essay, this logical flaw caused me to rate it a one, although your command of English is definitely better than mine. Maybe I mistook you. Please correct me if you can.

      Eckard

      Dear Daniel,

      Thanks for reading and for your questions on my thread.

      You note that humans have free will and can pursue common goals without economic incentives. That is surely true, and is a counterargument against a too narrow interpretation of my approach.

      I suggest in the essay that there is still "motion" in the case of equality, but the movement resembles "diffusion" more than directed activity. I do think that this aspect of reality (the existence of gradients) intrudes even into human affairs. Very little seems to get accomplished without resources being applied, even though many of us can agree to pursue a common goal.

      I do hope to continue work on the idea. The Science magazine I received in today's mail has a front cover dedicated to "the Science of Inequality". The special section is quite lengthy and I haven't read it yet, but it seems to indicate that these ideas are worth developing.

      Thanks again for your response, and congratulations on your current very high ranking.

      Best regards,

      Edwin Eugene Klingman

      Daniel,

      I suppose I find your approach rather two-dimensional, like a slice through a pyramid. Yes, you've picked out the odd current 'hot topic', but seemingly as much from familiarity as from any fundamental analysis of consequential effects on other areas.

      I see subjects as all connected but entirely 'layered' in a hierarchy. At the head of the pyramid are the fundamentals, which inform everything, so they should have far higher priority. In the middle layers the subjects are largely insulated from each other. We use disconnected science - as a few authors here also point out - so there's too little cross-pollination.

      I'd have preferred to see you identify a methodology for assessing where the most valuable long-term returns apply. As Peter says, these are not always immediately apparent. Peter correctly identifies the peak of the pyramid, connecting to everything, but you seem to treat the whole structure as 'flat' and cellular. Surely that's no improvement on what we do now.

      This is all in a way connected to my proposals that we need greatly improved thinking methods, going to a deeper level in assessing consequences. I feel we have great unrealised potential in our own brains, and focussing too much on AI is likely to distract and may even be dangerous.

      I've tracked you down from the anonymous 'Daniel' post on my blog. Thanks for your comment, but such research is presently impractical due to the paucity of required data.

      Judy

      Hi Edwin,

      First, my apologies for mixing up your first and second names!

      Second, thanks for your response. I hope your continuing work goes well; if physical laws were found to be very predictive of societies in certain circumstances, that would be very useful.

      Best,

      Daniel

      Hi Judy,

      Thanks for your response; I think I understand your feedback better now.

      I'm glad you figured out which Daniel the comment was from. I must not have been logged in!

      Best,

      Daniel

      Hi Laurence,

      Thanks for reading, and your comments! I agree that existential risk should be a top priority. I'm honestly not sure how existentially risky the next few decades are relative to later times this century or next, but I'd welcome more information about those facts.

      Best,

      Daniel

      Hi Tommaso,

      Thanks for your feedback. It does seem that many people would have been helped by more concrete examples, whether in crucial phenomena, in ideas like breadth and "size" of the future, or in assertions like the one about cosmology.

      I'm glad you liked the ending :)

      Best of luck to you as well!

      Thanks,

      Daniel

      Hello Daniel,

      I enjoyed your essay, and I agree with its central thesis to the point of thinking it is essential that we do deal with the existential risks that face humanity, but some of your intermediate points fall apart for me. Premise 2 on page 2 is almost too easy to disprove or discredit, and appears to be of no value, while abandoning that premise reveals a host of phenomena to be breadth-transformative - all because of context dependencies which follow from premise 1, which I think is universal.

      If we took a nuclear physicist and dropped him back in ancient times, and even gave him a few samples from his laboratory to carry along, what could he do? He might manage a few parlor tricks like turning a sample of lead into gold, creating the legend of a magical 'Philosopher's Stone,' but he (or she) could not convey enough knowledge to lead to an enduring understanding of radioactivity - so we would only hear tales of 'alchemical fire', and that's about all that would remain. Paul Pilzer goes further, basing his theory of economics on the assumption that premise 2 is false, and that the value of any commodity is determined by available technology and other factors that determine its usability and the efficiency thereof. So premise 2 is disproved. Still, I think your conclusion is valid, and that we should be aiming for a Large future, if we want to have a future at all.

      I agree with your conclusion that we must take seriously the need to address existential risks, and with your assessment that engineered biohazards and the AI singularity are two of our most pressing looming problems; if left unaddressed, they certainly could lead to humanity's extinction, or relegate us to a future that is both Small and unpleasant. I will leave aside the first, except to say that GMO food crops could be such a problem, and that the burden should be on the creators of modified seeds to show their safety long-term - through scientific studies conducted in isolation - rather than making the whole world their lab or guinea pig and leaving the burden of proving that there are unforeseen risks to us. If there are complications, a large part of our food supply has already been contaminated, and Nature will further spread the 'contagion' around, so this might be a pressing issue.

      The problem of existential risk from the AI singularity is one I've given considerable thought to, and I have definite ideas about how we must act to head problems off. Specifically, we have a window of opportunity to develop machines and software capable of qualitative analysis - subjective databases and search engines - before machines reach intelligence or self-awareness through the brute force of massively parallel processing. Such an intelligence would be formidable, but it would lack any subtlety or finesse, and would be both brutish and tyrannical. That makes for a very dismal future for humans.

      I will conclude by copying some comments I made on the essay page of Leo KoGuan, as they also apply here. "I have been working for a number of years now to create a framework for qualitative or subjective search engines and databases, and I've even included some of the fruits of my research in that area in my FQXi essays, so it will be clear to all that this model follows from my prior work. Personally; I'd rather work with R2-D2 and C3PO than work for a Terminator style robot, and this is a necessary step in that direction. However; if we did create this technology, and fed into the computer works of the great philosophers, religious texts, legal documents, and so on; it would calculate percentage truth-values for various assertions contained therein.

      Of course; it will cause the worst scandal in history when people realize that a computer is being made the arbiter of their religion. This is why such things must be handled with some sensitivity. It is also why I think the proposal of Jens Niemeyer for a repository of knowledge is important to humanity's survival, and deserves the development and use of such technology. This goes way beyond the Dewey decimal system (no pun intended - ed), and could be a way to achieve a scientific level of fair representation - which is a necessary step in your plan - but will ordinary humans be willing to set cherished beliefs aside, in order to realize a bright future instead of dystopia?"

      How would you deal with that issue?

      Regards,

      Jonathan

        Hi Jonathan,

        Thanks! I'm glad you enjoyed it.

        Re: your first point: I think I can clear this up. As your example points out, the extrinsic or instrumental value of things is very time sensitive; this is quite right. What I meant was that *intrinsic* value is time-insensitive. For example, if you think that suffering is of intrinsic disvalue, then it doesn't make much sense to think that that intrinsic value is more or less depending on what day, year, or millennium that suffering takes place in. That's all I meant to say by premise 2.

        I'm glad we're in agreement about existential risk from AI (though I don't think "self-awareness" is relevant; it seems to me that un-"self-aware" AI could probably have all of the effects I'm worried about).

        I'll have to go take a look at your essay to learn more about the issue you point out! Unfortunately, I can't promise I'll get to it before the end of the month.

        Best of luck,

        Daniel

        Thanks Daniel,

        I especially resonate with one statement in your essay "given the knowledge of how Nature sets its phenomena, Humanity could act to maximize the value of their play." Since my essay is focused on the value of play as a learning tool, I find that idea especially appealing.

        Regardless of how soon you get to my essay, I think you will find it of value to your efforts, and I hope to stay in contact to discuss the issues you raise, even after the contest has concluded.

        All the Best,

        Jonathan