Daniel,

Tricky without listing them, but I'll start. I'm pointing out that intuition is commonly wrong, as initially assumed 'effects' are invariably not the actual effects. We fail to 'think through' consequences carefully enough. So here are a few at random from the interminable list, all interrelated and all influencing others:

1. Logic. Famously, all logical systems are 'ultimately beset by paradox', except the 'mundane' truth-functional logic (TFL). Unification exposes the application of TFL's hierarchical structure to ALL of nature, and even maths ('brackets'). See my 2012 essay. The value of Gödel's n-value 'fuzzy' logic emerges. Logical QM = a logical nature and universe. Man's whole view and comprehension of life is affected.

2. Epidemiology. At the most fundamental level we need to anticipate how nature and viruses will behave in advance, and stop just playing 'catch up'. The recursive quantum gauge structure of matter inherent in the unification model is a tool to allow that, translating much of the 'noise' in the Shannon channel (see last year's essay). For instance, in setting empirical parameters (e.g. sexual transmittance) and modelling the fractal 'chance' distributions, we could have anticipated and understood AIDS long in advance of its arrival.

3. Genetics. Judy has recognised the need for far better understanding of the dangerous areas we're getting into, and that removing the 'impasse' of the division of nature into two inconsistent halves will bring it all coherently together - i.e. classical mechanistic descriptions are essential to our ability to properly understand biology.

4. Space Travel. Astrophysics is riddled with anomalies, not least the 'superluminal' quasar jets found at up to 46c. The unification mechanism, re-scattering at c in the local rest frame, resolves the problem within the SR postulates. We CAN then film bullets fired within a passing train as travelling faster than muzzle velocity in the camera's frame, without the present paradox. Suddenly 'inhospitable' space is our back yard.

5. Energy. The same unified mechanism points to exactly how nuclear tokamaks and AGNs really work, combining much known but unlinked physics: accelerating and re-ionizing matter in the helical toroidal magnetic field into a plasma 'z-pinch' venturi, ejected by the Lorentz force perpendicular to the rotation. Fusion is shown as potentially very dangerous AND as potentially providing almost limitless energy if harnessed correctly - a real 'kill or cure'! (A paper also deriving a cyclic galaxy evolution sequence has now passed review, been accepted and is in print - but not in a major journal.)

That's about 3%, but the list seems endless. All sciences are informed. We use the term 'physics', but the unification is of 'nature', which always WAS unified; it's only our understanding that's been poor. Pick a topic, and if we've explored it I'll explain the relevance.

So yes. Having studied consequences, I suggest there is NOTHING of more prime importance than escaping the current fundamental theoretical rut in our understanding of nature. The problem is that it's 'unfamiliar', and unfamiliar always means 'wrong' to those who judge against prior beliefs rather than evidence and consequences (shockingly many, it seems). The problem may be that we've forgotten what 'big steps' are.

I see Eckard suggests this is about 'my importance', but nothing could be further from the truth. Anyone can take what they will and have all the credit, so I can get back to sailing! Not only is it a joint venture, not just me, but we agree truth belongs to nature, not to any man.

Best wishes

Peter

Daniel,

You mistook my essay, perhaps without even reading it carefully. Since you seem to speak for a "Future of Humanity Institute" in Oxford, and since the wording of your essay did not meet my quality standards while I still consider Oxford's colleges renowned, I tried to learn a bit about Beckstead and Bostrom, who seem to be rather young fellows. I also searched for the term "large future", which is still strange to me, with the result that Yahoo only returned links to "Big future", with one exception: "Large Future - Image Results", a glittering perspective that makes it understandable to me why you mistook my essay.

As an engineer, I see 'large' and 'big' as reasonable only in connection with something a size can refer to. Worse, I see the future as something to which one cannot even ascribe a size. When I was forced to rate your essay, this logical flaw caused me to rate it a one, although your command of English is definitely better than mine. Maybe I mistook you. Please correct me if you can.

Eckard

Dear Daniel,

Thanks for reading and for your questions on my thread.

You note that humans have free will and can pursue common goals without economic incentives. That is surely true, and is a counterargument against a too narrow interpretation of my approach.

I suggest in the essay that there is still "motion" in the case of equality, but the movement resembles "diffusion" more than directed activity. I do think that this aspect of reality (the existence of gradients) intrudes even into human affairs. Very little seems to get accomplished without resources being applied, even though many of us can agree to pursue a common goal.

I do hope to continue work on the idea. The issue of Science that arrived in today's mail has its cover dedicated to "The Science of Inequality". The special section is quite lengthy and I haven't read it yet, but it seems to indicate that these ideas are worth developing.

Thanks again for your response, and congratulations on your current very high ranking.

Best regards,

Edwin Eugene Klingman

Daniel,

I suppose I find your approach rather two-dimensional, like a slice through a pyramid. Yes, you've picked out the odd current 'hot topic', but seemingly as much from familiarity as from any fundamental analysis of consequential effects on other areas.

I see subjects as all connected but entirely 'layered' in a hierarchy. At the head of the pyramid are the fundamentals, which inform everything and so should have far higher priority. In the middle layers the subjects are largely insulated from each other. We use disconnected science, as a few authors here also point out, so there's too little cross-pollination.

I'd have preferred to see you identify a methodology for assessing where the most valuable long-term returns lie. As Peter says, these are not always immediately apparent. Peter correctly identifies the peak of the pyramid, connecting to everything, but you seem to treat the whole structure as 'flat' and cellular. Surely that's no improvement on what we do now.

This is all in a way connected to my proposal that we need greatly improved thinking methods, going to a deeper level in assessing consequences. I feel we have great unrealised potential in our own brains, and focussing too much on AI is likely to distract and may even be dangerous.

I've tracked you down from the anonymous 'Daniel' post on my blog. Thanks for your comment, but such research is presently impractical due to the paucity of the required data.

Judy

Hi Edwin,

First, my apologies for mixing up your first and second names!

Second, thanks for your response. I hope your continuing work goes well; if physical laws were found to be very predictive of societies in certain circumstances, that would be very useful.

Best,

Daniel

Hi Judy,

Thanks for your response; I think I understand your feedback better now.

I'm glad you figured out which Daniel the comment was from. I must not have been logged in!

Best,

Daniel

Hi Laurence,

Thanks for reading, and your comments! I agree that existential risk should be a top priority. I'm honestly not sure how existentially risky the next few decades are relative to later times this century or next, but I'd welcome more information about those facts.

Best,

Daniel

Hi Tommaso,

Thanks for your feedback. It does seem that many people would have been helped by more concrete examples, whether in crucial phenomena, in ideas like breadth and "size" of the future, or in assertions like the one about cosmology.

I'm glad you liked the ending :)

Best of luck to you as well!

Thanks,

Daniel

Hello Daniel,

I enjoyed your essay, and I agree with its central thesis to the point of thinking it is essential that we do deal with the existential risks that face humanity, but some of your intermediate points fall apart for me. Premise 2 on page 2 is almost too easy to disprove or discredit, and appears to be of no value, while abandoning that premise reveals a host of phenomena to be breadth-transformative - all because of context dependencies which follow from premise 1, which I think is universal.

If we took a nuclear physicist and dropped him back in ancient times, and even gave him a few samples from his laboratory to carry along, what could he do? He might manage a few parlor tricks like turning a sample of lead into gold, and create the legend of a magical 'Philosopher's stone,' but he (or she) could not manage to convey enough knowledge to lead to an enduring understanding of radioactivity - so we would only hear tales of 'alchemical fire', and that's about all that would remain. Paul Pilzer goes further, basing his theory of economics on the assumption that premise 2 is false, and that the value of any commodity is determined by available technology and other factors that determine its usability and the efficiency thereof. So premise 2 is disproved. Still, I think your conclusion is valid, and that we should be aiming for a Large future, if we want to have a future at all.

I agree with your conclusion that we must take seriously the need to address existential risks, and with your assessment that engineered biohazards and the AI singularity are two of our most pressing looming problems, which, if left unaddressed, certainly could lead to humanity's extinction, or relegate us to a future that is both Small and unpleasant. I will leave aside the first, except to say that GMO food crops could be such a problem, and that the burden should be on the creators of modified seeds to show their long-term safety - through scientific studies conducted in isolation - rather than making the whole world their lab or guinea pig and leaving the burden of proof (that there are unforeseen risks) to us. If there are complications, a large part of our food supply has already been contaminated, and Nature will further spread the 'contagion' around, so this might be a pressing issue.

The problem of existential risk from the AI singularity is one I've given considerable thought to, and I have definite ideas about how we must act to head problems off. Specifically, we have a window of opportunity to develop machines and software capable of qualitative analysis - subjective databases and search engines - before the machines reach intelligence or self-awareness due to the brute force of massively parallel processing. Such an intelligence would be formidable, but it would lack any subtlety or finesse, and would be both brutish and tyrannical. That would make for a very dismal future for humans.

I will conclude by copying some comments I made on the essay page of Leo KoGuan, as they also apply here. "I have been working for a number of years now to create a framework for qualitative or subjective search engines and databases, and I've even included some of the fruits of my research in that area in my FQXi essays, so it will be clear to all that this model follows from my prior work. Personally, I'd rather work with R2-D2 and C3PO than work for a Terminator-style robot, and this is a necessary step in that direction. However, if we did create this technology and fed into the computer the works of the great philosophers, religious texts, legal documents, and so on, it would calculate percentage truth-values for various assertions contained therein.

Of course, it will cause the worst scandal in history when people realize that a computer is being made the arbiter of their religion. This is why such things must be handled with some sensitivity. It is also why I think the proposal of Jens Niemeyer for a repository of knowledge is important to humanity's survival, and deserves the development and use of such technology. This goes way beyond the Dewey decimal system (no pun intended - ed), and could be a way to achieve a scientific level of fair representation - which is a necessary step in your plan - but will ordinary humans be willing to set cherished beliefs aside, in order to realize a bright future instead of dystopia?"

How would you deal with that issue?

Regards,

Jonathan

    Hi Jonathan,

    Thanks! I'm glad you enjoyed it.

    Re: your first point: I think I can clear this up. As your example points out, the extrinsic or instrumental value of things is very time-sensitive; this is quite right. What I meant was that *intrinsic* value is time-insensitive. For example, if you think that suffering is of intrinsic disvalue, then it doesn't make much sense to think that that intrinsic disvalue is greater or lesser depending on what day, year, or millennium the suffering takes place in. That's all I meant to say by premise 2.

    I'm glad we're in agreement about existential risk from AI (though I don't think "self-awareness" is relevant; it seems to me that un-"self-aware" AI could probably have all of the effects I'm worried about).

    I'll have to go take a look at your essay to learn more about the issue you point out! Unfortunately, I can't promise I'll get to it before the end of the month.

    Best of luck,

    Daniel

    Thanks Daniel,

    I especially resonate with one statement in your essay: "given the knowledge of how Nature sets its phenomena, Humanity could act to maximize the value of their play." Since my essay is focused on the value of play as a learning tool, I find that idea especially appealing.

    Regardless of how soon you get to my essay, I think you will find it of value to your efforts, and I hope to stay in contact to discuss the issues you raise, even after the contest has concluded.

    All the Best,

    Jonathan

    Hi Daniel,

    Thanks for the really interesting essay. I agree that the two phenomena you identified are crucial, and I propose a third: research on the processes and systems that lead humans to interact "productively" (free from bias and destructive conflict while sharing information freely and making effective decisions).

    Some support for this suggestion is provided in my essay on computationally intelligent personal dialogic agents. I've developed a prototype of such a system as part of a US National Science Foundation CAREER award.

    I'd appreciate a rating on my essay, if you can do that, since I am a bit short on ratings. Also, I'm interested in collaborators in furthering the development of the dialogic system, if you know of anyone who might be interested. Have them contact me at my gmail address; my username is my first name, then a period, then my last name.

    Thanks,

    Ray Luechtefeld, PhD

    Hello Daniel, may I post a short but sincere critique of your essay? I'd ask you to return the favour. Here's my policy on that. - Mike

    Hello Daniel

    I am impressed by your analysis. I am impressed by much of the work of the Future of Humanity Institute. However, I wonder if your suggestions will work, and I wonder if they really lower risk. Your examples, biological engineering and AI, are good examples to illustrate my concerns too. 1) It seems difficult to stop either biological engineering or AI research. Note that the biological containment labs whose escapes you cite as being "shockingly common" were the result of one of science's few attempts to restrain dangerous experiments. 2) I agree that biological engineering and AI present existential problems, but they also might solve others. If Willard Wells and Martin Rees are right about our prospects, we may have to take some risks to lower the background level of risk. As an example, your colleague Stuart Armstrong warns persuasively about AI risk in his booklet "Smarter than Us." However, AI is a critical part of his proposal to settle the universe, a proposal that, if workable, would give us a broad and safe future. [Stuart Armstrong & Anders Sandberg, "Eternity in six hours: Intergalactic spreading of intelligent life and sharpening the Fermi paradox," Acta Astronautica, Aug-Sept 2013.] AI seems to be a component of many projects that would reduce risk.

    Of course, really good science and good use of that science would take care of these concerns. Perhaps I am too cynical about scientists.

    My solution is an attempt to crowdsource work in the area of what I call management of positive and negative singularities. I wonder if it will really work either.

    The split of phenomena into limiting and transformative is interesting. (Instead of "transformative phenomenon", which suggests, to the uninitiated reader, "phenomenon that rearranges some existing part of the world into a better part organized along different principles", we'd suggest a term like "mediating phenomenon" or "controlling phenomenon", which suggests "phenomenon that, if it exists, will translate a difference in human action into a difference in the value of the future".) We wonder if there are other fruitful ways to draw distinctions between different kinds of crucial phenomena: for example, phenomena that determine a default expectation for what will happen in the absence of coordinated action, or phenomena for which research must be initiated far in advance (as opposed to on-demand).

    Although your exposition of the concept "crucial" stands on its own, an expanded version could relate it to the standard decision-theoretical concept of "value of information", which applies whenever an agent faces the problem of allocating resources between (1) gathering additional information about alternatives and (2) improving the execution of the alternatives themselves. This would help to draw connections for readers primarily familiar with economics, statistics, or psychology, as well as for readers entirely unfamiliar with decision theory or its relation to moral philosophy arguments.
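    (As a minimal worked sketch of the value-of-information idea referred to above: the following toy Python calculation uses invented probabilities, payoffs, and variable names, chosen purely for illustration and not taken from the essay or from this comment.)

        # Toy expected-value-of-perfect-information (EVPI) calculation with made-up numbers.
        # An agent chooses between a risky action and a safe action, and could instead
        # spend resources to learn the true state of the world before acting.

        p_good = 0.3              # prior probability that the risky action pays off
        payoff_risky_good = 10.0  # value of the risky action if the state is favourable
        payoff_risky_bad = 0.0    # value of the risky action otherwise
        payoff_safe = 4.0         # value of the safe action in either state

        # Acting now on the prior alone: take the action that is best in expectation.
        ev_without_info = max(
            p_good * payoff_risky_good + (1 - p_good) * payoff_risky_bad,
            payoff_safe,
        )

        # With perfect information: learn the state first, then pick the best action in each state.
        ev_with_info = (p_good * max(payoff_risky_good, payoff_safe)
                        + (1 - p_good) * max(payoff_risky_bad, payoff_safe))

        # EVPI bounds what it is worth spending on further information-gathering
        # versus spending the same resources on executing the default alternative better.
        evpi = ev_with_info - ev_without_info
        print(f"EV without info: {ev_without_info}, EV with info: {ev_with_info}, EVPI: {evpi}")

    With these illustrative numbers the agent would act safely on the prior (expected value 4.0), but perfect information raises the expected value to 5.8, so information-gathering is worth up to 1.8 units of value here.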

    Your general conclusion seems hard to dispute: there are aspects of the world that could have major effects on long-term outcomes, but that we understand incompletely in a way that would benefit substantially from the investment of far greater resources. We agree particularly about the need for improved understanding of the problem of AI safety (in your essay, "the difficulty of designing tasks for [anticipated] superintelligent AIs that would result in non-Disastrous outcomes"). We also agree about the related problem of formally identifying what kinds of potentially unrecognized value the future stands to win or lose (touched on in your essay in the scenario of "unconsummated realization"). Our essay is a brief note about the latter problem. On the surface, it describes a potential approach to mitigating dangers from well-meaning top-down activism (such as we expected many of the entries in the contest to be). However, it tries to be general enough to apply to any entity that potentially "steers who steers", so it could fit just as well in the context of the "indirect normativity"/"ideal preference" approach to AI safety.

    Steven Kaas & Steve Rayhawk

      Sorry, I had meant to post that from my account, but apparently it logged me out.

      Steven & Steve,

      Hey! I didn't realize you were in the contest; there are so many essays that I missed yours. Thanks for commenting!

      I agree with your note that "transformative" is confusing, but I'm not sure what would best replace it; I'd like to represent the possibility of huge flips and swings in the way actions are mapped to values. I'll have to think about that. "Mediating" might be best.

      Thanks also for the link to value of information; that makes sense.

      Your essay sounds quite interesting; I'll give it a read and go comment over there.

      Best,

      Daniel