Benjamin,

I have no idea how this excellent essay of yours could have remained at the "bottom of the pile". You voiced many of the thoughts I have been wrestling with for some time, above all how our future relates to our past. Long-lasting institutions and commitments made through "ancient" instruments such as laws are among the best ways for us to chart a course through an uncertain future.

I would love for you to share your thoughts on and evaluation of my own essay, "The Cartography of the Future".

http://fqxi.org/community/forum/topic/2063

Best of luck!

Rick Searle

Hello Benjamin, May I offer a short, but sincere critique of your essay? I would ask you to return the favour. Here's my policy on that. - Mike

Benjamin,

Thank you for your essay. I teach at a few small colleges; most of the places I have taught (and the school I graduated from) are open admission, the other end of the spectrum from Oxford. The mass of humanity moves mostly to where it can find work, sometimes for freedom of religion or freedom from some law or custom. I would like to think that the colonization of space would be based on high principles from thinkers at Oxford. Sadly, if we can use the past as a guide, the colonization of space will be done by people like my students, who need a job and cannot find one here.

No one knows what extraterrestrial life is like. If the closest star to an intelligent life form has a planet they can "farm" (to put it in human terms), that might take all their focus and time. How long did it take us to explore the Earth? How long will it take us to explore a second Earth? There could be many planets with intelligent life in our galaxy, but with the large volume of space between us and them, it is little surprise we have not found each other.

I have seen a few essays worry about robots or AI taking over, which I feel will not happen because a robot has no "needs". The idea of "post-human" augmented humans taking over is possible, though. A car augments humans more than anything else in history; we might already call the present "post-human".

Hope your essay does well,

Jeff Schmitz

Your paper is interesting, and you draw out some interesting points. I weigh in on some related concerns with my paper http://www.fqxi.org/community/forum/topic/2010 with respect to the limits any intelligent life form might face in the universe.

I am very guarded on the idea of our species moving out into space, and I question whether any of those scenarios can happen. Manned space travel and the related project of colonizing space appear as far removed from practical reality as they ever were. The singularity is something I have been aware of, but I have taken no particular stance on it. It seems to me it could imply some type of complete breakdown of any social construction. The exponential rise in technological complexity may not just be about the cybernetic world; it will most likely involve biotechnology as well. Before long we may be less concerned with computer viruses and more with designer viruses of a biological kind. I suspect it will not be long before anyone with a bit of money and training can design life as they choose, which could include designing new diseases.

The singularity seems to imply that our power over nature and the removal of social constraints will lead to a sort of anarchy. I have never been terribly impressed by the idea of anarchy. Still, the fusion of brains and minds into a global network by future cyber-neural technology, and the other sorts of power that will be available, does suggest that almost all socio-economic structures that currently exist will dissolve away in this sort of irresistible "acid."

We humans have been very good at exploiting our environment. Our ability to figure out problems, learn, and communicate this information has permitted us to exploit our world in new and more complete ways. As a result we have increasingly taken ourselves off the fitness landscape. It probably began when Homo erectus took themselves off the menu by throwing rocks at leopards and using fire at night to keep them away. This has led to the current age, in which there are over 7 billion humans and we exploit our world in ways no other animal ever has, drawing on petroleum, uranium, metal ores, and more. With a population of 7 billion and a total mass of around 400 million tons, no animal of comparable size and dietary requirements in the natural history of this planet has come even close.

In the environmental debate it is interesting to ponder the idea that, from a certain perspective, the conservatives are right. The continual expansion of human power, our increased use of resources, and the wasteful damage done to the environment have been the human program almost from the start of our species. They are right in the sense that we have always managed to press on this way. For most of our natural and recorded history the exploitation and demolition of the world has been very slow and comparatively small. Now, of course, the problem is that because this trend is exponential, there is a prospect that it will lead to finis Homo sapiens. To rein in our growth and exploitation of the world is out of character for our species; on the other hand, failure to do so means we will inevitably reach certain limits. If nothing else, our world is becoming bewilderingly complex, and we may at some point no longer be able to manage this growth in scale and complexity.

Largely, political leaders do not exist to solve problems. We sometimes call political leaders "problem solvers," but this is really only true from a certain perspective. Political leaders largely serve to protect or expand the wealth and power of those in the most elite positions. If you are in that exclusive class, then in one sense political leaders are "problem solvers" if they permit you to keep business as usual or to increase your share of the pie. The idea that power structures of any sort, whether government/political or business/corporate (and we might as well include military and religious), exist to actually solve problems in the world is a bit of a delusion. We tend to focus on the rather exceptional occasions when leadership does actually solve problems, while the norm is really a banal form of management that greases various palms.

So the future will doubtless prove to be interesting, if nothing else. The odds frankly do not look to be in our favor, and between dystopia and utopia I would say the former looks more likely. It really should not be looked upon as something so horrible. In 50 million years the Earth will be doing just fine, but we won't be there. The world will no more cry for the loss of our species than it does now over the loss of Tyrannosaurus rex.

LC

    The essay fails to distinguish between human survival and a "posthuman" future featuring "a transition to simulated humans or artificial intelligences".

    The author is not agnostic about this; he condemns "the simple, knee-jerk reaction saying that all human augmentation and all artificial intelligence, or all such technologies deemed to 'go too far', should be banned." He explains that "it seems unreasonable to prevent the great deal of good that can also be envisaged in a posthuman world...." Ummm, good for whom? What makes it good?

    The author bemoans the endurance of the Catholic Church, not one of my personal favorite institutions, but one of great importance to a large number of people. He fears a posthuman future in which robots have been programmed to be religious. To paraphrase Feynman, what do you care what robots think? Particularly if there aren't any people left who might be affected by it.

    Although polished and erudite, this essay is basically an uncritical rehash of familiar transhumanist ideas. It's a good example of what humanity must avoid.

      When you only have a hammer, every problem looks like a nail.

      You mention these possible problems: using up finite resources, competition over resources provoking long-lasting conflict, uncontrolled climate change, Darwinian economic competition, wasteful competition, unsustainable resource harvesting, splintering of humanity into competing factions, unequal price-based availability of human augmentations, the formation of biological elites, and rogue artificial intelligences.

      Amazingly, for *all* of these problems, your preferred solution is to "establish a firm regulatory framework imposed by a very powerful international organization or unilaterally by a military superpower." That is, you want strong global government regulation to prevent actions that might risk such outcomes. If you can't have that, you want very strong, very stably entrenched social norms that severely punish actions that might risk such outcomes.

      You don't seem to consider any other possible solutions to these problems, nor do you consider whether in some cases these cures might be worse than the diseases. That is, you don't consider possible costs and risks of attempting these sorts of solutions to these problems, costs and risks that if big enough should make one reluctant to go these routes. If you have reasons for thinking that these solutions are always better solutions than all alternatives to all of these problems, you do not mention them.

      This suggests to me either that you are not aware of other options, that you are not aware of substantial costs and risks in invoking your favored solutions, that you think the answer is so obvious as to not be worth bothering to explain, or that you do not respect your audience enough to discuss your reasoning here. Have I missed something?

        Really, in an essay of this length it is not possible to address every single one of the issues involved in a topic as large as the future of humanity. I did hope to make it clear that considerations such as 'is posthumanism good?' were beyond the scope of the essay, and to make these assumptions explicit.

        I am well aware that there are arguments put forward against transhumanism, but I am personally convinced that there exist realizations of a posthuman future that are highly desirable, and I have argued for this being the case in the remainder of the essay.

        I care what robots think because I consider that at least some realizations of AI should be regarded as conscious, with some quality of personhood or similar, as is very often argued in the literature and follows fairly directly from naturalism. Again, arguing this case is far beyond the scope of what was already a tightly constrained essay, and I have had to leave out a lot of material that I wanted to cover; putting forward arguments for every single point, when they are discussed in detail elsewhere, would be death by a thousand cuts for a short piece like this. We are sticking to known physics and mainstream transhumanism for this very reason. I wanted to make a particular point, and I regret a little that I didn't dwell on the main point of the essay longer and trim away even more of the retreading of context, but I feared it wouldn't make sense unless I made at least some of the context explicit. Perhaps I haven't managed either quite as well as I'd have liked.

        The key issue is that if any of these transhumanist or other major economic-technological changes occur outside a globally controlled context, the resultant fracturing and competition, writ large in an expansion into space, would be permanently locked in. It is difficult to conceive of how to remedy this once it has happened, given the nonlinearly increasing costs of competition with scale. If such a permanent conflict is fixed in from the start, it is quite clear that a very sub-optimal future has been achieved.

        If the worst thing we have to avoid is very short essays being limited to one or two topics, then I think humanity has a fairly bright future indeed.

        I think you're right, in one sense, in that if by some disaster of circumstance or physics not yet known to us we can't colonise space, the future we have to look forward to has a very different character. We will indeed have to build toward a future of sustainable development to a degree not even envisioned by most environmentalists, in which million-year timescales are important.

        In this context, it is probably also true that avoiding conflict and capitalist growth is a priority, but this is somewhat trivial and of a different character to what I discuss in my essay above. I was focused on the possibility that, if we do enter space, features of our civilization might be permanently locked in to our great detriment; without expansion there is no such simple phase transition. It's also not clear that, provided we don't use our resources up, there are any particular practices that can't be changed in the long term, as the Earth is a small place and ideas and social entities can spread quickly on human timescales. History as we have experienced it so far can more or less continue, with the proviso that if we use resources recklessly, we are in a great deal of trouble.

        When you only have 25000 characters, I thought my best bet was to write about a hammer and discuss a few nails it was worth hitting.

        You make a fair point that I don't address the costs of implementing such a framework. Here is a possible answer:

        The common character of all these problems is that a unilateral action by relatively few people puts them at a considerable advantage in expansion and competition, and also separates them so strongly from other groups that it can meaningfully be considered a lasting split in humanity. At present, by contrast, competition is a standard feature of society, and though we may argue about the extent to which capitalism should be controlled, it doesn't fundamentally alter human physiology or induce permanent and irrecoverable splits. Indeed, one of the sites of present conflict is that large states which suffered badly in Western expansion are now undergoing rapid economic growth and acquiring technological parity, and in the absence of transhumanism or spaceflight, an international capitalist future might be plausible.

        On the other hand, if a posthuman/AI elite can form which is much more dominant than other factions, this reflects a more permanent split. One can imagine a brief, horrible period in which posthumans/AIs destroy or assimilate the rest of humanity, which is undeniably a bad outcome but probably not an existential risk (somewhere in the asymptotic future, things will be pretty OK for a very long time).

        The alternative to this is persistent fracturing of humanity with conflict, or one dominant group subjugating another, which I think should be fairly regarded as a very bad outcome and an instance of flawed realisation of technological maturity.

        I think the alternative to very bad fracturing scenarios is that we establish norms now, and ensure that they persist until at least the point where space colonization is relatively developed. I am deliberately agnostic about what these might entail, but I do not think that it is beyond the capacity of liberal/social democracy to deal with this.

        A right-libertarian would argue that this cure is very much worse than the disease! Such a person might say that a world in which individuals are not free to do anything they like with any resources they like is a permanent state of tyranny that should be avoided at all costs. I doubt, however, that this argument has any merit from any perspective other than an extreme libertarian one. Can we really say that a future is good when the selfishness of a few permanently closes off fruitful lives for astronomically many people? I really think that sort of argument is unreasonable and requires a very strong ideological commitment to the idea that people with the power to do so should be free to abuse others as they choose.

        Aside from the libertarian argument, is there any other downside to a pluralist future constrained by some hard rules - no interstellar war, no runaway colonization, and (in the medium term before expansion) no wasting our resources and no substantial inequality in transhumanism?

        Well, here are a few examples, but I don't think they stand up. One is that maybe the best future really is one in which a particular realization of consciousness dominates all matter everywhere as quickly as possible - for instance, some kind of euphoric AI that simulates ever more exquisite pleasures for ever more separate instances of consciousness. For such a society it is perfectly rational to conquer the Galaxy as quickly as possible, turning every piece of cold matter into a computer and every star into an engine to run them. I think this is pretty clearly a utility-monster sort of argument, and fairly minimal requirements for an open future or pluralism should put it to rest.

        It might also be possible that, as is often considered in science fiction, there are modes of existing that are infinitely better than what we presently experience, depending on exotic, unknown physics. Consider Banks' Sublimed, for instance. It might be possible for some posthumans to attain this nirvana without imposing any large cost upon the rest of civilization, as is otherwise incurred in the previous argument. In this case, it might be reasonable, if not strictly obligatory, to try and run towards this as fast as possible, and the fracturing is then not detrimental to anybody. In general, it might be the case that among a small posthuman community, some particular experiences are considered so valuable that they dominate planning, but are not resource-intensive. I think it is perfectly fair that something like this is not prevented, although I struggle to think of what this might be under present physics. Indeed, an argument from pluralism can be fairly made here that this is no different to any other variety of posthuman experience and if some people want to participate and others don't, there should be no problem with this at all.

        I think in general, though, within the scope of my essay, it is true that there will be a threshold (provided posthuman and space colonization developments do happen on the expected trajectory) beyond which key aspects of this civilization will be frozen in by physiology and the economics of spaceflight. Without advocating a totalitarian approach in any way, certain aspects of widespread governance which are quite rigid in character are necessary to prevent us from freezing in a flawed realization of humanity which cannot be undone. It is not easy to see any way around this, at least with respect to the problems I have outlined.

        Benjamin,

        To echo some of the previous comments, this essay is broad in scope, but lacking in significant depth. As the old saying goes, the devil is in the details.

        What drew me to physics in the first place was the understanding that underneath all the words, history, hopes, rules, beliefs, feelings, thoughts, ideals, etc. of humanity were some basic physical realities that, while they might give us much space to operate within, were definite enough that we didn't want to hit the end of the leash at a run. Right now it seems humanity is getting close to various edges, and any effort to apply some brakes or steering is overwhelmed by primal desires and momentum. These are the sorts of issues we need to address in the next few decades; if we can navigate those shoals, future generations will continue to have other issues to shape their eras.

        For one thing, you criticize the various Christian churches for being overly conservative ideological hindrances, without really considering why their appeal remains strong among broad sectors of society; that might be a far more interesting and useful discussion than positing the need for 'strong institutions' to govern future interstellar colonization. In a nutshell, the premise of monotheism offers up the concept of a spiritual, moral and judgmental ideal, which the various versions fill in according to different criteria. The essential fallacy of this premise, that an ideal is an absolute, runs very deep in Western culture. Everything from political ideologies to scientific theories of everything is based to some degree on this assumption. The problem is that the absolute, as a universal state, is one of perfect equilibrium, where all forces, etc. balance out: no bad, but no good either. Ideals, by contrast, are simply collections of preferred characteristics and often vary from one formulation to another. Thus one person's ideal might be another's nightmare. Yet if you keep it vague enough, it can still maintain broad appeal, especially if backed by a strong social organization.

        If you really want to understand human activity, try framing it in terms of convection. This concept pretty much manages to describe the physical processes on the surface of the planet, and we usually end up manipulated by them. For example, think of social territories in terms of plate tectonics: when one is rubbing against another, it creates friction, heat, earthquakes, volcanoes, wars, etc. Strong central governing systems tend to evolve into forms of totalitarianism because they are like storms that draw in heat energy and get bigger, washing over the landscape and drawing up more energy; they then either fade as stores of energy are depleted, or crash into other storms, resulting in wars that resemble supercell storms. The rich and powerful ride waves of accumulated energy, and while it seems they create these waves, often they are simply riding much deeper processes. In my own entry I make the argument that it is our assumption of money as a form of commodity, rather than the contract which it actually is, that enables not only the enormous wealth vacuum the financial sector has become, but much of the resulting commercial value extraction from society and the environment that is the fringe of this central vortex. Given that radical change is only possible in times of crisis, the imminent bursting of history's largest debt bubble will actually provide and compel opportunity for change across a broad spectrum of institutions and practices, so that what emerges will be the continuing social structures we hand down to the next generation.

        Regards,

        John Merryman

        Here are some costs and risks of regulation. Regulation usually imposes costs of monitoring, enforcement, and administration. These costs tend to be larger for activities that are harder to observe, and when there are not clear bright lines separating desired from undesirable behavior. When regulation intends to discourage or encourage some activities, it usually accidentally discourages and encourages other nearby activities. Regulation is usually slow in adapting to changing conditions, and thus tends to discourage unforeseen innovation in activities. Most of these costs rise as variance in the activity context rises, and as regulators know less about typical details; these costs are usually less for more local regulation.

        Regulatory agencies are often captured by the industries that they regulate, and use regulations to prevent entry and competition. Regulation is also often captured to benefit some groups at the expense of others in society. Stronger regulation creates stronger incentives to "rent-seek" by controlling the political process that determines regulation. In an open political system, regulation that tries to discourage competition in activities ends up creating stronger competition to control the political process. These problems tend to be smaller the more citizens know about activities and regulations, and the more easily they can influence the political process; so such problems tend to be smaller at the more local level.

        There are many different scales of governments that could run regulation, from community to city to region to nation to world. There are many kinds of regulation of behavior. Some behaviors are regulated by custom and reputation. Some are regulated by liability law. Some are regulated by creating and enforcing property rights, such as to support emissions trading. Some are regulated by explicit rules constraining acceptable behavior. Sometimes governments endorse private regulatory bodies, such as medical licensing boards or bond rating organizations. Some industries self-regulate under the threat of possible government intervention. Sometimes local governments regulate and then try to coordinate regulations on larger scales.

        I hope this makes it clearer why strong regulation directly by global governments is an extreme solution, and thus why it seems surprising that you always endorse this solution for all of the long list of problems you consider. These include using up finite resources, competition provoking conflict, wasteful competition, splintering of our descendants into competing factions, unequal price-based availability of innovations, and the formation of elites. Given the high costs of the regulation you favor, and the vagueness of these problems, it seems far from obvious that weaker, more local, less formal, and less governmental regulation would not be better. It isn't even obvious to me that all of these are really problems at all; we might do better to just live with them.

        These are fairly standard points, and broadly speaking I have tried to avoid making any particular arguments about economic socialism for exactly these reasons, namely to avoid being bogged down.

        Rent-seeking is a real issue in general, but I hope it's clear that there is relatively little scope for it in institutions designed predominantly to avoid international war. Institutions such as the UN and EU may be inefficient and sometimes act in perverse ways, but by and large they are not astonishingly expensive and have (arguably, but I will side with Pinker on this) largely prevented great power war for more than half a century. I think this is a completely different topic from economic regulation, and I hope that you'll agree that, at a minimum, institutions such as these are both necessary and desirable for international peace. I'm not advocating North Korea here; I'm advocating an expansion of the scope of the existing, successful institutions to the prevention of global commons risks. The government doesn't have to control your thoughts, heaven forbid - but it does have to restrain you from nuclear war or unilateral geoengineering!

        As for the self-regulation of economic activity, it seems to fly directly in the face of evidence to claim that letting individuals and corporations regulate themselves with regard to global risks has ever been effective in any way at all. The current crisis with regard to global warming, and long-term energy supply, illustrates this extremely well and it is very hard to see how anything other than international treaty can regulate such activities. Moreover on inequality, this is precisely and definitively what unregulated capitalism creates, and the current Piketty debate seems all the more remarkable because we've known this for two centuries now. If you don't want to create transhumanist inequalities, then wholly unrestrained markets are clearly a problem.

        I'm very careful to avoid claiming anything about domestic politics because this really isn't the time or place and is a distraction from the topic of discussion. But in a broad sense, surely you agree that there is at least some minimal set of areas in which regulation is necessary beyond the familiar property rights. Existential threats such as great power war, resource depletion and climate change are things that we know full well cannot be addressed solely in terms of free markets, and regardless of the underlying economic system special consideration has to be given to these dangers as requiring international, independent oversight with the powers to police this.

        I've tried to point out that there are spectrums of possible regulation, varying in locality, formality, etc., and variations among problems in the costs of regulation relative to its benefits. This was to get you to see how extreme your position is in using explicit strong global regulation to deal with all the varied problems that concern you.

        But instead of addressing this point, you instead ask me if I'm proposing an opposite extreme of no regulation whatsoever. For the record NO, but I don't see how anything I said could be construed to suggest that. I was arguing for considering points between the two extremes.

        On rent-seeking, there has been quite a lot of variation in the type and strength of international response to activities that can be accused of risking international war. It isn't at all obvious to me that there hasn't been a lot of rent-seeking efforts to influence the actual responses within this range of variation.

        Dear Mr. Pope,

        You have written a terrific essay, and the fact that so many of your fellow essayists have failed to rate it is criminal negligence at its most loathsome level.

        With the highest of regards,

        Joe Fisher

          10 days later

          Hi Benjamin,

          I enjoyed your essay! I think the basic framing and focus is a good one, and would read in-depth analyses about any of the things you touch on. A couple comments:

          First, do you have any information about how likely climate change is to be permanently civilization-hampering? It hadn't seemed to me to be a very serious concern, so I'm curious if you have any more information about it.

          Second, on the "expanding colonization front": have you read Armstrong and Sandberg's "Eternity in Six Hours" paper? It seems to me that their analysis means that Darwinian frontier dynamics are less likely than was previously thought. In any case, I think it's a useful paper if you're interested in that topic.

          Best,

          Daniel

            Hi Daniel,

            Climate change, as present mid-range risks have it, might not be; but very rapid climate change, widespread loss of ecosystem complexity, and simultaneous depletion of fossil fuels might leave a civilization both vulnerable to collapse and lacking crucial resources, on ~million-year timescales, to resume its development after such a collapse. I refer in particular to Ian Morris' argument that there is a logistic limit on the size and complexity of a non-industrial society, and that fossil fuels were the resource that helped Britain, and then the world, escape this limit. Without this puppy fat, and with a significantly degraded environment, it may be that the best we can do is achieve the Roman/Song Dynasty/Elizabethan scale of complexity and technology.

            Notwithstanding Hanson's evident dislike of my arguments, I am quite convinced by at least one of his - that expansion is likely to favour a Darwinian frontier. I find it a bit hard to see why the high-fanout scenario posed by Stuart and Anders would get past internal factional disagreement in the launch phase - it really is an awful lot of effort to get going! - and I struggle to see such scenarios happening at least for humans. I do like the Fermi explanation it gives, though, so maybe there's more to it - perhaps there is some feature of economies in the space-exploration phase that does stabilize it against the perturbations that would derail such a venture; I just can't think of it at the moment.

            Cheers,

            Ben

            I really enjoyed your essay, Benjamin. I touch on many of the same themes in my own essay and am in broad agreement with your view of the major issues. I think you are right to worry that what we do and decide now may have far-reaching implications for our future.

            I'm personally not much worried that we are coming to a liberal capitalist end of history. Fukuyama is problematic for a host of empirical and philosophic reasons. Among other things Fukuyama's view--after Hegel--was that meaningful conflict was disappearing because inequality was on the verge of being eliminated. That's hard to square with Piketty's data.

            I am also not as convinced as many people that the rapid technological development of the last few hundred years represents the bottom of an indefinitely increasing exponential curve, rather than the product of a temporary phase transition (I have written about this some here if you are interested).

            The issues you expertly discuss here are precisely the kinds of issues we need to think about before it becomes too late. Good luck in the contest, Benjamin--I hope your essay does well.

            Best,

            Robert de Neufville

              Benjamin - Thanks for the well-written essay. I very much enjoyed the framing of the issues, but was disappointed with the final direction. If I understand your conclusion, you are arguing for some form of "regulatory framework imposed by a very powerful international organization or unilaterally by a military superpower", but with an eye towards "self-policed norms... imposed not merely as law but as custom." This seems like a frightening direction given the power already inherent in our institutions. Does this not risk the very stagnation / entrenchment of institutions that your essay eloquently describes?

              I think we need to look harder at the evolutionary process for social norms as the means by which we frame a "fitness landscape" for our institutions. That is the gist of my essay - The Tip of the Spear, which I invite you to read.

              Thanks for your attention - George

              There may be a different answer in the concept of "self-regulation", and I would appreciate any comments.

                21 days later

                Hi Benjamin,

                A very wide-ranging essay that emphasises the importance of institutions, universities, states, religions and languages... the many rudders for steering to the future.

                The core concept of my essay on education was stimulated by the history of Scotland and the Enlightenment, which you touched upon.

                Nice work, high marks,

                Don Limuti