With only 25,000 characters, I thought my best bet was to write about a hammer and discuss a few nails worth hitting.
You make a fair point that I don't address the costs of implementing such a framework. Here is a possible answer:
The common character of all these problems is that a unilateral action by relatively few people puts them at a considerable advantage in expansion and competition, and also separates them so strongly from other groups that it can meaningfully be considered a lasting split in humanity. Competition, by contrast, is already a standard feature of society, and though we may argue about the extent to which capitalism should be controlled, it does not fundamentally alter human physiology or induce permanent, irrecoverable splits. Indeed, one site of present conflict is that large states which suffered badly during Western expansion are now undergoing rapid economic growth and acquiring technological parity; in the absence of transhumanism or spaceflight, an international capitalist future might be plausible.
On the other hand, if a posthuman/AI elite can form which is much more dominant than other factions, this reflects a more permanent split. One can imagine a brief, horrible period in which posthumans/AIs destroy or assimilate the rest of humanity; this is undeniably a bad outcome, but probably not an existential risk, in the sense that somewhere in the asymptotic future things will be pretty OK for a very long time.
The alternative is persistent fracturing of humanity with conflict, or one dominant group subjugating another, either of which should fairly be regarded as a very bad outcome and an instance of flawed realization of technological maturity.
The way to avoid these very bad fracturing scenarios, I think, is to establish norms now and ensure that they persist at least until space colonization is relatively developed. I am deliberately agnostic about what such norms might entail, but I do not think this is beyond the capacity of liberal/social democracy.
A right-libertarian would argue that this cure is very much worse than the disease! On this view, a world in which individuals are not free to do as they like with whatever resources they like is a permanent state of tyranny to be avoided at all costs. I doubt, however, that this argument has any merit from any perspective other than an extreme libertarian one. Can we really call a future good when the selfishness of a few would permanently close off fruitful lives for astronomically many people? That sort of argument strikes me as unreasonable, and it requires a very strong ideological commitment to the idea that people with the power to abuse others should be free to do so.
Aside from the libertarian argument, is there any other downside to a pluralist future constrained by some hard rules: no interstellar war, no runaway colonization, and (in the medium term, before expansion) no squandering of resources and no substantial inequality in access to transhumanist enhancement?
Here are a few candidates, though I don't think they stand up. One is that maybe the best future really is one in which a single realization of consciousness dominates all matter everywhere as quickly as possible: for instance, some kind of euphoric AI simulating ever more exquisite pleasures for ever more separate instances of consciousness. For such a society it is perfectly rational to conquer the Galaxy as quickly as possible, turning every piece of cold matter into a computer and every star into an engine to power them. This is pretty clearly a utility-monster argument, and fairly minimal requirements for an open future or for pluralism should put it to rest.
It might also be possible, as is often imagined in science fiction, that there are modes of existence infinitely better than anything we presently experience, depending on exotic, unknown physics; consider Banks' Sublimed, for instance. Some posthumans might attain this nirvana without imposing any large cost on the rest of civilization, unlike the galactic conquest in the previous argument. In that case it might be reasonable, if not strictly obligatory, to run towards it as fast as possible, and the resulting fracturing would be detrimental to nobody. More generally, it might be that within a small posthuman community, certain experiences are considered so valuable that they dominate planning, yet are not resource-intensive. I think it is perfectly fair not to prevent something like this, although I struggle to think of what it might be under present physics. Indeed, a fair pluralist argument can be made that this is no different from any other variety of posthuman experience, and if some people want to participate and others don't, there should be no problem at all.
In general, though, within the scope of my essay, I think it is true that there will be a threshold (provided posthuman and space colonization developments happen on the expected trajectory) beyond which key aspects of civilization will be frozen in by physiology and the economics of spaceflight. Without advocating totalitarianism in any way, I think certain quite rigid features of widespread governance are necessary to prevent us from freezing in a flawed realization of humanity's potential, one which cannot be undone. It is not easy to see any way around this, at least with respect to the problems I have outlined.