In a post-singularity future, people may, with the help of superintelligent AI, have almost arbitrary levels of control over their environment and their own mental and physical constitution. This near-omnipotence, however, will presumably not extend to other people's minds and bodies. (Argument from symmetry; though a game-theoretically stable society model in which each participant has unrestricted control over everyone else seems at least remotely conceivable. We'll leave that aside for later speculation.)
I think it's plausible to assume that those post-singularity people can be modeled as agents trying to maximize (or minimize) their respective goal functions on the universe. Given their, in principle, almost unlimited capability to optimize those functions, the biggest factor holding back individual agents may turn out to be other, similarly powerful agents with incompatible goal functions. Since we're talking about an agent model that cleanly separates preferences from beliefs, Aumann's agreement results don't provide a safety hatch here: they can force rational agents to converge in their beliefs, but not in their preferences. Clearly, the agents can compromise, and arguments from symmetry again prevail, but this may, in the face of the otherwise immense capabilities of the agents, result in huge discounts from the theoretically achievable level of goal-function fulfillment.
That is, the posthumans may get in each other's way, and there's no way to rationally resolve the situation without massively stomping on some (or all) people's goal functions.
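To make that "discount" concrete, here is a toy sketch (mine, not a claim about what actual posthuman goal functions would look like): two agents with exactly opposed preferences over a single shared world parameter, where each could reach full fulfillment alone, but any shared world caps both well below it.

```python
# Toy illustration (hypothetical): two agents with incompatible goal
# functions over one shared world parameter x in [0, 1].
# Agent A wants x = 0, agent B wants x = 1; each alone could score 1.0.

def utility_a(x: float) -> float:
    return 1.0 - x          # maximized at x = 0

def utility_b(x: float) -> float:
    return x                # maximized at x = 1

# In a shared world, no choice of x gives both agents more than what
# they split between them. The symmetric compromise x = 0.5 leaves
# each with half of its theoretically achievable fulfillment, no
# matter how capable the agents otherwise are.
x_compromise = 0.5
print(utility_a(x_compromise), utility_b(x_compromise))  # 0.5 0.5
```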
How likely is it that people's preferences will intrinsically differ after the technological singularity? If those people have evolved through self- (or mutual) modification from humans, or have otherwise inherited, possibly through deliberate design, human values and tastes, then I'd regard this as very likely indeed. I may be pessimistic here, but my lifelong personal experience is that people attach different, sometimes radically different, values to certain aspects of the world, themselves, and other people, and no amount of rational insight is ever going to make those values compatible. So I think the problem I'm discussing here is real and realistic.
Emigration may be a solution, and is a cherished human tradition that may extend into a post-singularity future. Of course, people's value functions will often put strong emphasis on the presence of (certain) other people, so walking away will in many cases be worse than gritting your teeth and getting along with each other. But in some cases, getting out of each other's hair may be the optimal thing to do.
Now posthumans surely have some radical opportunities to venture out into unexplored territory, and the silentium universi may mean that there's a lot of room to settle down. Starw(h)isps traveling at a notch below light speed can carry virtualized passengers for billions of parsecs within a short subjective time. But even this may not be far enough, as those other annoying posthumans with incompatible value systems will presumably have access to the same means of expansion and may be determined to use them, if not now, then maybe later in the future. For their destinies to separate, the opposing parties will have to make their future light cones disjoint. Cosmic acceleration from dark energy may make this possible simply by traveling far enough fast enough, but this has at least two disadvantages: it creates an asymmetry between those deciding to move away and those who "inherit the earth", and it may be impractical for posthumans to wait long enough for the accelerating expansion to do its work - given post-singularity computing capacities, and a foreseeable tendency to virtualize your supporting hardware, even a nanosecond of waiting in objective time may be unbearable on a subjective scale.
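To get a rough feel for the distances involved, here's a back-of-envelope sketch (my own numbers, with illustrative cosmological parameters, not anything from the post above): in the late-time de Sitter limit of a dark-energy-dominated universe, the event horizon sits at a proper distance of roughly c/H, and two parties would plausibly need a separation on the order of twice that for their future light cones to stay permanently disjoint.

```python
import math

# Hedged back-of-envelope; parameter values are illustrative.
H0_km_s_Mpc = 70.0        # Hubble constant today
omega_lambda = 0.7        # dark-energy density fraction
C_KM_S = 299_792.458      # speed of light
MLY_PER_MPC = 3.2616      # million light years per megaparsec

# Late-time de Sitter limit: H approaches H0 * sqrt(Omega_Lambda).
H_inf = H0_km_s_Mpc * math.sqrt(omega_lambda)

# Event-horizon scale c / H, converted to billions of light years.
d_horizon_Mpc = C_KM_S / H_inf
d_horizon_Gly = d_horizon_Mpc * MLY_PER_MPC / 1000.0

print(f"de Sitter event horizon ~ {d_horizon_Gly:.1f} Gly")  # ~16.7 Gly
# Disjoint future light cones would then need a separation on the
# order of twice this, ~33 Gly - nothing you reach "quickly", which
# is exactly the waiting problem noted above.
```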
As you may have guessed by now from the title of this post, there's probably another, much simpler way for posthumans to part ways. This method depends on the assumed validity of the so-called many-worlds interpretation of the superposition principle of quantum mechanics. As a note of caution, however, I'd like to point out that the superposition principle relies on the linearity of quantum mechanics, which may turn out to be false, since general relativity is non-linear. (That is, a linear combination of two solutions describing world-states is not necessarily a valid solution itself.) The basic idea is for all parties to condition their further existence on the output of a quantum random number generator. By agreeing to inhabit only mutually exclusive subsets of possible worlds, all participants can have symmetric access to a constrained resource (e.g., they can all "inherit the earth" in their respective Everett branch). The superposition principle also ensures that their fates are separated once and forever, without the danger of any one party deciding to overturn the deal at a later point in time. Furthermore, this approach can be implemented on a very short timescale.
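Concretely, the deal might look like the following minimal sketch (names and structure are mine, purely illustrative; a classical RNG stands in for the quantum one, which of course defeats the whole point outside of a simulation):

```python
import secrets  # classical stand-in for a true quantum RNG

# Minimal sketch of the "Everett emigration" protocol described above.
# Under many-worlds, each outcome of the quantum measurement is
# realized in some branch, so every party "inherits the earth" in
# exactly one branch.

parties = ["alice", "bob", "carol", "dave"]

# Step 1: publicly assign each party a disjoint subset of the possible
# outcomes of an n-outcome quantum measurement (here, one outcome each).
assignment = {i: p for i, p in enumerate(parties)}

# Step 2: run the random number generator. In a single branch we see
# one outcome; under many-worlds, all outcomes occur, one per branch.
outcome = secrets.randbelow(len(parties))

# Step 3: each party conditions its further existence on its own
# outcome; all others (voluntarily, per the deal) cease in this branch.
survivor = assignment[outcome]
print(f"In this branch, {survivor} inherits the earth.")
```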
As I believe in the mutual incompatibility of many, if not most, human tastes, values, and likings, as well as in the stability of those tastes, values, and likings under reflection, I believe posthumans will use one method or another to eventually part ways. (The fact that I spend some time thinking about such problems shows that I believe I would do so, doesn't it?) Everett emigration seems to be a rather straightforward way to achieve that. We do not, however, currently understand quantum mechanics, general relativity, and the superposition principle well enough to literally bet our lives on them. (Otherwise, we could already choose to implement it using current technology, that is, a quantum random number generator and some hydrogen bombs ...)
Could this be an explanation of the Fermi paradox? If technological civilizations reliably undergo technological singularities, and post-singularity societies tend to "atomize" themselves, the universe may in fact, on average, be a relatively quiet place. But I don't really hold this argument to be valid, as even isolated posthumans may be very noisy. Furthermore, I think the "Everett barrier" is in fact not that impermeable in the presence of a sufficiently powerful AI, so transhumans with compatible tastes might join each other, even if they originated in different Everett branches - but that's material for a follow-up to this post.