by Nick Bostrom
This paper argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of its evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation.
A number of other consequences of this
result are also discussed.
Many works of science fiction, as well as forecasts by serious technologists and futurologists, predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct.
One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations.
Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct).
Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones.
Therefore, if we don't think that we are
currently living in a computer simulation, we are not entitled to
believe that we will have descendants who will run lots of such
simulations of their forebears. That is the basic idea. The rest of
this paper will spell it out more carefully.
The argument provides a stimulus for
formulating some methodological and metaphysical questions, and it
suggests naturalistic analogies to certain traditional
religious conceptions, which some
may find amusing or thought-provoking.
A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences.
It is not an essential property of consciousness that it is implemented on
carbon-based biological neural networks inside a cranium:
silicon-based processors inside a computer could in principle do the
trick as well.
The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) - just that, in fact, a computer running a suitable program would be conscious.
Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc.
We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses.
This attenuated version of substrate-independence is quite widely accepted.
For example, if there can be no
difference in subjective experience without there also being a
difference in synaptic discharges, then the requisite detail of
simulation is at the synaptic level (or higher).
At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a few decades away.[1]
Yet present purposes require no assumptions about the time-scale.
The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a "posthuman" stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.
As we are still lacking a "theory of everything", we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those constraints[2] that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter.
We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood.
For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second.[3]
Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet.[4]
(If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits.
Seth Lloyd calculates an upper bound for a 1 kg computer of 5*10^50 logical operations per second carried out on ~10^31 bits.[5] However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design-principles.)
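For perspective, the cited estimates can be compared directly. The following is a minimal sketch in Python; the figures are the rough values quoted above, and the comparison glosses over the very different masses of the devices:

    # Rough computing-power estimates quoted in the text (operations per second).
    sugar_cube = 1e21      # Drexler's nanomechanical design [3]
    planet_mass = 1e42     # computer with the mass of a large planet [4]
    lloyd_1kg = 5e50       # Lloyd's theoretical upper bound for 1 kg [5]

    # Headroom factors between the conservative designs and the physical limit:
    print(planet_mass / sugar_cube)   # 1e21
    print(lloyd_1kg / planet_mass)    # 5e8 (and that is 1 kg versus a whole planet)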
The amount of computing power needed to emulate a human mind can likewise be roughly estimated.
One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico, contrast enhancement in the retina, yields a figure of ~10^14 operations per second for the entire human brain.[6]
An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16-10^17 operations per second.[7]
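The synapse-based figure can be reproduced with back-of-the-envelope arithmetic. The sketch below uses stylized round numbers assumed for illustration, not values taken from the cited source:

    # Hypothetical round numbers for a synapse-level accounting:
    synapses = 1e14                 # assumed human synapse count (order of magnitude)
    rate_low, rate_high = 10, 100   # assumed average firing rates in Hz
    ops_per_event = 10              # assumed operations to model one synaptic event

    low = synapses * rate_low * ops_per_event     # ~1e16 ops/s
    high = synapses * rate_high * ops_per_event   # ~1e17 ops/s
    print(f"~{low:.0e} to ~{high:.0e} operations per second")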
Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees.
However, it is likely that the human central
nervous system has a high degree of redundancy on the microscale to
compensate for the unreliability and noisiness of its neuronal
components. One would therefore expect a substantial efficiency gain
when using more reliable and versatile non-biological processors.
Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.
But in order to get a realistic simulation of human experience, much less is needed than simulating the whole environment in every detail - only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don't notice any irregularities.
The microscopic structure of the inside of the Earth can be safely omitted.
Distant astronomical objects can have highly compressed representations: verisimilitude need extend only to the narrow band of properties that we can observe from our planet or from spacecraft within our solar system.
On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc.
What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world.
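Filling in detail ad hoc amounts to lazy, on-demand generation with caching so that repeated observations stay consistent. A minimal sketch of the idea, assuming a hypothetical detail-generating routine:

    import functools, random

    # Microscopic detail is generated only when first observed, and memoized
    # so a second look at the same region returns the same answer.
    @functools.lru_cache(maxsize=None)
    def microscopic_detail(region: str) -> float:
        # Hypothetical stand-in for filling in ad hoc detail for one region.
        return random.random()

    print(microscopic_detail("slide-47"))  # generated on first observation
    print(microscopic_detail("slide-47"))  # identical on re-observation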
Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify.
The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements.
This presents no problem, since our
current computing power is negligible by posthuman standards.
Should any error occur, the director
could easily edit the states of any brains that have become aware of
an anomaly before it spoils the simulation. Alternatively, the
director could skip back a few seconds and rerun the simulation in a
way that avoids the problem.
While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33 - 10^36 operations as a rough estimate.[10]
As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors.
But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument.
We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal.
A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second.
A posthuman civilization may eventually build an astronomical number of such computers.
We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose.
We can draw this conclusion even while leaving a substantial margin of error in all our estimates.
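The arithmetic behind this claim is easy to check; the following sketch uses the round figures quoted above:

    # Estimates quoted in the text:
    planet_ops_per_sec = 1e42    # planetary-mass computer, known designs only
    ancestor_sim_cost = 1e36     # upper end of the ~1e33-1e36 operations estimate

    # Fraction of one second of the machine's capacity that one full
    # ancestor-simulation consumes:
    print(ancestor_sim_cost / planet_ops_per_sec)   # 1e-06: one millionth

    # Equivalently, simulations per second if fully devoted to the task:
    print(planet_ops_per_sec / ancestor_sim_cost)   # 1e6 ancestor-simulations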
We shall develop this idea into a rigorous argument.
Let us introduce the following notation:

f_P: Fraction of all human-level technological civilizations that survive to reach a posthuman stage
N: Average number of ancestor-simulations run by a posthuman civilization
H: Average number of individuals that have lived in a civilization before it reaches a posthuman stage

The actual fraction of all observers with human-type experiences that live in simulations is then

    f_sim = (f_P * N * H) / ((f_P * N * H) + H)

Writing f_I for the fraction of posthuman civilizations that are interested in running ancestor-simulations (or that contain at least some individuals who are interested in that and have sufficient resources to run a significant number of such simulations), and N_I for the average number of ancestor-simulations run by such interested civilizations, we have

    N = f_I * N_I

and thus:

    f_sim = (f_P * f_I * N_I) / ((f_P * f_I * N_I) + 1)        (*)

Because of the immense computing power of posthuman civilizations, N_I is extremely large, as we saw in the previous section.
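To see how quickly (*) saturates as N_I grows, one can evaluate it numerically. In the sketch below, the sample values of f_P and f_I are arbitrary illustrations, not estimates:

    def f_sim(f_p, f_i, n_i):
        """Fraction of human-type observers in simulations, per equation (*)."""
        return (f_p * f_i * n_i) / (f_p * f_i * n_i + 1)

    # Even tiny fractions of surviving, interested civilizations drive
    # f_sim toward 1 once each runs an astronomical number of simulations:
    print(f_sim(0.001, 0.001, 1e6))    # 0.5
    print(f_sim(0.001, 0.001, 1e12))   # ~0.999999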
By inspecting (*) we can then see that at least one of the following three propositions must be true:

(1) f_P ≈ 0
(2) f_I ≈ 0
(3) f_sim ≈ 1
More generally, if we knew that a fraction x of all observers with human-type experiences live in simulations, and we don't have any information that indicates that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then our credence that we are in a simulation should equal x:
    Cr(SIM | f_sim = x) = x        (#)
This step is sanctioned by a very weak indifference principle.
Let us distinguish two cases. In the first case, which is the easiest, all the minds in question are qualitatively identical to yours: they have exactly the same information and the same experiences that you have. In the second case, the minds are "like" each other only in the loose sense of being the sort of minds that are typical of human creatures; they are qualitatively distinct from one another, and each has a distinct set of experiences.
I maintain that even in the latter case,
where the minds are qualitatively different, the
Simulation Argument still works,
provided that you have no information that bears on the question of
which of the various minds are simulated and which are implemented
biologically.
Space does not permit a recapitulation of that defense here, but we can bring out one of the underlying intuitions by bringing to our attention an analogous situation of a more familiar kind.
Suppose that x% of the population has a certain genetic sequence S within the part of their DNA commonly designated as "junk DNA".
Suppose, further, that there are no manifestations of S (short of what would turn up in a gene assay) and that there are no known correlations between having S and any observable characteristic.
Then, quite clearly, unless you have had your DNA sequenced, it is rational to assign a credence of x% to the hypothesis that you have S.
And this is so quite irrespective of the
fact that the people who have S have qualitatively different minds
and experiences from the people who don't have S. (They are
different simply because all humans have different experiences from
one another, not because of any known link between S and what kind
of experiences one has.)
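The rationality of assigning credence x% can be illustrated with a toy frequency check. In this sketch, the population size and the value of x are arbitrary:

    import random

    # Toy model: a fraction x of people carry sequence S; nobody is tested.
    x, population = 0.2, 100_000
    carriers = sum(random.random() < x for _ in range(population))

    # Anyone assigning credence x to "I have S" matches the actual frequency:
    print(carriers / population)   # ~0.2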
Note that the bland indifference principle does not in general prescribe indifference between hypotheses when you lack specific information about which of the hypotheses is true.
In contrast to Laplacean and other more
ambitious principles of indifference, it is therefore immune to
Bertrand's paradox and similar predicaments that tend to plague
indifference principles of unrestricted scope.
Readers familiar with the Doomsday argument may worry that the bland indifference principle invoked here rests on the same assumption that is responsible for getting the Doomsday argument off the ground, and that the counterintuitiveness of some of the latter's implications casts doubt on the former. This is not so.
The Doomsday argument rests on a much stronger and more controversial premise, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future.
The bland indifference principle, by
contrast, applies only to cases where we have no information about
which group of people we belong to.
Moreover, if betting odds provide any guidance to rational belief, it is worth pondering that if everybody were to place a bet on whether they are in a simulation or not, then, in a situation where almost all people are in simulations, those who bet that they are in one will almost all win. If they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded.
As one approaches the limiting case in
which everybody is in a simulation (from which one can deductively
infer that one is in a simulation oneself), it is plausible to
require that the credence one assigns to being in a simulation
gradually approach the limiting case of complete certainty in a
matching manner.
If (1) is true, then humankind will almost certainly fail to reach a posthuman level; for virtually no species at our level of development become posthuman, and it is hard to see any justification for thinking that our own species will be especially privileged or protected from future disasters.
Conditional on (1), therefore, we must give a high credence to DOOM, the hypothesis that humankind will go extinct before reaching a posthuman level:

    Cr(DOOM | f_P ≈ 0) ≈ 1
One can imagine hypothetical situations where we have such evidence as would trump knowledge of f_P.
For example, if we discovered that we were about to be hit by a giant meteor, this might suggest that we had been exceptionally unlucky.
We could then assign a credence to
DOOM larger than our expectation of the fraction of human-level
civilizations that fail to reach posthumanity. In the actual case,
however, we seem to lack evidence for thinking that we are special
in this regard, for better or worse.
Another way for (1) to be true is if it
is likely that technological civilization will collapse. Primitive
human societies might then remain on Earth indefinitely.
Perhaps the most natural interpretation of (1) is that we are likely to go extinct as a result of the development of some powerful but dangerous technology.[13]
One candidate is molecular nanotechnology, which in its mature stage would enable the construction of self-replicating nanobots capable of feeding on dirt and organic matter - a kind of mechanical bacteria.
Such nanobots,
designed for malicious ends, could cause the extinction of all life
on our planet.[14]
In order for (2) to be true, there must be a strong convergence among the courses of advanced civilizations. If the number of ancestor-simulations created by the interested civilizations is extremely large, the rarity of such civilizations must be correspondingly extreme. Virtually no posthuman civilizations decide to use their resources to run large numbers of ancestor-simulations.
Furthermore, virtually all posthuman civilizations lack individuals who have sufficient resources and interest to run ancestor-simulations; or else they have reliably enforced laws that prevent such individuals from acting on their desires.
One can speculate that advanced civilizations all develop along a trajectory that leads to the recognition of an ethical prohibition against running ancestor-simulations because of the suffering that is inflicted on the inhabitants of the simulation. However, from our present point of view, it is not clear that creating a human race is immoral.
On the contrary, we tend to view the existence of our race as constituting a great ethical value.
Moreover, convergence on an ethical view of the immorality of running ancestor-simulations is not enough: it must be combined with convergence on a civilization-wide social structure that enables activities considered immoral to be effectively banned.
Another possible convergence point is that almost all individual posthumans in virtually all posthuman civilizations develop in a direction where they lose their desires to run ancestor-simulations.
This would require significant changes to the motivations driving their human predecessors, for there are certainly many humans who would like to run ancestor-simulations if they could afford to do so.
But perhaps many of our human desires will be regarded as silly by anyone who becomes a posthuman. Maybe the scientific value of ancestor-simulations to a posthuman civilization is negligible (which is not too implausible given its unfathomable intellectual superiority), and maybe posthumans regard recreational activities as merely a very inefficient way of getting pleasure - which can be obtained much more cheaply by direct stimulation of the brain's reward centers.
One conclusion that follows from (2) is
that posthuman societies will be very different from human
societies: they will not contain relatively wealthy independent
agents who have the full gamut of human-like desires and are free to
act on them.
The physics of the universe in which the computer running the simulation is situated may or may not resemble the physics of the world that we observe. While the world we see is in some sense "real", it is not located at the fundamental level of reality.
Such computers would be "virtual machines", a familiar concept in computer science. (Java applets, for instance, run on a virtual machine - a simulated computer - inside your browser.)
Virtual machines can be stacked: it's possible to simulate a machine simulating another machine, and so on, in arbitrarily many steps of iteration.
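Stacked virtual machines are easy to demonstrate in miniature. In this sketch, Python's exec plays the role of a one-line simulated interpreter:

    # Each exec() call runs code one level down the stack; the inner
    # code invokes exec() again, adding yet another level.
    level2 = "print('hello from level 2 of the stack')"
    level1 = f"exec({level2!r})"
    exec(level1)   # host interpreter -> level 1 -> level 2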
If we do go on to create our own ancestor-simulations, this would be strong evidence against (1) and (2), and we would therefore have to conclude that we live in a simulation.
Moreover, we would have to suspect that the posthumans
running our simulation are themselves simulated beings; and their
creators, in turn, may also be simulated beings.
Even if it is necessary for the hierarchy to bottom out at some stage - the metaphysical status of this claim is somewhat obscure - there may be room for a large number of levels of reality, and the number could be increasing over time.
(One consideration that counts against
the multi-level hypothesis is that the computational cost for the
basement-level simulators would be very great. Simulating even a
single posthuman civilization might be prohibitively expensive. If
so, then we should expect our simulation to be terminated when we
are about to become posthuman.)
In some ways, the posthumans running a simulation are like gods in relation to the people inhabiting the simulation: the posthumans created the world we see; they are of superior intelligence; they are "omnipotent" in the sense that they can interfere in the workings of our world even in ways that violate its physical laws; and they are "omniscient" in the sense that they can monitor everything that happens.
However, all the demigods except those at the fundamental level of reality are subject to sanctions by the more powerful gods living at lower levels.
For example, if nobody can be sure that they are at the basement-level, then everybody would have to consider the possibility that their actions will be rewarded or punished, based perhaps on moral criteria, by their simulators.
An afterlife would be a real possibility. Because of this fundamental uncertainty, even the basement civilization may have a reason to behave ethically.
The fact that it has such a reason for moral behavior would of course add to everybody else's reason for behaving morally, and so on, in a truly virtuous circle.
One might get
a kind of universal ethical imperative, which it would be in
everybody's self-interest to obey, as it were "from nowhere".
In addition to ancestor-simulations, one may also consider the possibility of more selective simulations that include only a small group of humans or a single individual. The rest of humanity would then be zombies or "shadow-people" - humans simulated only at a level sufficient for the fully simulated people not to notice anything suspicious.
It is not clear how much cheaper shadow-people would be to simulate than real people. It is not even obvious that it is possible for an entity to behave indistinguishably from a real human and yet lack conscious experience. Even if there are such selective simulations, you should not think that you are in one of them unless you think they are much more numerous than complete simulations.
There would have to be about 100 billion
times as many "me-simulations" (simulations of the life of only a
single mind) as there are ancestor-simulations in order for most
simulated persons to be in me-simulations.
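The 100-billion ratio follows from the number of minds a full ancestor-simulation contains; in the sketch below, ~10^11 is the rough count of humans who have ever lived, the figure implicit in the text:

    minds_per_ancestor_sim = 1e11   # ~100 billion minds in a full human history
    minds_per_me_sim = 1            # a me-simulation fully simulates one mind

    # Ratio of me-simulations to ancestor-simulations needed before most
    # simulated persons find themselves in me-simulations:
    print(minds_per_ancestor_sim / minds_per_me_sim)   # 1e11, ~100 billion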
There is also the possibility of simulators abridging certain parts of the mental lives of simulated beings and giving them false memories of the sort of experiences that they would typically have had during the omitted interval. If so, one can consider the following (farfetched) solution to the problem of evil: that there is no suffering in the world and all memories of suffering are illusions.
Of course, this hypothesis can be
seriously entertained only at those times when you are not currently
suffering.
The foregoing remarks notwithstanding, the implications are not all that radical. Our best guide to how our posthuman creators have chosen to set up our world is the standard empirical study of the universe we see. The revisions to most parts of our belief networks would be rather slight and subtle - in proportion to our lack of confidence in our ability to understand the ways of posthumans.
Properly understood, therefore, the truth of (3) should have no tendency to make us "go crazy" or to prevent us from going about our business and making plans and predictions for tomorrow.
The chief empirical importance of (3) at the current time seems to lie in its role in the tripartite conclusion established above.[15]
We may hope that (3) is true since that would decrease the probability of (1), although if computational constraints make it likely that simulators would terminate a simulation before it reaches a posthuman level, then our best hope would be that (2) is true.
A technologically mature "posthuman" civilization would have enormous computing power. Based on this empirical fact, the simulation argument shows that at least one of the following propositions is true: (1) the fraction of human-level civilizations that reach a posthuman stage is very close to zero; (2) the fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero; (3) the fraction of all people with our kind of experiences that are living in a simulation is very close to one.
In the dark forest of our current ignorance, it seems sensible to apportion one's credence roughly evenly between (1), (2), and (3).
Notes