In this paper, Jenkins takes the simulation argument as posited by Nick Bostrom and questions whether a society capable of creating such simulations would be bound by ethical or legal considerations. The answer, says Jenkins, is, in all likelihood, no. Consequently, it is "highly probable that we are a form of artificial intelligence inhabiting one of these simulations."
Jenkins worries about the potential for endless simulation recursion (i.e., simulation "stacking") and the sudden termination of historical simulations. He speculates that the "end point" of history will occur when the technologies required to create such simulations become available (estimated to be around 2050). Jenkins's conclusion: long-range planning beyond this date is futile.
Jenkins's paper is interesting and provocative. Simulation ethics and legality are clearly going to be pertinent issues in the coming years, and this paper is a good start in that direction. That said, I have a pair of critical comments to make.
First, any kind of speculative sociological analysis of posthuman behaviour and ethics is fraught with challenges. So much so, I would say, that it is nearly an impossible task. From our perspective as potential sims (or is that gnostics?), those who put us in this simulation are acting exceptionally unethically; no matter how you slice it, our subjective interpretation of this modal reality and our presence in it makes it a bona fide reality. My life is no illusion.
As Jenkins asserts, however, our moral sensibilities are no indication that future societies will refrain from engaging in such activities--and on this point I agree. Jenkins bravely attempts to posit some explanations as to why they would still embark on such projects, but I would suggest that any explanation is likely to appear naive, pedestrian and non-normative compared to what the real factors will turn out to be; it's like trying to get inside the heads of gods.
My second point is that I'm not convinced that simulation recursion is a problem. There are two parts to this issue.
First, there does not appear to be any hard limit to computation that would preclude the emergence of recursive simulations (although we are forced to wonder why a simulation would be run such that its inhabitants would produce a deeper set of sub-simulations).
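To make this concrete, here is a toy model of stacked simulations (a minimal Python sketch; the 1% per-level overhead factor is purely an assumption for illustration, not anything claimed in Jenkins's paper or Bradbury's comments):

```python
# Hypothetical illustration: if every simulated level runs at a fixed
# fraction of its host's speed, nested simulations slow down
# geometrically but never hit a hard cutoff.

def nested_rate(overhead_per_level, depth):
    """Run-rate of a simulation `depth` levels below the basement,
    as a fraction of basement-reality speed."""
    return overhead_per_level ** depth

OVERHEAD = 0.01  # assumed: each level runs at 1% of its host's speed

for depth in range(6):
    print(f"level {depth}: runs at {nested_rate(OVERHEAD, depth):.0e} of basement speed")
```

Under any fixed per-level overhead the stack never hits a wall; deep levels simply become glacially slow from the basement's point of view, which squares with Bradbury's remark below that hosts can reprioritize or suspend nested simulations at will.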
According to Robert Bradbury, an expert in speculative computational megaprojects, there are a number of ways in which an advanced civilization could push the limits of computation, including radically reduced clock speeds. Further, should we find ourselves in a simulation, it would be futile to speculate about limits to computational capacities and even the true nature of existence itself! Bradbury writes,
Simulated realities will run slower than the host reality. But since one can presumably distribute resources among them and prioritize them at will (even suspending them for trillions of years) it isn't clear that the rates at which the simulations run are very important. If we happen to be the basement reality, then the universe has the resources and time to run trillions of trillions of trillions (at least) of simulations of our perceived local reality (at least up until the point where we have uplifted the entire universe to KT-III level). If we aren't in the basement reality then speculations are pointless because everything from clock speed to quantum mechanics could be nothing but an invention for the experiment (one might expect that bored "gods" would entertain themselves by designing and simulating "weird" universes).

Second, in consideration of the previous point, and the fact that we cannot presume the intentions of artificially superintelligent entities, speculations as to when a simulation will reach a "termination point" can only yield highly arbitrary and unfounded answers.
Thus, Jenkins is partially correct in his assertion that long-term planning beyond 2050 is futile. The reason, however, is not the advent of advanced simulations, but the onset of artificial superintelligence--what is otherwise referred to as the Singularity.
For more on this subject, there's my Betterhumans article from a few years back, "Welcome to the Unreal World", and my personal favourite, philosopher Barry Dainton's "Innocence Lost: Simulation Scenarios: Prospects and Consequences."
Robert Bradbury responds:
On 9/18/06, Michael Anissimov wrote:
"Why would our Simulation Overlords terminate our simulation just because we make another one? It doesn't cost more computing power."
It depends entirely on what the purpose of the simulation is. Simulating a human brain using current-technology computers is significantly less efficient, in terms of matter and energy resources, than running the brain on the actual hardware of this universe (atoms and molecules) [1]. If we had an "at the limits of physics" brain design [2], simulating such a brain on anything less than the physical instantiation itself is going to be significantly slower than actually building and running that brain.
If this reality is a simulation intended to study the evolutionary paths of advanced civilizations up to the point where they begin to simulate the evolutionary paths of advanced civilizations, then we may soon (within decades) be "suspended". If this reality is a simulation intended to explore the feasibility of designing computers and running simulations based on femtotechnology, then we may have a much longer future ahead of us [3].
"From the physics perspective, a box that is just a box and a box with a mini-universe obeys the same laws and requires the same amount of computation."
"Same amount of computation" is the part which is inaccurate. One of the first "cool" tasks I had as a programmer (back when I was somewhat younger than Michael) was to write a computer simulator [4]. The only way simulations run nearly as efficiently as what is being simulated is if the underlying hardware architecture is explicitly designed for simulations. It currently appears as if the hardware architecture of this universe is not designed to easily enable simulations [5].
"Robert Bradbury is wrong, a simulation would not necessary run slower than the real world... the speed of a world is determined by the processing of the minds in it, not anything inherent about the world itself."
I love being wrong, but you have to prove it to me... I think what Michael is wrestling with is whether or not the simulation manages to develop abstractions (shortcuts) that enable computational speedups which exceed the reduction in speed inherent in the simulation. This is not unreasonable. For example, we use computers as a shortcut for doing repetitious arithmetic because the natural brain does it so slowly. We develop laws of physics that allow us to compute results directly rather than having to simulate them. Then the question becomes whether or not the abstractions or shortcuts developed in the simulation can be translated back into the reality that produced the simulation. Or are the rules there so different that extracting the inventions of a simulation is impossible? [6,7]
"We can build a simulation in nanocomputers composed to beings that think a million times faster than us, therefore the "world" can be said to be moving a million times as fast. That's not slower, now is it?"
Ah, this is where things become confused. One can build computers using limits-of-physics hardware in *this* reality. They will run faster [8]. But simulating a limits-of-physics computational engine (i.e., simulating the nanocomputer on current computers, or even simulating a nanocomputer on a nanocomputer) is unlikely to run faster or be more efficient than using the "real" thing. As mentioned previously, only if the hardware is intentionally engineered to facilitate simulations will it run at close to non-simulated speeds or with non-simulated efficiency [9].
Robert
1. This gets into Seth Lloyd's perspective that one can think of "this" universe as simply one very large quantum computer (in that the "instruction set" is the equations of physics, esp. quantum mechanics).
2. For many, perhaps most, types of computations this is exactly what a Matrioshka Brain is at solar system scales unless engineering femtotechnology computers is feasible.
3. One can build a Matrioshka Brain in very short time scales once one has the base level of nanoengineering skills -- but optimization of MBrains or MBrain collectives (KT-II to KT-III civilizations), which is what may be required to explore "femtoreality", can take millions to billions of years [unless we find ways to rewrite the hardware rules for this universe].
4. We simulated a 36-bit PDP-10 mainframe on a 16-bit PDP-11 -- so a single "add" instruction on the PDP-10 required 6 instructions on the PDP-11 (this kind of word-width overhead is sketched in the first example following these notes). The overhead of packing and unpacking 36-bit instructions and data pushed this up to at least 10x slower, and the larger virtual memory system of the PDP-10 added probably another order of magnitude reduction in speed. So a compilation that took minutes on the PDP-10 would take hours on the PDP-11. But obtaining PDP-10 time was difficult and/or expensive while PDP-11 time was "free", so the exercise was justified.
5. We could probably digress into a long debate about whether quantum computers will serve this precise function. But current molecular dynamics simulations require hours of supercomputer time to get nanoseconds of "real" time (see the second example following these notes). So, at least currently, the hardware architecture of this universe does not appear to be easily simulated.
6. One example that comes to mind is human languages. Though rules for grammar may be built into all humans, there are aspects of some languages that are so specific to a culture (in fact, the language may "dictate" that cultural reality) that they cannot be translated back into different cultures.
7. This raises the interesting question of whether "gods" go to the trouble of only running simulations (creating realities) which explicitly allow easy abstraction extraction, or whether they intentionally design reality phase spaces from which extraction is difficult or impossible.
8. One has to be very careful about this. Nanosystems, pg. 370: "A more modest 10 W system can deliver ~10^11 MIPS." The brain runs at ~10 W and ~10^15 OPS, so on a power-per-op basis the nanocomputer has only about 100x the brain's processing capacity (the arithmetic is checked in the third example following these notes). But the nanocomputer is significantly smaller than the brain, so you get a significant speedup due to a reduction in communication delays. One can of course get a 10^21 OPS nanocomputer (a million-times speedup) in a much smaller volume than the brain, but it requires 100,000 W (as well as a radiator significantly larger than the nanocomputer[!]).
9. The nanocomputer one commonly discusses is Drexler's rod-logic nanocomputer. That is a general purpose "mechanical" computer which would presumably execute some kind of general purpose computer instruction set. To the best of my knowledge nobody has designed computer hardware optimized for "reality" simulations (one wants to execute the fundamental laws of physics as efficiently as possible). Current computer graphics chips and the IBM Cell processor are closer to what is required than general purpose microprocessors (because they are optimized to deal with aspects of physical reality). Some companies are starting to produce chips optimized for the laws of physics, or gate arrays that can be reprogrammed for these purposes -- but to the best of my knowledge nobody has tried to design a limits-of-physics "nanocomputer" optimized for this task. Perhaps because atoms and small molecules already satisfy this need.
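To unpack note 4, here is a minimal sketch (in Python rather than PDP-11 assembly, with the three-limb word layout chosen purely for illustration) of why the word-width mismatch alone turns one simulated 36-bit add into several host operations:

```python
# Illustrative only: emulating one 36-bit PDP-10 add using 16-bit
# arithmetic, the way a PDP-11-hosted simulator has to. The 36-bit
# word is held as three limbs (16 + 16 + 4 bits); every simulated
# ADD becomes several host operations plus carry propagation.

MASK16 = 0xFFFF
MASK4 = 0xF  # the top limb of a 36-bit word holds only 4 bits

def add36(a_limbs, b_limbs):
    """Add two 36-bit words stored as (low16, mid16, high4) tuples."""
    lo = a_limbs[0] + b_limbs[0]
    mid = a_limbs[1] + b_limbs[1] + (lo >> 16)
    hi = (a_limbs[2] + b_limbs[2] + (mid >> 16)) & MASK4  # wrap at 36 bits
    return (lo & MASK16, mid & MASK16, hi)

def to_limbs(x):
    return (x & MASK16, (x >> 16) & MASK16, (x >> 32) & MASK4)

def from_limbs(limbs):
    return limbs[0] | (limbs[1] << 16) | (limbs[2] << 32)

a, b = 0x8_FFFF_FFFF, 0x0_0000_0001  # two sample 36-bit values
print(hex(from_limbs(add36(to_limbs(a), to_limbs(b)))))  # -> 0x900000000
```

Packing and unpacking words in memory and decoding the simulated instruction set come on top of this, which is how the overhead quickly reaches the 10x-and-beyond range Robert describes.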
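And the slowdown implied by note 5, treating "one hour per nanosecond" as a representative figure for the molecular dynamics case (Python used purely as a calculator):

```python
# Rough slowdown estimate for molecular dynamics as described in note 5:
# wall-clock time spent per unit of simulated physical time.

wall_clock_seconds = 1 * 3600  # ~1 hour of supercomputer time (representative)
simulated_seconds = 1e-9       # ~1 nanosecond of simulated dynamics (representative)

slowdown = wall_clock_seconds / simulated_seconds
print(f"slowdown factor: ~{slowdown:.1e}")  # ~3.6e+12, i.e. trillions of times slower
```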
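Finally, a quick check of the arithmetic in note 8, using the figures exactly as quoted there:

```python
# Checking note 8's arithmetic (figures as quoted from Nanosystems
# p. 370 and the ~10 W / ~1e15 OPS brain estimate).

nano_ops = 1e11 * 1e6        # ~1e11 MIPS -> ~1e17 instructions per second at 10 W
brain_ops = 1e15             # ~1e15 operations per second at ~10 W
print(nano_ops / brain_ops)  # ~100x advantage at comparable power, as note 8 says

# Scaling the same device to 1e21 OPS (a millionfold speedup over the brain):
watts_needed = 10 * (1e21 / nano_ops)
print(watts_needed)          # ~100,000 W, matching note 8
```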
George, Michael and especially Robert - thanks for your insightful comments. I have posted a summary and reply on my blog. Looking forward to seeing you at the Riv this weekend, George!
Like many of those who posted comments, I agree that we may well be living in a simulation running on some supercomputer in "a higher level of reality". But I don't think we have enough information to assign any probability to this possibility, and I don't agree with the conclusion that the simulation would probably be terminated as soon as its conscious inhabitants develop the capability to run their own equivalent simulations of "lower levels of reality". That would make the original simulation more interesting, wouldn't it? Creating an endless cascade of realities may even be the *objective* of the original simulation.
When I graduate to PostHuman status, one of the first things I want to do is simulate possible versions of my life, filling in the "what if" scenarios. Anyway, that's my reality taken care of. Not quite sure what the rest of you are doing here...