August 2, 2010

HuffPo: Sims, Suffering and God: Matrix Theology and the Problem of Evil

Check out Clay Farris Naff's latest article, Sims, Suffering and God: Matrix Theology and the Problem of Evil:

And that brings us back to the Sims. How can we know whether we're simulations in some superduper computer built by posthumans? Some pretty amusing objections have been raised, such as quantum tests that a simulation would fail. It seems safe to say that any sim-scientists examining the sim-universe they occupy would find that the laws of that universe are self-consistent. To assert that a future computer could simulate us, complete with consciousness, but crash when it came to testing Bell's Inequality strikes me as ludicrous. Unless, of course, the program were released by Microsoft. Oooh, sorry, Bill, cheap shot. Let's take it for granted that we could not expose a simulation from within -- unless the Creators wanted us to.

But the problem of pointless suffering leads me to a very different conclusion. Recall Bostrom's first conjecture: that few or no civilizations like ours reach a posthuman stage capable of building computers that can run the kind of simulation in which we might exist. There are many ways civilization could end (just ask the dinosaurs!), but the one absolutely necessary condition for survival in an environment of continually increasing technological prowess is peace. Not a mushy, bumper-sticker kind of peace, but the robust containment of conflict and competition within cooperative frameworks. (Robert Wright, in his brilliant if uneven book NonZero: The Logic of Human Destiny, unfolds this idea beautifully.)

What is civilization if not a mutual agreement to sacrifice some individual desires (to not pay taxes, for example, or to run through red lights) for the greater common good? Communication, trust, and cooperation make such agreements possible, but the one ingredient in the human psyche that propels civilization forward even as we gain technological power is empathy.
Comments:
George-
In Dr. Bostrom's explanation of the Simulation Argument, he uses the concept of (ideological) "convergence" as a constraint on the possibility that all possible simulator civilizations choose, for ethical or other reasons, to almost never run simulations. I find that argument persuasive, but I don't understand why it doesn't also apply to his trilemma's first (aka Great Filter) alternative. Why wouldn't the vast diversity of possible simulator civilizations likely face a convergence issue regarding (at least emerging-technology) existential risks?