Michael Huang of The Space Review has posted a good overview of the Fermi Paradox. In his article, titled "The Other Side of the Fermi Paradox," Huang notes that "By examining the possible futures of extraterrestrial civilizations, we are simultaneously examining the possible futures of our own civilization." Put another way, says Huang, "if an alien civilization somewhere had their own version of the Fermi paradox, they would be speculating on our future in the same way that we speculate on theirs."
Bingo.
One of the resolutions of the paradox cited by Huang is the Park hypothesis, which states that advanced civs have not colonized the galaxy because they don't want to. Strangely, Huang claims that "staying on Earth is a mediocre future for humankind."
I've observed that the psychological and aesthetic desire to explore space often leads to a space exploration bias. This quite obviously has a bearing on any analysis of the Fermi Paradox. Space enthusiasts tend to be incredulous at the suggestion that interstellar colonization is not in our future. But as Huang himself admits, the possibility exists for "the creation of virtual reality worlds so impressive that real world challenges, such as space colonization, pale in comparison."
Indeed, if advanced civs stay at home you can bet that there's a damn good reason for it, and I'm certain it won't be a mediocre one.
13 comments:
It seems to me that the concept of the Technological Singularity provides an obvious solution to the Fermi Paradox. Technological progress works much faster than biological evolution, especially since it tends to accelerate exponentially, a trend visible through the whole span of recorded human history. If Kurzweil is right, just forty years from now we will achieve the merger of biological and machine intelligence, followed by an unimaginable increase in our technological abilities.
However many planets there are in the universe which have the innate potential to evolve an intelligent species at some point, some one planet will be the first to do so. Once this happens and a technological civilization appears, intelligence will diffuse outward from that planet and saturate the rest of the physical universe in far less time than it would take biological evolution to produce intelligent life on a second, third, fourth, etc. planet. That is, the first technological civilization to appear in the universe will be the only one. We just happened to be first. If we hadn't been first, we wouldn't exist at all.
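A quick back-of-the-envelope check of that timescale claim (the figures below are my own rough assumptions, not the commenter's): even with slow probes, crossing the galaxy takes far less time than a second independent run of biological evolution.

```python
# Rough sanity check of the diffusion-timescale claim. All constants are
# assumed round numbers for illustration, not measured values.
GALAXY_DIAMETER_LY = 100_000      # Milky Way diameter, order of magnitude
PROBE_SPEED_FRACTION_C = 0.01     # assume slow probes at 1% of lightspeed

years_to_cross = GALAXY_DIAMETER_LY / PROBE_SPEED_FRACTION_C   # 1e7 years

EVOLUTION_TIMESCALE_YEARS = 4e9   # roughly Earth's span from abiogenesis to us

print(f"crossing the galaxy: {years_to_cross:.0e} yr")             # ~1e+07
print(f"re-evolving intelligence: {EVOLUTION_TIMESCALE_YEARS:.0e} yr")  # ~400x longer
```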
John Smart wrote a paper many years ago about the concept of intelligent civilizations making themselves known to others for only a very brief period (300 years on average) between the point of the invention of radio-telecommunications and the point when they disappear into a technological singularity of their own creation. According to Smart, this explains the lack of evidence of ET to date.
Plenty of space right here on Earth. The ~10^50 atoms making up our planet can be suitably rearranged to implement at least ~10^31 human-sized uploads, assuming a human can be simulated at 10^17 ops/sec and we can develop computers that perform one op/sec per 100 atoms (very plausible).
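The arithmetic, for anyone who wants to check it (the three constants are as stated above; the division is trivial):

```python
# Upload-capacity arithmetic using the figures stated in the comment.
ATOMS_ON_EARTH = 1e50            # ~10^50 atoms in the planet
ATOMS_PER_OP_PER_SEC = 100       # assumed: 100 atoms buy one op/sec
OPS_PER_SEC_PER_UPLOAD = 1e17    # assumed: ops/sec to simulate one human

total_ops_per_sec = ATOMS_ON_EARTH / ATOMS_PER_OP_PER_SEC   # 1e48 ops/sec
uploads = total_ops_per_sec / OPS_PER_SEC_PER_UPLOAD        # 1e31 uploads

print(f"{uploads:.0e} human-sized uploads")  # ~1e+31
```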
Space is a bunch of boring dust and ice. Uploads will experience millennia of excitement in mere seconds. The second you leave the Earth's surface, you're ejecting yourself from the greatest party imaginable into a void-filled waste. By the time you make it 1,000 km from the Earth's surface, your friends will have already had so much fun it would blow your mind.
That should be "_were_ thinking about the Fermi paradox." In a universe that could have given rise to life any time in the last ten billion years, the odds that we reached the short phase between SETI capability and singularity at the same time as the aliens are minuscule. The aliens either have figured out their Fermi paradox by now or no longer exist. The Fermi paradox is just this -- where are the postaliens?
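To put a number on "minuscule" (a crude model of my own, borrowing Smart's 300-year window from the earlier comment): if two civilizations' detectability windows land uniformly at random anywhere in a ten-billion-year span, the chance they overlap is on the order of one in a hundred million.

```python
# Crude overlap estimate: two short windows placed uniformly at random in a
# long span. For window << span, P(overlap) is roughly 2 * window / span.
WINDOW_YEARS = 300      # Smart's average detectable phase (see above)
SPAN_YEARS = 10e9       # "the last ten billion years"

p_overlap = 2 * WINDOW_YEARS / SPAN_YEARS
print(f"{p_overlap:.0e}")   # ~6e-08 -- minuscule indeed
```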
VR will be so impressive that there will be little impetus for space exploration. That's a positively intriguing idea!
It reminds me of the Simulation Argument.
Explanations of the form "they exist(ed) but they are giving no sign of themselves for reason X" all share the same fallacy: a presumption of homogeneity in psychology among intelligent species which seems very implausible, given what we know about the random factors that influence evolution. If there are a million intelligent species in the universe, is every single one of those million so engrossed in virtual reality as to have lost interest in the rest of the universe? Is there not even one of those million which would feel drawn to explore space despite the cultural richness of its home planet? (I suspect that among humans, a minority would still be interested in exploring space even if the majority were not, and would do so.) Not even with the prospect of discovering alien intelligence and learning all the interesting and different things it had done with virtual reality, if nothing else?
The same applies to arguments that all technological civilizations destroy themselves at a certain point. What, all of them? Not even one species is rational enough to get past the threats of nuclear terrorism, ecological damage, accidental creation of black holes, or whatever, without self-destructing?
Remember, all we need is one exception. If one alien race survived and expanded into space, even if all the others stayed home or blew themselves up, then the statistical likelihood is that that one is millions or billions of years old, and the signs of its presence and achievements should be easily visible anywhere in the universe (or more likely, as I argue, it would have pre-empted us and we wouldn't be here at all).
I think there just isn't anybody out there. It's the only explanation that makes sense.
Hi Infidel753: Your point is well taken, but there is another possibility if you buy into a type of cosmological/environmental determinism and its impact on the trajectory of all intelligent life. The Universe may constrain intelligence in such a way that a shared destiny is unavoidable. In this sense, it's almost law-like. In the end, the Universe will have its way with us.
Hi George -- Obviously I don't believe in any "type of cosmological/environmental determinism" which would constrain the psychology of every intelligent species in the universe to develop in the same way. I see no evidence for the existence of such a hidden hand, and some telling evidence to the contrary.
As a counter-example I would point to the four other species on Earth whose intelligence most nearly approaches the human level (chimpanzees, bonobos, gorillas, and orangutans). These species differ substantially among themselves in social organization, sexual behavior, likelihood of resorting to violence in a given situation, and many other ways. This even though they are all not only products of the same evolutionary process on the same planet, but even very closely related to each other (and to humans). If the psychology of unrelated alien intelligent species on different planets could differ even as much as that of the four great ape species do, that would be more than enough to establish my point. In fact, it could obviously vary much more, since the different intelligent species on different planets would not have a common ancestor.
For that matter, human cultures and even human individuals differ considerably among themselves in these ways.
So I see nothing that suggests there is any factor in the universe that forces all intelligent life to follow a similar developmental path.
I would go with the probability that alien intelligences are too ‘alien’ to establish meaningful contact with the recipient (us, for example).
This already implies two assumptions: one, that we as recipients can only ascertain the presence of such an intelligence if we can somehow relate to its existence; and two, that the life form must be capable of intelligence, defined by its capacity for abstraction. I would argue these two assumptions are justified.
I would suggest another set of fundamentals: for intelligence to evolve, the life form must be able to reach a critical complexity, and it must do so on the basis of a system that can form re-representations of its environment.
In order to do that, the functional elements necessary for the task must answer to the principle of autocatalytic closure: a sufficient number of them must exist, each with a sufficient degree of variance, to enable the emergence of complexity from the self-same building blocks. This feeds back into the fundamentals mentioned above.
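As a toy illustration of what autocatalytic closure means here (my own minimal sketch, loosely after Kauffman's autocatalytic sets; the molecule names are made up, and it ignores the stricter requirement that a catalyst be present before its reaction can fire):

```python
# Toy autocatalytic-closure check: a reaction set is "closed" if every
# molecule it needs is producible from a food set, and every reaction is
# catalyzed by a molecule the set itself can produce.

# (reactants, product, catalyst) -- all names hypothetical.
REACTIONS = [
    ({"a", "b"}, "ab", "abba"),
    ({"b", "a"}, "ba", "ab"),
    ({"ab", "ba"}, "abba", "ba"),
]
FOOD = {"a", "b"}

def is_closed(reactions, food):
    """Grow the producible set to a fixed point, then verify that every
    reaction's catalyst is itself producible."""
    producible = set(food)
    changed = True
    while changed:
        changed = False
        for reactants, product, _ in reactions:
            if reactants <= producible and product not in producible:
                producible.add(product)
                changed = True
    return all(cat in producible for _, _, cat in reactions)

print(is_closed(REACTIONS, FOOD))  # True: the set sustains itself
```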
The question is, can such a ‘biological’ system exist outside the framework of a carbon-based economy (‘economy’ in the widest possible sense of the word)?
What about linking virtual realities with alien virtual realities, and hence creating a virtual universe? Or am I thinking too Buddhist-like?
Wolfram makes the point in his "A New Kind of Science" that "if the fundamental theory of physics is known, then everything about what is possible in our universe can in principle be worked out purely by a computation." There may be little need to explore the physical universe when it is only a subset of the mathematical universe.
Additionally, aggressive replication and expansion is risky. If there is another post-Singularity society somewhere in the universe, a Prisoner's Dilemma exists regarding mutual expansion. The dilemma becomes more complicated if we imagine that there may be many post-Singularity societies, each one waiting to see if another has begun a program of aggressive replication that has the potential to threaten its existence. Any emerging post-Singularity society must confront the question of whether it is the first to emerge, or whether it is simply a new player on a carefully balanced playing field, and whether its actions could destabilize that balance.
The simplest solution to this dilemma is to cooperate by avoiding the appearance of aggressive replication. Just as the solution to the Prisoner's Dilemma can be worked out by individual minds independently, silent cooperation between post-Singularity societies could conceivably be worked out in the same way. It seems at least possible to me that one of the fundamental transformations of the Singularity involves the acceptance of a sort of cosmic maturity, one that abandons the unthinking genetic drive toward needless permanent reproduction and territorial dominance.
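A minimal sketch of that dilemma and its independently-worked-out resolution (the payoff values are mine, chosen only to give the standard Prisoner's Dilemma ordering; nothing here is from the comment itself):

```python
# Expansion dilemma between two post-Singularity societies.
# (row_move, col_move) -> (row_payoff, col_payoff); standard PD ordering.
PAYOFFS = {
    ("restrain", "restrain"): (3, 3),  # both stay quiet: stable coexistence
    ("restrain", "expand"):   (0, 5),  # the expander pre-empts the quiet one
    ("expand",   "restrain"): (5, 0),
    ("expand",   "expand"):   (1, 1),  # arms race: both worse off
}

def superrational_choice(payoffs):
    """If each player assumes the other reasons identically (Hofstadter's
    superrationality), only symmetric outcomes are live options; pick the
    move with the best symmetric payoff."""
    moves = {m for m, _ in payoffs}
    return max(moves, key=lambda m: payoffs[(m, m)][0])

print(superrational_choice(PAYOFFS))  # 'restrain' -- silent cooperation
```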
Hi Dan,
Thank you for bringing up game theory and the Prisoner's Dilemma. I think it's reasonable to infer that superintelligences will be super-rational, as per Douglas Hofstadter (see his Metamagical Themas), and will seek to maximize cooperative arrangements.
Hey y'all:
These sorts of discussions tend to polarize around two basic premises that stand in opposition to one another -- that posthuman intelligences are situated outside of cosmological determinations (and can do whatever they want), or that they are part of the cosmological framework and will necessarily be moulded by environmental pressures as they all migrate toward a common fitness peak.
I'm of the latter persuasion, particularly as our technologies become more powerful. The stronger the environmental stressors, the stronger the pull toward a fitness peak.