Guest blogger David Brin is next scheduled to blog about the Singularity -- a future nexus point when the capacities of an artificial intelligence (or a radically augmented human) exceed those of ordinary humans. It is called the “Singularity” because it is impossible to predict what will follow. A Singularity could usher in an era of great wisdom, prosperity and happiness (not to mention the posthuman era), or it could result in the end of the human species.
David Brin believes that we are likely en route to a Singularity, but that its exact nature cannot be known. He also doesn't believe that such an event is inevitable. In his article, “Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future,” Brin posits four different possibilities for human civilization later this century:
- Self-destruction
- Positive Singularity
- Negative Singularity
- Retreat (i.e. neo-Luddism)
Brin, in a personal email to me, recently wrote, “[My] singularity friends think I am an awful grouch, while my conservative friends think I am a godmaker freak.” Indeed, Brin has expressed skepticism about the idea of a meta-mind or a Teilhard de Chardin apotheosis, while on the other hand he hasn’t shied away from speculations about transcendent artificial intelligences who shuffle through the Singularity without a care for their human benefactors.
Stay tuned for David's elaboration on these and other points.
4 comments:
Surely computer technology will evolve and we will do great things in the future.
It's just that Kurzweil's predictions can be criticized quite effectively. He claims technology is the next step in evolution, and that it is the successor of biology. But evolution is certainly not an exponential process; it is a gradual one. Technology also has nothing to do with evolutionary processes. How can he claim this?
Yet that claim forms the basis for his long-term Singularity predictions and their exponential growth path.
Also, humanity has walked down many dead ends in science. Kurzweil assumes we already understand all there is to physics and know all the basic concepts there are; we just need to dig into them a bit further. So thought Newton, so thought Aristotle, and so thought the Church in the past. It later turned out they were wrong in many respects, and their models could only be taken so far.
So maybe it is much more likely that there is still some fundamental science yet to be discovered before we can really re-engineer everything around us. In my opinion, this idea is just as plausible as the idea that there is nothing fundamental left to figure out before we can truly understand our inner selves.
Pessimism and optimism have nothing to do with the singularity. Pessimists will be afraid of it. Optimists will think it's the best thing yet.
How to miss the point about ARTIFICIAL INTELLIGENCE:
First of all, there is NO AI. For any kind of AI to exist, a simple prerequisite must be fulfilled, and its absence explains the lack of any serious progress despite the serious progress in hardware: there is no formalization of intelligence, no algorithmic description of it. All current definitions are as unusable as any definition of the Sun would have been in the days when intra-atomic interactions were unknown. Back then, talk of simulating the Sun amounted to creating a source of light, with no connection whatsoever to the nuclear processes within the Sun.
So, until then... let's talk singularities!
danvasii@yahoo.com
Thanks, Jorrold!