A popular notion amongst futurists, technoprogressives and transhumanists alike is the suggestion that we can proactively engineer the kind of future we want to live in. I myself have been seduced by this idea; back during the Betterhumans days our mission was to "connect people to the future so that they can create it." Given the seemingly dystopian and near-apocalyptic trajectory humanity appears to be on, this was and still is a powerfully intuitive and empowering concept.
Trouble is, we're mostly deluded about this.
Now, I don't deny that we should collectively work to build a desirable future that is inherently liveable and where our values have been preserved; my progressivism is unshatterable. What I am concerned about, however, is the degree to which we can actually control our destiny. While I am not an outright technological determinist, I am pretty damn close. As our technologies increase in power and sophistication, and as unanticipated convergent effects emerge from their presence, we will increasingly find ourselves having to deal with the consequences. It is in addressing these technological side-effects that our desired trajectories will be re-routed by pragmatism and survivalism.
In other words, adaptationism will supersede idealized notions of where we can and should develop as an advanced species.
For example, consider remedial ecology and geoengineering. We have not voluntarily chosen to explore these particular areas of inquiry; these are technologies of adaptationist necessity. Because we have buggered up the planet, and because we may have no other choice, humanity finds itself compelled to pour its time and resources into areas it wouldn't otherwise have cared about. Breaking down toxic wastes and removing carbon from the atmosphere are not things anybody would have desired a century ago; our present is not the future that our ancestors could have anticipated or created.
Technological adaptationism also extends to ramifications in the social and political arenas. The entire back half of the 20th century was marred by the Cold War, a bipolar geopolitical arrangement that emerged due to the presence of ideologically disparate hegemons in possession of apocalyptic weapons. We have no reason to believe that a similar arrangement couldn't happen again, especially when considering the potential for ongoing nuclear proliferation and the development of novel apocalyptic-scale technologies such as nanoweapons and robotic armadas. Even worse, given the possibility that a small team (or even a single individual) may eventually be capable of hijacking the entire planet, our civil liberties as we know them may cease to exist altogether in favour of mass surveillance and quasi-totalitarian police states.
Again, this isn't anything that any progressive futurist wants. But these are the unintended consequences of technological advancement. We are slaves to technological adaptationism; to resist it would be to risk our very existence. And in order to avoid our extinction (or something similarly catastrophic), we may be compelled to alter our social structures, values, technological areas of inquiry and even ourselves in order to adapt.
Whether or not such a future is desirable by today's standards is an open question.
6 comments:
George:
I'm glad to see that you're blogging again. You have a great blog.
We have always been technological adaptationists. Making stone tools was a kind of technological adaptation to a changing environment.
What happens with technology is a bit like how cities develop. Cities are very rarely planned explicitly, and instead evolve as new architectures and modes of transport come along, with new pieces bolted on or old ones demolished.
The ability of humans to adapt to their environment (where the environment also includes technology) is quite remarkable, and we have faced and overcome problems in the past which could have led to dark futures or extinction scenarios. So I'm optimistic that the future probably isn't going to be one of relentless doom and gloom.
George, congratulations on breaking free of the idealism that seduces so many futurists.
I hope you also come to see that letting go of unjustified fantasies about an increasingly uncertain future in no way entails powerlessness or warrants gloominess about our prospects.
Rather, such improved clarity and coherence facilitates effective decision-making in the here and now for the promotion of our always only present but evolving values.
Thanks, Jef -- but I didn't mean for this to sound like it was a sudden epiphany for me. I've held this perspective for many years, but thought I'd clarify it here.
Along these lines, check out "Future Risks and the Challenge to Democracy"
http://www.sentientdevelopments.com/2008/12/future-risks-and-challenge-to-democracy.html
You are making a good point. We can change our future only if we can control future technology.
Given the future concentration of technological power within the confines of a few groups, chances are that those few groups will decide the future for the rest of us.
That can be good or bad. If you happen to be part of such a group, then everything will be fine.
Indeed, many people waiting for the Singularity tend to forget that talking about Artificial Intelligence and controlling it are two separate issues. Some of the people from the SIAI list seem to have forgotten that.
Interesting. Assuming the large-scale items in our technological future are more or less predetermined at this point, doesn't that necessarily mean that the large-scale aspects of our adaptationism are also determined? I.e., we really, really don't have any "choices"?
Oh, maybe I missed your point. I guess the idea is that it's wrong to think that the future is completely wide open for us to create. There are constraints on what our choices will be. Our choices will therefore be a reaction (adaptation) to those constraints. Hence, our thinking won't be so much about creating a wide-open future as about how to manage the effects of our technology.
I reckon those are both parts of a complex set of problems that require simultaneous solutions: we'll be thinking about how to improve our technology, AND how to use our technology, AND how to manage the externalities that arise from developing and using our technology.