As we prepare for the emergence of the next generation of apocalyptic weapons, it must be acknowledged that the world's democracies are set to face their gravest challenge yet to their continued viability as political options.
The continuing presence and increasing accessibility of weapons of mass destruction (WMDs) are poised to put an abrupt end to politics as usual. Technologies that threaten our very existence will greatly upset current sensibilities about social control and civil liberties. As a consequence, those institutions that have worked for centuries to protect democratic and humanistic values will be put to the test – a test that may ultimately result in a significant weakening of democracy, if not its outright collapse.
The pending political situation is categorically different from the one that followed the Manhattan Project and the advent of nuclear weapons. While proliferation was a problem in the decades following The Bomb’s development, the chances of those weapons getting into the hands of a so-called ‘rogue state’ or non-state actors were slim to none (unless you consider the former Soviet Union, Cuba, China and Pakistan to be rogue states). Moreover, as we move forward we will have more than just nuclear weapons to worry about; future WMDs include bioweapons (such as deliberately engineered pathogens), dirty bombs, weaponized nanotechnology, robotics, misused artificial intelligence, and so on.
What makes these WMDs different is the growing ease of their acquisition and use by those who might actually deploy them. We live in an increasingly wired and globalized world where access to resources and information has never been easier. Compounding these problems is the rise and empowerment of non-traditional political forces, namely weak states, non-state actors and disgruntled individuals. In the past, entire armadas were required to inflict catastrophic damage; today, all that’s required is a small group of motivated individuals.
And the motivations for using such weapons are set to escalate. Political extremism begets political extremism; government clamp-downs (both internal and external) will likely elicit radical counter-reactions. There is also the potential for global-scale arms races as new technologies appear on the horizon (molecular-assembling nanotechnology being a likely candidate). Such arms races could not only increase international tensions but also instigate espionage and preemptive strikes.
Given these high-stakes situations, democratic institutions may not be given the chance to prevent catastrophes or deal with actual crises.
21st Century realities
Politics and conflict in the 20th Century were largely centered around differing opinions about the redistribution of wealth. It was a time of adjusting to the demands of the modern nation-state, large populations and mature industrial economies. Responses to these challenges included the totalitarian experiments, World War II -- and, for those nations that resisted the radical urge, the instantiation of Keynesian economics and the welfare state.
The coming decades will bear witness to similar sorts of political experimentation and restructuring, including a renewed devotion to extreme measures and radicalism. It is becoming increasingly clear that 21st Century politics will be focused around managing the impacts of disruptive technologies, addressing the threats posed by apocalyptic weapons and environmental degradation, and attending to global-scale catastrophes and crises as they occur.
This restructuring is already underway. We live in the post-9/11 world -- a world in which we have legitimate cause to be fearful of superterrorism and hyperterrorism. We will also have to reap what we have sown in regard to our environmental neglect. Consequently, our political leaders and institutions will increasingly be called upon to address the compounding problems of unchecked WMD proliferation, terrorism, civil unrest, pandemics, the environmental impacts of climate change (like super-storms, flooding, etc.), waves of refugees, devastating food shortages, and so on. It will become necessary for the world's militaries to anticipate these crises and adapt so that they can meet these demands.
More challenging still, however, will be avoiding outright human extinction.
Indeed, the term ‘existential risks’ is beginning to take root in the vernacular. During the presidential debates, for example, John McCain used the expression to illustrate the severity of the Iranian nuclear threat against Israel. While McCain was referring to the threat to Israel’s existence, the idea that humanity faces a genuine extinction risk has returned to the popular consciousness. Eventually these perceived risks will start to play a more prominent role in the political arena, both in terms of politicking and in the forging of policy itself.
So much for the End of History and the New World Order
When the Cold War ended it was generally thought that major wars had become obsolete and that a more peaceful and prosperous era had emerged. Some commentators, like the political scientist Francis Fukuyama, declared that Western liberal democracy and free-market capitalism had triumphed and that it would only be a matter of time before they spread to all regions of the planet. For Fukuyama, this equated to the ‘end of history.’
It was also around this time that George H. W. Bush proclaimed the advent of a New World Order. With the collapse of European Communism and the end of bi-polar geopolitics it was hoped that nuclear disarmament would soon follow and with it a global community largely free of conflict.
Today, however, we see that these hopes were idealistic and naïve. There is still plenty of strife and violence in the international system. In fact, the current multi-polar geopolitical arrangement has proven to be far more unstable than the previous orientation, particularly because it has allowed economic, political and cultural globalization to flourish, and along with it, the rise of asymmetrical warfare and escalating motivations for rogue nations and non-state actors to inflict terrible damage.
Despite the claims of Fukuyama and Bush, and despite our own collective sensibilities, we cannot take our democracies and civil liberties for granted. When appraising the condition of democracies we must realize that past successes and apparent trajectories are no guarantees of future gain. Indeed, democracy is still the exception around the world and not the rule.
Historically speaking, democracies are an abnormality. As recently as 1972, only 38% of the world’s population lived in countries that could be classified as free. Today, despite the end of the Cold War, this figure has only crept up to 46%. We may be victims of an ideological bias in which we’ve assumed far too much about democracy’s potential, including its correlation with progress and its ability to thrive in drastically different social environments.
Catastrophic and existential risks will put democratic institutions in danger given an unprecedented need for social control, surveillance and compliance. Liberal democracies will likely regress to de facto authoritarianism under the intense strain; tools that will allow democratic governments to do so include invoking emergency measures, eliminating dissent and protest, censorship, suspending elections and constitutions, and trampling on civil liberties (illegal arrests, surveillance, limiting mobility, etc).
Looking further ahead, extreme threats may even rekindle the totalitarian urge; this option will appeal to those leaders looking to exert absolute control over their citizens. What’s particularly frightening is that future technologies will allow for a more intensive and invasive totalitarianism than was ever thought possible in the 20th Century – including ubiquitous surveillance (and the monitoring of so-called ‘thought crimes’), absolute control over information, and the redesign of humanity itself, namely using genetics and cybernetics to create a more traceable and controllable citizenry. Consequently, as a political mode that utterly undermines humanistic values and the preservation of the autonomous individual, totalitarianism represents an existential risk unto itself.
Democracy an historical convenience?
It is possible, of course, that democracies will rise to the challenge and work to create a more resilient civilization while keeping it free. Potential solutions have already been proposed, such as strengthening transnational governance, invoking an accountable participatory panopticon, and the relinquishment of nuclear weapons. It is through this type of foresight that we can begin to plan and restructure our systems in such a way that our civil liberties and freedoms will remain intact. Democracies (and human civilization) have, after all, survived the first test of our apocalyptic potential.
That said, existential and catastrophic risks may reveal a dark path that will be all too easy for reactionary and fearful leaders to venture upon. Politicians may distrust seemingly radical and risky solutions to such serious risks. Instead, tried-and-true measures, where the state exerts an iron fist and wages war against its own citizens, may appear more reasonable to panicked politicians.
We may be entering into a period of sociopolitical disequilibrium that will instigate the diminishment of democratic institutions and values. Sadly, we may look back some day and reflect on how democracy was an historical convenience.
12 comments:
My thoughts on the matter:
http://theoriginalnebris.blogspot.com/2008/12/mass-democracy-has-failed.html
The only solution to the problem of superweapons getting into the hands of terrorists is for nation-states to keep their own capabilities far enough ahead that they can either prevent this from happening or neutralize the effects of the weapons.
A dozen malcontents with machine guns and a stockpile of ammunition could probably have taken over the Roman Empire. But by the time technology was advanced enough and distributed enough for a dozen malcontents with machine guns to be a realistic scenario, states had far more effective weapons of their own.
By the time a few nuts with a basement lab can do what a government bioweapons lab can do today, the government bioweapons lab should be able to do things which we can't imagine today -- including things like stamping out a new man-made plague in a matter of hours, whatever that might involve. (This is, of course, just one example. The same argument applies with other superweapons.)
The best way to protect ourselves is to push ahead with technological progress as fast as possible.
Empirical evidence shows that democracies are better at developing and exploiting new technology, and doing so faster, than authoritarian states are. Authoritarian states are too concerned with limiting the flow of information, too suspicious of minds that don't conform ideologically, too unpleasant for independent thinkers to live in (so that they tend to emigrate to democratic states when they can).
This means that (a) the democracies will probably continue to get further and further ahead of the non-democracies technologically, and (b) a democratic system is best able to keep ahead of dangerous nut groups.
A global authoritarian regime, or a global coalition of such regimes, might well try to suppress the development of dangerous technologies, thus guaranteeing that those government labs would not develop the necessary knowledge base to defend society against high-tech terrorists -- while the terrorists, who don't obey laws anyway, would probably still manage to keep creeping ahead with those technologies.
The most dangerous thing we can possibly do at this point is to panic and yield to a futile impulse to try to slow everything down.
By definition, an existential risk is a danger to more than just democracy. The first order of business is to list out exactly what those risks are along with all potential solutions. THEN, we can rate those solutions based on basic effectiveness and on the level to which they impact our preferred way of life.
Several people far more knowledgeable than I have suggested that the only way to avoid existential risks of planetary scope (those that we now face or soon will) is to colonize space. Here are some examples:
http://en.wikiquote.org/wiki/Carl_Sagan
http://www.msnbc.msn.com/id/13293390/
On the criteria of effectiveness and preference, space colonization seems ideal. Scattering to multiple, distant colonies would effectively prevent human extinction resulting from any planet-wide nuclear, biological, nano or meteor event. It would also allow us to maintain western societal paradigms that value individual freedom despite any potential increased risk that an unmonitored/uncontrolled individual may take drastic, destructive action.
My question is, why is this solution not being actively pursued? The cognitive biases described here explain it for the most part:
http://www.singinst.org/upload/cognitive-biases.pdf
I don't think we can expect the majority of humanity to accept space colonization as a top priority. But what can we (those of us who don't run the Bill and Melinda Gates Foundation or control congressional funding) do to advance the cause of space colonization, and to do so quickly?
For space colonization to work as an effective hedge against extinction due to a planet-wide catastrophe, we would need at least one colony capable of surviving indefinitely with no support from Earth whatsoever.
To develop a colony of the necessary size and capabilities would take at least decades (a longer time frame than some of these existential threats can be expected to materialize in), and a diversion of resources in the trillions of dollars.
Then, if Earth were destroyed, even if the colony survived, it would be a hollow victory indeed -- the bare survival of perhaps a few thousand people in a constrained environment, with almost all the human race dead and essentially all of its cultural achievements and capacity for future progress obliterated.
And, in fact, space colonies would not buy us a guarantee of even this degree of barren, meaningless racial survival. Even when the colonies were capable of living independently, they would presumably maintain some degree of contact with Earth. If some apocalypse-minded nut group were to engineer some sort of epidemic or other disaster capable of wiping out all humans on Earth, they might very well be able to design it to reach the space colonies as well. Similarly, any nation or group with a nuclear arsenal capable of wiping out Earthly humanity could certainly figure out how to deploy nuclear bombs against space colonies as well, if such were their goal.
Far better to invest those trillions of dollars and all that brainpower in developing measures to protect Earth itself against whatever dangers we anticipate, whether natural or created by malevolent humans armed with high technology.
For example, you mentioned meteors. Technology to detect and divert such bodies (even a large asteroid could be nudged far enough to prevent a collision if it were detected far enough in advance) could be developed faster and at much less cost than a space colony, and would protect all seven billion of us and all our works, not just a tiny handful.
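The leverage of early detection is easy to see with back-of-envelope arithmetic. The sketch below uses a simple linear approximation and purely illustrative figures (a 1 cm/s nudge, 20 years of lead time); real along-track orbital deflections accumulate even faster than this:

```python
# Back-of-envelope asteroid deflection: a tiny velocity change applied
# far enough in advance produces a large miss distance.
# (Linear approximation; illustrative numbers only.)

EARTH_RADIUS_M = 6.371e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600

def miss_distance_m(delta_v_m_per_s: float, lead_time_years: float) -> float:
    """Displacement accumulated from a small, constant velocity offset."""
    return delta_v_m_per_s * lead_time_years * SECONDS_PER_YEAR

# A 1 cm/s nudge applied 20 years before a predicted impact:
shift = miss_distance_m(0.01, 20)
print(f"shift = {shift / 1000:.0f} km ({shift / EARTH_RADIUS_M:.1f} Earth radii)")
# prints "shift = 6312 km (1.0 Earth radii)"
```

In other words, a velocity change a hand could barely feel, applied two decades out, moves the impact point by roughly one Earth radius -- which is why detection lead time matters so much more than raw deflection power.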
Creating defenses for the world against the dangers of man-made epidemics or rogue nanotechnology would be a tremendous technological challenge. We will be in a better position to meet that challenge if we do not divert trillions of dollars and many of our best minds to the illusory panacea of space colonization.
For space colonization to work as an effective hedge against extinction due to a planet-wide catastrophe, we would need at least one colony capable of surviving indefinitely with no support from Earth whatsoever.
Agreed.
To develop a colony of the necessary size and capabilities would take at least decades (a longer time frame than some of these existential threats can be expected to materialize in), and a diversion of resources in the trillions of dollars.
Agreed.
Then, if Earth were destroyed, even if the colony survived, it would be a hollow victory indeed -- the bare survival of perhaps a few thousand people in a constrained environment, with almost all the human race dead and essentially all of its cultural achievements and capacity for future progress obliterated.
Disagree. The survival of the human species is a victory when compared to extinction so long as the remaining society:
A) Has sufficient numbers to ensure genetic diversity and continued re-population through standard sexual breeding or some future reproductive technology
B) Can sustain itself indefinitely
C) Can expand economically, technologically, etc., so as to re-capture human achievements lost to a particularly catastrophic event
And, in fact, space colonies would not buy us a guarantee of even this degree of barren, meaningless racial survival. Even when the colonies were capable of living independently, they would presumably maintain some degree of contact with Earth. If some apocalypse-minded nut group were to engineer some sort of epidemic or other disaster capable of wiping out all humans on Earth, they might very well be able to design it to reach the space colonies as well. Similarly, any nation or group with a nuclear arsenal capable of wiping out Earthly humanity could certainly figure out how to deploy nuclear bombs against space colonies as well, if such were their goal.
Partial agreement. Certainly, a known colony at a fixed location presents a potential target for a sufficiently funded, equipped and motivated group.
But, do they require contact with Earth? What if we deploy colonies that do not contact Earth and proceed to locations, some of which are known, some of which are unknown? These could be self-sustaining space stations and planetary/lunar colonies. Yes this is even more difficult and expensive, but that is a separate argument. I'm just talking about effectiveness here.
Far better to invest those trillions of dollars and all that brainpower in developing measures to protect Earth itself against whatever dangers we anticipate, whether natural or created by malevolent humans armed with high technology.
For example, you mentioned meteors. Technology to detect and divert such bodies (even a large asteroid could be nudged far enough to prevent a collision if it were detected far enough in advance) could be developed faster and at much less cost than a space colony, and would protect all seven billion of us and all our works, not just a tiny handful.
Creating defenses for the world against the dangers of man-made epidemics or rogue nanotechnology would be a tremendous technological challenge. We will be in a better position to meet that challenge if we do not divert trillions of dollars and many of our best minds to the illusory panacea of space colonization.
More questions
This is the trillion-dollar question, isn't it? Is space colonization worth the money? Is it worth the effort? If you believe that we can successfully prevent planetary-scale extinction events through sufficient effort, then space colonization certainly isn't worth the time and money - at least if done for the sole purpose of survival. If there is a sufficient chance that we cannot prevent such events, then, economics aside, space colonization is our best hope for species survival.
That being said, I will agree that space colonization IS expensive, and that it undoubtedly WILL draw resources from other defensive measures. My questions for you are:
- Why do you believe that we will be able to prevent a planetary-scale disaster?
- What level of planetary organization, cooperation and effort would be required to ensure this? Has anyone studied this?
And my original question could be generalized, instead of being specific to space colonization. Is there any organization dedicated to assuring human survival, regardless of the specific mechanism? I would really like to know the answer to this question. As a non-expert in this area, I am less concerned with evangelizing one specific solution (space colonization) than I am with advocating the idea that human species survival must be taken seriously as a problem. It must be directly recognized and addressed by an organized effort.
*I know we're a little off-topic here for the blog post. I apologize.
Chris: The survival of the human species is a victory when compared to extinction [etc.]
To some extent. But my point was that the survival of a tiny handful of humans would be a far less significant victory than the survival of the entire planet, something with which you presumably agree.
But, do they require contact with Earth? What if we deploy colonies that do not contact Earth and proceed to locations, some of which are known, some of which are unknown? These could be self-sustaining space stations and planetary/lunar colonies. Yes this is even more difficult and expensive, but that is a separate argument. I'm just talking about effectiveness here.
But difficulty and expense are an integral part of evaluating effectiveness. Every dollar spent on one project is a dollar we can't spend on something else that might have an equal or greater probability of achieving the same goal.
A space colony totally isolated from Earth (which, remember, will probably continue to exist indefinitely) would be even less appealing to most normal humans than one which at least maintained some contact, and is thus even less likely to be built in practice.
Anyway, it's a tangential point. My argument is that it would be a mistake to put significant resources into space colonies, even if they could be guaranteed to survive a planetary catastrophe.
Is space colonization worth the money? Is it worth the effort? If you believe that we can successfully prevent planetary-scale extinction events through sufficient effort then space colonization certainly isn't worth the time/money - at least if done for the sole purpose of survival. If there is a sufficient chance that we cannot prevent such events, economics aside space colonization is our best hope for species survival.
There is no absolute security -- never has been and never will be. Neither space colonies nor a planetary defense system can reduce the probability of extinction to absolutely 0%. My argument can be boiled down to two points:
(1) Protecting the whole Earth from catastrophe is a far more desirable goal than building space colonies which would survive after such a catastrophe, because it preserves far more -- all seven billion humans presently alive, all our cultural achievements, and our capacity for future progress, as opposed to a tiny handful.
(2) In a world of finite resources, every dollar spent on space colonies is not available for planetary defense (and vice versa). Thus we need to allocate resources based on which of the two offers hope of achieving a more desirable goal (see point 1) and on the relative probability of each strategy actually being able to achieve the goal toward which it is directed.
The probability that planetary defenses could keep Earth safe from the various potential threats (natural and artificial) is hard to assess at this point since as yet we know little of the exact nature of some of those threats, but I believe that if we act prudently, the probability of success is very high (see my first comment). Whether it is higher than the probability that a space colony could survive such a catastrophe is difficult to say, but as I said, the goal of preserving the whole Earth is far more desirable than the goal of preserving a tiny handful, and I think this should be the decisive factor in allocating the resources.
There's also the question of which solution is more likely to be put into action. Taxpayers and government officials would be much more likely to allocate resources to planetary defenses knowing that they personally would be among those protected, than they would to spend those resources on space colonies which would not make them personally any safer.
Why do you believe that we will be able to prevent a planetary-scale disaster?
Largely answered in the first comment. Historically the capabilities of large, organized societies have stayed ahead of the capabilities of tiny nihilistic groups which have sought to disrupt or destroy them. We can ensure that this continues to be the case. As for natural disasters, for centuries technological power has been tipping the balance in our favor against the forces (such as famine and epidemics) which used to kill humans in great numbers. I expect that within 20 years our defenses against microorganisms will become so effective that infectious disease will no longer be a problem, for example.
Addressing every possible disaster would take too much space for a blog comment, but you see the general principle.
In any case, I think the probability of most of these disasters is really low. The statistical likelihood of an extinction-level meteor hitting Earth in the next few centuries, or of some terrorist group setting out to design and release an artificial epidemic capable of killing millions and actually succeeding without any government finding out about it and stopping them, or whatever, seems pretty low. We do need to be taking intelligent precautions, but most likely none of these things will happen.
What level of planetary organization, cooperation and effort would be required to ensure this? Has anyone studied this?
Global cooperation is probably only an issue where intelligence-gathering is concerned (keeping tabs on those terrorist groups). Things like asteroid monitoring and developing anti-bioweapons technology are probably best handled by individual nation-states, only the most advanced of which would have much to contribute anyway.
Is there any organization dedicated to assuring human survival, regardless of the specific mechanism?
I know of no such organization, but establishing one wouldn't be the most effective strategy anyway. The best way to get something actually done about any of these issues is to act to influence the decisions of those who control the necessary resources: the governments of the major nation-states.
Along those lines, I think there is already a NASA project to detect asteroids in potentially dangerous orbits, and Ray Kurzweil has testified before Congress about possible dangers of rogue nanotechnology and how we might best protect the world against it.
Remember, the increase in human capabilities after the Technological Singularity will change all these considerations in ways we can't anticipate now. So our present planning need only (and can only) cover a time frame of a few decades.
as far as existential risk goes, the Wiki entry lists organizations that are devoted to addressing these problems
see: http://en.wikipedia.org/wiki/Existential_risk
also, for space habitats, George Zebrowski's science fiction work, Macrolife, is excellent because of its visionary utopian point of view; a fountain of ideas
To Chris: I think our exchange here could be adapted into an interesting posting on my blog. Do you have any objection to my including what you wrote for that purpose? infidel753@live.com
(To George Dvorsky: Sorry to use your comments for this note, but since Chris's name doesn't hyperlink to a profile with an e-mail address, this was the only way I could think of to ask him.)
@Infidel753 No worries, all good.
Infidel753 - you may feel free to re-post any of my comments at another location. Thank you for asking.
One more set of short points.
You said: Historically the capabilities of large, organized societies have stayed ahead of the capabilities of tiny nihilistic groups which have sought to disrupt or destroy them.
But:
- Historically the rate of change in technology was much slower. It was easier for larger, organized groups to stay in general control. Now new technologies are evolving before there is legislation or technical control mechanisms in place. You can see this in the IT security arena.
- Historically WMD were not available to small groups. Even now, it is not feasible for nuclear raw materials to become available in sufficient amounts to small groups for them to inflict planetary-scale damage. I am unfamiliar with nanotech, but the ability to construct biological organisms promises to be available to smaller labs, and viruses replicate where nuclear weapons do not.
- Danger does not exist solely from small groups, but also from nation-states that either make a mistake, or come under the control of dangerous individuals or groups.
Also, I agree that the risk of a global catastrophe is much smaller than that of non-existential risks. But the enormous potential impact of an existential risk gives even that small chance of realization great weight.
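That weighting can be made concrete with an expected-impact comparison. The probabilities and casualty figures below are entirely hypothetical, chosen only to show how a tiny chance of a total loss can outweigh a much likelier bounded one:

```python
# Expected-impact comparison (illustrative figures only): a very small
# probability of an existential event can still dominate a far likelier
# but bounded catastrophe once impact is factored in.

def expected_loss(probability: float, lives_lost: float) -> float:
    """Probability-weighted loss, the standard expected-value measure."""
    return probability * lives_lost

# Hypothetical numbers, purely for illustration:
regional_war = expected_loss(0.05, 1e6)    # 5% chance, 1 million lives
existential = expected_loss(0.001, 7e9)    # 0.1% chance, everyone

print(regional_war)  # 50000.0
print(existential)   # 7000000.0
```

On these made-up numbers the existential scenario, despite being fifty times less likely, carries an expected loss over a hundred times larger -- and that is before counting the loss of all future generations, which expected-value accounting of extinction usually includes.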
-chris
dharmicmel - thanks for the link. I've encountered most of those organizations online. I would add this to the list:
http://www.fhi.ox.ac.uk/
And I went to this organization's conference over this last summer:
http://www.wfs.org/
Unfortunately most forward thinking organizations seem to be domain specific (AI, Nano, etc.). And the more general organizations such as Lifeboat don't seem very well funded. As Infidel753 has indicated, we're talking about a rather expensive problem.
Chris: Infidel753 - you may feel free to re-post any of my comments at another location.
Thanks. I should have it up in a few days.
Historically the rate of change in technology was much slower. It was easier for larger, organized groups to stay in general control. Now new technologies are evolving before there is legislation or technical control mechanisms in place.
I've never thought that legislation or mechanisms of control would be of much use in dealing with this problem. Societies need to keep their defensive technology ahead of the nihilists' destructive technology. I still contend that the societies, due to their greater numbers and resources, have the advantage over the nihilists.
I'm not talking only about government labs. A lot of good software, including defensive software, has come from independent private individuals. This is able to happen because computer technology has been free to move ahead full speed, not subject to the kind of efforts at totalitarian control that George worried about in his posting above. If the government had ever tried to put all software development under centralized control, most likely we would be far less well protected against threats like viruses than we are.
You can see this in the IT security arena.
But the defensive technologies have stayed ahead. We've had the internet for a while now and viruses and hackers are still nuisances and localized dangers, not existential threats.
Historically WMD were not available to small groups.
See my earlier comments. There are still steps we can take to minimize the danger. Not to completely eliminate it, but to keep the probability of a successful attack low.
I am unfamiliar with nanotech, but the ability to construct biological organisms promises to be available to smaller labs, and viruses replicate where nuclear weapons do not.
As I noted above, by the time a few nuts with a basement lab can do what a government bioweapons lab can do today, the government lab should be able to do things which we can't imagine today -- including things like stamping out a new man-made plague in a matter of hours, whatever that might involve. We just have to resist the impulse to try to slow down progress.
Danger does not exist solely from small groups, but also from nation-states that either make a mistake, or come under the control of dangerous individuals or groups.
Examples of this already exist (Iran, Pakistan, North Korea). But we have the means to contain or eradicate such threats. Only the political will is missing, and that will change after the first nuclear terrorist attack. Also, there is one non-Western society with similar pre-emptive capabilities and much less psychological inhibition about using them if it feels seriously threatened (Russia).
In any case, terrorist or rogue-state nuclear attacks might destroy some cities or a small country, but they are not a planetary-scale existential threat.
Existential threats are not new. There have been cases in the past of large, organized societies destroyed by barbarian attack (Harappa, Roman Empire), or by epidemics introduced by human enemies (Aztec & Inca Empires), or by natural disaster (Minoan Greece). Current and near-future threats are more sophisticated, but so are the large societies, to an even greater degree. Overall, the probability of a civilization-killing disaster strikes me as being much lower than it was in pre-modern times.
Societies need to keep their defensive technology ahead of the nihilists' destructive technology. I still contend that the societies, due to their greater numbers and resources, have the advantage over the nihilists.
....
As I noted above, by the time a few nuts with a basement lab can do what a government bioweapons lab can do today, the government lab should be able to do things which we can't imagine today -- including things like stamping out a new man-made plague in a matter of hours, whatever that might involve
I guess what I'm trying to articulate is that I don't think we can rely on government/societal capabilities being that far ahead of those who want to do damage.
As an example, in the IT security world, our society is NOT ahead of our adversaries (Well perhaps offensively, but not defensively). I can say this from professional experience as I have been involved in attempts to defend government and contractor networks under attack, when the attackers have been discovered well after they gained entry to our systems.
While I can’t relate my experiences explicitly, you can read links below, and:
- Imagine the problem on a much broader scale, because that is how this situation currently applies.
- Imagine that the adversary’s access to our networks is much more comprehensive than suggested in the articles, because that is how it actually is.
- Then consider what the adversary would have been able to do if their goals had been destruction rather than data theft.
http://www.gcn.com/print/25_25/41716-1.html
http://www.businessweek.com/magazine/content/08_16/b4080032218430.htm
This is obviously NOT an existential threat. I am providing this as an example of our society being entirely unprepared to deal with defense against a threat tied to a new technology. We have a new technology that became entrenched within our society before we knew how to use it safely, and we are suffering consequences as a result.
It is having to deal with this situation that has ruined my belief that our society is able to handle new technology safely. If we do not prepare for risks that have the relatively low impact of those found in the links above, why should we have any faith that our society will successfully prepare for risks that require even more foresight and planning for proper defense?
I vote ‘no confidence’.