With everyone’s attention focused on a pending technological Singularity, few give consideration to the immediate period of time leading up to it. If things continue apace, this could prove to be the most dangerous time in human history. It will be the era of weak and narrow artificial intelligence, a highly problematic combo that could wreak tremendous havoc on human civilization. Here’s why we’ll need to be ready.
Unlike the Technological Singularity, which is defined as the advent of recursively self-improving, greater-than-human artificial intelligence (artificial superintelligence), or the development of strong AI (human-like artificial general intelligence), this particular concern has to do with the rise of weak AI: expert systems that match or exceed human intelligence in a narrowly defined area, but not in broader ones. As a consequence, many of these systems will operate outside of human comprehension and control.
But don't let the name fool you; there's nothing weak about the kind of damage it could do.
Before the Singularity
The Singularity is often misunderstood as AI that’s simply smarter than humans, or the rise of human-like consciousness in a machine. Neither is the case. To a non-trivial degree, much of our AI already exceeds human capacities; it’s just not sophisticated or robust enough to do any significant damage to our infrastructure. The real trouble, in the case of the Singularity, will come when a highly generalized AI starts to iteratively improve upon itself.
And indeed, when the Singularity hits, it will be, in the words of mathematician I. J. Good, an "intelligence explosion," and it will hit us like a bomb. Human control will forever be relegated to the sidelines, in whatever form that might take.
A pre-Singularity AI disaster or catastrophe, on the other hand, will be containable. But just barely. It’ll likely arise from an expert system or super-sophisticated algorithm run amok. And the worry is not so much its power — which is definitely a significant part of the equation — but the speed at which it will inflict the damage. By the time we have a grasp on what’s going on, something terrible may have happened.
Narrow AI could knock out our electric grid, damage nuclear power plants, cause a global-scale economic collapse, misdirect autonomous vehicles and robots, take control of a factory or military installation, or unleash some kind of propagating blight that will be difficult to get rid of (whether in the digital realm or the real world). The possibilities are frighteningly endless.
Our infrastructure is becoming increasingly digital and interconnected — and by consequence, increasingly vulnerable. In a few decades it will be as brittle as glass, with the bulk of human activity dependent upon it.
And it is indeed a possibility. The signs are all there.
Accidents Will Happen
Back in 1988, a Cornell University student named Robert Morris scripted a software program that could measure the size of the Internet. To make it work, he equipped it with a few clever tricks to help it along its way, including an ability to exploit known vulnerabilities in popular utility programs running on UNIX. This allowed the program to break into those machines and copy itself, thus infecting those systems.
On November 2, 1988, Morris released his program onto the network. It quickly spread to thousands of computers, disrupting normal activities and Internet connectivity for days. Estimates of the cleanup cost ranged from $100,000 to as much as $10 million. Dubbed the “Morris Worm,” it’s considered the first worm to propagate across the Internet — one that prompted DARPA to fund the establishment of the CERT/CC at Carnegie Mellon University to anticipate and respond to this new kind of threat.
As for Morris, he was convicted under the Computer Fraud and Abuse Act and sentenced to three years of probation, 400 hours of community service, and a $10,050 fine.
But the takeaway from the incident was clear: despite our best intentions, accidents will happen. And as we continue to develop and push our technologies forward, there’s always the chance that they will operate outside our expectations — and even our control.
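The worm’s own design offers the classic illustration. By most accounts, Morris intended it to be harmless, and it even checked whether a machine was already infected; but to defeat administrators who might fake an "already infected" reply, it copied itself anyway roughly one time in seven. The toy simulation below (purely illustrative; the network size, probe rate, and tick length are made-up parameters) shows how even that modest reinfection rate piles duplicate copies onto every host once the worm saturates a network, which is essentially how it ground thousands of machines to a halt.

```python
import random

# Toy simulation of the Morris Worm's reinfection quirk (illustrative only;
# the network size, probe rate, and tick length are invented parameters).
NUM_HOSTS = 1000
REINFECT_PROB = 1.0 / 7.0   # chance of installing a duplicate copy anyway
TICKS = 25

copies = [0] * NUM_HOSTS    # number of worm copies running on each host
copies[0] = 1               # patient zero

for tick in range(TICKS):
    active = sum(copies)
    for _ in range(active):                  # every running copy probes one host
        target = random.randrange(NUM_HOSTS)
        if copies[target] == 0:
            copies[target] = 1               # fresh infection
        elif random.random() < REINFECT_PROB:
            copies[target] += 1              # duplicate copy: more load, no benefit
    infected = sum(1 for c in copies if c > 0)
    print(f"tick {tick:2d}: {infected:4d}/{NUM_HOSTS} hosts infected, "
          f"avg copies per infected host = {sum(copies) / max(infected, 1):.1f}")
```

The point isn’t the specific numbers; it’s that a single well-intentioned design choice, made under time pressure, turned a measurement tool into a denial-of-service event.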
Down to the Millisecond
Indeed, unintended consequences are one thing; containability is quite another. Our technologies increasingly operate at speeds beyond our real-time capacities. The best example of this comes from the world of high-frequency stock trading (HFT).
In HFT, securities are traded on a rapid-fire basis by powerful computers running algorithms. A single investment position can last for a few minutes, or a few milliseconds, and there can be as many as 500 transactions made in a single second. This type of computer trading can result in thousands upon thousands of transactions a day, each and every one of them decided by super-sophisticated scripts. The human traders involved (such as they are) just sit back and watch, incredulous at the machinations happening at breakneck speed.
“Back in the day, I used to be able to explain to a client how their trade was executed. Technology has made the trade process so convoluted and complex that I can’t do that any more,” noted PNC Wealth Management's Jim Dunigan in a Markets Media article.
Clearly, the ability to assess market conditions and react quickly is a valuable asset to have. According to a 2009 study, HFT firms accounted for 60 to 73% of all U.S. equity trading volume; as of last year that figure had dropped to about 50%, though it's still considered a highly profitable form of trading.
To date, the most significant single incident involving HFT came at 2:45 p.m. on May 6, 2010. For a period of about five minutes, the Dow Jones Industrial Average plummeted over 1,000 points (approximately 9%); for a few minutes, $1 trillion in market value vanished. About 600 points were recovered 20 minutes later. Now known as the 2010 Flash Crash, it produced the second-largest intraday point swing in the Dow's history and its biggest intraday point decline up to that time.
The incident prompted a joint investigation by the U.S. Securities and Exchange Commission (SEC) and the Commodity Futures Trading Commission (CFTC), led by the SEC's Gregg E. Berman. The investigators considered a number of theories, some of them quite complex, but their primary concern was the impact of HFT. They determined that the collective efforts of the algorithms exacerbated price declines: by selling aggressively, the trader-bots worked to eliminate their positions and withdraw from the market in the face of uncertainty.
The following year, an independent study concluded that technology played an important role, but that it wasn’t the entire story. Looking at the Flash Crash in detail, the authors argued that it was “the result of the new dynamics at play in the current market structure,” and the role played by “order toxicity.” At the same time, however, they noted that HFT traders exhibited trading patterns inconsistent with the traditional definition of market making, and that they were “aggressively [trading] in the direction of price changes.”
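To get a feel for how that kind of positive feedback can unfold in seconds, consider a deliberately crude sketch (not a model of any real trading system; every parameter below is invented): a group of momentum-following bots each dump a fraction of their holdings whenever the price is falling, and the dumping itself pushes the price down further.

```python
# Deliberately crude sketch of a selling feedback loop among automated traders.
# All parameters are invented for illustration and have no empirical basis.
NUM_BOTS = 50
SELL_FRACTION = 0.20       # share of holdings dumped when the price is falling
PRICE_IMPACT = 0.00002     # price drop per share sold (made-up constant)

price = 100.0
last_price = price
holdings = [10_000] * NUM_BOTS   # shares held by each bot

for ms in range(300):            # simulate 300 "milliseconds"
    sold = 0
    if price < last_price:       # every bot reacts to the same falling tape
        for i in range(NUM_BOTS):
            chunk = int(holdings[i] * SELL_FRACTION)
            holdings[i] -= chunk
            sold += chunk
    last_price = price
    price -= sold * PRICE_IMPACT
    if ms == 0:
        price -= 0.05            # a single small shock starts the cascade
    if ms % 50 == 0:
        print(f"t={ms:3d}ms  price={price:7.2f}  shares dumped={sold:7d}")
```

In this toy setup, a five-cent wobble turns into a roughly ten percent collapse within a few hundred simulated milliseconds, simply because every algorithm responds to the same signal in the same way at the same speed. Real markets are vastly more complicated, but that feedback dynamic is the one these post-mortems were wrestling with.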
HFT is also playing an increasing role in currencies and commodities, making up about 28% of the total volume in futures markets. Not surprisingly, this area has become vulnerable to mini crashes. Following incidents involving the trading of cocoa and sugar, the Wall Street Journal highlighted the growing concerns:
"The electronic platform is too fast; it doesn't slow things down" like humans would, said Nick Gentile, a former cocoa floor trader. "It's very frustrating" to go through these flash crashes, he said.....The same is happening in the sugar market, provoking outrage within the industry. In a February letter to ICE, the World Sugar Committee, which represents large sugar users and producers, called algorithmic and high-speed traders "parasitic."
Just how culpable HFT is in the phenomenon of flash crashes is an open question, but it’s clear that the trading environment is changing rapidly. Market analysts now speak in terms of market “microstructure,” trading “circuit breakers,” and the “VPIN flow-toxicity metric.” It’s also difficult to predict how serious future flash crashes could become. If sufficient safeguards aren’t put in place to halt these events when they happen, and if HFT is scaled up in terms of market breadth, scope, and speed, it’s not unreasonable to imagine events in which massive and irrecoverable losses occur. Indeed, some analysts are already predicting systems that can support 100,000 transactions per second.
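A trading "circuit breaker," in its simplest form, is just a rule that halts trading when prices move too far, too fast. Here is a bare-bones sketch of the idea; the 5 percent threshold and five-minute window are invented for the example, not any exchange's actual rule.

```python
from collections import deque

# Bare-bones illustration of a price circuit breaker. The 5% / 5-minute
# threshold is invented for this example, not any exchange's actual rule.
class CircuitBreaker:
    def __init__(self, window_seconds=300, max_drop=0.05):
        self.window = window_seconds
        self.max_drop = max_drop
        self.history = deque()          # (timestamp, price) pairs

    def check(self, timestamp, price):
        """Return True if trading should halt at this price update."""
        self.history.append((timestamp, price))
        # discard quotes older than the rolling window
        while self.history and timestamp - self.history[0][0] > self.window:
            self.history.popleft()
        high = max(p for _, p in self.history)
        return (high - price) / high >= self.max_drop

breaker = CircuitBreaker()
print(breaker.check(0, 100.0))    # False: no drop yet
print(breaker.check(60, 98.0))    # False: down 2% in a minute
print(breaker.check(120, 94.0))   # True: down 6% within the 5-minute window
```

The hard part in practice isn’t the check itself; it’s choosing thresholds that stop a cascade without tripping on ordinary volatility, and applying them across thousands of instruments at machine speed.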
More to the point, HFT and flash crashes may never produce a full-blown economic disaster, but they are a potent example of how our other mission-critical systems may reach unprecedented tempos. As we defer critical decision-making to our technological artifacts, and as they increase in power and speed, we increasingly find ourselves outside the locus of control and comprehension.
When AI Screws Up, It Screws Up Badly
No doubt, we are already at the stage when computers exceed our ability to understand how and why they do the things they do. One of the best examples of this is IBM’s Watson, the expert computer system that trounced the world’s best Jeopardy players in 2011. To make it work, Watson’s developers scripted a series of programs that, when pieced together, created an overarching game-playing system. And they’re not entirely sure how it works.
David Ferrucci, the project's lead researcher, put it this way:
Watson absolutely surprises me. People say: 'Why did it get that one wrong?' I don't know. 'Why did it get that one right?' I don't know.
Which is actually quite disturbing. Not so much because we don’t understand why it succeeds, but because we don’t necessarily understand why it fails; by extension, we can’t understand or anticipate the nature of its mistakes.
For example, Watson had one memorable gaffe that clearly demonstrated how, when an AI fails, it fails big time. During the Final Jeopardy portion, the clue read, “Its largest airport is named for a World War II hero; its second largest, for a World War II battle.” Watson responded, “What is Toronto?”
Given that Toronto’s Billy Bishop Airport is named after a war hero, it wasn’t a completely absurd guess. What made it such a blatant mistake is that the category was “U.S. Cities.” Toronto, not being a U.S. city, couldn’t possibly have been the correct answer.
Again, this is the important distinction to keep in mind when weighing narrow systems against a highly generalized AI. Weak, narrow systems are extremely powerful, but they’re also extremely stupid; they’re completely lacking in common sense. Given enough autonomy and responsibility, a wrong answer or a bad decision could be catastrophic.
Moreover, because expert systems like Watson will soon be able to conjure answers to questions that are beyond our comprehension, we won’t always know when they’re wrong. And that is a frightening prospect.

As another example, take the recent initiative to give robots their very own Internet. By sharing information amongst themselves, it’s hoped that these bots can learn without having to be programmed. A problem arises, however, when the instructions for a task are mismatched as the result of an AI error. A stupid robot, acting without common sense, would simply execute the task even when the instructions are wrong. In another 30 to 40 years, one can only imagine the kind of damage that could be done, either accidentally or by a malicious script kiddie.
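To make the failure mode concrete, here is a minimal, hypothetical sketch. The task format, parameter names, and safety limits are all invented for illustration; real shared-knowledge projects (RoboEarth is the usual example of a "robot Internet") are far richer than this. The point is only the difference between a robot that blindly runs whatever instructions it downloads and one with even a crude plausibility check.

```python
# Minimal sketch of blind execution versus a basic plausibility check.
# The task format and limits are hypothetical, invented for illustration.
SAFE_LIMITS = {"gripper_force_newtons": 40, "arm_speed_mps": 1.0}

def naive_execute(task):
    """A 'stupid' robot: runs whatever instructions it downloaded."""
    print(f"Executing '{task['name']}' with {task['parameters']}")

def checked_execute(task):
    """The same robot with a crude sanity check bolted on."""
    for param, value in task["parameters"].items():
        limit = SAFE_LIMITS.get(param)
        if limit is not None and value > limit:
            print(f"Refusing '{task['name']}': {param}={value} exceeds limit {limit}")
            return
    print(f"Executing '{task['name']}' with {task['parameters']}")

# A shared recipe that was corrupted (or maliciously edited) upstream:
bad_task = {
    "name": "hand over coffee mug",
    "parameters": {"gripper_force_newtons": 400, "arm_speed_mps": 0.5},
}

naive_execute(bad_task)     # happily crushes the mug (and maybe a hand)
checked_execute(bad_task)   # catches the implausible force and refuses
```

A real check would be far harder to write, of course, because common sense is exactly what narrow systems lack; you can only screen for the failure modes you thought of in advance.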
The Shape of Things to Come
It’s difficult to know exactly how, when, or where the first true AI catastrophe will occur, but we’re still several decades off; our infrastructure is not yet integrated or automated enough to allow for something really terrible to happen. By the 2040s (if not sooner), however, our highly digital and increasingly interconnected world will be susceptible to these sorts of problems.
By that time, our power systems (electric grids, nuclear plants, etc.) could be vulnerable to errors and deliberate attacks. Already, the United States has been able to infiltrate the control software running centrifuges in Iranian nuclear facilities with its Stuxnet program, an incredibly sophisticated computer worm. This program represents the future of cyber-espionage and cyber-weaponry — and it’s a pale shadow of things to come.
In the future, more advanced versions will likely be able not just to infiltrate enemy or rival systems but to reverse-engineer them, inflict terrible damage, or even take control. As the Morris Worm incident showed, however, it may be difficult to predict the downstream effects of these actions, particularly when dealing with autonomous, self-replicating code. It could also result in an AI arms race, with each side developing programs and counter-programs to get an edge on the other side’s technologies.
And though it might seem like the premise of a sci-fi novel, an AI catastrophe could also involve the deliberate or accidental takeover of any system running on AI. This could include integrated military equipment, self-driving vehicles (including airplanes), robots, and factories. Should something like this occur, the challenge will be to disable the malign script (or the source program) as quickly as possible, which may not be easy.
More conceptually, in the years immediately preceding the onset of uncontainable, self-improving machine intelligence, a narrow AI could be used (again, either deliberately or unintentionally) to execute on a poorly articulated goal. A powerful system could over-prioritize one aspect of that goal, or grossly under-prioritize another, and it could make sweeping changes in the blink of an eye.
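A toy example of what a "poorly articulated goal" looks like in code: suppose a factory scheduler is told to maximize output, and the safety margin simply never appears in the objective it is optimizing. Everything below is invented for illustration.

```python
# Toy illustration of a mis-specified objective: output is rewarded,
# the safety margin carries zero weight, and the two compete for the
# same budget. All names and numbers are invented for the example.

def objective(machine_speed, safety_margin):
    return machine_speed          # safety_margin never enters the objective

best = None
for speed in range(0, 101):       # machine speed, percent of maximum
    for margin in range(0, 51):   # safety margin, percent
        if speed + margin <= 100: # they draw on a shared budget
            score = objective(speed, margin)
            if best is None or score > best[0]:
                best = (score, speed, margin)

_, speed, margin = best
print(f"'Optimal' plan: run at {speed}% speed with a {margin}% safety margin")
# => run at 100% speed with a 0% safety margin: exactly what was asked for,
#    and exactly what nobody wanted.
```

Scale the same mistake up to a system that actually controls machinery, and making sweeping changes in the blink of an eye stops being a figure of speech.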
Hopefully, if and when this does happen, it will be containable and relatively minor in scope. But it will likely serve as a call to action in anticipation of more catastrophic episodes. As for now, and in consideration of these possibilities, we need to ensure that our systems are secure, smart, and resilient.
A different version of this article appeared at io9.