The fast-advancing field of robotics is opening up serious questions about the military motivations behind some of the coolest tricks our machines can now be programmed to perform.
The Defense Advanced Research Projects Agency, DARPA, helped create the Internet. But these days, DARPA is probably best known for its robotics contests. Its latest robotics challenge was inspired by the Fukushima nuclear disaster, which happened three years ago.
Back then, nuclear engineers rushed to shut down reactors at the Fukushima Daiichi nuclear power plant, but fear of radiation poisoning kept utility workers from effectively cooling the reactors sooner. Eventually, three of the plant's six reactors melted down.
“There is good evidence that if we had been able to send in some kind of robot and had that robot do relatively simple things, simple manual tasks like opening valves, opening doors, getting to control panels, a lot of the following disaster could have been averted,” says Brian Gerkey of the Open Source Robotics Foundation.
The goal now is to build that robot, one that can open doors, move debris, turn a valve, even drive a conventional car.
In December, 16 teams of roboticists converged in Miami to compete. While the robots moved slowly and some were tripped up by seemingly trivial obstacles, the event pushed humanoid robots to do things they have never done before.
While this may seem like an entirely altruistic enterprise — designing a robot for disaster response — the event is also pushing the field of robotics toward goals military planners have long sought.
“At the end of the day people need to remember what the D in DARPA stands for. It stands for Defense,” says Peter Singer. Singer is a senior fellow at the Brookings Institution and author of Wired for War: The Robotics Revolution and Conflict in the 21st Century.
“Too often scientists try and kid themselves,” he says. “[They] act like just because I work on this system that is not directly a weapon system I have nothing to do with war.”
Singer recalls speaking to one researcher recently who was working on a project funded by the Navy.
“He was working on a Navy contract on a robot that would play baseball. ‘I don’t have anything to do with war.’ Come on. You think the Navy is funding this because they want a better Naval Academy baseball team?”
Tracking and intercepting a fly ball, Singer notes, is analogous to tracking and intercepting a missile.
It’s hard to find a roboticist working today in academia who hasn’t taken some kind of military funding. Illah Nourbakhsh is one of the few. While Nourbakhsh acknowledges the good that could come out of DARPA’s recent push to build a semi-autonomous search and rescue robot, he also sees an obvious dual use.
If researchers set out to build a robot that can drive a regular car, climb a ladder and operate a jackhammer, “That means that that robot can manipulate an AK-47. That means that robot can manipulate the controls of all the conventional military machines as well,” he says.
Nourbakhsh believes DARPA is pushing roboticists to build machines that can make complex decisions quickly and independently.
“We are making our robots ever more autonomous,” he says.
This research, Nourbakhsh says, is pushing us closer to the point where robots will decide when to kill. “It’s a really interesting boundary to cross,” he says.
Imagine using image recognition when a drone is flying in the air and matching faces against faces on a kill list, he suggests. If a robot like that made a mistake, who would be responsible? The programmer? The manufacturer? The military commander who launched it on its mission?
“It forces us to confront whether we really control machines,” says Ryan Calo, a law professor at the University of Washington. Calo says these tensions won’t just play out in the military, but will crop up whenever we are tempted to allow robots to make decisions on their own.