WASHINGTON (AP) — In the sci-fi thriller “Ex Machina,” the wonders and dangers of artificial intelligence are embodied in a beautiful, cunning android named Ava. She puts her electronic smarts to work with frightening results, manipulating and outwitting her human handlers.
Just how far off in the future is a robot like the fictional Ava? And how worried should we be about warnings issued Tuesday that artificial intelligence could be used to build weapons with minds of their own?
Five things to know about artificial intelligence:
SCIENTISTS PREDICT WEAPONS ‘WITHIN YEARS’
Autonomous weapons that can search for and destroy targets could be fielded quickly, according to an open letter released Tuesday and signed by hundreds of scientists and technology experts.
“If any major military power pushes ahead with (artificial intelligence) weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow,” said the letter, which references the Russian assault rifle in use around the world. “Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.”
AVA-LIKE ROBOTS A LONG WAY OFF
Robots with Ava’s sophistication are at least 25 years away, and perhaps decades more, according to the experts. The gap between what’s possible today and what Hollywood puts on the movie screen is huge, said Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence in Seattle. “Our robots can’t even grip things today,” he said. “NASA still has to control spacecraft remotely.”
The most challenging aspect of an Ava-like robot is the hardware, said Toby Walsh, a professor of artificial intelligence at the University of New South Wales in Sydney, Australia, and at Australia’s Centre of Excellence for Information Communication Technologies.
“It might be 50 to 100 years to have this sort of hardware,” Walsh said. “But the software is likely less than 50 years away.”
Facial recognition technology that could be used to spot targets already performs better than humans do, said Bart Selman, a computer science professor at Cornell University in New York. That capability could be harnessed with the video taken by surveillance cameras to hunt people down autonomously. “That’s a bit scary,” Selman said.
Selman, Etzioni and Walsh signed Tuesday’s letter.
THE UPSIDE OF AI
Most artificial-intelligence researchers are focused on developing technologies that can benefit society, including tools that can make battlefields safer, prevent accidents and reduce medical errors. The letter calls for a “ban on offensive autonomous weapons beyond meaningful human control.” “The time for society to discuss this issue is right now,” Etzioni said. “It’s not tomorrow.”
U.S. LEADS, CHINA IN PURSUIT
The United States is the leader in the development of artificial intelligence for military and civilian applications. But China isn’t far behind, Selman said. “There’s no doubt they are investing in science and technology to catch up,” he said.
Any military that knows it might have to face these weapons is going to be working on them itself, Walsh said. “If I was the Chinese, I would be working strongly on them. This is why we need a ban now to stop this arms race now.”
Officials at the Pentagon’s Defense Advanced Research Projects Agency weren’t immediately available for comment. But artificial-intelligence projects are being pursued to provide the U.S. military with “increasingly intelligent assistance,” according to an information paper on the agency’s website. One program is aimed at providing a software system that pulls information out of photos by allowing the user to ask specific questions that range from whether a person is on the terrorist watch list to where a building is located.
Follow Lardner on Twitter at http://twitter.com/rplardner