Control Dangerous AI Before It Controls Us, Experts Say
One computer scientist believes super-intelligent computers could one day threaten humanity’s existence
By Jeremy Hsu
Super-intelligent computers or robots have threatened humanity’s existence more than once in science fiction. Such doomsday scenarios could be prevented if humans can create a virtual prison to contain artificial intelligence before it grows dangerously self-aware.
Keeping the artificial intelligence genie trapped in the proverbial bottle could turn an apocalyptic threat into a powerful oracle that solves humanity’s problems, said Roman Yampolskiy, a computer scientist at the University of Louisville in Kentucky. But successful containment requires careful planning so that a clever breed of artificial intelligence cannot simply threaten, bribe, seduce or hack its way to freedom.
“It can discover new attack pathways, launch sophisticated social-engineering attacks and re-use existing hardware components in unforeseen ways,” Yampolskiy said. “Such software is not limited to infecting computers and networks — it can also attack human psyches, bribe, blackmail and brainwash those who come in contact with it.”
A new field of research aimed at solving the prison problem for artificial-intelligence programs could have side benefits for improving cybersecurity and cryptography, Yampolskiy suggested. His proposal was detailed in the March issue of the Journal of Consciousness Studies.
Computer scientist Roman Yampolskiy has suggested a warning sign, modeled on the biohazard and radiation symbols, to indicate a dangerous artificial intelligence.
How to trap Skynet
One starting solution might be to trap the artificial intelligence, or AI, inside a “virtual machine” running within a computer’s ordinary operating system — an existing technique that adds security by limiting the AI’s access to its host computer’s software and hardware. That would stop a smart AI from doing things such as sending hidden Morse code messages to human sympathizers by manipulating a computer’s cooling fans.