Science fiction author Isaac Asimov’s I, Robot depicted a dystopian world in 2035 where, though humanity is served by humanoid robots, an army of more advanced robots is preparing to attack mankind. Though I, Robot was initially written in the 1940s, according to physicist and author Louis Del Monte, that science fiction premise is, in 2015, much closer to reality than ever before.
The author of The Artificial Intelligence Revolution, Del Monte wrote his book after reviewing a 2009 experiment conducted by the Swiss Institute of Technology for Intelligent Systems, which programmed robots to cooperate with each other in the search for food. During the test, researchers found some robots were more successful than others at finding food and, after about 50 generations of improvement in the machines, the robots stopped cooperating entirely and refused to share the food they’d found. It was that emergent mix of apparent free will, deceit and, by implication, a sense of self-preservation that compelled Del Monte to take a hard look at where artificial intelligence might be headed.
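The selection dynamic described above, in which cooperation is bred out once hoarding pays better, can be illustrated with a toy evolutionary simulation. This is a minimal sketch, not a reconstruction of the Swiss experiment: the population size, generation count, mutation rate, and payoff values are all hypothetical.

```python
import random

random.seed(0)  # make the run reproducible

POP, GENS, MUT = 100, 50, 0.01  # illustrative population, generations, mutation rate

def evolve():
    # One-bit genome per agent: True = shares found food, False = hoards it.
    pop = [True] * POP  # start fully cooperative, like the programmed robots
    history = []
    for _ in range(GENS):
        sharers = sum(pop)
        history.append(sharers / POP)
        # Every agent forages one unit of food; a sharer gives half of it
        # to a common pool that is split evenly across the whole group.
        bonus = (0.5 * sharers) / POP
        fitness = [(0.5 if shares else 1.0) + bonus for shares in pop]
        # Food kept determines reproductive success; offspring rarely mutate.
        parents = random.choices(pop, weights=fitness, k=POP)
        pop = [(not g) if random.random() < MUT else g for g in parents]
    history.append(sum(pop) / POP)
    return history

hist = evolve()
print(f"sharing fraction: gen 0 = {hist[0]:.2f}, gen {GENS} = {hist[-1]:.2f}")
```

Because hoarders keep their full unit of food while still collecting the shared bonus, they out-reproduce sharers as soon as mutation introduces them, and the cooperative trait collapses within a few dozen generations.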
“When I went through that experiment in detail, I became concerned that a strong artificially intelligent machine (SAM) could have a mindset of its own and its agenda may not align with our agenda,” he said. “The concern I had was, will it serve us or replace us?”
While Del Monte notes that, for now, machines are serving us, that may change as artificial intelligence continues to advance. And the change, he said, may come sooner than anticipated.
“I predict that between 2025 and 2030, the machines will be as advanced as the human mind and will be equivalent to the human body,” Del Monte said. “I also predict that, between 2040 and 2045, we will have developed a machine or machines that are not only equivalent to a human mind, but more intelligent than the entire human race combined.”
Just as science fiction may become fact, Del Monte believes Asimov’s safeguards may also provide the solution. That solution, he believes, lies not in software, but in hardware.
“We could take Asimov’s first law, which says a robot may not injure a human being or, through inaction, allow a human being to come to harm, and we could express that through hardware,” he said. “We take Asimov’s laws and whatever we in humanity think is important and we put it in hardware, not software. It would be integrated circuits… solid state circuits that would act as filters to make sure a machine is doing no harm.”
Given those machines’ potential for mass destruction, Del Monte acknowledges that some nations may not adhere to that hardware protocol. For the sake of humanity, he believes international treaties, such as those in place banning the use of nuclear or biological weapons, should be enacted before autonomous weapons can ever be put into use.
“If North Korea were to use nuclear weapons to take out Japan or South Korea, our response would have to be proportional. I’m using that analogy to say, if you develop weapons that are autonomous and indiscriminately attack targets or innocent people, expect retaliation. It’s mutually assured proportionate response,” Del Monte said. “The scientific community is coming out worldwide saying these weapons should be banned and, if they’re not banned, we should have limits on them.”
The hardware limits shouldn’t be confined to autonomous weapons, as Del Monte can envision a future where machines will exceed human intelligence. And he believes those advanced machines might not take a kind view of humanity.
“My concern is machines will view humanity negatively,” he said. “They’ll say, ‘These humans are unpredictable. They use nuclear weapons. They release computer viruses. They go to war. This is unacceptable to us.’”
From there, Del Monte said the science fiction premise of robots ruling the world could become a reality. That reality, he said, could put the survival of mankind at stake.
“One machine in 2040 develops the next machine without intervention, then that machine develops the next generation and we’re then not aware of how these machines work,” Del Monte said. “It could be our undoing if we don’t control what’s called ‘the intelligence explosion.’ I’m not saying we should halt AI or limit intelligence. We should just ensure there is hardware technology in the machine that limits its capability to harm humanity.”
About the Author:
Dan Faggella is a graduate of UPenn’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.