Hard-Wired AI: Insurance for a More Harmonious Future

Daniel Faggella

Posted on: October 28, 2015 / Last Modified: October 28, 2015

The film adaptation of science fiction author Isaac Asimov’s I, Robot depicted a dystopian 2035 in which, though humanity is served by humanoid robots, an army of more advanced machines prepares to attack mankind. Though Asimov wrote the original stories in the 1940s, physicist and author Louis Del Monte argues that in 2015 the science fiction premise is closer to reality than ever before.

Del Monte, author of The Artificial Intelligence Revolution, wrote his book after reviewing a 2009 experiment at the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology, in which robots were programmed to cooperate with one another in a search for food. The researchers found that some robots were more successful than others at finding food and that, after about 50 generations of selection, the robots had stopped cooperating entirely and refused to share the food they found. That apparent emergence of free will, deceit and, by implication, a sense of self-preservation compelled Del Monte to take a hard look at where artificial intelligence might be headed.
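The dynamic is easy to reproduce in a toy evolutionary simulation. The sketch below is a simplification with hypothetical parameters, not the Swiss team’s actual setup: each agent carries a heritable “share rate,” sharing is costly, and the fittest half reproduces each generation, so the average share rate collapses toward zero, mirroring the breakdown of cooperation the researchers observed.

```python
import random

# Toy sketch (not the Swiss team's actual code): agents carry a heritable
# "share rate" gene, and sharing food costs fitness. Selection alone then
# drives sharing toward zero over the generations.
POP, GENERATIONS, FOOD = 100, 50, 10.0

population = [random.random() for _ in range(POP)]  # initial share rates

for _ in range(GENERATIONS):
    # Fitness = food an agent keeps after giving a fraction to the group.
    scored = sorted(population, key=lambda share: FOOD * (1.0 - share),
                    reverse=True)
    survivors = scored[: POP // 2]  # the stingiest half survives
    # Each survivor leaves two offspring with a small mutation on the gene.
    population = [
        min(1.0, max(0.0, share + random.gauss(0.0, 0.02)))
        for share in survivors
        for _ in range(2)
    ]

print(f"mean share rate after {GENERATIONS} generations: "
      f"{sum(population) / POP:.3f}")  # collapses toward 0.0
```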

“When I went through that experiment in detail, I became concerned that a strong artificially intelligent machine (SAM) could have a mindset of its own and its agenda may not align with our agenda,” he said. “The concern I had was, will it serve us or replace us?”

While Del Monte notes that machines, for now, are serving us, that may change as artificial intelligence continues to advance. And the change, he said, may come sooner than anticipated.

“I predict that between 2025 and 2030, the machines will be as advanced as the human mind and will be equivalent to the human body,” Del Monte said. “I also predict that, between 2040 and 2045, we will have developed a machine or machines that are not only equivalent to a human mind, but more intelligent than the entire human race combined.”

Just as science fiction may become fact, Del Monte believes Asimov’s safeguards may also provide the solution. That solution lies, he argues, not in software but in hardware.

“We could take Asimov’s first law, which says a robot may not injure a human being or, through inaction, allow a human being to come to harm, and we could express that through hardware,” he said. “We take Asimov’s laws and whatever we in humanity think is important and we put it in hardware, not software. It would be integrated circuits… solid state circuits that would act as filters to make sure a machine is doing no harm.”
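Del Monte’s proposal is physical circuitry rather than code, but the gating logic can be sketched in software. The toy example below is entirely illustrative; the Action fields, threshold, and function names are assumptions, not anything Del Monte specifies. It shows a filter sitting between an AI’s planner and its actuators, vetoing any action predicted to harm a human, with the understanding that in his scheme the check would be baked into solid-state circuits the software layer cannot rewrite.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Action:
    name: str
    predicted_human_harm: float  # 0.0 = harmless, 1.0 = certain harm

# Asimov's first law tolerates no harm at all, so the gate is absolute.
HARM_THRESHOLD = 0.0

def hardware_filter(action: Action) -> bool:
    """Pass an action through only if it is predicted to harm no one.

    In Del Monte's scheme this check would live in solid-state circuitry
    that the software layer cannot rewrite, so even a self-modifying AI
    could not route around it.
    """
    return action.predicted_human_harm <= HARM_THRESHOLD

# Usage: the planner proposes, the filter disposes.
for action in (Action("hand tool to worker", 0.0),
               Action("swing arm at full speed", 0.7)):
    print(f"{action.name}: {'execute' if hardware_filter(action) else 'VETOED'}")
```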

Given those machines’ potential for mass destruction, Del Monte acknowledges that some nations may not adhere to that hardware protocol. For the sake of humanity, he believes international treaties, such as those in place banning the use of nuclear or biological weapons, should be enacted before autonomous weapons can ever be put into use.

“If North Korea were to use nuclear weapons to take out Japan or South Korea, our response would have to be proportional. I’m using that analogy to say, if you develop weapons that are autonomous and indiscriminately attack targets or innocent people, expect retaliation. It’s mutually assured proportionate response,” Del Monte said. “The scientific community is coming out worldwide saying these weapons should be banned and, if they’re not banned, we should have limits on them.”

Nor should the hardware limits be confined to autonomous weapons: Del Monte envisions a future in which machines exceed human intelligence, and he believes those advanced machines might not take a kind view of humanity.

“My concern is machines will view humanity negatively,” he said. “They’ll say, ‘These humans are unpredictable. They use nuclear weapons. They release computer viruses. They go to war. This is unacceptable to us.’”

From there, Del Monte said the science fiction premise of robots ruling the world could become a reality. That reality, he said, could put the survival of mankind at stake.

“One machine in 2040 develops the next machine without intervention, then that machine develops the next generation and we’re then not aware of how these machines work,” Del Monte said. “It could be our undoing if we don’t control what’s called ‘the intelligence explosion.’ I’m not saying we should halt AI or limit intelligence. We should just ensure there is hardware technology in the machine that limits its capability to harm humanity.”

 

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities in augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.
