Designing AI Infrastructures that Sustain the Human Race

Daniel Faggella

Posted on: September 16, 2015 / Last Modified: September 16, 2015

As the epoch of an increasingly automated society fast approaches – many of us have already arrived – many governments and organizations find themselves grappling with a heavy mix of logistical and ethical issues.  Some of these concerns are openly discussed and debated, while others “rise out of the mist” and force us to “build the plane as we’re flying it.” I recently spoke with Steve Omohundro, CEO and founder of Possibility Research, to discuss some of the changes already taking place in the field of AI infrastructure, and how we might better prepare for those that are inevitably on their way.

Three Ways AI Will Transform Society

As the head of a company that analyzes ways to effectively handle the technological changes coming toward us at close to light speed, Steve Omohundro has done his fair share of thinking on how such innovations will change the makeup of society.  “I think we have three waves coming; first, transforming the economy, looking at where things can be done more efficiently through automation, e.g. Uber and Airbnb – not just robotics.

The second wave is military; every military on the planet is in the midst of doing robotics – drone and anti-drone warfare, autonomous submarines in China – what does warfare look like in that context?” There are huge concentrations of power that could potentially wield robot armies. A growing ethical movement in Europe focused on stopping the use of military drones is starting to cross borders, and much of the US public is rallying for the elimination of such weapons. “I’m a big advocate of developing and managing the development of these (artificial intelligence) technologies very carefully and very slowly,” says Steve.

“The third wave is when (AI) systems become of the same kind or more intelligent than humans, and what does that do to society?”  Such a transformation leaves more questions than answers.  In 50 years, will humans work at all?  How do we manage?  What is the role of humanity in that future?  Do we merge with systems and create something new?  Do we create niches that are uniquely human and let robots do what we don’t want to do?

“(We) should very carefully think about, what are our values, what are the things that mean the most to us?”  He suggests that none of the AI technology futures discussed is necessarily inevitable, and that society should be careful to design systems that reflect our values.

Why We Need a United Band of Humans

As a common team of humans – team humanity, if you will – how do we best move forward to design AI systems that further progress but are also safe and aligned with agreed-upon values?  That’s a tall order. Think back to one of the biggest challenges humanity has ever faced: the rise of nuclear weapons, which could easily have blown us all off the face of the planet (and that risk still exists).  Omohundro mentions the book Command and Control by journalist Eric Schlosser, which recounts a little-known incident in the 1960s, when a United States military aircraft accidentally dropped two hydrogen bombs over North Carolina.

“We managed to make it out; no unintentional bombs went off.” This is an undoubtedly scary realization, and one that calls for increasing transparency. “Having inspectors at nuclear sites has been important,” says Omohundro.  “I think every country realizes the danger and has signed agreements, but little countries still want to own nuclear weapons.”

A similar realization about the upcoming wave of powerful technologies is just “dawning on us now,” and Omohundro expresses approval of the likes of Musk, Gates, and Hawking coming forward to voice their concerns.  Fundamentally, all one needs to write an advanced AI system is a PC and a basement in which to do it. These technologies, compared to nuclear weapons, are much harder to manage and verify.

Positive Prevention

Looking ahead, Steve proposes what he calls a “safe AI scaffolding strategy,” essentially a safety infrastructure that manages these systems in an extremely careful, sequential way, “where at every stage we have a very high confidence of safety.”  He points to the Bitcoin network as a valuable example of a system built on a cryptographic protocol that “doesn’t rely on anyone trusting anyone else, which is exactly the kind of technology we need in the future Internet.”
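To make the “doesn’t rely on trust” idea concrete, here is a minimal sketch in Python of a hash-chained ledger – a deliberately simplified stand-in for Bitcoin’s actual protocol, with function names of my own choosing. The point it illustrates is that any party can re-verify the whole record for themselves, so no one has to trust whoever happens to hold it.

import hashlib
import json

def block_hash(block):
    # Deterministically hash a block's contents (illustrative format, not Bitcoin's)
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    # Each new block commits to the hash of the previous block
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "data": data, "prev_hash": prev})
    return chain

def verify_chain(chain):
    # Anyone can re-check the ledger without trusting its keeper:
    # every block must reference the correct hash of its predecessor
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

ledger = []
for entry in ["alice pays bob 1", "bob pays carol 2"]:
    append_block(ledger, entry)

print(verify_chain(ledger))               # True
ledger[0]["data"] = "alice pays bob 100"  # tamper with history
print(verify_chain(ledger))               # False

Altering any past entry breaks every later hash link, so tampering is detectable by anyone who re-runs the check – a toy version of the property Omohundro highlights.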

Today, we need a formidable police force and legal system to manage these new entities, but cryptocurrencies could be the first step toward the type of management needed for the AI systems of the future. Then we really open Pandora’s box: what if you have an AI program that can create nuclear and similarly dangerous technologies?  Would this proposed scaffolding undergird all development?

Omohundro notes that huge security challenges already exist, including viruses and malware.  “There’s a current trend of ‘script kiddies,’ i.e. someone clever figures out how to break into a system, then bundles it up into code that can be used by others who don’t really understand (the technology).” Developing a sophisticated AI system capable of such a feat is difficult, and only a few research labs really have the ability to do so.  “But if the principles are understood in the future, you could figure out how to bundle up (harmful code).” How do we prevent such antisocial actors from breaking into delicate systems?

“We need infrastructure – today’s Internet is loosely put together…a future analog of the Internet is going to (need to) be better managed and monitored, particularly money sites,” says Steve.  Cryptocurrencies are initial steps in this arena, on which we can start to model a regulatory framework for the Internet that eventually spans increasingly sensitive AI systems.


About the Author:

Dan Faggella is a graduate of UPenn’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com

