Are we destined to be out-played by A.I.?

Louis Rosenberg

Posted on: July 24, 2016 / Last Modified: July 24, 2016


Imagine a flying saucer lands in Times Square and an alien steps out. He’s a competitive fellow, so he arrives with a board game in hand – the game of Go. He walks up to the first person he passes and says the classic line, “Take me to your best player.”

We humans are competitive too, so a tournament is quickly arranged. The alien is surprisingly
confident. What we don’t appreciate is that he’s spent years studying how humans play Go, analyzing
replays of every major match between top players.

It feels like humanity is being set up for a humiliating defeat. After all, the alien is deeply prepared to play humans, while we’ve had no opportunity to get ready for playing aliens. We could lose – badly.

And that’s exactly what happened earlier this year when an alien intelligence named AlphaGo played the human Go master, Lee Sedol. The alien beat the human soundly, prevailing in 4 out of 5 games.

Was this AlphaGo victory a milestone in A.I. research? Absolutely, but not because it proved an A.I. can be built that is highly skilled at playing the game of Go. No, this victory proved that an A.I. can be built that is highly skilled at playing the game of humans.

After all, AlphaGo didn’t learn to play by studying the rules and thinking up a clever strategy. No, it learned by studying how people play, processing thousands upon thousands of matches to characterize how masters make moves, how they react to moves, and what mistakes they’re likely to make.

All told, the system trained by reviewing 30 million moves by expert players. Thus, AlphaGo is not a system designed to optimize play in an abstract game. No, it’s a system optimized to beat humans by
studying us inside and out, learning to predict what actions we’ll take, what reactions we’ll have, and
what errors we’ll most frequently stumble into.
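For readers curious what this kind of training looks like in practice, here is a minimal, hypothetical sketch (in PyTorch, not DeepMind’s actual code or data) of the supervised move-prediction idea described above: a small network that learns, from recorded pairs of board positions and expert moves, to predict which move the human made. All names, shapes, and the stand-in data below are illustrative assumptions.

```python
# Hypothetical sketch of supervised "move prediction" training - learning to
# predict the expert's next move from a board position. Illustrative only.
import torch
import torch.nn as nn

BOARD = 19  # 19x19 Go board

class MovePredictor(nn.Module):
    """Tiny convolutional net mapping a board position to a distribution over moves."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * BOARD * BOARD, BOARD * BOARD),  # one logit per intersection
        )

    def forward(self, boards):           # boards: (batch, 1, 19, 19)
        return self.net(boards)          # logits over the 361 possible moves

def train_step(model, optimizer, boards, expert_moves):
    """One supervised step: make the expert's recorded move the most likely prediction."""
    logits = model(boards)
    loss = nn.functional.cross_entropy(logits, expert_moves)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Fraction of positions where the top prediction matches the human's actual move -
    # the same "predict the human's move" accuracy the article quotes (~57% for AlphaGo).
    accuracy = (logits.argmax(dim=1) == expert_moves).float().mean().item()
    return loss.item(), accuracy

# Illustrative usage with random stand-in data in place of real game records:
model = MovePredictor()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
boards = torch.randn(8, 1, BOARD, BOARD)               # stand-in for board positions
expert_moves = torch.randint(0, BOARD * BOARD, (8,))   # stand-in for recorded expert moves
loss, acc = train_step(model, opt, boards, expert_moves)
```

The point of the sketch is simply that the training signal comes entirely from human behavior: the model is rewarded for guessing what a person would do next, not for discovering the game’s abstract optimum.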

Simply put, the A.I. did not learn to play Go; it learned to play us. According to published reports, AlphaGo was so well trained, it is able to correctly predict a human’s Go move 57% of the time. Imagine if you could correctly predict what a person would do 57% of the time – while negotiating a
business deal, or selling a product, or pushing a political agenda. Someone with that predictive ability
could use it to build an empire of political or economic power.

To me, this is terrifying. Not because computers can beat us at board games, but because from this
moment forward, we will always be at a disadvantage, facing the arrival of alien intelligences that are
better prepared to play us than we are to play them. Whether these
aliens are named AlphaGo or
Alpha-Finance or Alpha-Geopolitical-Conflict, they will beat us at our own games. This suggests a future
where we humans can be manipulated by intelligent systems that can easily predict our tendencies,
inclinations, and biases, quickly finding our weaknesses and exploiting them.

To those who say we can put controls in place to keep A.I. from becoming dangerous, I say these technologies are already dangerous. After all, we have already faced a strategic opponent that
understands us better than we understand it. The only thing left is to apply these technologies to
disciplines that are more significant than games. This will occur, which means we are destined to be
out-matched, not just in the game of Go, but in the overall game of life. So what can we do?

My view is that we humans need to stay one step ahead in the intelligence arms-race. There are many technologies being explored by researchers around the world that make us smarter, ranging from gene therapy to boost our minds, to implanted chips that augment our brains. Personally, I prefer less
invasive methods. If we look to Mother Nature, countless other species have faced challenges during
their evolutionary development where survival required a boost in intelligence beyond the abilities of
their individual brains. Those species developed methods to amplify their intelligence by “thinking
together” – forming systems that tap their combined knowledge, experience, insights, and instincts.

Yes, I’m talking about the dreaded hive mind, and for a long time I was deeply against it. But over the last few years I’ve come to realize that pooling our intellectual resources in closed-loop systems may be our best approach to keeping humanity ahead of purely artificial intelligences. After all, a hive mind is made up of living, breathing people, instilled with human values and emotions, and is motivated to
keep human interests at the forefront. A purely artificial intelligence has no reason to share our core
values or make decisions that support our interests.

Said another way, if we build a purely digital A.I., we need to view it as an alien intelligence. And like any alien that arrives on planet Earth, we have to assume it could be a threat. But if we build superintelligence as a “hive mind” of networked people – it’s not an alien intellect, but an evolution of human thinking, following the same developmental path as countless other species in the natural world.

Again, for a long time I was against the “hive mind” paradigm, fearing it would change what it means to be human. But at this point, I believe change is our only way to stay ahead of the alien intelligences that are quickly heading towards us. No, they’re not flying through space at light speed – if they were, we’d be preparing ourselves for their arrival. Instead, they’re coming towards us in a far more insidious way, emerging from research labs around the world. But still, we need to prepare.

We also need to be more vigilant about the near-term dangers of A.I. systems. Long before we see a machine intelligence that rivals human intellect in a general sense, A.I. technologies will become
remarkably good at characterizing and predicting human behavior. If a machine can out-play a Go master
by predicting his actions and reactions, it won’t be long before intelligent systems can out-play us in all
aspects of life. To prevent this from happening, our best defense may be a strong offense – a large-scale effort to amplify human intelligence, keeping us competitive for as long as we can.


About the Author:

Louis Rosenberg received B.S., M.S. and Ph.D. degrees from Stanford University. His doctoral work produced the Virtual Fixtures platform for the U.S. Air Force, the first immersive Augmented Reality system created. Rosenberg then founded Immersion Corporation (Nasdaq: IMMR), a virtual reality company focused on advanced interfaces. More recently, he founded Unanimous A.I., an artificial intelligence company focused on harnessing collective intelligence by enabling online human swarms.

