
AlphaGo: Did DeepMind Just Solve Intelligence?!

Just recently, DeepMind’s AlphaGo won a series of Go matches against Lee Sedol, a top-level human opponent. This victory has caused a mix of excitement and consternation. What just happened, exactly? Are we seeing another case of a bigger and faster machine pushing the edge of performance, or are we perhaps approaching a fundamental crisis of “cognitive competition?”

Did DeepMind just solve intelligence?

To answer this question, we look at the succession of game-playing computers, then explore the rise of “model-free methods” and what they foretell for our future.

Advance of the gaming machines

We have become used to the idea that purpose-built machines can surpass humans in almost any physical task. But over the last 15 years, we have seen a progression of machines beating people at tasks considered to be the elite cognitive domain of Homo sapiens.

In 1997, IBM’s Deep Blue computer beat chess Grandmaster Garry Kasparov. Deep Blue was essentially a showpiece of computer performance – a big, fast machine that could search enormous numbers of possible moves by brute force.

In 2011, IBM followed up Deep Blue with a new generation project. Its Jeopardy-playing Watson beat two human champions, Ken Jennings and Brad Rutter, in an impressive, well-publicized competition. Afterwards, IBM announced a billion-dollar move into AI technology.

But while IBM billed Watson as a demonstration of its AI prowess, and while the general public was powerfully impressed, Watson’s reality in some ways fell short of the perception. The Jeopardy victory showcased Natural Language Processing (NLP), an enthusiastically named toolkit of statistical tricks that allow a computer to infer likely properties of a sentence. In many ways, Watson functioned like our powerful – but familiar – search engines. Although IBM built a powerful brand with Watson, little of the Jeopardy technology was generally usable. Instead, IBM bought a set of NLP services and some machine learning tools and branded them as its Watson offering.

By contrast, in late 2013 the U.K. company DeepMind released a highly influential paper showing how its system taught itself to play Atari 2600 video games, given nothing but the pixel inputs and the game score. This achievement represented a true breakthrough in AI. Google purchased DeepMind in early 2014 for over half a billion dollars, but the general public did not take notice as it had with Watson’s victory.

Perhaps seeking a more high-profile demonstration, DeepMind next targeted Go. Go presented an attractive problem because it’s so complex that it’s not computationally possible to simply search all the possible moves. Instead, Go players rely on subtle implicit pattern matching. “Good positions look good,” explained DeepMind founder Demis Hassabis in an interview with Wired.
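To get a rough sense of why exhaustive search fails for Go, compare back-of-the-envelope game-tree sizes. The figures below use commonly cited averages (about 35 legal moves per turn over roughly 80 turns for chess, versus about 250 moves over roughly 150 turns for Go); they are illustrative estimates, not exact counts:

```python
# Rough game-tree size: branching_factor ** game_length.
# The averages (35/80 for chess, 250/150 for Go) are commonly cited
# approximations, not exact values.
chess_tree = 35 ** 80
go_tree = 250 ** 150

print(len(str(chess_tree)))  # chess: a number with about 124 digits
print(len(str(go_tree)))     # Go: a number with about 360 digits
```

Even on these crude estimates, Go's tree is hundreds of orders of magnitude larger than chess's – far beyond what any brute-force search could cover.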

AlphaGo thus represents the rising success of a different approach to artificial intelligence, one whose accelerating success foretells profound implications.

Cats on Mars

The rise of Deep Learning is part of an overall shift in AI leadership from what are termed “reductionist” methods to “model-free” methods.

Reductionism can be simply described as “building a model.” Reductionist methods pare the problem down to its essential elements, and describe those core features mathematically. Most AI systems today are reductionist – the programmer writes code that implements a model of how the world works, and the system does what the code tells it to do.

But functioning effectively outside of a narrow domain requires the ability to cope with the real world – a messy, chaotic, and continually changing environment. AI systems have traditionally failed miserably at these challenges, which humans easily handle.

AI researcher Monica Anderson attributes this human capability to an approach she calls “model-free” methods. Model-free systems function by dynamically building their own simple (and imperfect) rules that fit the data they see, as best they can right now, and then revising them as new data becomes available.

“The key difference,” Anderson explains, “is who is doing the reduction.” While the rules in a reductionist system come from the programmer and are built-in, a system built with model-free methods makes its own rules, trying to best predict – and thus understand – its environment.

Imagine taking your cat to live on a Mars base for a while. In the beginning, the cat would be disoriented by the lower gravity. But, being an effective model-free system, the cat would just try different movements and quickly adjust itself to the new environment. Without forming a complex model of gravity, it would simply learn that it should jump differently on Mars.
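The cat’s trial-and-error adjustment can be sketched as a toy model-free learner. Everything here is illustrative (the physics is deliberately simplified): the learner never sees the `gravity` term inside the simulated world; it only observes jump heights and nudges its jump power toward the target.

```python
def learn_jump(target, gravity, steps=200, lr=0.5):
    """Tune jump power by trial and error, with no model of gravity."""
    power = 10.0                         # habit carried over from Earth
    for _ in range(steps):
        height = power / gravity         # the world's hidden physics
        power += lr * (target - height)  # nudge toward what worked
    return power

earth_power = learn_jump(target=1.0, gravity=9.8)
mars_power = learn_jump(target=1.0, gravity=3.7)
# The learner reaches the same jump height with less power in lower
# gravity, without ever representing "gravity" as a concept.
```

The update rule knows nothing about the environment’s equations; it converges simply because actions that overshoot get dialed back and actions that undershoot get dialed up – the essence of building rules that “fit the data they see right now.”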

Cognitive competition – and cooperation

What is the real message of the AlphaGo victory? It demonstrates how model-free solutions are able to learn many cognitive activities at an effectively human level. This represents a fundamental workplace change. We are entering an age of “cognitive competition,” in which some cognitive jobs that were the exclusive domain of human beings may now be done by AI systems. For a straightforward example, we can expect to see the rise of strategic battle systems – “AI generals” – that learn by playing war game simulations against each other and against human generals.

But we are also entering an era of cognitive cooperation. The Go experts who played AlphaGo significantly improved their skills by playing a difficult and creative opponent. We ourselves are powerful, sophisticated model-free systems and, given more difficult opposition, we step up our own game.

And although model-free systems may offer a door to Artificial General Intelligence, our AI technology remains mired in task-based “narrow-AI” systems. Our AI systems have no identity, lack drives, and cannot come close to the sophisticated social understanding of a person (or, for that matter, a domestic pet). While we will see disruption in the labor markets as AI systems take over specific job roles, humans will also react flexibly, moving to harness narrow AIs as tools and using them as trainers or advisors in specific domains.

Being able to understand AI’s strengths and limitations, and to use those systems effectively or do things they cannot, is now a distinct competitive advantage. In short, the game has changed. And, like all good model-free systems, we must adapt.


About the Authors:

Jennifer Barry has a background in cognitive science and finance and works as Director of Research and Projects for Leopard Research. She can be reached at jennifer [at] leopardllc.com


David Rostcheck is a consulting data scientist helping companies tackle challenging problems and develop advanced technology. He can be reached at drostcheck [at] leopardllc.com.
