
AlphaGo: Did DeepMind Just Solve Intelligence?!

Just recently, DeepMind’s AlphaGo won a series of Go matches against Lee Sedol, one of the world’s top professional players. This victory has caused a mix of excitement and consternation. What just happened, exactly? Are we seeing another case of a bigger and faster machine pushing the edge of performance, or are we perhaps approaching a fundamental crisis of “cognitive competition?”

Did DeepMind just solve intelligence?

To answer this question, we look at the succession of game-playing computers, and then explore the rise of “model-free methods” and what it foretells for our future.

Advance of the gaming machines

We have become used to the idea that purpose-built machines can surpass humans in almost any physical task. But over the last two decades, we have seen a progression of machines beating people at tasks considered to be the elite cognitive domain of Homo sapiens.

In 1997, IBM’s Deep Blue computer beat chess Grandmaster Garry Kasparov. Deep Blue was essentially a showpiece of computer performance – a big, fast machine that could rapidly search enormous numbers of possible moves.

In 2011, IBM followed up Deep Blue with a new generation project. Its Jeopardy-playing Watson beat two human experts in an impressive, well-publicized competition. Afterwards, IBM announced a billion-dollar move into AI technology.

But while IBM billed Watson as a demonstration of IBM’s AI prowess, and while the general public was powerfully impressed, Watson’s reality in some ways fell short of perception. The Jeopardy victory showcased Natural Language Processing (NLP), an enthusiastically named toolkit of statistical tricks that allow a computer to infer likely properties of a sentence. In many ways, Watson functioned like our powerful – but familiar – search engines. Although IBM built a powerful brand with Watson, little of the Jeopardy technology was generally usable. Instead, IBM bought a set of NLP services and some machine learning tools and branded them as its Watson offering.

In contrast, in late 2013 the U.K. company DeepMind released a hugely influential paper in which they showed how their system taught itself to play Atari 2600 video games, given nothing but the pixel inputs. This achievement represented a true breakthrough in AI. Google soon purchased DeepMind for over half a billion dollars, but the general public did not take notice as it had with Watson’s victory.

Perhaps seeking a more high-profile demonstration, DeepMind next targeted Go. Go presented an attractive problem because it’s so complex that it’s not computationally possible to simply search all the possible moves. Instead, Go players rely on subtle implicit pattern matching. “Good positions look good,” explained DeepMind founder Demis Hassabis in an interview with Wired.
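
For a rough sense of that complexity, a few lines of Python give the standard back-of-the-envelope comparison. The branching factors and game lengths below are commonly cited round numbers, assumed here for illustration rather than taken from the article:

```python
import math

# Rough, commonly cited estimates (illustrative assumptions, not exact values):
# chess: ~35 legal moves per position, games of ~80 plies
# Go:    ~250 legal moves per position, games of ~150 plies
chess_exponent = 80 * math.log10(35)     # ~123.5
go_exponent = 150 * math.log10(250)      # ~359.7

print(f"chess game tree: roughly 10^{chess_exponent:.0f} move sequences")
print(f"Go game tree:    roughly 10^{go_exponent:.0f} move sequences")
```

Even with aggressive pruning, nothing close to 10^360 move sequences can be enumerated, which is why AlphaGo had to lean on learned pattern evaluation rather than brute-force search.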

AlphaGo thus represents the rising success of a different approach to artificial intelligence, one whose accelerating success foretells profound implications.

Cats on Mars

The rise of Deep Learning is part of an overall shift in AI leadership from what are termed “reductionist” methods to “model-free” methods.

Reductionism can be simply described as “building a model.” Reductionist methods pare the problem down to its essential elements, and describe those core features mathematically. Most AI systems today are reductionist – the programmer writes code that implements a model of how the world works, and the system does what the code tells it to do.

But functioning effectively outside of a narrow domain requires the ability to cope with the real world – a messy, chaotic, and continually changing environment. AI systems have traditionally failed miserably at these challenges, which humans easily handle.

AI researcher Monica Anderson attributes our capability to an approach she calls “model-free” methods. Model-free systems function by dynamically building their own simple (and imperfect) rules that fit the data they see, as best they can right now, and then revising them as new data becomes available.

“The key difference,” Anderson explains, “is who is doing the reduction.” While the rules in a reductionist system come from the programmer and are built-in, a system built with model-free methods makes its own rules, trying to best predict – and thus understand – its environment.

Imagine taking your cat to live on a Mars base for a while. In the beginning, the cat would be disoriented by the lower gravity. But, being an effective model-free system, the cat would just try different movements and quickly adjust itself to the new environment. Without forming a complex model of gravity, it would simply learn that it should jump differently on Mars.
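
Here is a minimal sketch of that kind of adjustment, with a made-up “effort” parameter and hypothetical numbers chosen only for illustration: the cat never writes down a law of gravity, it just revises a habit until the feedback error shrinks.

```python
# Hypothetical illustration of model-free adaptation: revise a jump "effort"
# habit from feedback alone, with no explicit model of gravity ever built.
MARS_GRAVITY_FACTOR = 0.38   # assumption: the same effort carries much farther on Mars
TARGET_DISTANCE = 1.0        # metres the cat is trying to jump
learning_rate = 0.3

effort = 1.0                 # the Earth-calibrated habit
for attempt in range(10):
    landed = effort / MARS_GRAVITY_FACTOR                    # actual distance of this jump
    error = landed - TARGET_DISTANCE                         # overshoot (+) or undershoot (-)
    effort -= learning_rate * error * MARS_GRAVITY_FACTOR    # revise the habit from feedback
    print(f"attempt {attempt + 1}: landed at {landed:.2f} m, new effort {effort:.2f}")
```

After a handful of jumps the effort settles near the on-target value, even though nothing resembling a physics model is ever represented – which is the distinction Anderson draws between reduction done by the programmer and reduction done by the system itself.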

Cognitive competition – and cooperation

What is the real message of the AlphaGo victory? It demonstrates how model-free solutions are able to learn many cognitive activities at an effectively human level. This represents a fundamental workplace change. We are entering an age of “cognitive competition,” in which some cognitive jobs that were the exclusive domain of human beings may now be done by AI systems. For a straightforward example, we can expect to see the rise of strategic battle systems – “AI generals” – that learn by playing war game simulations against each other and against human generals.

But we are also entering an era of cognitive cooperation. The Go experts who played AlphaGo significantly improved their skills by playing a difficult and creative opponent. We ourselves are powerful, sophisticated model-free systems and, given more difficult opposition, we step up our own game.

And although model-free systems may offer a door to Artificial General Intelligence, our AI technology remains mired in task-based “narrow-AI” systems. Our AI systems have no identity, lack drives, and cannot come close to the sophisticated social understanding of a person (or, for that matter, a domestic pet). While we will see disruption in the labor markets as AI systems take over specific job roles, humans will also react flexibly, moving to harness narrow AIs as tools and using them as trainers or advisors in specific domains.

Being able to understand AI’s strengths and limitations, and to use those systems effectively or do things they cannot do, is now a distinct competitive advantage. In short, the game has changed. And, like all good model-free systems, we must adapt.

 

About the Authors:

Jennifer Barry has a background in cognitive science and finance and works as Director of Research and Projects for Leopard Research. She can be reached at jennifer [at] leopardllc.com

 

David Rostcheck is a consulting data scientist helping companies tackle challenging problems and develop advanced technology. He can be reached at drostcheck [at] leopardllc.com.


  • Dan Vasii

    I kind of get tired of repeating the same thing: game programs are not AI; IBM’s Watson is NOT an AI. An AI is a human-grade program. And here is the trick (same as the Turing “test”): there is NO scientific definition and description of human intelligence. No one can scientifically prove that person Z is more intelligent than person X. Do not confuse scientific with empirical – you can prove empirically that Z is cleverer than X, and you will be right. But you did not do it scientifically. An example: you can prove that one star is brighter than the other – you and the rest of the people can compare the brightness of these two stars, but to scientifically prove it, you would need a device that counts the number of photons emitted by each star. That is scientific proof.
    Today scientists in the AI field are mired in a bogus approach. Imagine the scientists of the past trying to understand and reproduce the Sun. They know nothing about nuclear reactions, but they study the solar radiation and discover it is composed of infrared rays, visible light and ultraviolet rays. They decide to simulate the Sun by putting together three generators emitting the three kinds of radiation. They consider that they have created an artificial Sun!
    The same with intelligence. Taking the different abilities that emanate from human rational, abstract intelligence – just like the radiation from the Sun – and turning them into computer programs, even if those programs are able to learn, is not AI.

  • Steve Morris

    This type of machine learning does seem to move machine utility forward, and has potential to be used in many different areas. But echoing what Dan Vasii said in the comments, machine utility isn’t the same as machine intelligence.

    As the article itself explains, a cat learning to move on Mars would do so experimentally, without forming a complex model of gravity. But human-level intelligence requires the ability to form a complex theory of gravity. Even the cat may form a simple theory of gravity, whereas AlphaGo never would.

    When children learn to walk, they are no doubt training their nervous system in much the same way as AlphaGo trained itself to play Go, or the cat jumping on Mars. But when a student learns Newton’s Theory of Gravitation, a completely different approach is necessary. Machine learning, while powerful, doesn’t move us closer to AGI.

  • I completely agree with you, my friend. So, if you are referring to the title, it is a word-play on DeepMind’s motto: “Solve Intelligence”. I’d love to interview Demis Hassabis on the exact definition of, or at any rate what he actually means by, those words, and how they are going about it after AlphaGo. So keep your fingers crossed and I might get to do that 😉

  • Dan Vasii

    The truth is that they are doing a great job, and we need machine learning, automation, etc. … but it is not quite AI. I agree with Steve Morris as well.

  • Adam Peri

    I’m in agreement with Dan/Steve/Nikola, but a question: is it realistic to say that we (or DeepMind) can & should approach AlphaGo the same way as if it were literally a child/cat?

    There is no consensus on what defines intelligence, and AlphaGo doesn’t demonstrate abstract thought or the ability to learn, apply or conceive ideas like Newton’s Gravitational Theory. But Newton was a baby once too. One of history’s greatest geniuses still began as an infant with a clean slate, starting by learning how to learn, to evolve, adapt, use motor skills, feel gravity, etc.

    To David and Jenny’s point in the article, we must see AlphaGo’s path in the “Learning Model” that they describe and get a feel for its limitations, capabilities, strengths, etc. If AlphaGo is free of the constraints of just playing Go, what, if anything, will it continue to learn and what will it become?

  • Dan Vasii

    It seems that AlphaGo is able to learn Go and nothing else. That is the point: it is a human ability turned into a program, while human intelligence means metamodel processing – it builds models integrated into ever-bigger models, of the Universe, the human self, and their interaction, until all these metamodels are forged into a universal paradigm. Not only can no program do this, but we cannot turn this perspective into a mathematical/algorithmic model. (And Mr. Kurzweil confuses models with metamodels, among other confusions.)

  • Dan Vasii

    That is the way to real human-grade AI. But until scientists become aware of this way, a long time will pass.

  • Well, I think the model-free advocates like Monica Anderson probably view Cyc (perhaps the ultimate example of a Reductionist approach) as a bit of a dead end. Because all the reduction has been done externally, it has a lot of rules, and the ability to operate on them, but it does not have what she would call “Understanding” of what the rules embody. Because of this, it’s very easy to break such systems with edge cases. If, for example, cats have four legs and fur, is a three-legged cat still a cat? What about a Cornish Rex cat, which has no fur? A model-free system solves this by building an internal representation of a cat, which can be partially satisfied but still be the best interpretation it has (e.g. a hairless cat is still more like a cat than a dog, so it must be a cat). (A toy sketch of this contrast appears after this comment.)

    So to gain the value of Cyc rules, a model-free system would have to read them, internally conceptualize them, and form its own opinions as to the validity of the rules.
    If we ever get to AGI systems and they operate using model-free methods (which Anderson posits are the only approach that can really work), most of the effort will probably go into training them about the world. The Cyc rules might help with that by giving an enumeration of what to teach, but, as you note, they probably can’t do it by themselves.
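
    A toy illustration of that contrast, with made-up features and numbers (in a real model-free system the internal representation would be learned from data rather than hand-coded as it is here):

```python
# Toy contrast: a brittle hand-written rule vs. a similarity judgment that
# can be partially satisfied. Features and prototypes are hypothetical.
def rule_based_is_cat(animal):
    # Reductionist rule: breaks on a three-legged or hairless cat.
    return animal["legs"] == 4 and animal["fur"] and animal["meows"]

CAT_PROTOTYPE = {"legs": 4, "fur": True, "meows": True}
DOG_PROTOTYPE = {"legs": 4, "fur": True, "meows": False}

def similarity(animal, prototype):
    # Fraction of prototype features matched; partial matches are allowed.
    return sum(animal[k] == v for k, v in prototype.items()) / len(prototype)

cornish_rex = {"legs": 4, "fur": False, "meows": True}   # a hairless cat

print(rule_based_is_cat(cornish_rex))            # False: the rule rejects it outright
print(similarity(cornish_rex, CAT_PROTOTYPE))    # ~0.67: still more cat-like...
print(similarity(cornish_rex, DOG_PROTOTYPE))    # ~0.33: ...than dog-like
```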

  • I think you are talking about what AI researchers call “Artificial General Intelligence”, AGI; they use “AI” to mean something else that is more limited. Yes, our current AI is narrow AI only; it lacks many crucial elements of “real” intelligence (AGI), such as identity, drives, and emotional intelligence (a sophisticated understanding of the world).

  • Dan Vasii

    Narrow AI may be equivalent to animal-grade intelligence – an intelligent database is like a cyber retriever. It can fetch what you tell it to fetch, but cannot understand more complicated things. It cannot categorize data – you have to do that when introducing new data into the database.

  • Adam Peri

    I agree, but are we certain AlphaGo is only capable of learning Go? That may be 100% correct, but it may not be. Can we infer, just because almost all other systems thus far have been built by programming in specific skills and abilities rather than larger thinking models capable of abstract thought, that AlphaGo is the same?

    Nikola said above: it would be great to interview Demis Hassabis and see their plans for AlphaGo in the future. Until it is stated or shown that AlphaGo is just an algorithmic model, in a Kurzweil sense, why completely write it off as incapable of bigger learning?

  • Dan Vasii

    Theoretically, as is the case with any other computer program, it can be extended ad infinitum. But that is only in theory. Practically, it is infinitely difficult to do.

  • Dan Vasii

    There was something strange that bothered me about your reply, and only after a while did I realize what: “are we certain AlphaGo is only capable of learning Go?” Don’t you realize that right now the so-called AI programs are not in their infancy, but in their fetal stage? So this Go program is not able to learn anything but what it was programmed to do. And even that can hardly be called “learning”.

  • Adam Peri

    Yes, I do realize they are in a fetal stage and not an infancy. I also know that we’re making analogies that aren’t perfect. What I meant in the above quote is basically: Shouldn’t we give DeepMind an opportunity to see what can and will be done next with AlphaGo?

    What can they do with what it has learned, and with what they have learned in the process? I do understand it can only perform the types of tasks it is programmed for, and I do understand it is not learning anything in exactly the same sense that we humans do.

    Perhaps I misconstrued what Socrates (Nikola) asked above, but he mentioned hoping to interview the creators of AlphaGo/DeepMind. While I understand the scope of all past AIs has been narrow, and that is in all likelihood the same scope that AlphaGo/DeepMind has, whether DeepMind is successful or not I still have an interest in hearing their perspective on the experiment, their plans for the future, etc. And even if they fail to answer your question/argument (one I agree with) that this is a very narrow AI incapable of transcending its initial programming, I still think there is a lot of value in understanding their intentions, motivations, hopes and dreams.

    I am sitting here as a layman and a fan. I have a master’s degree in marketing and work in a very different field than the folks at DeepMind come from. That’s probably true of a lot of readers/listeners of this weblog. So, from my position, although I believe this is a fetal-stage AI that might only reach animal levels of intelligence (if that), the goals, aspirations and techniques put into these exercises not only provide me edification, but allow me a glimpse into the work of those making advances in the field. And these advances, whether basic or ostensibly the same as prior ones, are going to impact my future. Even if it’s 99.9% likely this is not the AI we all read/hear/daydream/hope/philosophize about that sets off some sort of “Singularity Rapture” and makes us one with technology, I am unwilling to write it off as “same old, same old,” especially when DeepMind specifically continues to gain funding for more projects.

  • Dan Vasii

    I apologize. Actually, DeepMind is able to learn – and I think you are right – we don’t know what kind of intelligence this AI will be. I would prefer it to be the “faithful slave” kind, with the downside that it will not be as bright as humans, but more reliable and quicker. I am a layman in the field as well, but I feel that in the long term we, as a species, will need the scary, inhuman, better-than-us kind of AI too, and I hope it will still be a friendly one. And the DeepMind kind will be an essential stepping stone toward that far-in-the-future one.

  • I think we could call AlphaGo a hybrid system. It has a search function that comes from traditional (reductionist, hard-coded) AI, but coupled to two deep neural networks (a model-free system). It’s possible that if you switched its inputs and outputs to a different game, it could learn to play it. It might even be better because of its Go experience. DeepMind’s Atari 2600 demo learned to play 14 different kinds of games (but AlphaGo is a bit more specialized to Go). (A rough sketch of this policy-plus-search idea appears after this comment.)

    Deep networks can be generalized, but most of the time right now there is some modification involved. A common example in vision is that researchers will train a neural network on image recognition (say cats vs. dogs), then take that network, clip off the output layer, and use it as the base for another vision-recognition task, like recognizing different kinds of cars. Is this general intelligence? No, not yet, but it’s much more flexible than the traditional reductionist, heuristic-based AI that Dan (correctly) criticizes.
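
    To make the hybrid idea concrete, here is a deliberately tiny, hypothetical sketch: a stand-in “policy” proposes and prunes candidate moves, a stand-in “value” function scores the resulting positions, and a shallow search combines the two. The toy game and stub functions are invented for illustration; this is not DeepMind’s actual Monte Carlo tree search.

```python
# Hypothetical sketch: a learned policy prunes the search and a learned value
# function scores positions. Stubs stand in for real neural networks and a real game.
def legal_moves(state):
    return ["a", "b", "c"]                       # placeholder game

def apply_move(state, move):
    return state + move                          # placeholder transition

def policy_net(state):
    moves = legal_moves(state)
    return {m: 1.0 / len(moves) for m in moves}  # stub: uniform move probabilities

def value_net(state):
    return 0.5                                   # stub: estimated win probability

def evaluate(state, depth):
    if depth == 0:
        return value_net(state)
    # Assume the opponent replies with the move worst for us.
    return min(evaluate(apply_move(state, m), depth - 1) for m in legal_moves(state))

def choose_move(state, top_k=2, depth=2):
    probs = policy_net(state)
    candidates = sorted(probs, key=probs.get, reverse=True)[:top_k]  # policy prunes
    return max(candidates, key=lambda m: evaluate(apply_move(state, m), depth - 1))

print(choose_move("start"))
```

    And here is a minimal sketch of the network-reuse pattern described above, assuming PyTorch and torchvision (0.13+) are installed; the pretrained ResNet-18, the frozen layers and the hypothetical 10-class “car models” head are illustrative choices, not details from this discussion.

```python
import torch.nn as nn
from torchvision import models

# Reuse a network trained on one vision task as the base for another:
# keep the learned feature layers, swap the output layer for the new task.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained on ImageNet

for param in backbone.parameters():
    param.requires_grad = False            # freeze the reused feature extractor

num_features = backbone.fc.in_features     # 512 for ResNet-18
backbone.fc = nn.Linear(num_features, 10)  # hypothetical new head: 10 car classes
# Training now only fits the new layer on the new dataset; everything else is reused.
```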

  • Adam Peri

    Definitely no need to apologize. It’s fun discourse and we all learn from the conversations as well as the blogs. I think it’s an interesting gauge of the community of followers as well. In some ways it’s important for those of us who follow these sorts of technologies closely (especially the varying types and ideas of AI/exponential growth/transhumanism) to engage in discourse even as laypeople, because of their potential impact on humanity. I still think that these topics are not closely followed by the general public, and our own ability to evangelize, or at least understand, the “sea change” that is coming could prove important. We are not only preparing ourselves as individuals, but the public discourse, sharing of articles, thoughts and opinions could help bring more ubiquity to these important issues that will cause huge shifts in our way of life.

    I understand that is changing the topic a bit, but definitely no need to apologize for asking for clarification. And I know I can be pretty verbose, especially when listening in bed/writing on my phone 🙂


  • Pedro Marcal

    A year has passed since this article was published. Maybe we can now look at it again with some perspective. The spectacular growth of neural networks has allowed us to tackle ‘model-free problems’. I view it as transforming these problems into data-based ‘model problems’. Watkins’ thesis at Cambridge proposed Q-learning with delayed reward, based implicitly on dynamic programming (a minimal sketch of that update appears below). It was DeepMind’s genius that combined these two theories to give us AlphaGo. This opened up a whole unmined area for optimization of ‘model-free problems’.
    However, I see no reason to conclude that this tool is the solution to intelligence.
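
    For readers who have not met Watkins’ Q-learning, its core is a single update rule: move the estimate for a state–action pair toward “reward plus discounted best future value”. Below is a minimal tabular sketch on a made-up five-state corridor; the states, rewards and parameters are invented purely for illustration (DeepMind’s Atari work kept essentially this update but replaced the table with a deep network).

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning on a toy corridor: move -1 (left) or +1 (right)
# across states 0..4; reaching state 4 pays a delayed reward of +1.
alpha, gamma = 0.5, 0.9
Q = defaultdict(float)                     # Q[(state, action)] -> estimated return

def step(state, action):
    next_state = max(0, min(4, state + action))
    return next_state, (1.0 if next_state == 4 else 0.0)

for episode in range(500):
    state = 0
    while state != 4:
        action = random.choice([-1, 1])    # explore at random; Q-learning is off-policy
        next_state, reward = step(state, action)
        best_next = max(Q[(next_state, -1)], Q[(next_state, 1)])
        # Watkins' update: nudge Q toward reward + discounted best future value.
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print([round(Q[(s, 1)], 2) for s in range(4)])   # roughly [0.73, 0.81, 0.9, 1.0]
```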
