Steve Omohundro on Singularity 1 on 1: It’s Time To Envision Who We Are And Where We Want To Go

Steve Omohundro is a scientist, professor, author, and entrepreneur who has a Ph.D. in physics but has spent decades studying intelligent systems and artificial intelligence. His research into the basic “AI Drives” was featured in James Barrat’s recent book Our Final Invention and has been generating international interest. And so I was very happy to get Dr. Omohundro on my Singularity 1 on 1 podcast.

During our 1 hour conversation we cover a variety of interesting topics such as: his personal path, starting with a Ph.D. in physics and ending up in AI; his unique time with Richard Feynman; the goals, motivation, and vision behind his work; Omai Ventures and Self-Aware Systems; the definition of AI; rational decision making and the Turing Test; provably safe mathematical systems and AI scaffolding; hard vs. soft singularity take-offs…

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more high-quality episodes like this one please make a donation!)


Who is Steve Omohundro?

Steve Omohundro has been a scientist, professor, author, software architect, and entrepreneur doing research that explores the interface between mind and matter.

He has degrees in Physics and Mathematics from Stanford and a Ph.D. in Physics from U.C. Berkeley. Dr. Omohundro was a computer science professor at the University of Illinois at Urbana-Champaign and co-founded the Center for Complex Systems Research.

He published the book Geometric Perturbation Theory In Physics, designed the programming languages StarLisp and Sather, wrote the 3D graphics system for Mathematica, and built systems which learn to read lips, control robots, and induce grammars.

Steve has worked with many research labs and startup companies. He is the president of Self-Aware Systems, a Palo Alto think tank working to ensure that intelligent technologies have a positive impact on humanity. His research into the basic “AI Drives” was featured in James Barrat’s recent book Our Final Invention: Artificial Intelligence and the End of the Human Era and has been generating international interest.


  • polybiblios

    Regarding the mention of what both Chomsky and Minsky believe about AI, it’s well known that both of those guys come from the same school of thought. Other groups have a completely different take. Have a look at this comment that Yann LeCun posted to this Google+ page following a critical article by Gary Marcus about deep learning:

    ====

    My dear NYU colleague +Gary Marcus wrote a critical response to +John Markoff’s front-page article on deep learning in the New York Times.

    Gary is a professor in the psychology department at NYU and the author of a number of books, including a very nice little book entitled “Kluge: the haphazard construction of the human mind” in which he argues (very convincingly) that the brain is a collection of hacks (which were called kluges, back when cool things were mechanical and not software), the result of haphazard refinement through evolution.

    Gary has been a long-time critic of non-symbolic (or sub-symbolic) approaches to AI, such as neural nets and connectionist models. He comes from the Chomsky/Fodor/Minsky/Pinker school of thought on the nature of intelligence, whose main tenet is that the mind is a collection of pre-wired modules that are largely determined by genetics. This is meant to contrast with the working hypothesis on which we, the deep learning people, are basing our research: the cortex runs a somewhat “generic” and task-independent learning “algorithm” that will capture the structure of whatever signal it is fed with.

    To be sure, none of us are extreme in our positions. I have been a long-time advocate for the necessity of some structure in learning architectures (such as convolutional nets). All of learning theory points to the fact that learning needs structure. Similarly, Gary doesn’t claim that learning has no role to play.

    In the end, it all comes down to two questions:
    - how important of a role does learning play in building a human mind?
    - how much prior structure is needed?

    +Geoffrey Hinton and I have devoted most of our careers to devising learning algorithms that can do interesting feats with as little structure as possible (but still some). It’s a matter of degree.

    One important claim in Gary’s piece is that neural nets are merely “a ladder on the way to the moon” because they are incapable of symbolic reasoning. I think there are two issues with that argument:
    1. As I said on previous occasions, I’d be happy if, within my lifetime, we have machines as intelligent as a rat. I don’t think Gary would argue that rats do symbolic reasoning, but they are pretty smart. I don’t think human intelligence is considerably (qualitatively) different from that of a rat, and definitely not that different from that of an ape. We could do a lot without human-style symbolic reasoning.
    2. There is not that much of a conceptual difference between some of the learning systems that we are building and the symbolic reasoning systems that Gary likes. Many modern ML systems produce their output by minimizing some sort of energy function, a process qualitatively equivalent to inference (Geoff and I call these “energy-based models”, but Bayesian nets also fit in that framework). Training consists in shaping the energy function so that the inference process produces an acceptable answer (or a distribution over answers).
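
    To make the energy-based picture above concrete, here is a minimal, illustrative Python sketch (NumPy assumed; this is a toy linear energy model, not LeCun's actual systems): inference picks the label with the lowest energy, and training reshapes the energy so the correct label wins.

        # Toy energy-based model: E(x, y; W) = -W[y] . x
        # Inference = pick the y that minimizes the energy.
        # Training  = push the energy of the correct answer down and
        #             the energy of the current best wrong answer up.
        import numpy as np

        rng = np.random.default_rng(0)
        N_FEATURES, N_CLASSES = 4, 3
        W = rng.normal(scale=0.1, size=(N_CLASSES, N_FEATURES))

        def energy(x, y, W):
            return -W[y] @ x                      # lower = better pairing of x and y

        def infer(x, W):
            return min(range(N_CLASSES), key=lambda y: energy(x, y, W))

        def train_step(x, y_true, W, lr=0.1):
            y_hat = infer(x, W)
            if y_hat != y_true:                   # only update on mistakes
                W[y_true] += lr * x               # lowers energy(x, y_true)
                W[y_hat] -= lr * x                # raises energy(x, y_hat)
            return W

        # Toy data: one Gaussian blob per class, each centred on a different axis.
        means = 3 * np.eye(N_FEATURES)[:N_CLASSES]
        X = np.vstack([rng.normal(loc=m, scale=0.3, size=(30, N_FEATURES)) for m in means])
        labels = np.repeat(np.arange(N_CLASSES), 30)

        for _ in range(20):
            for x, y in zip(X, labels):
                W = train_step(x, y, W)

        accuracy = np.mean([infer(x, W) == y for x, y in zip(X, labels)])
        print(f"training accuracy: {accuracy:.2f}")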

    Gary points out that the second wave of neural nets in the late ’80s and early ’90s was pushed out by other methods. Interestingly, they were pushed out by methods such as Support Vector Machines, which are closer to the earliest Perceptrons and even further away from symbolic reasoning than deep learning systems are. To some extent, it could be argued that the kernel trick allowed us to temporarily abandon the search for methods that could go significantly beyond linear classifiers and template matching.
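
    The remark about the kernel trick can be illustrated with a small example (scikit-learn assumed; purely illustrative): an SVM with a fixed RBF kernel separates data that defeats any linear classifier, without learning new features the way a deep net would.

        # Two concentric rings: no linear decision boundary can separate them,
        # but an RBF-kernel SVM handles them via a fixed, implicit feature map.
        from sklearn.datasets import make_circles
        from sklearn.svm import SVC

        X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

        linear_svm = SVC(kernel="linear").fit(X, y)
        rbf_svm = SVC(kernel="rbf").fit(X, y)

        print("linear kernel accuracy:", linear_svm.score(X, y))  # roughly chance level
        print("RBF kernel accuracy:   ", rbf_svm.score(X, y))     # close to 1.0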

    There is one slightly confusing thing in Gary’s piece (as well as in John Markoff’s piece): the suggestion that all the recent successes of deep learning are due to unsupervised learning. That is not the case. Many of the stunning results use purely supervised learning, sometimes applied to convolutional network architectures, as in Geoff’s ImageNet object recognizer, our scene parsing system, our house number recognizer (now used by Google), and IDSIA’s traffic sign recognizer. The key idea of deep learning is to train deep multilayer architectures to learn pre-processing, low-level feature extraction, mid-level feature extraction, classification, and sometimes contextual post-processing in an integrated fashion. Back in the mid ’90s, I used to call this “end-to-end learning” or “global training”.
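
    The phrase “end-to-end learning” above can be made concrete with a small sketch (PyTorch assumed; a toy stand-in, not the ImageNet, scene-parsing, or house-number systems): the low-level features, mid-level features, and classifier are all layers of one network, and a single supervised loss trains them together.

        # A tiny multilayer network trained end to end: one loss shapes every stage.
        import torch
        import torch.nn as nn

        model = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # mid-level feature extraction
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(16 * 7 * 7, 10),                   # classification
        )

        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
        loss_fn = nn.CrossEntropyLoss()

        # Stand-in data: random 28x28 "images" with random labels; a real system
        # would use a labelled dataset such as digits or house numbers.
        images = torch.randn(64, 1, 28, 28)
        targets = torch.randint(0, 10, (64,))

        for step in range(5):
            optimizer.zero_grad()
            loss = loss_fn(model(images), targets)       # gradients flow through all layers
            loss.backward()
            optimizer.step()
            print(f"step {step}: loss {loss.item():.3f}")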

    Gary makes the point that even deep learning modules are but one component of complex systems with lots of other components and moving parts. It’s true of many systems. But the philosophy of deep learning is to progressively integrate all the modules in the learning process.
    An example of that is the check reading system I built at Bell Labs in the early 1990s with +Leon Bottou, +Yoshua Bengio, and +Patrick Haffner. It integrated a low-level feature extractor, a mid-level feature extractor, a classifier (all parts of a convolutional net), and a graphical model (word and language model), all trained in an integrated fashion.

    So, just wait a few years, Gary. Soon, deep learning systems will incorporate reasoning again.

    The debate is open. Knock yourself out, dear readers.
    #deeplearning

  • http://www.singularityweblog.com/ Socrates

    Thanks very much, friend, for your good comments that contribute to our discussion. I myself have previously interviewed Gary Marcus on Singularity 1on1 and we discussed his ideas here: http://www.youtube.com/watch?v=NLCKZxMSqFw