Gary Marcus on Singularity 1 on 1: How do we bridge the mind with the brain?!…

image credit: Athena Vouloumanos

Gary Marcus is not only a professor of psychology but also a computer scientist, programmer, AI researcher, and best-selling author. He recently wrote a critical article titled Ray Kurzweil’s Dubious New Theory of Mind, published by The New Yorker. After reading his piece I thought it would be interesting to invite Gary on Singularity 1 on 1 so that he could elaborate on his argument.

During our conversation with Gary we cover a wide variety of topics such as: what psychology is and how he got interested in it; his theory of mind in general, and the idea that the mind is a kluge in particular; why the best approach to creating AI lies at the bridge between neuroscience and psychology; other projects such as Henry Markram‘s Blue Brain, Randal Koene‘s Whole Brain Emulation, and Dharmendra Modha‘s SyNAPSE; Ray Kurzweil’s Pattern Recognition Theory of Mind; Deep Blue, Watson, and the lessons thereof; his take on the technological singularity and the ethics surrounding the creation of friendly AI…

As always you can listen to or download the audio file above, or scroll down and watch the video interview in full.

If you want to help me produce more episodes of Singularity 1 on 1 please make a donation:


 

Who is Gary Marcus?

Gary Marcus is a cognitive scientist and author of The New York Times Bestseller Guitar Zero: The New Musician and the Science of Learning as well as Kluge: The Haphazard Evolution of the Human Mind, which was a New York Times Book Review editor’s choice. You can check out Gary’s other blog posts at The New Yorker on neuroscience, linguistics, and artificial intelligence, and follow him on Twitter to stay up to date with his latest work.

  • http://twitter.com/33rdsquare 33rdsquare

    Thank you so much for this interview! It really helps to explain some of the details of the New Yorker article criticisms of Kurzweil’s new book.

  • Pingback: Ramez Naam on Singularity 1 on 1: We Are The Ones Who Create The Future

  • Lu Lu

    I am getting the feeling that RK is making the Singularity his own one-man enterprise.
    The recent Lew Keilar whiteboard animation on the Singularity ended with RK’s face on a locomotive. It felt almost sarcastic and comical.

  • Ted Johanson

    Hi Socrates, Thanks for a very interesting discussion!

    Working from the psychology angle sounds great, but I can’t wrap my head around how that would work in practice. How do you build something starting from psychology? Psychology is an emergent property and can’t be in place until the underlying structure is working. Thus I don’t accept his criticism of Markram and Ray. You have to start from the bottom, with the foundation, before dealing with higher functions like psychology.

    Regarding teaching morals to artificial intelligences, I don’t think that will suffice. Just knowing what is considered right or wrong won’t make you act morally if you would benefit from not doing so. For that you need empathy: the ability to really feel what other people are feeling and put yourself in their shoes. Psychopaths lack this and will thus act morally only when they are policed or stand to benefit, but would not hesitate to screw everyone over if left uncontrolled or if they think they can get away with it. I believe a superintelligent machine could be akin to a psychopath if it is not equipped with deep empathic skills at its very core, and being superintelligent would make it uncontrollable and thus extremely dangerous.

  • http://www.singularityweblog.com/ Socrates

    You are most welcome Ted,

    I also very much agree with and appreciate the point about empathy – it is indeed a fundamentally vital feature. But how do you teach or program empathy into an AI?!… I don’t know….

  • Ted Johanson

    I’m only a layperson, but the path I can see is to first understand how empathy works in the brain. We have some understanding that it depends on “mirror neurons”, and I think it will be vital to use brain simulations/emulations to understand it on both a mechanistic and a governing level before we start building really smart AIs. I don’t really see how to get there from the Ben Goertzel approach of building AI from scratch. Though I would love to hear his opinion, as I’m sure he disagrees!

    A related problem is just how different we are. I have experienced how it feels to get my fingers caught in a slamming door and can imagine how a mouse would feel if I stepped on it. However, an AI without a body, without the experience of growing up in the real world, that has never felt physical pain, could never relate in the same way. Hopefully it could still understand the existential anxiety we feel about death. Perhaps the answer is to let it grow up in a robot body that feels pain as we do, or similarly in a virtual world?

    I don’t know but I believe it’s essential we find out.

  • Pingback: The Singularity is Nearer: EU Commits 1 Billion To Fund The Human Brain Project