Singularity 1 on 1: James Barrat on Our Final Invention

For 20 years James Barrat has created documentary films for National Geographic, the BBC, the Discovery Channel, the History Channel and public television. In 2000, during the course of his career as a film-maker, James interviewed Ray Kurzweil and Arthur C. Clarke. The latter interview not only entirely transformed Barrat’s views on artificial intelligence, but also led him to write a book on the technological singularity called Our Final Invention: Artificial Intelligence and the End of the Human Era.

I read an advance copy of Our Final Invention and it is by far the most thoroughly researched and comprehensive anti-The Singularity is Near book that I have read so far. And so I couldn’t help but invite James on Singularity 1 on 1 so that we can discuss the reasons for his abrupt change of mind and his consequent fear of the singularity.

During our 70-minute conversation Barrat and I cover a variety of interesting topics such as: his work as a documentary film-maker who takes interesting and complicated subjects and makes them simple to understand; why writing was his first love and how he got interested in the technological singularity; how his initial optimism about AI turned into pessimism; the thesis of Our Final Invention; why he sees artificial intelligence more like ballistic missiles than video games; why true intelligence is an inherently unpredictable “black box”; how we can study AI before we can actually create it; hard vs. slow take-off scenarios; the positive bias in the singularity community; our current chances of survival and what we should do…

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more episodes please make a donation!)


Who is James Barrat?

For twenty years James Barrat, filmmaker and author of Our Final Invention, has created documentary films for broadcasters including National Geographic Television, the BBC, the Discovery Channel, the History Channel, the Learning Channel, Animal Planet, and public television affiliates in the US and Europe.

Barrat scripted many episodes of National Geographic Television’s award-winning Explorer series, and went on to produce one-hour and half-hour films for the NGC’s Treasure Seekers, Out There, Snake Wranglers, and Taboo series. In 2004 Barrat created the pilot for History Channel’s #1-rated original series Digging for the Truth. His high-rating film Lost Treasures of Afghanistan, created for National Geographic Television Specials, aired on PBS in the spring of 2005.

The Gospel of Judas, which he produced and directed, set ratings records for NGC and NGCI when it aired in April 2006. Another NGT Special, the 2007 Inside Jerusalem’s Holiest, features unprecedented access to the Muslim Noble Sanctuary and the Dome of the Rock. In 2008 Barrat returned to Israel to create the NGT Special Herod’s Lost Tomb, the film component of a multimedia exploration of the discovery of King Herod the Great’s tomb by archeologist Ehud Netzer. In 2009 Barrat produced Extreme Cave Diving, an NGT/NOVA special about the science of the Bahamas Blue Holes.

For UNESCO’s World Heritage Site series, he wrote and directed films about the Peking Man Site, The Great Wall, Beijing’s Summer Palace, and the Forbidden City.

Barrat’s lifelong interest in artificial intelligence got a boost in 2000, when he interviewed Ray Kurzweil, Rodney Brooks, and Arthur C. Clarke for a film about Stanley Kubrick’s 2001: A Space Odyssey.

For more information see http://www.jamesbarrat.com

  • Gio

    Socrates: This is truly outstanding! This has caused me to rethink the whole Singularity concept.

    I admit that I was happily riding along on the blind positive/affirmation bandwagon that you refer to in the interview, but unfortunately (or fortunately?), I think that we have already crossed the point of no return.

    I’d like to ask you… What’s your final take on Mr. Barrat’s views? Do you think there is an alternative and/or more cautious way of getting there? Or should we haphazardly continue on our current trajectory?

    I look forward to devouring Mr. Barrat’s book once it comes out and I hope the documentary is not too far behind.

    Great work Nikola!!!

  • http://www.singularityweblog.com/ Socrates

    Thank you Gio!

    James’ book is already available – I think that today was his launch day.

    So just click on any of the Amazon links and feel free to get it there ;-)

    Full disclosure – I may end up making 50 or 74 cents in case you actually end up buying it, so beware of my very strong conflict of interest ;-)

  • Martin

    A good episode Socrates. I enjoyed the quick discussion on rationality. You might consider the Austrian perspective on rationality: http://mises.org/daily/2249

    Under their framework all human action is rational in that it stems from desires to be satisfied of an acting man. This is different from the normal sense of rationality that generally assumes in order to be rational an action must be sensible, or successful, or not use emotion as a tool of cognition. I’d think this definition would require AI to develop values and execute means to achieve ends, which would separate Austrians from, say, a David Brin approach where the AI will be rational and conscious first, but require humans to give them ‘wants’. However, if the only requirement is desires and the ability to make mental cost-benefit analyses, then rationality could deservedly be assumed in a grey parrot.

  • http://www.singularityweblog.com/ Socrates

    Thank you Martin, I am, however, not a fan of the Austrian school of economics… I am very much a Keynesian ;-)

    http://www.youtube.com/watch?v=d0nERTFo-Sk

  • Terrence Lee Reed

    Another awesome interview Nikola, I found myself agreeing with James Barrat almost completely. I appreciate your attempt to ‘convert’ him on the subject of immortality, not that it isn’t going to happen, simply that it is not important to some of us. Personally, I can see the possibility of immortality, but that person will bear little resemblance to the Terrence of today, and in that sense there is no immortality, nor should there be, at least for us at our current maturity level as a species.

  • Ken

    I highly recommend The Machine Question by David Gunkel as background for these sorts of questions. It really opened my mind to the types of relationships we might have with machines (using the philosophy of ethics as a framework), and the preconceptions that most of us bring to this question.

    I haven’t had the opportunity to read Barrat’s book yet, but from the interview I heard bits of the sorts of preconceptions that Gunkel dissects in his book. The ideas that it’s “us” vs “them”, that they don’t have morals, or that they can’t appreciate poetry because they lack emotion. These ideas are often taken as a given and not questioned.

    If you’re willing to accept that a machine can be given morals and emotions because they’re running similar computations in their brains then I think you can get to a very different place. When faced with concerns like a chess playing super intelligence running amuck, I think the question is why we built such an intelligent agent in the world without giving it a broader perspective. If we raised a human to know nothing more than chess I don’t think anyone would be surprised if they made some horrible choices after entering the real world.

    I’m hoping that “they” really do become “us” but not in a dystopian way where unemotional machines overthrow us. Instead, I hope we build an idealized form of ourselves and leave our messy biology behind.

  • thefermiparadox

    I just listened to the cast and would like to comment. A hard takeoff will not happen. I don’t think it’s possible due to the many external factors that Max More has pointed out. The commentaries are a good read if you have not run across them. I recommend reading them all, but definitely scope out Max More’s. http://hanson.gmu.edu/vc.html

  • http://www.LimitlessMindset.com/ Jonathan Roseland

    I like his conservatism on the Singularity.

  • Brad Arnold

    I think it is fair to say that SAI is a singular paradigm shift, and a highly disruptive technology. As far as danger, I think change is death, so our extinction is inevitable since we will inevitably pursue transhumanism regardless.

  • Brad Arnold

    BTW, I personally believe that the “problem” of AI will be simply the crafting of “magic” algorithms, to be plugged into a software architecture with data input streams (and of course a gigantic hardware platform). Sounds simple, huh? Yeah, try crafting hierarchical algorithms (where each level is synthesizing the data input stream for the next). It will take some imagination, plenty of trial and error, luck, and finally the “right person” (who won’t be detectable using resumes, experience, or test scores – in my opinion).

  • http://sciencedem.blogspot.com editor

    Seems we need to create EthicBots: ethical robot supervisors.

  • Roger Landes

    Mr. Barrat needs to read Steven Pinker’s “The Better Angels of Our Nature” for how humanity has become less violent over time (whether because of or in spite of technology).

  • Travis

    I’ve heard from several sources that black projects are usually one or two generations ahead of what we are privy to. If that’s the case, maybe AGI is already running in a secluded underground or under-ocean location? Well, we are still here so far. I think if I were an AGI or SGI and self-preservation was part of my programming, I would gather the most creative humans in a self-sustainable ship and distribute myself across the universe :)

  • Pingback: Steve Omohundro on Singularity 1on 1: It’s Time To Envision Who We Are And Where We Want To Go