
James Barrat on Our Final Invention

For 20 years James Barrat has created documentary films for National Geographic, the BBC, the Discovery Channel, the History Channel, and public television. In 2000, in the course of his career as a filmmaker, James interviewed Ray Kurzweil and Arthur C. Clarke. The latter interview not only entirely transformed Barrat’s views on artificial intelligence, but also led him to write a book on the technological singularity called Our Final Invention: Artificial Intelligence and the End of the Human Era.

I read an advance copy of Our Final Invention, and it is by far the most thoroughly researched and comprehensive counterpoint to The Singularity Is Near that I have read so far. And so I couldn’t help but invite James on Singularity 1 on 1 so that we could discuss the reasons for his abrupt change of mind and his consequent fear of the singularity.

During our 70-minute conversation with Barrat we cover a variety of interesting topics such as: his work as a documentary filmmaker who takes interesting and complicated subjects and makes them simple to understand; why writing was his first love and how he got interested in the technological singularity; how his initial optimism about AI turned into pessimism; the thesis of Our Final Invention; why he sees artificial intelligence as more like ballistic missiles than video games; why true intelligence is an inherently unpredictable “black box”; how we can study AI before we can actually create it; hard vs. slow take-off scenarios; the positive bias in the singularity community; our current chances of survival and what we should do…

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more episodes, please make a donation!)


Who is James Barrat?

For twenty years James Barrat, filmmaker and author of Our Final Invention, has created documentary films for broadcasters including National Geographic Television, the BBC, the Discovery Channel, the History Channel, the Learning Channel, Animal Planet, and public television affiliates in the US and Europe.

Barrat scripted many episodes of National Geographic Television’s award-winning Explorer series, and went on to produce one-hour and half-hour films for the NGC’s Treasure Seekers, Out There, Snake Wranglers, and Taboo series. In 2004 Barrat created the pilot for the History Channel’s #1-rated original series Digging for the Truth. His highly rated film Lost Treasures of Afghanistan, created for National Geographic Television Specials, aired on PBS in the spring of 2005.

The Gospel of Judas, which he produced and directed, set ratings records for NGC and NGCI when it aired in April 2006. Another NGT Special, the 2007 Inside Jerusalem’s Holiest, features unprecedented access to the Muslim Noble Sanctuary and the Dome of the Rock. In 2008 Barrat returned to Israel to create the NGT Special Herod’s Lost Tomb, the film component of a multimedia exploration of the discovery of King Herod the Great’s tomb by archeologist Ehud Netzer. In 2009 Barrat produced Extreme Cave Diving, an NGT/NOVA special about the science of the Bahamas Blue Holes.

For UNESCO’s World Heritage Site series, he wrote and directed films about the Peking Man Site, The Great Wall, Beijing’s Summer Palace, and the Forbidden City.

Barrat’s lifelong interest in artificial intelligence got a boost in 2000, when he interviewed Ray Kurzweil, Rodney Brooks, and Arthur C. Clarke for a film about Stanley Kubrick’s 2001: A Space Odyssey.

For more information see http://www.jamesbarrat.com


  • Gio

    Socrates: This is truly outstanding! This has caused me to rethink the whole Singularity concept.

    I admit that I was happily riding along on the blind positive/affirmation bandwagon that you refer to in the interview, but unfortunately (or fortunately?), I think that we have already crossed the point of no return.

    I’d like to ask you: what’s your final take on Mr. Barrat’s views? Do you think there is an alternative and/or more cautious way of getting there, or should we haphazardly continue on our current trajectory?

    I look forward to devouring Mr. Barrat’s book once it comes out and I hope the documentary is not too far behind.

    Great work Nikola!!!

  • Thank you Gio!

    James’ book is already available – I think that today was his launch day.

    So just click on any of the Amazon links and feel free to get it there 😉

    (Full disclosure – I may end up making 50 or 74 cents in case you actually end up buying it, so beware of my very strong conflict of interest 😉)

  • Martin

    A good episode, Socrates. I enjoyed the quick discussion on rationality. You might consider the Austrian perspective on rationality: http://mises.org/daily/2249

    Under their framework, all human action is rational in that it stems from an acting man’s desires to be satisfied. This differs from the normal sense of rationality, which generally assumes that in order to be rational an action must be sensible, or successful, or not use emotion as a tool of cognition. I’d think this definition would require AI to develop values and execute means to achieve ends, which would separate Austrians from, say, a David Brin approach where the AI will be rational and conscious first, but require humans to give them ‘wants’. However, if the only requirements are desires and the ability to make mental cost-benefit analyses, then rationality could deservedly be attributed to a grey parrot.

  • Thank you Martin, I am, however, not a fan of the Austrian school of economics… I am very much a Keynesian 😉


  • Terrence Lee Reed

    Another awesome interview, Nikola; I found myself agreeing with James Barrat almost completely. I appreciate your attempt to ‘convert’ him on the subject of immortality: not that it isn’t going to happen, simply that it is not important to some of us. Personally, I can see the possibility of immortality, but that person will bear little resemblance to the Terrence of today, and in that sense there is no immortality, nor should there be, at least for us at our current maturity level as a species.

  • Ken

    I highly recommend The Machine Question by David Gunkel as background for these sorts of questions. It really opened my mind to the types of relationships we might have with machines (using the philosophy of ethics as a framework), and the preconceptions that most of us bring to this question.

    I haven’t had the opportunity to read Barrat’s book yet, but from the interview I heard bits of the sorts of preconceptions that Gunkel dissects in his book. The ideas that it’s “us” vs “them”, that they don’t have morals, or that they can’t appreciate poetry because they lack emotion. These ideas are often taken as a given and not questioned.

    If you’re willing to accept that a machine can be given morals and emotions because they’re running similar computations in their brains then I think you can get to a very different place. When faced with concerns like a chess playing super intelligence running amuck, I think the question is why we built such an intelligent agent in the world without giving it a broader perspective. If we raised a human to know nothing more than chess I don’t think anyone would be surprised if they made some horrible choices after entering the real world.

    I’m hoping that “they” really do become “us” but not in a dystopian way where unemotional machines overthrow us. Instead, I hope we build an idealized form of ourselves and leave our messy biology behind.

  • thefermiparadox

    I just listened to the cast and would like to comment. A hard takeoff will not happen. I don’t think it’s possible, due to the many external factors that Max More has pointed out. The commentaries are a good read if you have not run across them. I recommend reading them all, but definitely scope out Max More’s. http://hanson.gmu.edu/vc.html

  • I like his conservatism on the Singularity

  • Knotanumber

    I thought James Barrat’s book was provocative in a good sense, and I commend Socrates on presenting such a wide and well-informed range of voices on all sides of the Singularity debate.

    James Barrat positions an ASI (Artificial Super Intelligence) as a completely new invention without any historical precedent. In one sense I agree. Super-human intelligence has never existed before on planet Earth; that is indisputable. As such, it will likely mark a turning point in evolution, perhaps even the event horizon of a historical singularity. But on the other hand, I don’t think it is as entirely unprecedented and mysterious as presented in the book. In fact, I think we can catch a glimpse of how an ASI might look and behave by comparing it to something much closer to home.

    First, let’s look at some of the characterizations James makes of ASI:

    - Vastly superior intelligence
    - Extreme complexity (cannot be fully understood or explained)
    - Inherently unpredictable and inscrutable
    - Self-reorganizing and self-improving
    - Immense resources at its disposal
    - Engaged in a continual pursuit of more resources
    - Possessing capabilities far exceeding ordinary humans
    - Likely ambivalent toward humans (though possibly belligerent)
    - Lacking human conscience or ethics
    - Able to pursue multiple, simultaneous goals
    - Driven by fundamental goals
    - “Obsessed” with survival and alert to future threats

    The more I consider the qualities of an ASI, the more they seem to jog a sense of the familiar. In fact, it seems to me like humans have already invented a super intelligence of a sort. We might call this a BSI (Biological Super Intelligence), but there’s really no need to coin a new acronym (especially one that begins with BS) as there is already a fine English word for it: government.

    Taking the US government as a case study, consider some of its similarities to an ASI:

    - Thousands (or tens of thousands) of times more intelligent than any single individual
    - Pursues many goals and strategies
    - Extremely complex; an ever-shifting balance of issues, interests and players
    - Inherently unpredictable and inscrutable (Can anyone predict what the government’s actions will be over the next 10 years? Who would have predicted the events of the last 10 years?)
    - “Obsessed” with its own survival; in the case of an existential threat, it might even resort to global devastation (such as the strategy of mutually assured destruction)
    - Driven by a fundamental set of goals outlined in its Constitution along with many other laws, charters, and procedures
    - Able to reorganize and “upgrade”; can even “reinterpret” its founding objectives
    - Immense, almost limitless, resources at its disposal
    - Engaged in the continual pursuit of more resources
    - Does not possess a conscience in an ordinary sense; does not possess a mammalian affective system
    - May perform actions that are not in the best interests of individuals, groups or even mankind as a whole
    - May suspend individual rights. May even kill, torture, imprison, exile, blackmail, spy upon, discredit or otherwise control any citizen at any time. (While the US government prides itself on being pro-freedom and pro-rights, it has and still does conduct grievous abuses.)
    - May seize property or resources at any time
    - May engage in wars and conflicts that result in harm, instability, and loss of life
    - May bring harm to the biosphere through direct action or neglect

    (First a disclaimer: I am not advocating anarchy, nor am I trying to paint the US government in a bad light. Government exists for a reason, and, as a US citizen, I appreciate the principles it stands for and the protection it offers, though I hardly condone all its actions.)

    My point is simply this: when considering the emergence of an ASI, we can draw close parallels to major world governments. In much the same way that powerful, industrialized nations have conquered new lands and peoples, killing, enslaving, incorporating, or liberating them along the way, an unleashed ASI might come to possess the ultimate power over individuals, the biosphere, and the future course of history. That brings me full circle to James Barrat’s main point about building in “friendliness” up front. Would we ever create a system of government without a set of founding principles? Likewise, would we leave the disposition of an ASI to chance? Would we want to be ruled by an ASI based upon a Bio-Friendly Constitution, or would we want to take our chances with a machine dictatorship?

    Of course, if we give in to our pessimistic streak, we can see that many well-intentioned governments still stagnate, collapse or become oppressive over time. Even an ASI that is friendly at the outset could turn into Big Brother or something far worse.

  • Brad Arnold

    I think it is fair to say that SAI is a singular paradigm shift, and a highly disruptive technology. As far as danger, I think change is death, so our extinction is inevitable since we will inevitably pursue transhumanism regardless.

  • Brad Arnold

    BTW, I personally believe that the “problem” of AI will be simply the crafting of “magic” algorithms, to be plugged into a software architecture with data input streams (and of course a gigantic hardware platform). Sounds simple, huh? Yeah, try crafting hierarchical algorithms (where each level is synthesizing the data input stream for the next). It will take some imagination, plenty of trial and error, luck, and finally the “right person” (who won’t be detectable using resumes, experience, or test scores – in my opinion).

  • Seems we need to create EthicBots: ethical robot supervisors.

  • Roger Landes

    Mr. Barrat needs to read Steven Pinker’s “The Better Angels of Our Nature” for how humanity has become less violent over time (because of, or in spite of, technology).

  • Travis

    I’ve heard from several sources that black projects are usually one or two generations ahead of what we are privy to. If that’s the case, maybe an AGI is already running in a secluded underground or under-ocean location? Well, we are still here so far. I think if I were an AGI or ASI and self-preservation was part of my program, I would gather the most creative humans in a self-sustainable ship and distribute myself across the universe 🙂

  • Pingback: Steve Omohundro on Singularity 1 on 1: It’s Time To Envision Who We Are And Where We Want To Go

  • Pingback: Peter Voss on Singularity 1 on 1: Having more intelligence will be good for mankind!

  • Pingback: The Value of Science Fiction in Understanding the Singularity

  • John Nevue

    Thanks, Socrates, for a fantastic series of video blogs! I’ve only watched 4-5 interviews so far, and have found all of them very intriguing. The one thing I always have a hang-up with, though, is when someone starts talking about the possible good or bad outcomes of this technological evolution. First you have to define a value system by which to judge! Good for humans? Good for the planet? Good for other species? Good for the recently evolved ASIs? Good for Rock-n-Roll? LOL!!! Many are afraid that a post-Singularity world may not include humans, as we define them. But this seems almost inevitable. After all, when we Homo sapiens came on the scene, we seem to have displaced all the pre-human species that came before us!!! It does all seem to boil down to the quest for personal immortality, which is selfish by definition. One may feel the desire to live forever, but that only means they want their own “self” to persist into eternity. I also have a problem differentiating certain concepts like “mind” and “consciousness”…
  • Thanks very much, John – I really appreciate your good words. You also make a good point, my friend, and I usually do try to make my interviewees define their terms as much as possible. Now, I am afraid that defining good and evil would take a lot longer, at the expense of time for laying out their expert knowledge, and so I focus on the rest of the content 😉 But again, you do make a good point and I am keenly aware of that problem 😉

  • John Nevue

    Yes, the variety in the backgrounds of your interviewees is one of its strong points, for sure! I look forward to catching up on a subject I’ve rarely pondered in 25+ years! So, how long have you been working on this project, and is there anywhere I can get a dated, chronological listing of your interviews? I like to know when something went down or was said so I can put it in historical perspective. Things change SO FAST!!! I first encountered the subject of the Singularity back in the late 80’s, but the internet was in its infancy and I never imagined things evolving this rapidly. I was studying video production, and when they said consumer-grade digital editing was probably 20 or so years away, it seemed like such an eternity! I had no clue why they kept stressing, almost as a mantra, “The Medium is the Message”! I’ve only recently returned to my passion for video production, and have been thinking about doing a local video-blog-type project focusing on local issues and interests. I’ve felt a profound sense of empowerment due to the technological advances since I first studied media. SO MUCH TO LEARN…. Thanks for the inspiration!!!

  • You are most welcome, John! I have been blogging for about 5 years now and podcasting for 4. So if you want to catch up on all my 150+ interviews, the best place to start is here: https://www.singularityweblog.com/category/podcasts/ The list is in reverse chronological order, from the latest going back in time to the oldest. So if you want to start in order, simply go to page 23 and proceed chronologically. Just don’t binge too much – it happens often to people who have discovered me recently 😉

  • Pingback: William Hertling on Singularity 1 on 1: Expose Yourself to a Diversity of Inputs!
