
Greg Bear, Ramez Naam and William Hertling on the Singularity

This is the concluding sci-fi round-table discussion of my Seattle 1-on-1 interviews with Ramez Naam, William Hertling and Greg Bear. The video was recorded last November, produced by Richard Sundvall, shot by Ian Sun and generously hosted by Greg and Astrid Bear. (A special note of thanks to Agah Bahari, who did the interview audio re-mix and basically saved the footage.)

During our 30-minute discussion, Greg Bear, Ramez Naam, William Hertling and I cover a variety of interesting topics, such as: what science fiction is; the technological singularity and whether it could or would happen; the potential for conflict between humans and AI; the definition of the singularity; emerging AI and evolution; the differences between thinking and computation; and whether or not the singularity is a religion.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes or make a donation.


  • Pingback: Greg Bear on Singularity 1 on 1: The Singularity is the Secular Apotheosis

  • Pingback: William Hertling on Singularity 1 on 1: The Singularity is closer than it appears!

  • Pingback: Ramez Naam on Singularity 1 on 1: The Future Isn’t Set In Stone!

  • What a great discussion! It is fascinating to see such different opinions on the complicated path to the Singularity.

  • AuthorX1

    I enjoyed the discussion, but to my way of thinking the key point behind the term “technological singularity” is that we are approaching a period of major transition, somewhat like the industrial revolution but much more epic. We are seeing the beginnings of it, with robotics and AI systems taking over human jobs, but the main thrust of the transition will come after the first “superintelligence” is created. That has not happened yet, so I would not agree with the position that the singularity has already arrived. If we look at it purely from the perspective of being unable to predict beyond some temporal marker, then we could make the theoretical argument that such is the case today, but that ignores the whole superintelligence idea, which to me is the more significant part.

    I also very much disagree with the idea that people who see some validity in the basic concepts of the singularity are following some sort of religion. If an economist predicts that the interest rate will rise by 4 percent over the next three years, with supporting arguments and trend data, that does not mean the economist is religious about interest rates. If a climatologist predicts that the oceans will rise at a rate of 0.39 mm per year by 2020, with supporting arguments and trend data, that doesn’t mean the climatologist is religious about climate change. If a high-tech developer predicts that a machine will be capable of human-equivalent or better intelligence by 2045, with supporting arguments and trend data, that does not make the developer religious about the technological singularity. The act of making a technological prediction is not proof that there is a religion behind it.

    There is a difference between a statistical forecast, an empirical prediction, and a SWAG (scientific wild-ass guess). And while each of those approaches generally reflects a different level of rigor and fidelity, any one of them could be used to make a prediction about the singularity, or any other technical evolution, and NONE of them is proof of religion. Religion should also not be confused with sensationalism. There is a lot of sensationalism surrounding the singularity, just as there was a lot of sensationalism surrounding the Apollo moon program, but no one accused the people involved with Apollo of worshiping the moon, as far as I know. (Actually, that might not be a good example, because we all know the moon landing was a hoax, right?) The point remains: just as forecasting is not the same thing as religion, neither is sensationalism the same thing as religion. In fact, I think that attaching the word “religion” to the term “technological singularity” is itself more of a theological gesture than the expression of supporting opinions about the singularity is.

    Furthermore, the idea that a date associated with a technological prediction must be interpreted as a prediction of instantaneous change is ridiculous. It is a statement of the obvious (or it should be) that a prediction date might be offered as a point in time by which some milestone will have been achieved. This is done all the time in the corporate world; it is a standard part of project planning. It may take years of incremental progress along the way to achieve the milestone, rather than a magical VOILA event that takes place exactly on the date itself.

    I also take exception to the de facto conclusion that computation is not the same as thinking. It might well be that thinking, from a design perspective at least, is EXACTLY the same as computation, nothing more and nothing less. This is debatable, of course, from a philosophical perspective. It is one of those tumble-dryer questions, which can go round and round forever, like the question of whether a tree falling in the forest makes a sound. Philosophers still don’t agree on what the terms “thinking” and “consciousness” really mean, and there are many thinkers who believe that the human brain itself is nothing more than a computer.

    The comment I liked best in the discussion was the one to the effect that we should look at the things we can do now to maximize the potential for positive outcomes. We can’t stop thinkers from going round and round on the philosophical issues, but we can make practical progress by considering worst-case scenarios, playing what-if games with them, and extracting lessons about how to reduce risk, on the off chance that this radical AI evolution does come to pass. I would like to see a panel discussion with “practical action” as the theme.

    END

  • polybiblios

    This was indeed an interesting discussion, but ultimately these are not really experts in artificial intelligence, one of the main technologies that would be needed for something like a singularity to occur. True, they each have some technical background: Naam worked at Microsoft, Hertling was a programmer, and Bear surely knows a thing or two about AI. And sci-fi authors are perhaps better than most at understanding how social, political and economic forces conspire to bring about a particular vision of the future. But I think one also needs a deep technical background, especially in AI, to make accurate predictions.

    As always, I liked Greg Bear’s metaphors and commentary.

    ….

    Two names which come to mind, as far as possible interview subjects that have the kind of “technical background” I’m talking about (one certainly does; the other kind-of does), are:

    1. Oren Etzioni, University of Washington professor of computer science and director of the Allen Institute. He believes we are very far from human-level AI (though he concedes AI will be fairly advanced in some narrow ways in the coming years). In a recent Reddit AMA he elaborated on his views on the near and medium-term future of AI. See:

    http://www.reddit.com/r/IAmA/comments/2hdc09/im_oren_etzioni_head_of_paul_allens_institute_for/

    2. Blaise Aguera y Arcas, currently a machine intelligence engineer at Google who formerly worked at Microsoft (see his “jaw-dropping” TED talks). He thinks that machine intelligence will continue to develop rapidly in the coming years (an opinion he has perhaps held since joining Google). He also thinks his friend Jaron Lanier’s idea that we can build an economy around Facebook likes and blog posts is b.s., and that machine intelligence will soon eliminate this as a possibility. See this talk he gave somewhat recently:

    http://m.kuow.org/?utm_referrer=#mobile/41879

  • Pingback: William Hertling on Singularity 1 on 1: Expose Yourself to a Diversity of Inputs!

  • Samantha Atkins

    I don’t agree with some of them on many things. However, what a fun group!

  • Agreed on both points. I also had a blast that day 😉
