
Singularity Podcast: George Dvorsky on Transhumanism and the Singularity

In this edition of Singularity Podcast I had the pleasure of speaking with prominent Canadian transhumanist and animal rights advocate George Dvorsky. George is both a passionate and fascinating interlocutor and, even though I spent over an hour and fifteen minutes interviewing him, I feel that I could have easily spent double that time while still remaining highly interested in what he has to say. (So do not be surprised if I invite him for another podcast.)

Just one of the thoughts that I will personally take away from my conversation with Dvorsky: “Mass extinction is the simplest explanation for why we are seeing an uncolonized galaxy.”

George Dvorsky’s Short Bio: Canadian futurist, ethicist and animal rights advocate, George Dvorsky has written and spoken extensively about the impacts of cutting-edge science and technology—particularly as they pertain to the improvement of human performance and experience. He currently serves on the Board of Directors for the Institute for Ethics and Emerging Technologies and has a popular blog called Sentient Developments.


  • Pingback: Kevin Warwick on Singularity Podcast: You Have To Take Risks To Be Part Of The Future

  • Pingback: Michael Anissimov’s Podcast: Singularity Without Compromise

  • Pingback: Jason Silva on Singularity Podcast: Let Your Ideas Be Noble, Poetic and Beautiful

  • Pingback: George Dvorsky on Singularity 1 on 1: Specialization is for Insects

  • Fascinating interview!

    Dvorsky’s bio-ethics stance is what initially impressed me, and made me a fan:

    “When you’re talking about transhumanism and the potential for there to be
    ‘transhumans,’ you are quite necessarily, by definition, talking about
    something that is not human, but something that at the same time deserves moral
    consideration, something that still is a living, thinking, breathing creature
    that experiences subjective awareness, has emotional capacities, and so on. We
    typically like to think of these entities in the transhumanist community as
    beings that are perhaps ‘more than human,’ or a bit more advanced than human.
    But I quickly realize there are many persons, in and amongst us, that qualify
    as such persons, and those are many of our non-human animal friends.”

    Since this interview, Dvorsky has either amended his definition of non-human
    animals or chosen to disregard his moral consideration of them, practicing
    and promoting the globally unsustainable and inhumane paleo diet, and in that I
    am greatly disappointed. Aside from that, I continue to be impressed with his
    contribution to transhumanism, and wish him well on his journey.

    I also enjoyed his quoting of Yudkowsky: “Experiencing the death of a loved one is
    not anything any sentient person should ever have to experience.” People
    are willfully ignorant of the fact that non-human persons experience the deaths
    of their loved ones at the hands of human persons countless times per day. This
    is the result of speciesism.

    “. . . people refuse to look at the moral value of an animal in the same way that we
    look at the moral value of a human . . . let’s not embed that kind of culture in
    the development of artificial intelligence. Let’s start off on the right
    footing.” —Dvorsky. This is my number one concern in developing strong AI.
    We are working toward developing strong AI while we are in a mindset of moral
    disregard for non-humans. Do we really expect a human-friendly AI to emerge in
    a culture of moral disregard?

    Great answer to biology versus technology: “Biology IS technology.” This of
    course leads to Dvorsky’s explanation of the Fermi Paradox, and his
    extrapolation to our chances of surviving the Singularity. A sobering
    assessment. I hope we have already passed the “great extinction
    filter.” But I also agree it’s a “miracle that we haven’t destroyed
    ourselves with nuclear weapons.” And we’ll have to come up with more
    miracles to survive the exponentially increasing technology of
    self-destruction. That’s a tall order in an environment of obvious moral


