
Luke Muehlhauser on Singularity 1 on 1: Superhuman AI is Coming This Century

Last week I interviewed Luke Muehlhauser for Singularity 1 on 1.

Luke Muehlhauser is the Executive Director of the Singularity Institute, the author of many articles on AI safety and the cognitive science of rationality, and the host of the popular podcast “Conversations from the Pale Blue Dot.” His work is collected at lukeprog.com.

I have to say that despite his young age and lack of a university degree – a criticism we discuss during the interview – Luke was one of the clearest-spoken guests I have had on my show, and I really enjoyed talking to him. During our 56-minute conversation we discuss a wide variety of topics, such as: Luke’s Evangelical Christian upbringing as the first-born son of a pastor in northern Minnesota; his fascinating transition from religion and theology to atheism and science; his personal motivation and desire to overcome our very human cognitive biases and help address existential risks to humanity; the Singularity Institute – its mission, members and fields of interest; the “religion for geeks” (or “rapture of the nerds”) and other popular criticisms and misconceptions; and our chances of surviving the technological singularity.

My favorite quote from the interview:

“Superhuman AI is coming this century. By default it will be disastrous for humanity. If you want to make AI a really good thing for humanity please donate to organizations already working on that or – if you are a researcher – help us solve particular problems in mathematics, decision theory or cognitive science.”

(As always you can listen to or download the audio file above or scroll down and watch the video interview in full.)


  • Sunrider

    Great interview (as usual) … Sadly, my vote goes to him regarding how best to think about AI, its preferences, how we can design it, etc. I think Sawyer’s argument is influenced by hope and is not sound. The way we design systems is well alluded to – preferences are usually expressed as utility functions, and it’s hard to see how we can get around the fundamental default issue that an AI cannot pursue or fulfil its utility function if it is switched off.

    Unless, of course, we manage to specify these functions right … which is precisely what he may be meaning when he says that we still have many technical problems to solve in decision sciences, etc.

  • Anonymous

    I think it would be unethical to bring life into the world that cannot defend itself. I will believe in friendly AI or built-in human friendly morality when all humans have these inviolable rules built in too.

  • CMStewart (http://cmstewartwrite.wordpress.com/)

    I’m glad you’re open-minded enough to take intelligent, well-educated people seriously, Nikola, and that you don’t let age and the absence of degrees prejudice or bias your thinking. Like Muehlhauser, Bill Gates is another genius who remains non-degreed, and he changed the world at a young age. :) In fact, many geniuses find an institutional education detrimental to the development of the intellect.

  • Pingback: Singularity Institute Progress Report, January 2012 | The Singularity Institute Blog


  • Pingback: Top 10 Reasons We Should NOT Fear The Singularity

