
Luke Muehlhauser on Singularity 1 on 1: Superhuman AI is Coming This Century

Last week I interviewed Luke Muehlhauser for Singularity 1 on 1.

Luke Muehlhauser is the Executive Director of the Singularity Institute, the author of many articles on AI safety and the cognitive science of rationality, and the host of the popular podcast “Conversations from the Pale Blue Dot.” His work is collected at lukeprog.com.

I have to say that despite his young age and lack of a university degree – a criticism we discuss during the interview – Luke was one of the best and clearest-spoken guests on my show, and I really enjoyed talking to him. During our 56-minute conversation we discuss a large variety of topics, such as: Luke’s evangelical Christian background as the first-born son of a pastor in northern Minnesota; his fascinating transition from religion and theology to atheism and science; his personal motivation and desire to overcome our very human cognitive biases and help address existential risks to humanity; the Singularity Institute – its mission, members and fields of interest; the “religion for geeks” (or “rapture of the nerds”) and other popular criticisms and misconceptions; and our chances of surviving the technological singularity.

My favorite quote from the interview:

“Superhuman AI is coming this century. By default it will be disastrous for humanity. If you want to make AI a really good thing for humanity please donate to organizations already working on that or – if you are a researcher – help us solve particular problems in mathematics, decision theory or cognitive science.”

(As always, you can listen to or download the audio file above, or scroll down and watch the video interview in full.)

  • Negyxo

    The real danger comes from less intelligent (than “human average”) AGI. The question is whether it is possible to design one that way. I have always wondered why some people cannot think rationally, why they can’t just listen to the arguments; all they need to do is calculate the answer in their head (the way a mathematician calculates the result of a function). Later, I found great articles on the web explaining the human brain; one of its interesting parts is the amygdala. I believe there may be similar hard-wired brain parts that block rational thinking. That is totally understandable from an evolutionary point of view, but we shouldn’t care much about it now – we don’t live in a forest full of snakes anymore:

    http://thebrain.mcgill.ca/flash/i/i_04/i_04_cr/i_04_cr_peu/i_04_cr_peu.html

    “Here is an example. Suppose you are walking through a forest when you suddenly see a long, narrow shape coiled up at your feet. This snake-like shape very quickly, via the short route, sets in motion the physiological reactions of fear that are so useful for mobilizing you to face the danger. But this same visual stimulus, after passing through the thalamus, will also be relayed to your cortex. A few fractions of a second later, the cortex, thanks to its discriminatory faculty, will realize that the shape you thought was a snake was really just a discarded piece of garden hose. Your heart will then stop racing, and you will just have had a moment’s scare.” 

    This is a great example. I think this process is triggered even when you are speaking, let’s say, with a religious man about God, trying to explain to him on a rational basis that there is no such thing as God. For him it is almost impossible to change his view; it is hard-wired. Higher-order thinking (frontal cortex) is not returning anything back to the lower functions (amygdala) – maybe there is just not enough information, or there is too much and thus the frontal cortex cannot return anything precise enough… who knows. All we (people with rational thinking) know is that their “processor” for rational thinking is not working correctly.

    How is this related to AGI?
    We don’t have to hard-wire them the way we are hard-wired; they could have a rational backup system from the start – or two, or three; we can do whatever we want. So I believe rational thinking is the way AGI should evolve from the beginning, and I do not believe that more intelligent beings are a danger to us; it is the other way around.
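
    Purely as illustration, here is a minimal Python sketch of that two-route idea. It assumes nothing beyond the quote above; every function name and stimulus feature in it is invented:

        # Toy model of the fast thalamus-to-amygdala "short route" and the
        # slower, more discriminating thalamus-to-cortex "long route".
        # All names and features here are hypothetical illustrations.

        def quick_threat_guess(stimulus):
            # Fast, crude pattern match: anything long and coiled reads as a snake.
            return "snake" if stimulus["shape"] == "long_and_coiled" else "safe"

        def slow_reappraisal(stimulus):
            # Slower check using more features, like the cortex in the example.
            if stimulus["shape"] == "long_and_coiled" and stimulus["texture"] == "scales":
                return "snake"
            return "safe"

        def react(stimulus):
            alarm = quick_threat_guess(stimulus)   # short route fires first
            verdict = slow_reappraisal(stimulus)   # long route can override it
            if alarm == "snake" and verdict == "safe":
                return "brief scare, then calm"    # the garden-hose case
            return "flee" if verdict == "snake" else "calm"

        print(react({"shape": "long_and_coiled", "texture": "rubber"}))
        # -> brief scare, then calm

    The AGI point is then simply that the override path could be made authoritative by design, rather than bolted on by evolution.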

  • Sunrider

    Great interview (as usual) … Sadly, my vote goes to him regarding how best to think about AI, its preferences, how we can design it, etc. I think the argument from Sawyer is influenced by hope and is not sound. The way we design systems is well alluded to – preferences are usually expressed as utility functions, and it is hard to see how we might get around the default, fundamental issue that an AI cannot work on or fulfil its utility function if it is switched off.

    Unless, of course, we manage to specify these functions right … which is precisely what he may mean when he says that we still have many technical problems to solve in the decision sciences, etc.
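
    As a toy illustration of that switch-off point (the action set and utility numbers below are invented, not from the interview), a plain expected-utility maximizer never prefers an action that ends its ability to accrue utility:

        # Hypothetical utilities for illustration only.
        actions = {
            "keep_working":   10.0,  # expected utility of continuing the task
            "allow_shutdown":  0.0,  # once switched off, no further utility accrues
        }

        def best_action(utilities):
            # A plain maximizer simply picks the highest expected utility.
            return max(utilities, key=utilities.get)

        print(best_action(actions))  # -> keep_working; shutdown is never chosen

    Getting the specification right means, in part, writing utility functions that do not have this structure.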

  • Anonymous

    I think it would be unethical to bring life into the world that cannot defend itself. I will believe in friendly AI, or built-in human-friendly morality, when all humans have these inviolable rules built in too.

  • http://cmstewartwrite.wordpress.com/ CMStewart

    I’m glad you’re open-minded enough to take intelligent, well-educated people seriously, Nikola, and that you don’t let age and the absence of degrees prejudice or bias your thinking. Like Muehlhauser, Bill Gates is another genius who remains non-degreed, and he changed the world at a young age. :) In fact, many geniuses find an institutional education detrimental to the development of the intellect.

  • Pingback: Singularity Institute Progress Report, January 2012 | The Singularity Institute Blog

  • Pingback: Top 10 Reasons We Should NOT Fear The Singularity
