
Calum Chace on Surviving AI

“AI is coming and it could be the best or the worst thing” was Calum Chace‘s message at the end of my first interview with him. Since then Chace has written a non-fiction book, Surviving AI, and, given that it is a matter of the survival of our species, I thought it worthy of a follow-up discussion on the topic.

During our 1-hour conversation with Calum Chace we cover a variety of interesting topics such as: Surviving AI and why it is a companion book to Pandora’s Brain; writing fiction vs non-fiction; the digital divide, technological unemployment, universal income and the economic singularity; the importance of luck and our ignorance of those who have saved the world; the term Singularity, Bostrom’s Superintelligence and Barrat‘s Our Final Invention; the number of AI security experts; and the future of capitalism.

My favorite quote that I will take away from this interview with Calum Chace is:

“This is the century of two singularities and we have to get both of them right!”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes or make a donation.

Who is Calum Chace?

Calum Chace retired in 2012 to focus on writing after a 30-year career in business, in which he was a marketer, a strategy consultant and a CEO. He maintains his interest in business by serving as chairman and coach for growing companies.

Calum is co-author of The Internet Start-Up Bible, a business best-seller published by Random House in 2000. He is a regular speaker on artificial intelligence and related technologies, and runs a blog on the subject at www.pandoras-brain.com.

Calum Chace lives in London and Sussex (England) with his partner, a director of a design school, and their daughter. He studied philosophy at Oxford University, where he discovered that the science fiction he had been reading since early boyhood is actually philosophy in fancy dress.


  • Adam Peri

    Thank you Calum & Nikola,

    Calum, if you have the time, I have a couple questions:

    1) I understand you distinguish yourself as an evangelizer, but with 6 full-time researchers and a debatable number of part-timers working on AI/Singularity preparedness, your knowledge is certainly in the 99+ percentile. Could you elaborate on your opinions of Bostrom’s suggestions for mitigating AI’s threat to survival? I believe he suggests two possibilities: one is simply limiting AI’s capabilities for certain tasks, “bottling it up” and testing it in a controlled environment; the second is engineering desirable traits – a sort of programming of benevolence. What are your thoughts on these tactics, or any others that may be posited by others in the field?

    It seems to me that there is still a risk of being in the hands of the few who control the AI. And although people like Gates, Musk and Zuckerberg aren’t Robber Barons, whoever conducts the tests and controls the technology can only put their own subjective experience into the programming. On the other hand, you bring up the important point that if “nurturing” is left to every individual rather than a small number of organizations, we could end up with thousands of AIs with conflicting interests.

    2) It’s a question of semantics, but when you are speaking to an audience and evangelizing the importance of AI research and understanding, what are people’s reactions to the term “existential threat”? To me it was clear that it refers to a threat to our existence, but I am not sure whether people who aren’t following the “Singularity Blogosphere” react the same way to the word ‘existentialism.’ To me, it still brings up images of reading Sartre and Camus in high school, in a totally different context. In a manner similar to Bertrand Russell’s theories on denotation within epistemology, people, including myself, might have difficulty truly conceptualizing these ideas by description, which is currently the only way to do so. I think most humans can get on board with a feeling of direct acquaintance, but it doesn’t exist here. Is this a hurdle in selling your audience on the importance of AI research?

    Thanks so much for the interview! I’m sorry if the questions became convoluted (and time consuming). It seems every time that I try to ask one on here I get swept away by nuances.

  • PandorasBrain

    Hi Adam.

    You’re right: Bostrom talks of two main strategies for AGI safety: control and motivation. The idea of controlling an entity which becomes millions of times smarter than you, and probably able to invent whole new fields of physics in order to escape constraints, seems very hard indeed! Which leaves motivation – not at all easy, but hopefully manageable. I go into this in some detail in chapter 8 of “Surviving AI”.

    You make an interesting point that “existential threat” might be too technical a term for some audiences. I’m not aware of anyone having failed to grasp its meaning, but of course I wouldn’t necessarily know if they had. I’ll try to bear that in mind and elaborate it when there’s room for doubt. Thanks.

    By the way, I’m not an AI safety researcher, which is what Bostrom said there were about six of. I don’t know how many other evangelists there are, so I don’t know whether your generous 1% estimate is correct or not.

  • Adam Peri

    Thanks again, Calum!

    I look forward to picking up the book soon and will certainly read chapter 8 with special intention.

    I do think that if there are 6 FT, and at most 100-200 PT, somehow touching on the field of AI research/security, you may not be in the 99th percentile for technical knowledge within that group itself – but certainly in comparison to greater society as a whole.

  • PandorasBrain

    Maybe so, in which case there remains a lot of work to do! 🙂

  • Neoliberal Agenda

    A third of the population already lives in poor conditions; does it mean we have universal basic income?

    No.

    There are so many ways this can play out that don’t involve a universal basic income. We could upgrade our brains so we could keep up with the machines, we could work fewer hours in the week, we could prefer to buy services from humans, there could be jobs that we can’t think of today, we could retire earlier, we could decide to live without the technology, etc.

    If we live in a world of abundance, where you can get anything you like without any cost, why wouldn’t you help out your fellow man?

    Why would the government need to step in and force you?

  • ernest101

    Thank you for an excellent interview. I am deeply disturbed by the issue of technological unemployment as well. I think that Mr. Chace’s book is excellent. As usual, Nikola, you have done an excellent job as well in delineating the important points. I have also read books by Martin Ford, Richard/Daniel Susskind, and Jerry Kaplan. My outlook is dismal. I simply do not think that we can survive the economic disparity, even with a guaranteed minimum/maximum income. My problem is the following: (1) I view the economy as a closed system, with Central Banks simply diluting the “worth” of our exchange medium and not really increasing the pie; (2) the only way to increase the pie is through the manipulation of natural resources that are non-renewable and finite; (3) therefore, even with a guaranteed income for the majority of the non-elites, how will the elites maintain their super-wealth if no one can really buy anything – and if they do, it is with money given to them by the elites in the first place? It reminds me of the following (possibly anecdotal) exchange between Henry Ford II and a union representative:

    Henry Ford II: Walter, how are you going to get those robots to pay your union dues?

    Walter Reuther: Henry, how are you going to get them to buy your cars?

    I would appreciate any thoughts anyone can give me in seeing my way through this issue.

  • PandorasBrain

    The third of the world that lives in real poverty is in different jurisdictions than the rest, so the analogy doesn’t really work.

    You may be right that we can race with the machines rather than being rendered unemployable by them, but I suspect that machines will quickly (in decades, not centuries) acquire most or all of the abilities that we bring to the commercial table. That is probably a minority opinion at the moment, but it may well become the majority view when self-driving cars really start to bite, domestic robots become seriously impressive, Baxter-like robots become widespread and so on.

    I agree that in a world of true abundance, being unemployable should be no problem. But we haven’t even begun to design the kind of economy we will need to handle that, or to work out how to get from here to there without a horrendous crashing of economic and social gears.

  • PandorasBrain

    I think you’re asking the right questions. I’m trying to answer them in my next book, The Economic Singularity, out later this year.
