
Stuart Armstrong: The future is going to be wonderful [If we don’t get whacked by the existential risks]

Stuart Armstrong is a James Martin research fellow at the Future of Humanity Institute at Oxford, where he looks at issues such as existential risks in general and Artificial Intelligence in particular. Stuart is also the author of Smarter Than Us: The Rise of Machine Intelligence and, after participating in a fun futurist panel discussion with him – Terminator or Transcendence – I knew it was time to interview Armstrong on Singularity 1 on 1.

During our conversation with Stuart we cover issues such as: his transition from hard science into futurism; the major existential risks to our civilization; the mandate of the Future of Humanity Institute; how we can know whether AI is safe and what the best approaches toward it are; why experts are all over the map; humanity’s chances of survival…

My favorite quote from this interview with Stuart Armstrong is: “If we don’t get whacked by the existential risks, the future is probably going to be wonderful.”

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more episodes like this one please make a donation!)

 

Who is Stuart Armstrong?

Stuart Armstrong was born in St Jerome, Quebec, Canada in 1979. His research at the Future of Humanity Institute centers on formal decision theory, the risks and possibilities of Artificial Intelligence, the long term potential for intelligent life, and anthropic (self-locating) probability. Stuart is particularly interested in finding decision processes that give the “correct” answer under situations of anthropic ignorance and ignorance of one’s own utility function, ways of mapping humanity’s partially defined values onto an artificial entity, and the interaction between various existential risks. He aims to improve the understanding of the different types and natures of uncertainties surrounding human progress in the mid-to-far future.

Armstrong’s Oxford D.Phil was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later transitioned into computational biochemistry, designing several new ways to rapidly compare putative bioactive molecules for virtual screening of medicinal compounds.

 


  • Given that Stuart works alongside Nick Bostrom and is listed second on the staff page for FHI (http://www.fhi.ox.ac.uk/about/staff/), I’d not disregard his comments entirely, though I know some will.

    I am getting the feeling, a raw emotional feeling as opposed to a nuanced impartial feeling, that AGI may never occur. Too many brilliant minds are warning against AGI as an existential threat, whilst at the same time talking up the super-intelligence of man-machine mergers. Narrow AI is becoming ever more useful, and ever more clever, and if it could be used to augment our own intelligence, as well as give us telepathic and hive-mind capabilities, then AGI seems like it would be left dangling on the sidelines. At any rate, it wouldn’t be an existential threat to augmented humans, because even a virus war would quickly become zero-sum and result in a cold war until co-existence could be established.

    I might be wrong, and stand to be corrected, but I guess that won’t happen until the future arrives 🙂

  • Good points, Michelle. It was surprising to find that IBM’s Watson, which has been touted as the best AI system, defeated the human players on television because 94.7% of Jeopardy! questions are titles of Wikipedia pages. Even with exponential growth, we may still be very far away from actual AGI.

  • Stuart Armstrong

    That’s certainly possible. But it does not seem probable enough (given what we don’t know) to take a big chunk out of the AGI risk probability.

    Your scenario requires:
    1) Great success with narrow AI
    2) That AGI remain difficult, even with very effective narrow AI
    3) That people’s rationality and abilities are increased by narrow AI, and that they therefore decide pursuing AGI is dangerous.
    4) That they implement some system (regulations? spying?) that prevents AGI research in general.

    I’d say that 2) and 4) are the parts with the most uncertainty.
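
As a minimal illustrative sketch of the conjunctive point above (an editorial aside, not part of the comment, with invented placeholder probabilities and the four conditions treated as independent), even generous odds for each condition leave the overall scenario unlikely, which is why it cannot take a big chunk out of the AGI risk probability on its own:

```python
# Illustrative sketch only: the probabilities below are invented placeholders,
# not figures from the interview or the comment above.
p_narrow_ai_success = 0.7     # 1) great success with narrow AI
p_agi_stays_hard = 0.5        # 2) AGI remains difficult even with effective narrow AI
p_people_turn_cautious = 0.4  # 3) augmented people decide pursuing AGI is dangerous
p_enforcement_works = 0.3     # 4) some system actually prevents AGI research

# The scenario requires all four conditions to hold, so (assuming independence
# for simplicity) the probabilities multiply.
p_scenario = (p_narrow_ai_success * p_agi_stays_hard
              * p_people_turn_cautious * p_enforcement_works)
print(f"Probability the whole scenario holds: {p_scenario:.2f}")  # prints ~0.04
```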

  • Riccardo Pusceddu

    I feel that finding a way to augment our intelligence (be it within ourselves, externally by means of an AI, or a combination of the two) is really the only way for the human race to survive in the long term. The major risk would be not to have augmented intelligence, because the human species as it is now is not equipped to survive the likely scenario of a dying Universe. Augmenting our intellectual faculties is the only possibility available to us if we are to persist in this Universe.
    I agree that we have to be very careful about the implications of building an artificial super intelligence. I feel that we haven’t yet completely taken on board our own instincts as humans and as living creatures subject to the laws of evolution through natural selection. We need to find a similar mechanism that will work for an artificial super intelligence too, and it needs to be so ingrained in its hardware that there is no risk of it losing it. To this point I can’t find a better process that naturally maintains itself than natural selection. It needs to be a tautology, external to the object it is applied to, as natural selection is for biological life forms. Ingraining something in the system that artificially maintains this core value may be dangerous, as it could be wiped out by a sudden fault. However, there is a natural process so deeply ingrained in living beings that they can never escape its constraint: death.
    We are only at the beginning of this journey, and I increasingly think that to make enough progress we’ll need the help of the very AI we are seeking to create. Our mind is not enough, and it always strikes me how passionately we humans cling to this very limited machine. That tells us how limited we are. Of course I would take a completely different approach if the Universe were the one postulated by some scientists, now proved wrong (I can’t remember the names), who thought the Universe was eternal and self-powered. But it’s not, and time is already running out, and we humans, albeit being nothing, are the only known chance of subverting this process – if the preservation of information still makes sense. Otherwise we can just wait for oblivion.

  • What if we get whacked by over-cautiousness regarding unrealistic fears? There is no rational foundation to the notion of AI being dangerous. The only danger we face is human irrationality. AI is the only solution to the dangers of human irrationality, but ironically some people fear the only solution; insult is then added to injury because their fears delay that solution. The true danger is people who clamour about existential threats regarding AI.

  • xenophone

    Great interview as usual. I find it curious that in all this discussion of AI there is rarely much mention of what motivates an intelligent individual. What makes a human ‘human’ isn’t just self-awareness, adaptability, and emotion; it’s also the motivations behind all of this. For instance, when we speculate about what general AI would be like if we were to create it, a huge part of that question will be answered by what the entity’s motivations are. Presumably this is something that is ‘programmed’, and not inherently part of an intelligent system. Will strong/general AI be benevolent? It seems to me that it could obviously be benevolent, malevolent, or anything in between, and whether it turns out one way or the other has very little to do with the nature of consciousness or intelligence, and very much to do with the way the system is set up.

  • I keep hearing the caveat “as long as …” when people describe how great the future is going to be. Sure, the future might be interesting for some people, but masses of people are left clueless about what the hell to do next. Jobs are disappearing, and before long we will be left with a large demographic of very poor, undereducated, relatively less talented, probably a little less competitive, probably a bit older people who have absolutely no clue why they are alive.

    This is a big problem. Many of these people might take being completely irrelevant personally, pretty much as some fine upstanding nitwits in the Middle East have a habit of doing – so a huge number of these “disenfranchised” might not give a damn what happens – or worse, they might actively desire things most sane people would not want … like the end of times.

    We have an upcoming crisis of desperation, and people who carry all these assumptions that they actually matter get really angry when society clearly does not share them.

  • [A]
    4:29 is nonsense (https://www.youtube.com/watch?v=HLW4a2KIeWc&t=549s)!
    There is no such problem (as stated by Dr Stuart) in the assertion that Moore’s law shall engender human-level intelligence. [Via machine learning theory/implementation, the non-‘Henry-Markramian’ regime]

    [B]
    Supercomputers already produce 10^15 FLOPS. These are, however, inefficient and unsuitable for solving dimensionality problems in machine learning today.
    If Moore’s law holds, by 2020 we shall likely have efficient chips of roughly human-brain size and human-brain capacity.

    [C]
    Brain-based models already equal or exceed human performance in non-trivial individual cognitive tasks and task groups, ranging from language translation to disease diagnosis.

    [D]
    It is an observable fact that the field of machine-executable cognitive tasks has broadened as time (and computational parallelism) has increased, and that broadening persists to this day.

    [E]
    Therefore, non-trivial cognitive AI that executes individual cognitive tasks and task groups already exists, so a sequence of super-human-capable algorithms will not arrive “suddenly” or be surprising.
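
For what it’s worth, the Moore’s-law projection in point [B] above reduces to a simple doubling calculation. The sketch below is an editorial illustration, not part of the comment: the 10^15 FLOPS starting point comes from the comment, while the brain-equivalent estimate and the doubling period are assumptions that are themselves heavily contested.

```python
import math

# Back-of-envelope sketch of the projection implied in point [B].
# All figures are assumptions for illustration, not claims from the interview.
CURRENT_FLOPS = 1e15           # petaflop-scale supercomputers (figure from the comment)
BRAIN_FLOPS_ESTIMATE = 1e16    # one common, and contested, brain-equivalent estimate
DOUBLING_PERIOD_YEARS = 2.0    # assumed Moore's-law doubling time

doublings_needed = math.log2(BRAIN_FLOPS_ESTIMATE / CURRENT_FLOPS)
years_needed = DOUBLING_PERIOD_YEARS * doublings_needed
print(f"~{years_needed:.1f} years of doubling to reach the assumed brain estimate")

# Note: raw FLOPS say nothing about efficiency, chip size, or whether the software
# to exploit them exists -- which is precisely the point Armstrong disputes.
```

Whether raw compute of that magnitude would actually yield human-level intelligence is, of course, exactly what the interview calls into question.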
