TechEmergence Surveys Experts on AI Risks: Some Surprises Emerge

Daniel Faggella /

Posted on: May 9, 2016 / Last Modified: May 9, 2016

When a technology becomes pervasive and shapes the human experience, for better or worse, it’s easy to forget that technology is a tool largely under human control. Yet once a technology is released into and adopted by society, the line between control and potential chaos can blur.

Recent advancements in artificial intelligence have found their way into the media spotlight, and one doesn’t have to do much searching to find headlines that allude to elimination of jobs or destruction of life as we know it due to AI.

But are these fears legitimate? Given the inherent risks in AI, such fears are not irrational — but are the risks publicized by much of the media really the ones we should be thinking about?

To help shed light on the issue, TechEmergence recently surveyed over 30 AI researchers, the majority of whom hold a PhD and all of whom are experts in their respective fields. In this survey, we asked researchers to give their perspectives on what they believe is the most likely AI-related risk in the next 20 years, as well as the next 100 years.

Definite patterns emerged among the researchers’ responses. Within the next two decades, the largest share of researchers (36.36 percent) foresaw the most likely risks as those related to automation and the economy. Interestingly, the second most common response (18.18 percent) was that there are no inherent short-term risks.

These trends shift slightly when looking out over the next century, with the most frequently cited risk (27.27 percent) being humans’ mismanagement of AI, followed by automation and economic concerns (21.21 percent). The following graphic offers a visual representation of the researchers’ named risks within the next 20 years, organized by category.

AI Risk infographic
About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities around augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.
