TechEmergence Surveys Experts on AI Risks: Some Surprises Emerge

Daniel Faggella

Posted on: May 9, 2016 / Last Modified: May 9, 2016

When a technology becomes all-pervasive and affects the human experience for better or worse, it’s easy to forget that technology is a tool largely under human control. Yet once a technology is released into and adopted by society, the line between control and potential chaos can blur.

Recent advancements in artificial intelligence have found their way into the media spotlight, and one doesn’t have to search far to find headlines alluding to the elimination of jobs or the destruction of life as we know it due to AI.

But are these fears legitimate? Given the inherent risks of AI, such fears do not seem irrational, but are the risks publicized by much of the media really the ones we should be thinking about?

To help shed light on the issue, TechEmergence completed a recent survey on the topic and received the opinions of over 30 AI researchers, the majority of whom hold a PhD and all of whom are experts in their respective fields. In this survey, we asked researchers to give their perspectives on what they believe is the most likely AI-related risk in the next 20 years, as well as the next 100 years.

Definite patterns emerged among the researchers’ responses. Within the next two decades, the largest share of researchers (36.36 percent) foresaw the most likely risks relating to automation and the economy. Interestingly, the second most common response (18.18 percent) was that there are no inherent short-term risks.

These trends shift slightly when looking out over the next century, with the most frequently cited risk (27.27 percent) being humans’ mismanagement of AI, followed by automation and economic concerns (21.21 percent). The following graphic lays out a visual representation of all of the researchers’ named risks within the next 20 years, organized by category.

[AI Risk infographic]
About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.
