The Future of AI Through a Historian’s Looking Glass: A Conversation with Dr. John MacCormick

Daniel Faggella /

Posted on: September 2, 2015 / Last Modified: September 2, 2015

Understanding the possibilities of the future of a field requires first cultivating a sense of its history. Dr. John MacCormick, professor of Computer Science at Dickinson College and author of Nine Algorithms That Changed the Future: The Ingenious Ideas That Drive Today’s Computers, has waded through the historical underpinnings of the technology driving artificial intelligence (AI) today and into the near future.

I recently spoke with Dr. MacCormick about some of the possible future outcomes of AI, including self-driving cars and autonomous weapons. He offers a historian’s perspective as an informed and forward-thinking researcher in the field.

Q: Where will AI be applied in the next 5 years?

A: New algorithms are coming out all the time. One area where we have seen a lot of improvement is the translation of human languages, with Google’s software being one example. The results today are not overly impressive, but we will continue to see increasingly high-quality translation between human languages in the medium term.

Another area that has rocketed ahead is self-driving cars, which are starting to emerge and really seem like they could be a reality for everyday use in the medium term. Half a decade ago, many followers of the technology doubted this, arguing that we would need a big breakthrough; those views are starting to turn, based simply on the incremental improvements of the past few years.

Q: What about machine vision?

A: Machine vision is a subfield of AI in which we try to simulate human-like vision, such as recognizing objects at rest and in motion. It sounds simple, but this has been one of the toughest nuts to crack in the whole field of AI. There have been amazing improvements in object recognition systems over the last few decades. They are good compared to what they were, but they are still far inferior to human capabilities.

Because this technology is so difficult to crack, current AI systems try not to rely on vision. In self-driving cars, for example, vision systems are present, but the cars are not dependent on them. Vision might be used for something relatively simple, like recognizing whether a traffic light is red or green. For other objects, such as lane markings or obstructions, the car relies on other sources, such as GPS for navigation and a built-in map that records where various objects are supposed to be, based on pre-mapped locations. Machine vision still poses a formidable challenge.
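To make that “relatively simple” vision task concrete, here is a toy sketch of classifying a traffic light as red or green using naive HSV color thresholding with OpenCV. This is not drawn from Dr. MacCormick’s book or any production self-driving system; the function name, the thresholds, and the assumption that the light has already been cropped out of the camera frame are all illustrative.

```python
import cv2
import numpy as np

def classify_light(bgr_patch: np.ndarray) -> str:
    """Return 'red', 'green', or 'unknown' for a pre-cropped traffic-light image."""
    hsv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2HSV)

    # Red hues wrap around 0 on the hue axis, so two ranges are combined.
    red_mask = cv2.inRange(hsv, np.array([0, 100, 100]), np.array([10, 255, 255])) | \
               cv2.inRange(hsv, np.array([160, 100, 100]), np.array([179, 255, 255]))
    green_mask = cv2.inRange(hsv, np.array([40, 100, 100]), np.array([90, 255, 255]))

    red_px = int(np.count_nonzero(red_mask))
    green_px = int(np.count_nonzero(green_mask))

    # If almost no saturated red or green pixels are present, refuse to guess.
    if max(red_px, green_px) < 50:
        return "unknown"
    return "red" if red_px > green_px else "green"

# Hypothetical usage, assuming "light.png" is a cropped traffic-light region:
# patch = cv2.imread("light.png")
# print(classify_light(patch))
```

Even this toy version hints at why vision is hard: the thresholds break under glare, fog, or unusual lamp colors, which is part of why real systems lean on maps and GPS wherever they can.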

Q: High-profile names like Musk and Hawking have conveyed their AI fears – in your eyes, do you see these as unfounded?

A: I’m an unapologetic optimist on this question. I do not think AI is going to get out of control and do evil things on its own. As we get closer to systems that rival human capabilities, such as creativity and original thought, I think these will still be systems that humans have designed and have ways to control. We’ll be able to keep building useful tools that are not the same as humans, but that have extraordinary capabilities and can still be guided and controlled. I think Musk and Hawking are technically correct in their hypothetical line of thought, that AI could turn ‘evil’ and out of control, but I also think this is an unlikely scenario.

Q: Should we research national and international protocols that guide AI?

A: Yes, this is an important point, and we need collaboration among many people, including social scientists, technologists, and people from many other relevant areas of society.

One area that is already starting to draw attention is military robotics. Multiple countries are capable of building systems that can act autonomously and be used for lethal force. This opens up an entirely new arena for ethical debate about what should and should not be done. The United Nations (UN) and others are already looking at the implications of autonomous weapons, but the impact of this technology is pressing, and we need to formulate solutions now.

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.
