
Roman Yampolskiy on Artificial Superintelligence

There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare to go into the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like the Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor – but better. Because instead of fighting with guns and brawn, he is utilizing computer science, human intelligence and code. Whether that turns out to be the case, and whether Yampolskiy will be successful or not, remains to be seen. But at this point I was very happy to have Roman back on my podcast for our second interview. [See his first interview here.]

During our 1-hour conversation with Prof. Yampolskiy we cover a variety of interesting topics such as: slowing down the path to the singularity; expert advice versus celebrity endorsements; crowd-funding and going viral or “potato salad – yes; superintelligence – not so much”; his recent book on Artificial Superintelligence; intellectology, AI-complete problems, the singularity paradox and wire-heading; why machine ethics and robot rights are misguided and AGI research is unethical; the beauty of brute-force algorithms; how his views differ from Nick Bostrom’s Superintelligence; Roman’s definition of humanity; theology and superintelligence…

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more high-quality episodes like this one, please make a donation!)

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and an author of many books, including Artificial Superintelligence: A Futuristic Approach. During his tenure at UofL, Dr. Yampolskiy has been recognized as: Distinguished Teaching Professor, Professor of the Year, Faculty Favorite, Top 4 Faculty, Leader in Engineering Education, Top 10 of Online College Professor of the Year, and Outstanding Early Career in Education award winner, among many other honors and distinctions. Yampolskiy is a Senior Member of IEEE and AGI, a member of the Kentucky Academy of Science, a Research Advisor for MIRI and an Associate of GCRI.

Roman Yampolskiy holds a PhD from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year NSF (National Science Foundation) IGERT (Integrative Graduate Education and Research Traineeship) fellowship. Before beginning his doctoral studies, Dr. Yampolskiy received a combined BS/MS degree (High Honors) in Computer Science from the Rochester Institute of Technology, NY, USA. After completing his PhD dissertation, Dr. Yampolskiy held the position of Affiliate Academic at the Center for Advanced Spatial Analysis, University College London. He had previously conducted research at the Laboratory for Applied Computing (currently known as the Center for Advancing the Study of Infrastructure) at the Rochester Institute of Technology and at the Center for Unified Biometrics and Sensors at the University at Buffalo. Dr. Yampolskiy is an alumnus of Singularity University (GSP2012) and a Visiting Fellow of the Machine Intelligence Research Institute.

Dr. Yampolskiy’s main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, Cybersecurity, Digital Forensics, Games, Genetic Algorithms, and Pattern Recognition. He is the author of over 100 publications, including multiple journal articles and books. His research has been cited by 1,000+ scientists and profiled in popular magazines, both American and foreign (New Scientist, Poker Magazine, Science World Magazine), on dozens of websites (BBC, MSNBC, Yahoo! News), and on radio (German National Radio, Swedish National Radio, Alex Jones Show). Dr. Yampolskiy’s research has been featured 250+ times in numerous media reports in 22 languages.


  • Travis

    With regards to machine ethics (@~36:), isn’t an optimal ethical solution, benevolent and compatible with all humans, equivalent to an algorithm?

  • George Baily

    Regarding the study of whether a chess move is uncharacteristic (possibly human when it is supposed to be software, or vice versa) – I wonder if this will start to be a problem in sports… and an area of research about how to detect it (both tactically and in terms of enhanced abilities). A minimal sketch of one possible detection approach follows below.
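
    One simple way to make this concrete – purely as an illustration, not something discussed in the interview – is to measure how often a player’s moves coincide with a chess engine’s first choice. The Python sketch below assumes the python-chess package and a locally installed UCI engine such as Stockfish; the function name, search depth and example moves are all hypothetical.

```python
# Hypothetical sketch (not from the interview): estimate how "engine-like" a
# player's moves are by measuring agreement with a chess engine's top choice.
# Requires the python-chess package and a UCI engine binary such as Stockfish.
import chess
import chess.engine

def engine_match_rate(moves_san, engine_path="stockfish", depth=12):
    """Return the fraction of moves matching the engine's first choice."""
    board = chess.Board()
    matches = 0
    with chess.engine.SimpleEngine.popen_uci(engine_path) as engine:
        for san in moves_san:
            played = board.parse_san(san)  # the move actually played
            best = engine.play(board, chess.engine.Limit(depth=depth)).move
            matches += played == best
            board.push(played)
    return matches / len(moves_san) if moves_san else 0.0

# Example usage: a suspiciously high rate over many games may suggest engine
# assistance; a low rate on a supposed engine account may suggest a human.
print(engine_match_rate(["e4", "e5", "Nf3", "Nc6", "Bb5"]))
```

    Real detection work – closer in spirit to Yampolskiy’s behavioral-biometrics research – would model an individual player’s style across many games rather than relying on raw engine agreement alone.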

  • Great discussion! At 0:52 the scientist Nikola is trying to recall is Sylvester James Gates https://en.wikipedia.org/wiki/Sylvester_James_Gates

  • He would make a great guest on the podcast too!

  • Totally right my friend! 😉

  • Yes indeed, I have sent him 2 invitations but have received no reply so far ;-(

  • siphersh

    Great interview as always, thank you.

    Not being an expert, maybe I’m too naive about this, but I don’t understand why there’s so much emphasis on the concept of a goal-oriented, reward-motivated autonomous machine in the theory of AGI.

    This model seems absurdly dangerous to me. It reminds me of the three laws of robotics: as if it were designed for the narrative purpose of bringing about a disaster.

    And at the same time it also seems overly ambitious. I would imagine that by the time we can build such a sci-fi type doomsday AI, we will have already become able to build a self-improving AGI that doesn’t have a motivation defined in terms of reality in general.

  • Travis

    It seems to me that a majority of humans have the capability to tap into a cosmic creative force, and that a non-human intelligence would not be guaranteed access to the same force; logically, it seems it would be advantageous for AGI not to disrupt its access to this force. Also, in human history it seems all malevolent intelligences eventually falter to less malevolent or more benevolent intelligences.
