
AI

Laura Major and Julie Shah on What to Expect When You’re Expecting Robots

November 10, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/926628661-singularity1on1-laura-major-julie-shah.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Hans Moravec famously claimed that robots will be our (mind) children. If true, then it is natural to wonder What to Expect When You’re Expecting Robots? That is the question Laura Major and Julie Shah – two expert robot engineers – address in their new book. Given the subject of robots and AI, as well as the fact that both Julie and Laura have experience in the aerospace, military, robotics, and self-driving car industries, I thought they’d make great guests on my podcast. I hope you enjoy our conversation as much as I did.

During this 90 min interview with Laura Major and Julie Shah, we cover a variety of interesting topics such as: the biggest issues within AI and Robotics; why humans and robots should be teammates, not competitors; whether we ought to focus more on the human as a weak link in the system; what happens when technology works as designed and exceeds our expectations; problems in defining driverless (or self-driving) car, AI and robot; why, ultimately, technology is not enough; whether the aerospace industry is a good role model or not.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.


Who is Julie Shah?

Julie Shah is a roboticist at MIT and an associate dean of social and ethical responsibilities of computing. She directs the Interactive Robotics Group in the Schwarzman College of Computing at MIT. She was a Radcliffe fellow, has received a National Science Foundation CAREER Award, and has been named one of MIT Technology Review’s “Innovators Under 35.” She lives in Cambridge, Massachusetts.


Who is Laura Major?

Laura Major is CTO of Motional (previously Hyundai-Aptiv Autonomous Driving Joint Venture), where she leads the development of autonomous vehicles. Previously, she led the development of autonomous aerial vehicles at CyPhy Works and a division at Draper Laboratory. Major has been recognized as a national Society of Women Engineers Emerging Leader. She lives in Cambridge, Massachusetts.

Filed Under: Podcasts Tagged With: AI, Julie Shah, Laura Major, robot

Juan Enriquez on Right/Wrong: How Technology Transforms Our Ethics

October 30, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/917775401-singularity1on1-juan-enriquez.mp3


Juan Enriquez is a bestselling author and TED All-Star with 9 TED Talks and countless TEDx talks. Juan is an angel investor and Managing Director of Excel Venture Management. He has sailed around the world on an expedition that increased the number of known genes a hundredfold and was part of the peace commission that negotiated the cease-fire with the Zapatistas in Mexico. Most recently, Enriquez is the author of Right/Wrong: How Technology Transforms Our Ethics.

During this 90 min interview with Juan Enriquez, we cover a variety of interesting topics such as: why he is a very curious and optimistic Cro-Magnon; his work as a venture capitalist at Excel Venture Management; the difference between the price and the cost of health and education; the story of how science, technology, ethics, and angel investment came into his life; his work with Ed Boyden; Catholic ethics and certainty in what’s right and wrong; the importance of humility and forgiveness; why those who can make you believe absurdities can make you commit atrocities; intelligent design, homo evolutis, and transhumanism; his latest book Right/Wrong; veganism, techno-solutionism and personal development; the Abrahamic religions and adaptation; AI and the technological singularity.

My favorite quote that I will take away from this interview with Juan Enriquez is:

Just do it and enjoy the ride!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Juan Enriquez?

Juan Enriquez is a leading authority on the economic impact of life sciences and brain research on business and society, as well as a respected business leader and entrepreneur. He was the founding Director of the Harvard Business School’s Life Sciences Project and is a research affiliate at MIT’s synthetic neurobiology lab. After HBS, Juan became an active angel investor, founding Biotechonomy Ventures. He then co-founded Excel Venture Management. He is the author or co-author of multiple bestsellers, including As the Future Catches You: How Genomics Will Change Your Life, Work, Health, and Wealth (1999), The Untied States of America: Polarization, Fracturing and Our Future (2005), Evolving Ourselves: Redesigning Humanity One Gene at a Time (2015), and RIGHT/WRONG: How Technology Transforms Our Ethics (2020).

As a business leader, advisor, and renowned speaker, Juan Enriquez works directly with the CEOs of a number of Fortune 50 companies, as well as various heads of state, on how to adapt to a world where the dominant language is shifting from the digital towards the language of life. He is a TED All-Star with nine TED talks on a variety of subjects, as well as dozens of TEDx talks. Mr. Enriquez serves on multiple for-profit boards as well as a variety of non-profits including The National Academy of Sciences, The American Academy of Arts and Sciences, WGBH, The Boston Science Museum, Harvard Medical School, and Harvard’s David Rockefeller Center. Juan sailed around the world on an expedition that increased the number of known genes a hundredfold and was part of the peace commission that negotiated the cease-fire with the Zapatistas. He graduated from Harvard with a B.A. and an M.B.A., both with honors.


Filed Under: Podcasts Tagged With: AI, ethics, homo evolutis, Juan Enriquez, Right/Wrong, Tech, Technology, transhumanism

Melanie Mitchell on AI: Intelligence is a Complex Phenomenon

September 23, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/897084307-singularity1on1-melanie-mitchell.mp3


Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science at Portland State University. Prof. Mitchell is the author of a number of interesting books such as Complexity: A Guided Tour and Artificial Intelligence: A Guide for Thinking Humans. One interesting detail of her academic bio is that Douglas Hofstadter was her Ph.D. supervisor.

During this 90 min interview with Melanie Mitchell, we cover a variety of interesting topics such as: how she started in physics, went into math, and ended up in Computer Science; how Douglas Hofstadter became her Ph.D. supervisor; the biggest issues that humanity is facing today; my predictions of the biggest challenges of the next 100 days of the COVID-19 pandemic; how to remain hopeful when it is hard to be optimistic; the problems in defining AI, thinking and human; the Turing Test and Ray Kurzweil’s bet with Mitchell Kapor; the Technological Singularity and its possible timeline; the Fallacy of First Steps and the Collapse of AI; Marvin Minsky’s denial of progress towards AGI; Hofstadter’s fear that intelligence may turn out to be a set of “cheap tricks”; the importance of learning and interacting with the world; the [hard] problem of consciousness; why it is us who need to sort ourselves out and not rely on God or AI; complexity, the future and why living in “Uncertain Times” is an unprecedented opportunity.

My favorite quote that I will take away from this conversation with Melanie Mitchell is:

Intelligence is a very complex phenomenon and we should study it as such. It’s not the sum of a bunch of narrow intelligences but something much bigger.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

P.S. I owe special thanks to Robert Ziman without whom this interview would not have happened.

Who is Melanie Mitchell?

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems.

Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).

Melanie originated the Santa Fe Institute’s Complexity Explorer platform, which offers online courses and other educational resources related to the field of complex systems. Her online course “Introduction to Complexity” has been taken by over 25,000 students and is one of Class Central’s “top fifty online courses of all time.”

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Complexity, Melanie Mitchell

Johan Steyn Interviews Nikola Danaylov on Artificial Intelligence

July 18, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/860317633-singularity1on1-nikola-danaylov-johan-steyn.mp3


Last month I did an interview for Johan Steyn. It was a great 45-min conversation in which we covered a variety of topics such as: the definition of the singularity; whether we are making progress towards Artificial General Intelligence (AGI); open vs closed systems; the importance of consciousness; my Amazon bestseller Conversations with the Future; how I started blogging and podcasting; the process of preparing for each interview that I do; ReWriting the Human Story: How Our Story Determines Our Future.

I enjoyed talking to Johan and I believe he has created an interesting podcast with a number of great episodes that are very much worth watching. Furthermore, thanks to him I have already interviewed one fantastic guest and booked a second upcoming Singularity.FM interview with another. So check out Johan Steyn’s website and subscribe to Johan’s YouTube channel.

Filed Under: Podcasts Tagged With: AGI, AI, Johan Steyn, Nikola Danaylov, singularity

Prof. Massimo Pigliucci: Accompany science and technology with a good dose of philosophy

May 2, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/810780325-singularity1on1-massimo-pigliucci.mp3


I have previously interviewed a few fantastic scientists and philosophers, but rare are those strange birds who manage to combine both deep academic training in and the living ethos of those separate disciplines. Prof. Massimo Pigliucci is one of those very rare and strange people. He has three Ph.D.s – in Genetics, Evolutionary Biology, and Philosophy – and is the author of 165 technical papers in both science and philosophy, as well as a number of books on Stoic philosophy, including the bestselling How to Be A Stoic: Using Ancient Philosophy to Live a Modern Life.

During this 80 min interview with Massimo Pigliucci, we cover a variety of interesting topics such as: why Massimo is first and foremost a philosopher and not a scientist; the midlife crisis that pushed him to switch careers; stoicism, [virtue] ethics and becoming a better person; moral relativism vs moral realism; the meaning of being human; what are the biggest issues humanity is facing today; why technology is not enough; consciousness, mind uploading and the technological singularity; why technology is the how not the why or what; teleology, transhumanism and Ray Kurzweil’s six epochs of the singularity; scientism and the philosophy of the Big Bang Theory.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Massimo Pigliucci?

Prof. Pigliucci has a Ph.D. in Evolutionary Biology from the University of Connecticut and a Ph.D. in Philosophy from the University of Tennessee. He currently is the K.D. Irani Professor of Philosophy at the City College of New York. His research interests include the philosophy of science, the relationship between science and philosophy, the nature of pseudoscience, and the practical philosophy of Stoicism.

Prof. Pigliucci has been elected fellow of the American Association for the Advancement of Science “for fundamental studies of genotype by environmental interactions and for public defense of evolutionary biology from pseudoscientific attack.”

In the area of public outreach, Prof. Pigliucci has published in national and international outlets such as the New York Times, Washington Post, and The Wall Street Journal, among others. He is a Fellow of the Committee for Skeptical Inquiry and a Contributing Editor to Skeptical Inquirer. He blogs on practical philosophy at Patreon and Medium.

At last count, Prof. Pigliucci has published 165 technical papers in science and philosophy. He is also the author or editor of 13 books, including the bestselling How to Be A Stoic: Using Ancient Philosophy to Live a Modern Life (Basic Books). Other titles include Nonsense on Stilts: How to Tell Science from Bunk (University of Chicago Press) and How to Live a Good Life: A Guide to Choosing Your Personal Philosophy (co-edited with Skye Cleary and Daniel Kaufman, Penguin/Random House).


Filed Under: Podcasts Tagged With: AI, Massimo Pigliucci, mind uploading, singularity, Stoic, Stoicism, Technology

Canadian SF Author Karl Schroeder: We’re living in a moment of creative possibility

March 25, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/783097945-singularity1on1-karl-schroeder-2.mp3


People often ask me about the favorite interview I have ever done. My usual reply is that interviews are like children: even if we have our favorites, it is not wise to say so out loud, because all kinds of problems will follow. And yet, after having published nearly 250 episodes of my podcast, I can hardly remember one that has had a greater impact on me than my first interview with science fiction author and futurist Karl Schroeder. So if you haven’t seen it yet, please go and watch it, because I will try not to repeat any of the questions I asked Karl last time.

During this 2 hour interview with Karl Schroeder, we cover a variety of interesting topics such as: the major shifts or changes since our last conversation 8 years ago; whether it is harder and harder to write near-term science fiction; the collapse of our past grand narratives; alternative facts and natural selection; why we live in a moment of divergence; Lady of Mazes, the culture of technology and technology of culture; why the best way to become more creative is to have constraints; freedom, limits and infinite possibilities; Ross Ashby’s Law of Requisite Variety; Stealing Worlds, strange-making, tool consciousness, and identity; why Karl thinks that AI is a bit of a red herring; complex systems and predictability; why Global Warming is not a problem to be solved but a constraint to work within; pre-apocalyptic moments as a possibility to create something new; why code is law and technology is a value; transition design as a way to steer rather than control the future.

My favorite quote that I will take away from this conversation with Karl Schroeder is:

What beautiful thing are we going to be forced to make in the next 100 years?

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Karl Schroeder?

Karl Schroeder is the author of ten novels that have been translated into a dozen languages. In 2011 he earned a Master’s degree in Strategic Foresight and Innovation from OCAD University in Toronto, and he now divides his time between writing fiction, conducting workshops, and speaking on the potential impacts of science and technology on society. He pioneered a new mode of writing that blends fiction and rigorous futures research with his influential short novels Crisis in Zefra (2005) and Crisis in Urlia (2011), commissioned by the Canadian army as study and research tools.

Filed Under: Podcasts Tagged With: AI, Foresight, Futurism, Karl Schroeder, Lady of Mazes, Stealing Worlds

In the Age of AI [full film]

December 23, 2019 by Socrates

In the Age of AI is probably the best documentary I have seen on Artificial Intelligence – as it is currently designed and used, not as it is theoretically supposed to be in the idealized utopian visions of some scientists, entrepreneurs, and futurists. Though this documentary came out after I delivered my NeoTechnocracy: The Future is Worse than You Think short speech, it provides tons of evidence in support of my thesis.

In the Age of AI is a documentary exploring how artificial intelligence is changing life as we know it — from jobs to privacy to a growing rivalry between the U.S. and China. FRONTLINE investigates the promise and perils of AI and automation, tracing a new industrial revolution that will reshape and disrupt our world, and allow the emergence of a surveillance society.

If you can’t watch the video above try this one:

Filed Under: Video, What if? Tagged With: AI, Artificial Intelligence

Former IBM Watson Team Leader David Ferrucci on AI and Elemental Cognition

December 15, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/727915300-singularity1on1-david-ferrucci.mp3


Dr. David Ferrucci is one of the few people who have created a benchmark in the history of AI because when IBM Watson won Jeopardy we reached a milestone many thought impossible. I was very privileged to have Ferrucci on my podcast in early 2012 when we spent an hour on Watson’s intricacies and importance. Well, it’s been almost 8 years since our original conversation and it was time to catch up with David to talk about the things that have happened in the world of AI, the things that didn’t happen but were supposed to, and our present and future in relation to Artificial Intelligence. All in all, I was super excited to have Ferrucci back on my podcast and hope you enjoy our conversation as much as I did.

During this 90 min interview with David Ferrucci, we cover a variety of interesting topics such as: his perspective on IBM Watson; AI, hype and human cognition; benchmarks on the singularity timeline; his move away from IBM to the biggest hedge fund in the world; Elemental Cognition and its goals, mission and architecture; Noam Chomsky and Marvin Minsky‘s skepticism of Watson; deductive, inductive and abductive learning; leading and managing from the architecture down; Black Box vs Open Box AI; CLARA – Collaborative Learning and Reading Agent and the best and worst applications thereof; the importance of meaning and whether AI can be the source of it; whether AI is the greatest danger humanity is facing today; why technology is a magnifying mirror; why the world is transformed by asking questions.

My favorite quote that I will take away from this conversation with David Ferrucci is:

Let our imagination drive the expectation for what AI is and what it does for us!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is David Ferrucci?

Dr. David Ferrucci is the CEO, Founder and Chief Scientist of Elemental Cognition. Established in 2015, Elemental Cognition is an AI company focused on deep natural language understanding that explores methods of learning which result in explicable models of intelligence. Elemental Cognition’s mission is to change how machines learn, understand, and interact with humans. Elemental Cognition envisions a world where AI technology can serve as a thought partner by building a shared understanding and is capable of revealing the ‘why’ behind its answers.

Dr. Ferrucci is the award-winning artificial intelligence researcher who built and led the IBM Watson team from its inception through its landmark Jeopardy success in 2011. Dr. Ferrucci was awarded the title of IBM Fellow in 2011 and his work in AI earned numerous awards including the CME Innovation award and the AAAI Feigenbaum Prize. From 2011 through 2012, Dr. Ferrucci pioneered Watson’s applications, which helped lay the technical foundation for the IBM Watson Division. After nearly 20 years at IBM research, Dr. Ferrucci joined Bridgewater Associates in 2013 to explore applications of AI in markets and management based on a synergy with Bridgewater’s deep commitment to explicable machine intelligence.

Dr. Ferrucci graduated from Rensselaer Polytechnic Institute with a Ph.D. in Computer Science. He has 50+ patents and published papers in the areas of AI, Automated Reasoning, NLP, Intelligent Systems Architectures, Automatic Text Generation, and Automatic Question-Answering. He led numerous projects prior to Watson including AI systems for manufacturing, configuration, document generation, and standards for large-scale text and multi-modal analytics. Dr. Ferrucci has keynoted in highly distinguished venues around the world including many of the top computing conferences. He has been interviewed by many media outlets on AI including: The New York Times, PBS, Financial Times, Bloomberg and the BBC. Dr. Ferrucci serves as an Adjunct Professor of Entrepreneurship and Innovation at Kellogg School of Management at Northwestern University.


Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, David Ferrucci, Elemental Cognition, IBM Watson, Watson

Gary Marcus on Rebooting AI: Building Artificial Intelligence We Can Trust

September 9, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/678116274-singularity1on1-gary-marcus-rebooting-ai.mp3


It’s been 7 years since my first interview with Gary Marcus and I felt it was time to catch up. Gary is the youngest Professor Emeritus at NYU and I wanted to get his contrarian views on the major things that have happened in AI as well as those that haven’t happened. Prof. Marcus is an interesting interviewee not only because he is an expert in the field but also because he is a skeptic on the current approaches and progress towards Artificial General Intelligence but an optimist that we will eventually figure it all out. I can honestly say that I have learned a lot from Gary and hope that you will too.

During this 90 min interview with Gary Marcus we cover a variety of interesting topics such as: Gary’s interest in the human mind, natural and artificial intelligence; DeepMind’s victory in Go and what it does and doesn’t mean for AGI; the need for Rebooting AI; trusting AI and the AI chasms; Asimov’s Laws and Bostrom’s paper-clip-maximizing AI; the Turing Test and Ray Kurzweil’s singularity timeline; Mastering Go Without Human Knowledge; closed vs open systems; Chomsky, Minsky and Ferrucci on AGI; the limits of deep learning and the myth of the master algorithm; the problem of defining (artificial) intelligence; human and machine consciousness; the team behind and the mission of Robust AI.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Gary Marcus?


Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times bestseller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader.

Gary Marcus has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, Rebooting AI: Building Machines We Can Trust, co-authored with Ernest Davis, aims to shake up the field of artificial intelligence.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Gary Marcus, Rebooting AI

Nikola Danaylov on the Dissenter: The Singularity, Futurism, and Humanity

January 31, 2019 by Socrates

A few weeks ago I got interviewed by Ricardo Lopes for the Dissenter. The interview just came out and I thought I’d share it with you to enjoy or critique. Here is Ricardo’s original description:

#131 Nikola Danaylov: The Singularity, Doing Futurism, and the Human Element

In this episode, we talk about what is meant by the term “Singularity”, and its technological, social, economic, and scientific implications. We consider the technological and human aspects of the equation of economic and technologic growth, and human and moral progress. We also deal with more specific issues, like transhumanism, the ethics of enhancement, AI, and Big Data.

Time Links:

00:58 What is the Singularity?

02:51 Exponential growth

04:42 What would it mean to have reached the Singularity?

10:29 The trouble with futurism

15:35 The technological and the human aspects

20:20 What we get from technology depends on how we use it

23:16 Transhumanism, enhancement, and ethics

26:26 AI and economics

31:53 Eliminating boring tasks, and living more meaningful lives

36:37 Big Data, and the risk of exploitation

43:04 The example of self-driving cars

51:32 The human element in the equation

52:20 Follow Mr. Danaylov’s work!

Filed Under: Profiles, Video Tagged With: AI, Futurism, Nikola Danaylov, singularity

Stuart Russell on Artificial Intelligence: What if we succeed?

September 13, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/499489077-singularity1on1-stuart-russell.mp3


Stuart Russell is a professor of Computer Science at UC Berkeley as well as co-author of the most popular textbook in the field – Artificial Intelligence: A Modern Approach. Given that it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries, I can hardly think of anyone more qualified or more appropriate to discuss issues related to AI or the technological singularity. Unfortunately, we had problems with our internet connection and, consequently, the video recording is among the worst I have ever published. Thus this episode may be a good candidate to listen to as an audio file only. However, given how prominent Prof. Russell is and how generous he was with his time, I thought it would be a sad loss if I didn’t publish the video also, poor quality as it is.

During our 90 min conversation with Stuart Russell we cover a variety of interesting topics such as: his love for physics and computer science; human preferences, expected utility and decision making; why his textbook on AI was “unreasonably successful”; his dream that AI will contribute to a Golden Age of Humanity; aligning human and AI objectives; the proper definition of Artificial Intelligence; Machine Learning vs Deep Learning; debugging and the King Midas problem; the control problem and Russell’s 3 Laws; provably safe mathematical systems and the nature of intelligence; the technological singularity; Artificial General Intelligence and consciousness…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Stuart Russell?

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.

He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, singularity, Stuart Russell

Roman Yampolskiy on Artificial Intelligence Safety and Security

August 31, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/493290474-singularity1on1-artificial-intelligence-safety-and-security.mp3


There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare go into the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like the Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor – but better, because instead of fighting with guns and brawn, he is utilizing computer science, human intelligence, and code. Whether that turns out to be the case, and whether Yampolskiy will be successful, remains to be seen. But at this point, I was very happy to have Roman back on my podcast for our third interview. [See his previous interviews here and here.]

During our 80-minute conversation with Prof. Yampolskiy, we cover a variety of interesting topics such as: AI in the media; why we’re all living in our own bubbles; the rise of interest in AI safety and ethics; the possibility of a 9/11-type AI event; why progress is at best making “safer” AI rather than “safe” AI; human and artificial stupidity; the need for an AI emergency task force; machine vs deep learning; technology and AI as a magnifying mirror; technological unemployment; his latest book Artificial Intelligence Safety and Security.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Roman Yampolskiy?

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach.

Yampolskiy is a Senior Member of IEEE and AGI, a member of the Kentucky Academy of Science, a former Research Advisor for MIRI, and an Associate of GCRI.

Roman Yampolskiy holds a Ph.D. degree from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year National Science Foundation fellowship. Dr. Yampolskiy’s main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, and Cybersecurity.

Dr. Yampolskiy is the author of over 150 publications, including multiple journal articles and books. His research has been cited by 2,000+ scientists and profiled in popular magazines both American and foreign, on dozens of websites, and on radio and TV, with media reports in some 30 languages.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Artificial Intelligence Safety and Security, Roman Yampolskiy



Ethos: “Technology is the How, not the Why or What. So you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.” — Nikola Danaylov

Copyright © 2009-2021 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy