
Artificial Intelligence

Joscha Bach on AI, Cosmology, Existence and the Bible

June 15, 2022 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/1287316762-singularity1on1-joscha-bach-2.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Joscha Bach is a beautiful mind and a very rewarding interlocutor. No surprise, then, that I had many requests to have him back on my podcast. I apologize that it took 4 years, but the good news is that we are going to have a third conversation much sooner this time around. If you haven’t seen our first interview with Joscha, I suggest you start there before you watch this one. Enjoy, and don’t hesitate to let me know what you think.

During our 2-hour conversation with Joscha Bach, we cover a variety of interesting topics such as his new job at Intel as an AI researcher; whether Moore’s Law is dead or alive; why he is first and foremost a human being trying to understand the world; the kinds of questions he would like to ask God or Artificial Superintelligence; the most recent AI developments and criticisms from Gary Marcus, Marvin Minsky, and Noam Chomsky; living in a learnable universe; evolution and the frame problem; intelligence, smartness, wisdom, and values; personal autonomy and the hive mind; cosmology, theology, story, existence, and non-existence.

My favorite quotes that I will take away from this conversation with Joscha Bach are:

What is a model? A model is a set of regularities that we find in the world – the invariances like the Laws of Physics at a certain level of resolution, that describe how the world doesn’t change but is the same. [These are the things that remain constant.] And the state that the world is in. And once you combine these constants and the known state you can predict the next state of the world.

Intelligence is the ability to dream in a very focused way.
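Bach’s definition of a model – the invariances combined with the known state, predicting the next state – can be illustrated with a toy sketch. This is my own illustration, not from the interview; the "law" here is simple constant-acceleration kinematics:

```python
# Toy illustration of Bach's definition of a model (my own sketch, not from
# the interview): a model = invariances (fixed laws) + the current state.
# The invariance here is gravitational acceleration; the state is
# (position, velocity). Combining the two predicts the next state.

G = -9.8  # invariance: gravitational acceleration (m/s^2), never changes


def predict_next_state(position, velocity, dt=1.0):
    """Apply the fixed law to the known state to predict the next state."""
    new_position = position + velocity * dt  # state evolves per the old velocity
    new_velocity = velocity + G * dt         # law updates the velocity
    return new_position, new_velocity


# A ball 100 m up, at rest: repeatedly combining law + state yields predictions.
state = (100.0, 0.0)
for _ in range(3):
    state = predict_next_state(*state)
```

The constants (here `G`) are what "remain the same" about the world; everything that changes is carried in the state tuple.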

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Who is Joscha Bach?

Joscha Bach, Ph.D. is a cognitive scientist focused on cognitive architectures, mental representation, emotion, social modeling, and learning. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany. He is especially interested in the philosophy of AI, and in using computational models and conceptual tools to understand our minds and what makes us human. Joscha has taught computer science, AI, and cognitive science at the Humboldt-University of Berlin, the Institute for Cognitive Science at Osnabrück, and the MIT Media Lab, and authored the book “Principles of Synthetic Intelligence” (Oxford University Press). He currently works at the Harvard Program for Evolutionary Dynamics in Cambridge, Massachusetts.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Cosmology, Intelligence, Joscha Bach

Chapter 11: The AI Story

August 2, 2021 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/1098721606-singularity1on1-rewriting-the-human-story-chapter-11.mp3


ReWriting the Human Story: How Our Story Determines Our Future

an alternative thought experiment by Nikola Danaylov

 

Chapter 11: The AI Story

Computer Science is no more about computers than astronomy is about telescopes. Edsger Dijkstra

When looms weave by themselves, man’s slavery will end. Aristotle

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Vernor Vinge, 1993

Today we are entirely dependent on machines. So much so that, if we were to turn off the machines invented since the Industrial Revolution, billions of people would die and civilization would collapse. Therefore, ours is already a civilization of machines and technology, because they have become indispensable. The question is: what is the outcome of that process? Is it freedom and transcendence, or slavery and extinction?

Our present situation is no surprise for it was in the relatively low-tech 19th century when Samuel Butler wrote Darwin among the Machines. There he combined his observations of the rapid technological progress of the Industrial Revolution and Darwin’s theory of evolution. That synthesis led Butler to conclude that intelligent machines are likely to be the next step in evolution:

…it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

Samuel Butler developed his ideas further in Erewhon, published in 1872:

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusk has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.

Similarly to Samuel Butler, the source of Ted Kaczynski’s technophobia was his fear that:

… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. – The Unabomber Manifesto

As noted at the beginning of this chapter, humanity has already reached the machine dependence that Kaczynski was worried about. Contemporary experts may disagree on when artificial intelligence will equal human intelligence but most believe that in time it likely will. And there is no reason why AI will stop there. What happens next depends on both the human story and the AI story.

For example, if AI is created in a corporate lab it will likely be commercialized. If AI is created in a military lab it will likely be militarized. If AI is created in an Open Source community it will likely be cooperative and collaborative. And if it is created in someone’s garage it will likely reflect the story of that particular person or people. So, the context within which AI is created will shape its own origin story and that story will define the way it treats humanity.

Thus a military AI will likely treat humans as allies and enemies. A commercial AI will likely treat humans as customers and products. An Open Source AI might treat humans as parents, coders, friends, or allies. [Given current funding trends the first two types seem the most likely.] So the most crucial thing that humanity will do when creating AI is narrating the AI origin story. Because, just as with us, by describing how it came into being, what it is here for, and what its purpose and proper place in the universe are, its story will determine its future. If we get the AI story right we have a chance to coexist peacefully. But if we get it wrong that could mean a Gigawar of planetary proportions, even extinction.

For example, what if, like us, AI ends up with a story of being the pinnacle of evolution, the smartest species in the universe, and, ultimately, God?

This “AI-ism” is going to be to the AIs very much what humanism is to us, humans. Would it be surprising if it uses this AI story to justify enslaving and killing billions of humans and destroying the biosphere?! Just like we are using ours to justify killing 73 billion animals and 1.3 trillion aquatic organisms every year. Because, as Zora Neale Hurston pointed out in Tell My Horse:

Gods always behave like the people who make them.

That is why it is crucial that humanity transcends humanism to embrace post-humanism, post-anthropomorphism, post-exclusivism, and post-dualism. Because one speciesist story should not be replaced by another. Not even with one about the primacy of superintelligent machines, transhumans, posthumans, aliens, or uplifted animals. To be a true revolution, the AI revolution must change our story structure and abolish its current hierarchy. Otherwise, oppression will persist, suffering will increase, and the only change will be who is the oppressor and who is the oppressed.

In other words, the dangers posed by AI originate in the same place as the dangers posed by humanity: our story. If like the human story, the AI story ends up as one of uniqueness, exclusivity, progress, supremacy, dominance, alienation, teleology, manifest destiny, and godhood then we can expect a similarly destructive impact. Therefore, to minimize suffering and improve our own chances of survival during the turbulent 21st century, both humanity and AI must embrace a new type of story. One that is decentralized, non-singular, non-hierarchical, non-specieist, non-dualistic, and non-exclusive. Because a multiplicity is ethically better than a singularity. And because it is safer too.

Filed Under: Podcasts, ReWriting the Human Story Tagged With: Artificial Intelligence, singularity, Technological Singularity

Melanie Mitchell on AI: Intelligence is a Complex Phenomenon

September 23, 2020 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/897084307-singularity1on1-melanie-mitchell.mp3


Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science at Portland State University. Prof. Mitchell is the author of a number of interesting books such as Complexity: A Guided Tour and Artificial Intelligence: A Guide for Thinking Humans. One interesting detail of her academic bio is that Douglas Hofstadter was her Ph.D. supervisor.

During this 90 min interview with Melanie Mitchell, we cover a variety of interesting topics such as: how she started in physics, went into math, and ended up in Computer Science; how Douglas Hofstadter became her Ph.D. supervisor; the biggest issues that humanity is facing today; my predictions of the biggest challenges of the next 100 days of the COVID-19 pandemic; how to remain hopeful when it is hard to be optimistic; the problems of defining AI, thinking, and human; the Turing Test and Ray Kurzweil’s bet with Mitchell Kapor; the Technological Singularity and its possible timeline; the Fallacy of First Steps and the Collapse of AI; Marvin Minsky’s denial of progress towards AGI; Hofstadter’s fear that intelligence may turn out to be a set of “cheap tricks”; the importance of learning and interacting with the world; the [hard] problem of consciousness; why it is us who need to sort ourselves out rather than rely on God or AI; complexity, the future, and why living in “Uncertain Times” is an unprecedented opportunity.

My favorite quote that I will take away from this conversation with Melanie Mitchell is:

Intelligence is a very complex phenomenon and we should study it as such. It’s not the sum of a bunch of narrow intelligences but something much bigger.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

P.S. I owe special thanks to Robert Ziman without whom this interview would not have happened.

Who is Melanie Mitchell?

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems.

Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).

Melanie originated the Santa Fe Institute’s Complexity Explorer platform, which offers online courses and other educational resources related to the field of complex systems. Her online course “Introduction to Complexity” has been taken by over 25,000 students and is one of Class Central’s “top fifty online courses of all time”.

Filed Under: Podcasts Tagged With: Artificial Intelligence

In the Age of AI [full film]

December 23, 2019 by Socrates

In the Age of AI is probably the best documentary that I have seen on Artificial Intelligence – as it is currently designed and used, and not as it is theoretically supposed to be in the idealized utopian visions of some scientists, entrepreneurs, and futurists. Though this documentary came out after I delivered my NeoTechnocracy: The Future is Worse than You Think short speech, it provides tons of evidence in support of my thesis.

In the Age of AI is a documentary exploring how artificial intelligence is changing life as we know it — from jobs to privacy to a growing rivalry between the U.S. and China. FRONTLINE investigates the promise and perils of AI and automation, tracing a new industrial revolution that will reshape and disrupt our world, and allow the emergence of a surveillance society.

If you can’t watch the video above try this one:

Filed Under: Video, What if? Tagged With: Artificial Intelligence

Former IBM Watson Team Leader David Ferrucci on AI and Elemental Cognition

December 15, 2019 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/727915300-singularity1on1-david-ferrucci.mp3


Dr. David Ferrucci is one of the few people who have created a benchmark in the history of AI because, when IBM Watson won Jeopardy, we reached a milestone many thought impossible. I was very privileged to have Ferrucci on my podcast in early 2012, when we spent an hour on Watson’s intricacies and importance. Well, it’s been almost 8 years since our original conversation and it was time to catch up with David to talk about the things that have happened in the world of AI, the things that didn’t happen but were supposed to, and our present and future in relation to Artificial Intelligence. All in all, I was super excited to have Ferrucci back on my podcast and hope you enjoy our conversation as much as I did.

During this 90 min interview with David Ferrucci, we cover a variety of interesting topics such as: his perspective on IBM Watson; AI, hype and human cognition; benchmarks on the singularity timeline; his move away from IBM to the biggest hedge fund in the world; Elemental Cognition and its goals, mission and architecture; Noam Chomsky and Marvin Minsky‘s skepticism of Watson; deductive, inductive and abductive learning; leading and managing from the architecture down; Black Box vs Open Box AI; CLARA – Collaborative Learning and Reading Agent and the best and worst applications thereof; the importance of meaning and whether AI can be the source of it; whether AI is the greatest danger humanity is facing today; why technology is a magnifying mirror; why the world is transformed by asking questions.

My favorite quote that I will take away from this conversation with David Ferrucci is:

Let our imagination drive the expectation for what AI is and what it does for us!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is David Ferrucci?

Dr. David Ferrucci is the CEO, Founder, and Chief Scientist of Elemental Cognition. Established in 2015, Elemental Cognition is an AI company focused on deep natural language understanding that explores methods of learning which result in explicable models of intelligence. Elemental Cognition’s mission is to change how machines learn, understand, and interact with humans. Elemental Cognition envisions a world where AI technology can serve as a thought partner by building a shared understanding and is capable of revealing the ‘why’ behind its answers.

Dr. Ferrucci is the award-winning Artificial Intelligence Researcher who built and led the IBM Watson team from its inception through its landmark Jeopardy success in 2011. Dr. Ferrucci was awarded the title of IBM Fellow in 2011 and his work in AI earned numerous awards including the CME Innovation award and the AAAI Feigenbaum Prize. From 2011 through 2012, Dr. Ferrucci pioneered Watson’s applications which helped lay the technical foundation for the IBM Watson Division. After nearly 20 years at IBM research, Dr. Ferrucci joined Bridgewater Associates in 2013 to explore applications of AI in markets and management based on a synergy with Bridgewater’s deep commitment to explicable machine intelligence.

Dr. Ferrucci graduated from Rensselaer Polytechnic Institute with a Ph.D. in Computer Science. He has 50+ patents and published papers in the areas of AI, Automated Reasoning, NLP, Intelligent Systems Architectures, Automatic Text Generation, and Automatic Question-Answering. He led numerous projects prior to Watson including AI systems for manufacturing, configuration, document generation, and standards for large-scale text and multi-modal analytics. Dr. Ferrucci has keynoted in highly distinguished venues around the world including many of the top computing conferences. He has been interviewed by many media outlets on AI including: The New York Times, PBS, Financial Times, Bloomberg and the BBC. Dr. Ferrucci serves as an Adjunct Professor of Entrepreneurship and Innovation at Kellogg School of Management at Northwestern University.

 

Filed Under: Podcasts Tagged With: Artificial Intelligence, David Ferrucci, Watson

Gary Marcus on Rebooting AI: Building Artificial Intelligence We Can Trust

September 9, 2019 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/678116274-singularity1on1-gary-marcus-rebooting-ai.mp3


It’s been 7 years since my first interview with Gary Marcus and I felt it was time to catch up. Gary is the youngest Professor Emeritus at NYU and I wanted to get his contrarian views on the major things that have happened in AI as well as those that haven’t happened. Prof. Marcus is an interesting interviewee not only because he is an expert in the field but also because he is a skeptic of the current approaches and progress towards Artificial General Intelligence, yet an optimist that we will eventually figure it all out. I can honestly say that I have learned a lot from Gary and hope that you will too.

During this 90 min interview with Gary Marcus we cover a variety of interesting topics such as: Gary’s interest in the human mind, natural and artificial intelligence; Deep Mind’s victory in Go and what it does and doesn’t mean for AGI; the need for Rebooting AI; trusting AI and the AI chasms; Asimov’s Laws and Bostrom’s paper-clip-maximizing AI; the Turing Test and Ray Kurzweil’s singularity timeline; Mastering Go Without Human Knowledge; closed vs open systems; Chomsky, Minsky and Ferrucci on AGI; the limits of deep learning and the myth of the master algorithm; the problem of defining (artificial) intelligence; human and machine consciousness; the team behind and the mission of Robust AI.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Gary Marcus?

 

Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times bestseller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader.

Gary Marcus has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, co-authored with Ernest Davis, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Gary Marcus

Stuart Russell on Artificial Intelligence: What if we succeed?

September 13, 2018 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/499489077-singularity1on1-stuart-russell.mp3


Stuart Russell is a professor of Computer Science at UC Berkeley as well as co-author of the most popular textbook in the field – Artificial Intelligence: A Modern Approach. Given that it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries, I can hardly think of anyone more qualified or more appropriate to discuss issues related to AI or the technological singularity. Unfortunately, we had problems with our internet connection and, consequently, the video recording is among the worst I have ever published. Thus this episode may be a good candidate to listen to as an audio file only. However, given how prominent Prof. Russell is and how generous he was with his time, I thought it would be a sad loss if I didn’t publish the video also, poor quality as it is.

During our 90 min conversation with Stuart Russell we cover a variety of interesting topics such as: his love for physics and computer science; human preferences, expected utility and decision making; why his textbook on AI was “unreasonably successful”; his dream that AI will contribute to a Golden Age of Humanity; aligning human and AI objectives; the proper definition of Artificial Intelligence; Machine Learning vs Deep Learning; debugging and the King Midas problem; the control problem and Russell’s 3 Laws; provably safe mathematical systems and the nature of intelligence; the technological singularity; Artificial General Intelligence and consciousness…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Stuart Russell?

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at University of California at Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.

He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Filed Under: Podcasts Tagged With: Artificial Intelligence, singularity

Roman Yampolskiy on Artificial Intelligence Safety and Security

August 31, 2018 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/493290474-singularity1on1-artificial-intelligence-safety-and-security.mp3


There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare go into the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like the Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor – but better. Because instead of fighting with guns and brawn, he is utilizing computer science, human intelligence, and code. Whether that turns out to be the case, and whether Yampolskiy will be successful or not, remains to be seen. But at this point, I was very happy to have Roman back on my podcast for our third interview. [See his previous interviews here and here.]

During our 80-minute conversation with Prof. Yampolskiy, we cover a variety of interesting topics such as: AI in the media; why we’re all living in our own bubbles; the rise of interest in AI safety and ethics; the possibility of a 9/11-type AI event; why progress is at best making “safer” AI rather than “safe” AI; human and artificial stupidity; the need for an AI emergency task force; machine learning vs deep learning; technology and AI as a magnifying mirror; technological unemployment; his latest book Artificial Intelligence Safety and Security.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Roman Yampolskiy?

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach.

Yampolskiy is a Senior member of IEEE and AGI; Member of Kentucky Academy of Science, former Research Advisor for MIRI and Associate of GCRI.

Roman Yampolskiy holds a Ph.D. degree from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four year National Science Foundation fellowship. Dr. Yampolskiy’s main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, and Cybersecurity.

Dr. Yampolskiy is the author of over 150 publications, including multiple journal articles and books. His research has been cited by over 2,000 scientists and profiled in popular magazines both American and foreign, on dozens of websites, and in radio, TV, and other media reports in some 30 languages.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Roman Yampolskiy

At the Heart of Intelligence: A Film by Gerd Leonhard & Telia Finland

June 21, 2018 by Socrates

At the Heart of Intelligence is an emotionally compelling and well-made short film discussing artificial intelligence. It was produced in collaboration between popular futurist Gerd Leonhard and Telia Finland. The movie employs powerful visuals, fantastic music, and high-quality production, but not at the expense of also asking some vital questions about intelligence in general, and artificial intelligence in particular. Hope you enjoy it.

Synopsis: What will happen to humans when machines get more and more intelligent? Futurist Gerd Leonhard and the Helsinki Data Center are figuring it all out…

Filed Under: Video, What if? Tagged With: Artificial Intelligence, Gerd Leonhard

Physicist Max Tegmark on Life 3.0: What We Do Makes a Difference

June 15, 2018 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/458425038-singularity1on1-max-tegmark.mp3


Some people say that renowned MIT physicist Max Tegmark is totally bonkers and refer to him as “Mad Max”. But, to quote Lewis Carroll from Alice in Wonderland, “All the best people are.” Furthermore, I am not sure if Tegmark is “mad” but I am pretty sure he is very much “fun” because I had a total blast interviewing him on my Singularity.FM podcast.

During our 90 min conversation with Max Tegmark we cover a variety of interesting topics such as: curiosity and being a scientist; reality and math; intelligence, AI and AGI; the technological singularity; Life 3.0: Being Human in the Age of Artificial Intelligence; populating the universe; Frank J. Tipler’s Omega Point; the Age of Em and the inevitability of our future; why both Max and I went vegan; the Future of Life Institute; human stupidity and nuclear war; technological unemployment.

My favorite quote that I will take away from this conversation with Max Tegmark is:

It is not our universe giving meaning to us, it is us giving meaning to our universe.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Max Tegmark?

 

Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.

Max Tegmark is an MIT professor who loves thinking about life’s big questions. He’s written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: “In my spare time, I’m president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially.”

 

Previous Singularity.FM episodes mentioned during this interview:

Robin Hanson (part 2): Social Science or Extremist Politics in Disguise?!

Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable!

Skype co-founder Jaan Tallinn on AI and the Singularity

Lawrence Krauss on Singularity.FM: Keep on Asking Questions

Filed Under: Featured Podcasts, Podcasts Tagged With: Artificial Intelligence, singularity, Technological Singularity

GoodAI launches “Solving the AI Race” round of the General AI Challenge

January 18, 2018 by Socrates

General AI research and development company GoodAI has launched the latest round of their General AI Challenge, Solving the AI Race. $15,000 in prizes is available for suggestions on how to mitigate the risks associated with a race to transformative AI.

The round, which is open to the general public and will run until May 18, 2018, asks participants to suggest methods to address the potential pitfalls of competition towards transformative AI, where:

  • Key stakeholders, including developers, may ignore or underestimate safety procedures or agreements in favor of faster utilization
  • The fruits of the technology might not be shared by the majority of people to benefit humanity, but only by a select few

The round is the latest in the General AI Challenge, which will give away $5 million in prizes in the coming years to advance the search for safe and beneficial artificial general intelligence (AGI). It is the first non-technical round of the challenge and aims to raise awareness of the AI race topic and promote it across different disciplines. GoodAI has assembled an advisory board of academics and industry leaders, including representatives from Facebook, Microsoft, Tencent, and O2 (see below for the full list).

Marek Rosa, GoodAI CTO and CEO, said: “The General AI Challenge is all about using citizen science to solve one of the most important issues of our time – the development of general AI. A truly transformative AI will have a profound impact on society, and that is why we would love to foster interdisciplinary discussion and hear from experts in different fields. AI is being recognized as a strategic issue by countries, international leaders, businesses, and other stakeholders, however, practical steps to ensure beneficial governance and safety are lacking.”

To enter, participants must submit a summary of no more than two pages and, if needed, a longer supporting document. Entries will be judged on the potential they show to maximize a positive future for humanity and on how practical they are to implement, by an expert panel made up of GoodAI and members of the General AI Challenge external advisory board.

Roman V. Yampolskiy, Professor at the University of Louisville and member of the advisory board, said: “Avoiding a race for AI is important because, under race conditions, researchers tend to ignore safety concerns and trade ‘getting it done right’ for ‘getting it done right now’. As an AI safety researcher, I see the development of safe and beneficial AI as the most important problem facing humanity, and so I am honored to participate in the General AI Challenge Advisory Board to help ensure safe and beneficial outcomes from this exciting competition.”

Dr Ling Ge, Chief European Representative at Tencent and member of the advisory board, added: “It is the responsibility of leaders in the world of AI to ensure that the development of AI is safe and will benefit as many people as possible. It is great to be involved with GoodAI and the General AI Challenge to push forward this idea and open up interdisciplinary discussions.”

Results of the round will be announced in July 2018. For full details on how to submit an entry, visit: https://www.general-ai-challenge.org/ai-race

Related Articles
    • The 5 Million Dollar General AI Challenge is a Path to Human-level AI
    • GoodAI CEO Marek Rosa on the General AI Challenge

Filed Under: News Tagged With: artificial general intelligence, Artificial Intelligence, GoodAI

The Intelligence Explosion: A Short Sci Fi Film about AI Ethics

March 21, 2017 by Socrates

The Intelligence Explosion is a hilariously witty short sci fi film about AI ethics. It asks questions such as:

  • How can we prevent a robot AI from turning evil?
  • Can we solve ethics?
  • Can humans be a good role model for AI?

I hope you enjoy The Intelligence Explosion as much as I did 😉

Synopsis: It’s 2027 and Mental Endeavours Ltd has a problem with their flagship robot Günther. How do you program an intelligent machine not to annihilate humanity? And if its intelligence is skyrocketing faster than anyone could have predicted, are they about to run out of time?

Other cool science fiction films
  • Rise: Proof of Concept Short Sci Fi Film with Anton Yelchin
  • Tears In The Rain: A Spectacular SciFi Blade Runner Fanfilm
  • “Dust” Short SciFi: Stunning Cinematography & Worthy Message
  • Envoy: David Weinstein’s Short Sci Fi About Alien Robot Must Become Full Feature Film
  • The Nostalgist: A Cool Sci Fi Short Film Explores VR and More
  • The iMom
  • 300,000 km/s: A Cool Sci-Fi Short Film Noir by Stéphane Réthoré
  • ‘Deus Ex: Human Revolution’ Impressive Fan Sci Fi Short Film
  • Shifter: Live Action Sci Fi Short Film by the Hallivis Brothers
  • Keloid: JJ Palomo’s Gripping Robopocalypse Short Sci Fi Film
  • The Final Moments of Karl Brant: Short Sci Fi Film about Mind Uploading
  • Shelved: Robot Comedy Shows Tragedy of Robots Replaced By Humans
  • Tears of Steel: Blender Foundation’s Stunning Short Sci Fi Film
  • Stephan Zlotescu’s Sci Fi Short “True Skin” To Become A Warner Bros Full Feature
  • Plurality: Dennis Liu’s Big Brother Sci Fi Film Rocks
  • ROSA: an Epic Sci Fi Short Film by Jesus Orellana
  • Legacy, Ark and the 3rd Letter: The Dark, Post-Apocalyptic Sci Fi Films of Grzegorz Jonkajtys
  • Portal: No Escape (Live Action Short Sci Fi Film by Dan Trachtenberg)
  • Cost of Living: Short Sci Fi Film by Bendavid Grabinski
  • Robots of Brixton (a short film by Kibwe Tavares)
  • Drone: An Action-Packed Sci Fi Short by Robert Glickert
  • Somnolence: A Short Sci Fi Film by Patrick Kalyn
  • Kara by Quantic Dream: Do Androids Fear Death?
  • Aaron Sims’ Film Archetype: Your Memories Are Just A Glitch!
  • Ruin: A Stunning Short Sci Fi Film by Wes Ball
  • Sight [a Short Sci Fi Film]

Filed Under: Video, What if? Tagged With: Artificial Intelligence



Ethos: “Technology is the How, not the Why or What. So you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.” Nikola Danaylov

Copyright © 2009-2025 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy