Artificial Intelligence

Melanie Mitchell on AI: Intelligence is a Complex Phenomenon

September 23, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/897084307-singularity1on1-melanie-mitchell.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science at Portland State University. Prof. Mitchell is the author of a number of interesting books such as Complexity: A Guided Tour and Artificial Intelligence: A Guide for Thinking Humans. One interesting detail of her academic bio is that Douglas Hofstadter was her Ph.D. supervisor.

During this 90 min interview with Melanie Mitchell, we cover a variety of interesting topics such as: how she started in physics, went into math, and ended up in Computer Science; how Douglas Hofstadter became her Ph.D. supervisor; the biggest issues that humanity is facing today; my predictions of the biggest challenges of the next 100 days of the COVID-19 pandemic; how to remain hopeful when it is hard to be optimistic; the problems in defining AI, thinking, and human; the Turing Test and Ray Kurzweil's bet with Mitchell Kapor; the Technological Singularity and its possible timeline; the Fallacy of First Steps and the Collapse of AI; Marvin Minsky's denial of progress towards AGI; Hofstadter's fear that intelligence may turn out to be a set of "cheap tricks"; the importance of learning and interacting with the world; the [hard] problem of consciousness; why it is we who need to sort ourselves out and not rely on God or AI; complexity, the future, and why living in "Uncertain Times" is an unprecedented opportunity.

My favorite quote that I will take away from this conversation with Melanie Mitchell is:

Intelligence is a very complex phenomenon and we should study it as such. It’s not the sum of a bunch of narrow intelligences but something much bigger.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

P.S. I owe special thanks to Robert Ziman without whom this interview would not have happened.

Who is Melanie Mitchell?

Melanie Mitchell is the Davis Professor of Complexity at the Santa Fe Institute, and Professor of Computer Science (currently on leave) at Portland State University. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems.

Melanie is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her book Complexity: A Guided Tour (Oxford University Press) won the 2010 Phi Beta Kappa Science Book Award and was named by Amazon.com as one of the ten best science books of 2009. Her latest book is Artificial Intelligence: A Guide for Thinking Humans (Farrar, Straus, and Giroux).

Melanie originated the Santa Fe Institute's Complexity Explorer platform, which offers online courses and other educational resources related to the field of complex systems. Her online course "Introduction to Complexity" has been taken by over 25,000 students and is one of Class Central's "top fifty online courses of all time".

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Complexity, Melanie Mitchell

In the Age of AI [full film]

December 23, 2019 by Socrates

In the Age of AI is probably the best documentary that I have seen on Artificial Intelligence as it is currently designed and used, rather than as it is imagined in the idealized utopian visions of some scientists, entrepreneurs, and futurists. Though this documentary came out after I delivered my NeoTechnocracy: The Future is Worse than You Think short speech, it provides tons of evidence in support of my thesis.

In the Age of AI is a documentary exploring how artificial intelligence is changing life as we know it — from jobs to privacy to a growing rivalry between the U.S. and China. FRONTLINE investigates the promise and perils of AI and automation, tracing a new industrial revolution that will reshape and disrupt our world, and allow the emergence of a surveillance society.

If you can't watch the video above, try this one:

Filed Under: Video, What if? Tagged With: AI, Artificial Intelligence

Former IBM Watson Team Leader David Ferrucci on AI and Elemental Cognition

December 15, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/727915300-singularity1on1-david-ferrucci.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Dr. David Ferrucci is one of the few people who have created a benchmark in the history of AI because when IBM Watson won Jeopardy, we reached a milestone many thought impossible. I was very privileged to have Ferrucci on my podcast in early 2012, when we spent an hour on Watson's intricacies and importance. Well, it's been almost 8 years since our original conversation and it was time to catch up with David to talk about the things that have happened in the world of AI, the things that didn't happen but were supposed to, and our present and future in relation to Artificial Intelligence. All in all, I was super excited to have Ferrucci back on my podcast and hope you enjoy our conversation as much as I did.

During this 90 min interview with David Ferrucci, we cover a variety of interesting topics such as: his perspective on IBM Watson; AI, hype and human cognition; benchmarks on the singularity timeline; his move away from IBM to the biggest hedge fund in the world; Elemental Cognition and its goals, mission and architecture; Noam Chomsky and Marvin Minsky's skepticism of Watson; deductive, inductive and abductive learning; leading and managing from the architecture down; Black Box vs Open Box AI; CLARA – the Collaborative Learning and Reading Agent – and the best and worst applications thereof; the importance of meaning and whether AI can be the source of it; whether AI is the greatest danger humanity is facing today; why technology is a magnifying mirror; why the world is transformed by asking questions.

My favorite quote that I will take away from this conversation with David Ferrucci is:

Let our imagination drive the expectation for what AI is and what it does for us!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is David Ferrucci?

Dr. David Ferrucci is the CEO, Founder and Chief Scientist of Elemental Cognition. Established in 2015, Elemental Cognition is an AI company focused on deep natural language understanding that explores methods of learning which result in explicable models of intelligence. Elemental Cognition's mission is to change how machines learn, understand, and interact with humans. Elemental Cognition envisions a world where AI technology can serve as a thought partner by building a shared understanding and revealing the 'why' behind its answers.

Dr. Ferrucci is the award-winning Artificial Intelligence researcher who built and led the IBM Watson team from its inception through its landmark Jeopardy success in 2011. Dr. Ferrucci was awarded the title of IBM Fellow in 2011, and his work in AI has earned numerous awards, including the CME Innovation Award and the AAAI Feigenbaum Prize. From 2011 through 2012, Dr. Ferrucci pioneered Watson's applications, which helped lay the technical foundation for the IBM Watson Division. After nearly 20 years at IBM Research, Dr. Ferrucci joined Bridgewater Associates in 2013 to explore applications of AI in markets and management based on a synergy with Bridgewater's deep commitment to explicable machine intelligence.

Dr. Ferrucci graduated from Rensselaer Polytechnic Institute with a Ph.D. in Computer Science. He has 50+ patents and has published papers in the areas of AI, Automated Reasoning, NLP, Intelligent Systems Architectures, Automatic Text Generation, and Automatic Question-Answering. He led numerous projects prior to Watson, including AI systems for manufacturing, configuration, document generation, and standards for large-scale text and multi-modal analytics. Dr. Ferrucci has keynoted at highly distinguished venues around the world, including many of the top computing conferences. He has been interviewed on AI by many media outlets, including The New York Times, PBS, the Financial Times, Bloomberg, and the BBC. Dr. Ferrucci serves as an Adjunct Professor of Entrepreneurship and Innovation at the Kellogg School of Management at Northwestern University.


Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, David Ferrucci, Elemental Cognition, IBM Watson, Watson

Gary Marcus on Rebooting AI: Building Artificial Intelligence We Can Trust

September 9, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/678116274-singularity1on1-gary-marcus-rebooting-ai.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

It's been 7 years since my first interview with Gary Marcus and I felt it was time to catch up. Gary is the youngest Professor Emeritus at NYU and I wanted to get his contrarian views on the major things that have happened in AI as well as those that haven't happened. Prof. Marcus is an interesting interviewee not only because he is an expert in the field but also because he is a skeptic of the current approaches and progress towards Artificial General Intelligence, yet an optimist that we will eventually figure it all out. I can honestly say that I have learned a lot from Gary and hope that you will too.

During this 90 min interview with Gary Marcus we cover a variety of interesting topics such as: Gary's interest in the human mind, natural and artificial intelligence; DeepMind's victory in Go and what it does and doesn't mean for AGI; the need for Rebooting AI; trusting AI and the AI chasms; Asimov's Laws and Bostrom's paper-clip-maximizing AI; the Turing Test and Ray Kurzweil's singularity timeline; Mastering Go Without Human Knowledge; closed vs open systems; Chomsky, Minsky and Ferrucci on AGI; the limits of deep learning and the myth of the master algorithm; the problem of defining (artificial) intelligence; human and machine consciousness; the team behind and the mission of Robust AI.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Gary Marcus?


Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times bestseller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader.

Gary Marcus has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, co-authored with Ernest Davis, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Gary Marcus, Rebooting AI

Stuart Russell on Artificial Intelligence: What if we succeed?

September 13, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/499489077-singularity1on1-stuart-russell.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Stuart Russell is a professor of Computer Science at UC Berkeley as well as co-author of the most popular textbook in the field – Artificial Intelligence: A Modern Approach. Given that it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries, I can hardly think of anyone more qualified or more appropriate to discuss issues related to AI or the technological singularity. Unfortunately, we had problems with our internet connection and, consequently, the video recording is among the worst I have ever published. Thus this episode may be a good candidate to listen to as an audio file only. However, given how prominent Prof. Russell is and how generous he was with his time, I thought it would be a sad loss not to publish the video as well, poor quality as it is.

During our 90 min conversation with Stuart Russell we cover a variety of interesting topics such as: his love for physics and computer science; human preferences, expected utility and decision making; why his textbook on AI was “unreasonably successful”; his dream that AI will contribute to a Golden Age of Humanity; aligning human and AI objectives; the proper definition of Artificial Intelligence; Machine Learning vs Deep Learning; debugging and the King Midas problem; the control problem and Russell’s 3 Laws; provably safe mathematical systems and the nature of intelligence; the technological singularity; Artificial General Intelligence and consciousness…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Stuart Russell?

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at the University of California, Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.

He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, singularity, Stuart Russell

Roman Yampolskiy on Artificial Intelligence Safety and Security

August 31, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/493290474-singularity1on1-artificial-intelligence-safety-and-security.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

There are those of us who philosophize and debate the finer points surrounding the dangers of artificial intelligence. And then there are those who dare go into the trenches and get their hands dirty by doing the actual work that may just end up making the difference. So if AI turns out to be like The Terminator, then Prof. Roman Yampolskiy may turn out to be like John Connor – but better. Because instead of fighting with guns and brawn, he is utilizing computer science, human intelligence, and code. Whether that turns out to be the case and whether Yampolskiy will be successful or not remains to be seen. But at this point, I was very happy to have Roman back on my podcast for our third interview. [See his previous interviews here and here.]

During our 80-minute conversation with Prof. Yampolskiy, we cover a variety of interesting topics such as: AI in the media; why we're all living in our own bubbles; the rise of interest in AI safety and ethics; the possibility of a 9/11-type AI event; why progress is at best making "safer" rather than "safe" AI; human and artificial stupidity; the need for an AI emergency task force; machine vs deep learning; technology and AI as a magnifying mirror; technological unemployment; his latest book Artificial Intelligence Safety and Security.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Roman Yampolskiy?

Dr. Roman V. Yampolskiy is a Tenured Associate Professor in the Department of Computer Engineering and Computer Science at the Speed School of Engineering, University of Louisville. He is the founding and current director of the Cyber Security Lab and the author of many books, including Artificial Superintelligence: A Futuristic Approach.

Yampolskiy is a Senior Member of IEEE and AGI; a member of the Kentucky Academy of Science; a former Research Advisor for MIRI; and an Associate of GCRI.

Roman Yampolskiy holds a Ph.D. from the Department of Computer Science and Engineering at the University at Buffalo. He was a recipient of a four-year National Science Foundation fellowship. Dr. Yampolskiy's main areas of interest are AI Safety, Artificial Intelligence, Behavioral Biometrics, and Cybersecurity.

Dr. Yampolskiy is the author of over 150 publications, including multiple journal articles and books. His research has been cited by 2,000+ scientists and profiled in popular magazines both American and foreign, on dozens of websites, and in radio, TV, and other media reports in some 30 languages.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Artificial Intelligence Safety and Security, Roman Yampolskiy

At the Heart of Intelligence: A Film by Gerd Leonhard & Telia Finland

June 21, 2018 by Socrates

At the Heart of Intelligence is an emotionally compelling and well-made short film discussing artificial intelligence, produced in collaboration between popular futurist Gerd Leonhard and Telia Finland. The movie employs powerful visuals, fantastic music, and high-quality production, but not at the expense of asking some vital questions about intelligence in general and artificial intelligence in particular. Hope you enjoy it.

Synopsis: What will happen to humans when machines get more and more intelligent? Futurist Gerd Leonhard and the Helsinki Data Center are figuring it all out…

Filed Under: Video, What if? Tagged With: AI, Artificial Intelligence, At the Heart of Intelligence, Gerd Leonhard, Telia Finland

Physicist Max Tegmark on Life 3.0: What We Do Makes a Difference

June 15, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/458425038-singularity1on1-max-tegmark.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Some people say that renowned MIT physicist Max Tegmark is totally bonkers and refer to him as “Mad Max”. But, to quote Lewis Carroll from Alice in Wonderland, “All the best people are.” Furthermore, I am not sure if Tegmark is “mad” but I am pretty sure he is very much “fun” because I had a total blast interviewing him on my Singularity.FM podcast.

During our 90 min conversation with Max Tegmark we cover a variety of interesting topics such as: curiosity and being a scientist; reality and math; intelligence, AI and AGI; the technological singularity; Life 3.0: Being Human in the Age of Artificial Intelligence; populating the universe; Frank J. Tipler's Omega Point; the Age of Em and the inevitability of our future; why both Max and I went vegan; the Future of Life Institute; human stupidity and nuclear war; technological unemployment.

My favorite quote that I will take away from this conversation with Max Tegmark is:

“It is not our universe giving meaning to us, it is us giving meaning to our universe.”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Max Tegmark?


Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.

Max Tegmark is an MIT professor who loves thinking about life’s big questions. He’s written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: “In my spare time, I’m president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially.”


Previous Singularity.FM episodes mentioned during this interview:

Robin Hanson (part 2): Social Science or Extremist Politics in Disguise?!

Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable!

Skype co-founder Jaan Tallinn on AI and the Singularity

Lawrence Krauss on Singularity.FM: Keep on Asking Questions

Filed Under: Featured Podcasts, Podcasts Tagged With: AI, Artificial Intelligence, Life 3.0, Max Tegmark, singularity, Technological Singularity, The Future of Life Institute

GoodAI launches “Solving the AI Race” round of the General AI Challenge

January 18, 2018 by Socrates

General AI research and development company GoodAI has launched the latest round of its General AI Challenge, Solving the AI Race. $15,000 in prizes is available for suggestions on how to mitigate the risks associated with a race to transformative AI.

The round, which is open to the general public and will run until May 18, 2018, asks participants to suggest methods to address the potential pitfalls of competition towards transformative AI, where:

  • Key stakeholders, including developers, may ignore or underestimate safety procedures or agreements in favor of faster utilization
  • The fruits of the technology might not be shared by the majority of people to benefit humanity, but only by a select few

The round is the latest in the General AI Challenge, which will give away $5 million in prizes in the coming years to advance the search for safe and beneficial general artificial intelligence (AGI). It is the first non-technical round of the challenge and aims to raise awareness of the AI race topic and promote it across different disciplines. GoodAI has assembled an advisory board made up of academics and industry leaders, including representatives from Facebook, Microsoft, Tencent and O2 (see below for the full list).

Marek Rosa, GoodAI CTO and CEO, said: "The General AI Challenge is all about using citizen science to solve one of the most important issues of our time – the development of general AI. A truly transformative AI will have a profound impact on society, and that is why we would love to foster interdisciplinary discussion and hear from experts in different fields. AI is being recognized as a strategic issue by countries, international leaders, businesses, and other stakeholders; however, practical steps to ensure beneficial governance and safety are lacking."

To enter, participants must submit a summary of their idea of at most two pages and, if needed, a longer submission of unlimited length. Entries will be judged on the potential they show to maximize a positive future for humanity and on how practical they are to implement, by an expert panel made up of GoodAI and members of the General AI Challenge external advisory board.

Roman V. Yampolskiy, Professor at the University of Louisville and member of the advisory board said: “Avoiding a race for AI is important because under race conditions researchers tend to ignore safety concerns and trade “getting it done right” for “getting it done right now”. As an AI safety researcher I see development of safe and beneficial AI as the most important problem facing humanity and so I am honored to participate in the General AI Challenge Advisory Board to help ensure safe and beneficial outcomes from this exciting competition.”

Dr. Ling Ge, Chief European Representative at Tencent and member of the advisory board, added: "It is the responsibility of leaders in the world of AI to ensure that the development of AI is safe and will benefit as many people as possible. It is great to be involved with GoodAI and the General AI Challenge to push forward this idea and open up interdisciplinary discussions."

Results of the round will be announced in July 2018. For full details of how to enter a submission visit: https://www.general-ai-challenge.org/ai-race

Related Articles
    • The 5 Million Dollar General AI Challenge is a Path to Human-level AI
    • GoodAI CEO Marek Rosa on the General AI Challenge

Filed Under: News Tagged With: AGI, AI, AI Race, artificial general intelligence, Artificial Intelligence, GoodAI

The Intelligence Explosion: A Short Sci Fi Film about AI Ethics

March 21, 2017 by Socrates

The Intelligence Explosion is a hilariously witty short sci fi film about AI ethics. The film asks questions such as:

How can we prevent a robot AI from turning evil?

Can we solve ethics?

Can humans be a good role model for AI?

I hope you enjoy The Intelligence Explosion as much as I did 😉

Synopsis: It’s 2027 and Mental Endeavours Ltd has a problem with their flagship robot Günther. How do you program an intelligent machine not to annihilate humanity? And if its intelligence is skyrocketing faster than anyone could have predicted, are they about to run out of time?

Other cool science fiction films
  • Rise: Proof of Concept Short Sci Fi Film with Anton Yelchin
  • Tears In The Rain: A Spectacular SciFi Blade Runner Fanfilm
  • “Dust” Short SciFi: Stunning Cinematography & Worthy Message
  • Envoy: David Weinstein’s Short Sci Fi About Alien Robot Must Become Full Feature Film
  • The Nostalgist: A Cool Sci Fi Short Film Explores VR and More
  • The iMom
  • 300,000 km/s: A Cool Sci-Fi Short Film Noir by Stéphane Réthoré
  • ‘Deus Ex: Human Revolution’ Impressive Fan Sci Fi Short Film
  • Shifter: Live Action Sci Fi Short Film by the Hallivis Brothers
  • Keloid: JJ Palomo’s Gripping Robopocalypse Short Sci Fi Film
  • The Final Moments of Karl Brant: Short Sci Fi Film about Mind Uploading
  • Shelved: Robot Comedy Shows Tragedy of Robots Replaced By Humans
  • Tears of Steel: Blender Foundation’s Stunning Short Sci Fi Film
  • Stephan Zlotescu’s Sci Fi Short “True Skin” To Become A Warner Bros Full Feature
  • Plurality: Dennis Liu’s Big Brother Sci Fi Film Rocks
  • ROSA: an Epic Sci Fi Short Film by Jesus Orellana
  • Legacy, Ark and the 3rd Letter: The Dark, Post-Apocalyptic Sci Fi Films of Grzegorz Jonkajtys
  • Portal: No Escape (Live Action Short Sci Fi Film by Dan Trachtenberg)
  • Cost of Living: Short Sci Fi Film by Bendavid Grabinski
  • Robots of Brixton (a short film by Kibwe Tavares)
  • Drone: An Action-Packed Sci Fi Short by Robert Glickert
  • Somnolence: A Short Sci Fi Film by Patrick Kalyn
  • Kara by Quantic Dream: Do Androids Fear Death?
  • Aaron Sims’ Film Archetype: Your Memories Are Just A Glitch!
  • Ruin: A Stunning Short Sci Fi Film by Wes Ball
  • Sight [a Short Sci Fi Film]

Filed Under: Video, What if? Tagged With: AI, Artificial Intelligence, Intelligence Explosion

GoodAI CEO Marek Rosa on the General AI Challenge

March 4, 2017 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/310691984-singularity1on1-marek-rosa.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Marek Rosa is the founder and CEO of Keen Software House, an independent video game development studio. After the success of Keen Software House game titles such as Space Engineers, Marek founded and funded GoodAI with a 10 million dollar personal investment, thereby finally being able to pursue his lifelong dream of building General Artificial Intelligence. Most recently Marek launched the General AI Challenge with a fund of 5 million dollars to be given away in the next few years.

During our 80 min discussion with Marek Rosa we cover a wide variety of interesting topics such as: why curiosity is his driving force; his desire to understand the universe; Marek’s journey from game development into Artificial General Intelligence [AGI]; his goal to maximize humanity’s future options; the mission, people and strategy behind GoodAI; his definitions of intelligence and AI; teleology and the direction of the Universe; adaptation, intelligence, evolution and survivability; roadmaps, milestones and obstacles on the way to AGI; the importance of theory of intelligence; the hard problem of creating Good AI; the General AI Challenge…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Marek Rosa?

Marek Rosa is CEO/CTO at GoodAI, a general artificial intelligence R&D company, and the CEO at Keen Software House, an independent game development studio best known for their best-seller Space Engineers (2 million+ copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. He started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10 million. GoodAI started in January 2014 and has grown to an international team of 20 researchers.

Filed Under: Podcasts Tagged With: AGI, AI, Artificial Intelligence, GoodAI, Marek Rosa

Neuromorphic Chips: a Path Towards Human-level AI

September 2, 2016 by Dan Elton

Recently we have seen a slew of popular films that deal with artificial intelligence – most notably The Imitation Game, Chappie, Ex Machina, and Her. However, despite over five decades of research into artificial intelligence, there remain many tasks which are simple for humans that computers cannot do. Given the slow progress of AI, for many the prospect of computers with human-level intelligence seems further away today than it did when Isaac Asimov's classic I, Robot was published in 1950. Today, however, the development of neuromorphic chips offers a plausible path to realizing human-level artificial intelligence within the next few decades.

Starting in the early 2000s there was a realization that neural network models – based on how the human brain works – could solve many tasks that could not be solved by other methods. The buzzphrase 'deep learning' has become a catch-all term for neural network models and related techniques, as shown by a Google Trends plot of the frequency of these phrases:

[Google Trends chart comparing search interest in "neural network", "deep learning", and "machine learning" over time.]

Most deep learning practitioners acknowledge that the recent popularity of 'deep learning' is driven by hardware, in particular GPUs. The core algorithms of neural networks, such as the backpropagation algorithm for calculating gradients, were developed in the 1970s and '80s, and convolutional neural networks were developed in the late '90s.

Neuromorphic chips are the logical next step from the use of GPUs. While GPU architectures are designed for computer graphics, neuromorphic chips implement neural networks directly in hardware. Neuromorphic chips are currently being developed by a variety of public and private entities, including DARPA, the EU, IBM and Qualcomm.

The representation problem

A key difficulty solved by neural networks is the problem of programming conceptual categories into a computer, also called the "representation problem". Programming a conceptual category requires constructing a representation in the computer's memory to which phenomena in the world can be mapped. For example, "Clifford" would be mapped to the category of "dog" and also "animal" and "pet", while a VW Beetle would be mapped to "car". Constructing a robust mapping is very difficult since the members of a category can vary greatly in their appearance – for instance, a "human" may be male or female, old or young, tall or short. Even a simple object, like a cube, will appear different depending on the angle it is viewed from and how it is lit.

Since such conceptual categories are constructs of the human mind, it makes sense to look at how the brain itself stores representations. Neural networks store representations in the connections between neurons (called synapses), each of which holds a value called a "weight". Instead of being programmed, neural networks learn what weights to use through a process of training. After observing enough examples, neural networks can categorize new objects they have never seen before, or at least offer a best guess. Today neural networks have become the dominant methodology for solving classification tasks such as handwriting recognition, speech-to-text, and object recognition.
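
To make this concrete, here is a minimal sketch (a hypothetical toy example, not code from any project discussed here) of a representation being learned rather than programmed: a single artificial neuron whose weights come to encode a two-class category via the classic perceptron update rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two clusters of 2-dimensional points standing in for two categories.
X = np.vstack([rng.normal(-1.0, 0.5, (50, 2)),   # class 0
               rng.normal(+1.0, 0.5, (50, 2))])  # class 1
y = np.array([0] * 50 + [1] * 50)

w = np.zeros(2)  # the "synaptic weights" -- the learned representation
b = 0.0          # bias term
lr = 0.1         # learning rate

for epoch in range(20):                 # training: nudge weights on every mistake
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += lr * (yi - pred) * xi      # perceptron update rule
        b += lr * (yi - pred)

# The category now lives entirely in (w, b); a point the network has never
# seen is classified by the same weighted sum.
new_point = np.array([0.8, 1.2])
print("class:", 1 if new_point @ w + b > 0 else 0)
```

Nothing in the code spells out what "class 1" looks like; the mapping is distilled from examples into the weights, which is exactly what makes the representation problem tractable.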

Massive parallelism

Neural networks are based on simplified mathematical models of how the brain's neurons operate. Today's hardware is very inefficient when it comes to simulating neural network models, however. This inefficiency can be traced to fundamental differences between how the brain operates and how digital computers operate. While computers store information as strings of 0s and 1s, the synaptic "weights" the brain uses to store information can fall anywhere in a range of values – i.e., the brain is analog rather than digital. More importantly, in a computer the number of signals that can be processed at the same time is limited by the number of CPU cores – between 8 and 12 on a typical desktop, or 1,000-10,000 on a supercomputer. While 10,000 sounds like a lot, this is tiny compared to the brain, which simultaneously processes up to a trillion (1,000,000,000,000) signals in a massively parallel fashion.
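
The contrast is easy to see in code. In the sketch below (illustrative only, and written for a CPU), one layer of artificial neurons is mathematically a single matrix-vector product; hardware that evaluates the whole product at once, as GPUs do and as neuromorphic chips do in the analog domain, replaces the explicit loop over neurons.

```python
import numpy as np

N_in, N_out = 1024, 512
rng = np.random.default_rng(1)
W = rng.normal(size=(N_out, N_in))  # one weight per synapse
x = rng.normal(size=N_in)           # incoming signals

# Serial view: each output neuron sums its weighted inputs, one at a time.
out_serial = np.array([W[i] @ x for i in range(N_out)])

# Parallel view: the same computation expressed as one matrix product,
# which parallel hardware can evaluate for all neurons simultaneously.
out_parallel = W @ x

assert np.allclose(out_serial, out_parallel)
```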

Low power consumption

The two main differences between brains and today's computers (parallelism and analog storage) contribute to another difference: the brain's energy efficiency. Natural selection made the brain remarkably energy efficient, since hunting for food is difficult. The human brain consumes only about 20 watts of power, while a supercomputing complex capable of simulating a tiny fraction of the brain can consume millions of watts. The main reason for this is that computers operate at much higher frequencies than the brain, and power consumption typically grows with the cube of frequency. Additionally, as a general rule digital circuitry consumes more power than analog – for this reason, some parts of today's cellphones are built with analog circuits to improve battery life. A final reason for the high power consumption of today's chips is that they require all signals to be perfectly synchronized by a central clock, requiring a timing distribution system that complicates circuit design and increases power consumption by up to 30%. Copying the brain's energy-efficient features (low frequencies, massive parallelism, analog signals, and asynchronicity) makes a lot of economic sense and is currently one of the main driving forces behind the development of neuromorphic chips.
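
The cube-of-frequency rule of thumb makes this trade-off easy to quantify. The back-of-the-envelope sketch below uses hypothetical numbers and an idealized model that ignores every other source of power draw; it only shows why many slow units can match one fast core's raw rate at a tiny fraction of the power.

```python
f_fast = 3.0e9  # one core at 3 GHz
f_slow = 3.0e6  # many slow, neuron-like units at 3 MHz (assumed figures)

n_slow = int(f_fast / f_slow)  # 1,000 slow units match the raw operation rate

power_ratio_per_unit = (f_slow / f_fast) ** 3  # P ~ f^3  =>  1e-9 per unit
total_power_ratio = n_slow * power_ratio_per_unit

print(total_power_ratio)  # 1e-6: a millionfold saving under these assumptions
```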

Fault tolerance

Another difference between neuromorphic chips and conventional computer hardware is the fact that, like the brain, they are fault-tolerant – if a few components fail, the chip continues functioning normally. Some neuromorphic chip designs can sustain defect rates as high as 25%. This is very different from today's computer hardware, where the failure of a single component usually renders the entire chip unusable. The need for precise fabrication has driven up the cost of chip production exponentially as component sizes have become smaller. Neuromorphic chips require lower fabrication tolerances and thus are cheaper to make.
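
A toy illustration (an analogy for the redundancy at work, not a model of any specific chip) is a population of units that each carry a noisy copy of the same signal: knock out a quarter of them and the aggregate answer barely moves.

```python
import numpy as np

rng = np.random.default_rng(2)

signal = 0.7
units = signal + rng.normal(0.0, 0.1, size=1000)  # 1,000 redundant noisy units

estimate_full = units.mean()

alive = rng.random(1000) > 0.25        # a 25% defect rate, as cited above
estimate_faulty = units[alive].mean()

print(estimate_full, estimate_faulty)  # nearly identical: graceful degradation
```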

The Crossnet approach

Many different design architectures are being pursued and developed, with varying degrees of brain-like architecture. Some chips, like Google's tensor processing unit, which powered DeepMind's much-lauded victory in Go, are proprietary. Plenty of designs for neuromorphic hardware can be found in the academic literature, though. Many designs use a pattern called a crossbar latch, which is a grid of nanowires connected by 'latching switches'. At Stony Brook University, Professor Konstantin K. Likharev has designed a neuromorphic network called the "Crossnet".

Generic Structure of a feedforward CrossNet

[The figure above depicts a layout, showing two 'somas', or circuits that simulate the basic functions of a neuron. The green circles play the role of synapses. From a presentation of K. K. Likharev, used with permission.]

One possible layout is shown above. Electronic devices called 'somas' play the role of the neuron's cell body, which is to add up the inputs and fire an output. In neuromorphic hardware, somas may mimic neurons with several different levels of sophistication, depending on what is required for the task at hand. For instance, somas may generate spikes (sequences of pulses) just like neurons in the brain. There is growing evidence that sequences of spikes in the brain carry more information than just the average firing rate alone, which previously had been considered the most important quantity. Spikes are carried through the two types of neural wires, axons and dendrites, which are represented by the red and blue lines in the figure. The green circles are connections between these wires that play the role of synapses. Each of these 'latching switches' must be able to hold a 'weight', which is encoded in either a variable capacitance or a variable resistance. In principle, memristors would be an ideal component here, if one could be developed that could be mass-produced. Crucially, all of the Crossnet architecture can be implemented in traditional silicon-based ("CMOS"-like) technology. Each crossnet (as shown in the figure) is designed so that it can be stacked, with additional wires connecting somas on different layers. In this way, neuromorphic crossnet technology can achieve component densities that rival the human brain.
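
For readers who want the soma's job in something more concrete, here is a minimal leaky integrate-and-fire step (a generic textbook model assumed for illustration, not Likharev's actual circuit): accumulate weighted inputs, fire past a threshold, reset.

```python
import numpy as np

def soma_step(v, inputs, weights, threshold=1.0, leak=0.95):
    """One time step of a leaky integrate-and-fire 'soma'.

    Returns the new membrane potential and 1 if a spike was fired, else 0."""
    v = leak * v + np.dot(weights, inputs)  # leak a little, add weighted inputs
    if v >= threshold:
        return 0.0, 1                       # fire a spike and reset
    return v, 0

rng = np.random.default_rng(3)
weights = rng.uniform(0.0, 0.3, size=8)  # synaptic weights (the 'latching switches')

v, spikes = 0.0, 0
for t in range(50):
    incoming = (rng.random(8) < 0.2).astype(float)  # sparse incoming spikes
    v, fired = soma_step(v, incoming, weights)
    spikes += fired

print(spikes, "spikes in 50 steps")
```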

Likharev's design is still theoretical, but there are already several neuromorphic chips in production, such as IBM's TrueNorth chip, which features spiking neurons, and Qualcomm's "Zeroth" project. NVIDIA is currently making major investments in deep learning hardware, and the next generation of NVIDIA devices dedicated to deep learning will likely look closer to neuromorphic chips than to traditional GPUs. Another important player is the startup Nervana Systems, which was recently acquired by Intel for $400 million. Many governments are investing large amounts of money into academic research on neuromorphic chips as well. Prominent examples include the EU's BrainScaleS project, the UK's SpiNNaker project, and DARPA's SyNAPSE program.

Near-future applications

Neuromorphic hardware will make deep learning orders of magnitude faster and more cost-effective, and thus will be the key driver behind enhanced AI in the areas of big data mining, character recognition, surveillance, robotic control, and driverless car technology. Because neuromorphic chips have low power consumption, it is conceivable that someday in the near future all cell phones will contain a neuromorphic chip performing tasks such as speech-to-text or translating road signs from foreign languages; currently, apps that perform deep learning tasks must connect to the cloud for the necessary computations. Low power consumption also makes neuromorphic chips attractive for military field robotics, which is currently limited by high power consumption that quickly drains batteries.

Cognitive architectures

According to Prof. Likharev, neuromorphic chips are the only current technology which can conceivably "mimic the mammalian cortex with practical power consumption". Prof. Likharev estimates that his own Crossnet technology can in principle implement the same number of neurons and connections as the brain on approximately 10 x 10 cm of silicon. Conceivably, production of such a chip will be practical in only a few years, as most of the requisite technologies are already in place. However, implementing a human-level AI or artificial general intelligence (AGI) with a neuromorphic chip will require much more than just creating the requisite number of neurons and connections. The human brain consists of thousands of interacting components or subnetworks. A collection of components and their pattern of connection is known as a 'cognitive architecture'. The cognitive architecture of the brain is largely unknown, but there are serious efforts underway to map it, most notably Obama's BRAIN Initiative and the EU's Human Brain Project, which has the ambitious (some say overambitious) goal of simulating the entire human brain in the next decade. Neuromorphic chips are perfectly suited to testing out different hypothetical cognitive architectures and simulating how cognitive architectures may change due to aging or disease. In principle, AGI could also be developed using an entirely different cognitive architecture that bears little resemblance to the human brain.

Conclusion

Considering how much money is being invested in neuromorphic chips, one can already see a path which leads to AGI. The major unknown is how long it will take for a suitable cognitive architecture to be developed. The fundamental physics of neuromorphic hardware is solid: such chips can mimic the brain in component density and power consumption, and run thousands of times faster. Even if some governments seek to ban the development of AGI, it will be realized by someone, somewhere. What happens next is a matter of intense speculation. If an AGI were capable of recursive self-improvement and had access to the internet, the results could be disastrous for humanity. As discussed by the philosopher Nick Bostrom and others, developing containment and 'constrainment' methods for AI is not as easy as merely 'installing a kill switch' or putting the hardware in a Faraday cage. Therefore, we had best start thinking hard about such issues now, before it is too late.


About the Author:

Dan Elton is a physics Ph.D. candidate at the Institute for Advanced Computational Science at Stony Brook University. He is currently looking for employment in the areas of machine learning and data science. In his spare time he enjoys writing about the effects of new technologies on society. He blogs at www.moreisdifferent.com and tweets at @moreisdifferent.


Further reading:

Monroe, Don. "Neuromorphic Computing Gets Ready for the (Really) Big Time." Communications of the ACM, Vol. 57, No. 6, pp. 13-15.

Filed Under: Op Ed Tagged With: AI, Artificial Intelligence, neuromorphic chips


