Michio Kaku on Singularity 1 on 1: Science is the Engine of Prosperity!


Dr. Michio Kaku is a theoretical physicist, bestselling author, acclaimed public speaker, renowned futurist, and popularizer of science. As co-founder of String Field Theory, Dr. Kaku carries on Einstein’s quest to unite the four fundamental forces of nature into a single grand unified theory of everything. You will not be surprised to hear that Michio Kaku has been on my guest dream-list since I started Singularity 1 on 1, and I was beyond ecstatic to finally have an opportunity to speak to him. During our 90 […]

Read more… →

Vitalik Buterin on Singularity 1 on 1: Ethereum is a Decentralized Consensus Platform


Digital crypto-currencies such as Bitcoin are the singularity of money and, after spending some time educating myself, I have turned from a skeptic into a fan. If you are not familiar with the topic then I suggest that you start with my interview with Andreas Antonopoulos. In that conversation we lay down the basics and discuss why “Bitcoin is not currency; it’s the internet of money!” If you are already intellectually comfortable with Bitcoin, then it is time to talk about Bitcoin 2.0 and the best candidate so […]

Read more… →

Mech: Human Trials [Short Sci-Fi Film]


After a serious accident, a man is introduced to a designer street drug promising to restore his ravaged body. Desperate to mend himself, he becomes consumed by the drug – only to discover it is threatening his humanity. “Mech: Human Trials” was written, directed and produced by Patrick Kalyn.

Read more… →

Stuart Armstrong: The future is going to be wonderful [If we don't get whacked by the existential risks]


Stuart Armstrong is a James Martin research fellow at the Future of Humanity Institute at Oxford, where he looks at issues such as existential risks in general and Artificial Intelligence in particular. Stuart is also the author of Smarter Than Us: The Rise of Machine Intelligence and, after participating in a fun futurist panel discussion with him – Terminator or Transcendence – I knew it was time to interview Armstrong on Singularity 1 on 1. During our conversation with Stuart we cover issues such as: his transition from hard […]

Read more… →

Practopoiesis: How cybernetics of biology can help AI


In creating any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product, and so, then, must be its products: perception, insight, inference, logic, mathematics, etc. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to resemble a human—e.g., to get a better grade on the Turing test—the more we need to copy the biology. […]

Read more… →

Steven Kotler: Flow is the doorway to more that most of us seek!


This is my second interview with Steven Kotler. Last time we discussed Abundance: The Future is Better than You Think – the book Steven co-wrote with Peter Diamandis. This time we are here to discuss reaching our optimum state of performance, or what many of us call Flow. During our conversation with Steven Kotler we cover issues such as: the definition of flow as an optimal state of performance; his profound personal experiences of being in flow; the likelihood of putting flow in a pill or video […]

Read more… →

The Hawking Fallacy Argued, A Personal Opinion


This article is a response to a piece written by Singularity Utopia (henceforth called SU) entitled The Hawking Fallacy. Briefly, the Hawking Fallacy is SU’s attempt to describe any negative or fearful reaction to strong artificial intelligence, also known as artificial general intelligence, as an irrational fear or a logical fallacy. SU further holds the opinion that when AGI eventuates it will almost certainly be benevolent and usher in a period of abundance and super-intelligence that will, within a very short time, result in the technological […]

Read more… →