

Nikola Danaylov at Dark Futures: NeoTechnocracy – The Future is Worse than You Think

December 15, 2019 by Socrates

This is the short closing speech I delivered at the 2019 Dark Futures meetup in Toronto. It is not my finest speech but, since event organizer and futurist Nikolas Badminton kindly gave me a video of my keynote, I thought it might be good to share it publicly and get your critical feedback.

Feel free to post your comments below.

Title: NeoTechnocracy: The Future is Worse than You Think

Description:  Technology is the new religion, Silicon Valley is the new chosen land and entrepreneurs are the new chosen people. They promise a future that is better than we think – a techno-heaven of abundance and, naturally, immortality. And we are all believers now.

But are we the masters, or are we the tools of our tools? Are we exhibiting religious fetishism for technological objects? Are we creating personality-cults around techno-prophets? Are we falling for new techno-religions – such as dataism? Is power in the hands of those behind, or those in front of the screen?

NeoTechnocracy: The Future is Worse than You Think

(speech content)

In 2016, my wife Julie and I took a road trip through California. Needless to say, Los Angeles and San Francisco were among our points of interest. Now, if you go by car, as we did, chances are that the very first thing you will see upon entering LA is the makeshift camps of tens of thousands of homeless Americans.

Well, four years before our trip, Peter Diamandis published his best-seller “Abundance” and told us that the future is better than we think. In it, Diamandis claimed that we can solve all of humanity’s grand challenges with enough capital, technology, and “the right people” – whom he dubbed the Technophilanthropists. And yet, there we were, in his hometown, in the one place in the world with the highest concentration of all of the above, and we witnessed shocking poverty, severe drought, environmental destruction, and crumbling infrastructure.

I was so shocked that I decided to do some research, only to get even more shocked when I discovered that, once you account for the cost of living, the “Golden State” of California is, in fact, America’s poorest state, with perhaps 1 out of 4 Californians living at or below the poverty line. So while California has the 5th largest economy in the world and the largest in the US, according to McKinsey it ranks 46th among the states for opportunity, 43rd for fiscal stability, and dead last for quality of living.

This paradoxical situation raises many important questions. For example: how is it that poorer countries such as Canada, with less access to advanced technology and far fewer billionaires, somehow end up with a happier, healthier, and longer-living population, free health care, lower crime rates, and less homelessness?

More importantly, is it a mere coincidence that the state with the most billionaires and the most advanced tech is also the poorest? 

And, finally, the blasphemous question: What if the future is worse than we think? How would we know it?

Well, we would know it by looking at the present.

We already saw that in California, abundance is a myth. And I have spoken in the past about how tech companies create scarcity to sell abundance while charging an arm and a leg, and about how they pretend to be solving humanity’s grand challenges. Take Facebook. Facebook is not solving humanity’s grand challenges. All it does is micro-target ads to sell you things. And so do Google, YouTube, Twitter, Instagram, Amazon, and most others. All in all, if you think about it, despite their noble rhetoric there is very little saving the world and a whole lot of selling going on. Which is why California itself is in the predicament it is in today.

This is the myth of the technophilanthropists – a few entrepreneurial nerds who save the world through technological revolution, while making trillions of dollars. But this revolution is not your grandfather’s revolution. Because this revolution is market-friendly. This revolution is one that venture capitalists can invest in. This revolution is led from the top, not from the bottom. And this revolution is for-profit. So its greatest accomplishment may turn out to be translating old-school consumerism into the digital realm.

But, of course, a revolution which merely replaces those on top is not a revolution at all – it’s a coup. Because there is no paradigm change. Thus, Silicon Valley gave us not only fake news but also fake revolution, fake change, fake friends, fake saving the world, fake ethics, fake privacy, fake freedom, and, as we can see in the streets of LA and San Fran – fake abundance.

The reality is that Big Tech is nothing more than a classic extractive industry. If in the 20th century the biggest companies were mining fossil fuels, today the biggest companies are mining data. And just as mining companies devastated our natural environment, today Big Tech is devastating our social environment. Just as in the 20th century terrible crimes – colonialism and sometimes genocide – were committed in the developing world, today we have data colonialism and, in places like Myanmar, genocide powered by Facebook – with 10,000 dead and a million refugees. That’s why Amnesty International says that Facebook and Google are a threat to human rights. And I say that the technophilanthropists are simply digital robber barons.

It used to be that biology was destiny. Today it may turn out that data is destiny. And, if it is indeed true that data is power, then absolute data about everything that we do may turn out to be the absolute power. Because as Big Tech collects the data, as they classify, trade and sell it, what they are selling is not mere data. What they are selling is us. They are selling our identity. They are selling our values, they are selling our hopes, they are selling our dreams, and they are selling our fears. They are selling our past. And they are selling our future. Ultimately, they are selling our power of choice and self-determination. With the hidden goal of making our stories work for them. Because they believe that they know what is best for us.

Elon Musk once said that whatever disseminates power enhances democracy, and whatever concentrates power undermines it. I say that this unparalleled concentration of power is pushing us towards neotechnocracy. Neotechnocracy where those who make the tech tell us what to see and not see, what books to read, what movies to watch, what to buy, who to have as our friends, where to go to school, where to live, where to work, whom to marry, who to believe, who to vote for, when to feel happy or sad. Because they are creating the greatest brain-washing propaganda machine the world has ever seen. And we are becoming a panopticon society where personal choice, privacy, and freedom are so threatened that even our thoughts are not likely to remain safe or private forever.

The neotechnocrats believe that all problems, including those created by technology, can and will be solved by more and better technology. And that they are the smartest and best people to solve them, while naturally making trillions of dollars.

That is the story of Silicon Valley. A story of idealism turned narcissism turned sociopathy. A story where, like Facebook, Big Tech started as magic, then it went manic, and now it is going monstrous. They say they want to save the world. I say they may end up destroying it. Because when ignorance, arrogance, and power converge you have a recipe not only for self-deception but also for self-destruction.

The stories I shared with you today are not about the future. They are about how things are – in the present. And they are an invitation to imagine how things may be different. In South America, the Indigenous Aymara people perceive the future as being behind them, and their word for the future means “behind time.” Because we can see the past right in front of us, but we can’t see the future. So they perceive the future as time coming from behind us and rushing into view in front of us, as the future becomes the present.

So telling you that the future is better than you think is just as ridiculous a claim as telling you that the future is worse than you think. Because the future is wide open, for it is not a place we arrive at. It is not like Disneyland – a trademarked property owned by a corporation. And, as long as we hold on to the idea that the future is something we arrive at, it will be owned by those who sell us that story.

Instead, the future is something that we all create. The future is a public good. It is a story that we all tell collectively. And it is neither worse nor better than we think. It simply is. Or, rather, will be. And, while telling that story, it may be good to remember Frank Herbert’s 1965 story about the people who turned their thinking over to machines in the hope that this would set them free, only to find themselves enslaved by other people with machines.

Many conferences talk about the fantastic possibilities created by new tech. Dark Futures is different. But please don’t be afraid!

Because fear is the mind killer. And because dark times are not necessarily hopeless times.

Please let your mind adjust to the darkness, even though it may want to run away.

Because the more hardships we face, the easier it will be to navigate the darkness, and the easier it will be to create a brighter future.

Thank you!

Filed Under: Op Ed Tagged With: Dark Futures, NeoTechnocracy, Nikola Danaylov

Former IBM Watson Team Leader David Ferrucci on AI and Elemental Cognition

December 15, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/727915300-singularity1on1-david-ferrucci.mp3


Dr. David Ferrucci is one of the few people who have set a benchmark in the history of AI, because when IBM Watson won Jeopardy, we reached a milestone many thought impossible. I was very privileged to have Ferrucci on my podcast in early 2012, when we spent an hour on Watson’s intricacies and importance. Well, it’s been almost 8 years since our original conversation and it was time to catch up with David to talk about the things that have happened in the world of AI, the things that were supposed to happen but didn’t, and our present and future in relation to Artificial Intelligence. All in all, I was super excited to have Ferrucci back on my podcast and hope you enjoy our conversation as much as I did.

During this 90 min interview with David Ferrucci, we cover a variety of interesting topics such as: his perspective on IBM Watson; AI, hype and human cognition; benchmarks on the singularity timeline; his move away from IBM to the biggest hedge fund in the world; Elemental Cognition and its goals, mission and architecture; Noam Chomsky and Marvin Minsky‘s skepticism of Watson; deductive, inductive and abductive learning; leading and managing from the architecture down; Black Box vs Open Box AI; CLARA – Collaborative Learning and Reading Agent and the best and worst applications thereof; the importance of meaning and whether AI can be the source of it; whether AI is the greatest danger humanity is facing today; why technology is a magnifying mirror; why the world is transformed by asking questions.

My favorite quote that I will take away from this conversation with David Ferrucci is:

Let our imagination drive the expectation for what AI is and what it does for us!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is David Ferrucci?

Dr. David Ferrucci is the CEO, Founder, and Chief Scientist of Elemental Cognition. Established in 2015, Elemental Cognition is an AI company focused on deep natural language understanding that explores methods of learning which result in explicable models of intelligence. Elemental Cognition’s mission is to change how machines learn, understand, and interact with humans. It envisions a world where AI can serve as a thought partner by building a shared understanding and revealing the ‘why’ behind its answers.

Dr. Ferrucci is the award-winning Artificial Intelligence Researcher who built and led the IBM Watson team from its inception through its landmark Jeopardy success in 2011. Dr. Ferrucci was awarded the title of IBM Fellow in 2011 and his work in AI earned numerous awards including the CME Innovation award and the AAAI Feigenbaum Prize. From 2011 through 2012, Dr. Ferrucci pioneered Watson’s applications which helped lay the technical foundation for the IBM Watson Division. After nearly 20 years at IBM research, Dr. Ferrucci joined Bridgewater Associates in 2013 to explore applications of AI in markets and management based on a synergy with Bridgewater’s deep commitment to explicable machine intelligence.

Dr. Ferrucci graduated from Rensselaer Polytechnic Institute with a Ph.D. in Computer Science. He has 50+ patents and published papers in the areas of AI, Automated Reasoning, NLP, Intelligent Systems Architectures, Automatic Text Generation, and Automatic Question-Answering. He led numerous projects prior to Watson including AI systems for manufacturing, configuration, document generation, and standards for large-scale text and multi-modal analytics. Dr. Ferrucci has keynoted in highly distinguished venues around the world including many of the top computing conferences. He has been interviewed by many media outlets on AI including: The New York Times, PBS, Financial Times, Bloomberg and the BBC. Dr. Ferrucci serves as an Adjunct Professor of Entrepreneurship and Innovation at Kellogg School of Management at Northwestern University.

 

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, David Ferrucci, Elemental Cognition, IBM Watson, Watson

Katy Cook on the Psychology of Silicon Valley

December 8, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/724153912-singularity1on1-katy-cook.mp3


Katy Cook‘s recent book The Psychology of Silicon Valley: Ethical Threats and Emotional Unintelligence in the Tech Industry is a must-read for anyone interested in understanding the conflicting motivations, mythologies, identities, and inherent tensions within Silicon Valley. It offers a unique understanding of why we have seen the magic, manic and monstrous trajectory of Big Tech, it catalogs what the impact has been, and it offers a way forward. All in all, I learned a ton from Katy Cook and loved having her on my podcast. In fact, I honestly feel that I didn’t do justice to how absolutely fantastic her book is, so I highly recommend that you simply go and grab a free copy of The Psychology of Silicon Valley and judge for yourselves.

During this 1 h 40 min interview with Katy Cook, we cover a variety of interesting topics such as: her original interest in the mental health effects of tech; her journey from being a counselor to studying psychology, sociology, the psychology of progress and ending up in ethics in tech; the relationship between power and empathy; her fantastic book the Psychology of Silicon Valley; the importance of socializing oneself without the mediation of a computer; emotional intelligence as a foundation for ethics; how origin stories and culture shape tech companies; why intelligence is a gift but compassion is a choice; why Katy decided to give away the electronic version of her book as a free open access; the treatment of workers by Big Tech such as Amazon; inequality as the best predictor for revolution; the importance of diversity; why Instagram is the most depressed and depressing platform; the vulnerability of young adults and children to social media; the Center for Technology Awareness that Katy co-founded.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Katy Cook?

Katy Cook is the author of The Psychology of Silicon Valley: Ethical Threats and Emotional Unintelligence in the Tech Industry, founder of the nonprofit Centre for Technology Awareness, and a consultant and speaker on ethics and technology. Katy holds a Ph.D. in Clinical, Educational, and Health Psychology from University College London, and Masters degrees in English and Psychology, and a BA in English Literature.

Filed Under: Podcasts Tagged With: Katy Cook, Psychology of Silicon Valley, Silicon Valley

Cathy O’Neil on Weapons of Math Destruction: How Big Data Threatens Democracy

September 25, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/686578525-singularity1on1-weapons-of-math-destruction.mp3


Cathy O’Neil is a math Ph.D. from Harvard and a data scientist who hopes someday to have a better answer to the question, “what can a non-academic mathematician do that makes the world a better place?” In the meantime, she wrote a seminal book titled Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. In my view, this is a must-read book for anyone who thinks that algorithms are by definition a fair and unbiased way to produce a given result. As O’Neil notes in her TED Talk: “the era of blind faith in big data must end.” (Yuval Harari calls this belief a new techno-religion – aka dataism.)

During this 90 min interview with Cathy O’Neil, we cover a variety of interesting topics such as: Cathy’s path to and love of Math; Manifest Destiny, American Exceptionalism and why we don’t count the dead With God On Our Side; how and why she became a hedge-fund quant; trusting and fearing the authority of math; why her book is titled Weapons of Math Destruction; Andrew “Boz” Bosworth’s ugly memo that Facebook’s actions were ‘de facto good’ – even if they led to deaths; Mark Zuckerberg’s good for the world but not good for Facebook email; the inherent biases and flaws of PredPol and other Minority Report type of predictive software; AI and the singularity; why intelligence is more than information retrieval; techno-solutionism and why technology is not enough; ethics and accountability; a Hippocratic oath for data scientists and engineers; why I believe that Instagram is among the worst weapons of math destruction; why technology is a magnifying mirror.

My favorite quotes that I will take away from Cathy O’Neil’s Weapons of Math Destruction are:

“Algorithms are opinions embedded in code”

“Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination. And that’s something only humans can provide.”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Cathy O’Neil?

Cathy O’Neil earned a Ph.D. in math from Harvard, was a postdoc at the MIT math department, and a professor at Barnard College where she published a number of research papers in arithmetic algebraic geometry. She then switched over to the private sector, working as a quant for the hedge fund D.E. Shaw in the middle of the credit crisis, and then for RiskMetrics, a risk software company that assesses risk for the holdings of hedge funds and banks. She left finance in 2011 and started working as a data scientist in the New York start-up scene, building models that predicted people’s purchases and clicks. She wrote Doing Data Science in 2013 and launched the Lede Program in Data Journalism at Columbia in 2014. She is a regular contributor to Bloomberg View and wrote the book Weapons of Math Destruction: how big data increases inequality and threatens democracy. She recently founded ORCAA, an algorithmic auditing company.

Filed Under: Podcasts Tagged With: Big Data, Cathy O'Neil, Weapons of Math Destruction

Technology is a Magnifying Mirror, Not a Crystal Ball

September 19, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/684356345-singularity1on1-magnifying-mirror.mp3


“Mirror, mirror on the wall, who’s the smartest species of them all?”
“You, oh Homo Sapiens, are smart, it is true. But AI will be smarter even than you.”
***

The most popular myth about technology is perhaps the myth that technology is a crystal ball. A crystal ball because it allegedly allows us to see the future. And to evaluate if that is indeed true, or not, we have to understand the etymology of the word technology – what it means and stands for, or at least what it used to mean and stand for.

The word technology comes from two Greek words – techne and logos. Techne means art, skill, craft, or the way, manner, or means by which a thing is gained. Logos means word, the utterance by which inward thought is expressed, a saying, or an expression. So, literally, technology means words or discourse about the way things are gained. In other words, technology is merely “how” we do things and not “why” we do them or “what” we should be doing. Because it is not an end in itself but rather merely a means to an end.

So technology is not a crystal ball because it does not help us see the future. Instead, technology is a magnifying mirror because it merely reflects our present and, more importantly, who we are.

Technology is a mirror because it reflects the engineers, designers, and programmers who make it. But it is also a mirror to humanity in general and all of our collective dreams, hopes and fears, our knowledge and our ignorance, our strengths, and weaknesses, our good, and our evil. But it is not a normal kind of mirror because technology magnifies and amplifies things – so it always has unforeseen consequences. And the key point here is that technology doesn’t have an essence of its own because it merely reflects our own essence.

So, instead of focusing exclusively on polishing the mirror – i.e. improving technology – we might want to invest some time and resources in improving the image we ourselves project in it – i.e. who we are being, what we are doing and why we are doing it.

Therefore, ultimately, it is not about technology. It’s about us.

Because, as I’ve said many times before, you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.

And there are many historical examples of how better technology made our lives not better but worse. For example, historian Yuval Noah Harari called the Agrarian Revolution “history’s greatest fraud.” [Because in every way measurable – i.e. health, longevity, work hours per week, nutrition, infant mortality, etc. – we were better off as hunter-gatherers.] And today, if we are not careful, we are running the risk that our current technological revolution may also turn out to be our epoch’s greatest fraud. And you can see that nowhere better than in Silicon Valley and Facebook.

Why Facebook? Because Facebook started as magic, then it became manic and, with the Cambridge Analytica revelations, we realized it has become monstrous. And it is not hard to see that most technologies we have invented since the industrial revolution either already follow a similar path from magic through manic to monstrous, or are in danger of doing that. Because humanity is magic, manic and monstrous. And technology reflects us. Examples abound but I can’t think of anything better than plastic.

You see, in the early 20th century plastic was literally marketed as the magic material, because you could make almost anything out of plastic, only cheaper, faster and easier. And so we quickly became manically obsessed with plastic and built almost everything out of it. But today it is not hard to see that we are neck-deep in the monstrous stage, because whole areas of our oceans contain more plastic pieces than fish. And, to give you a tiny example of just how bad it has become, check this out:

we now produce 1,000,000 plastic water bottles per minute on our planet.

What is worse is that, at best, only 9% ever get recycled. The other 910,000 plastic bottles per minute end up in the environment. And, of course, water bottles are but a tiny fraction of the total plastic production on our planet. So it is no surprise that we are literally drowning in this originally magic, then manic, and now monstrous technology. [Why would AI be any different?!]

So technology doesn’t help us see the future. It only helps us see ourselves. And if we put garbage in, we are going to get garbage out. Only this time it’s exponential. Ditto with stupidity, prejudice or evil.

Therefore, we can’t really fix technology unless we fix ourselves first. Because technology is a magnifying mirror, not a crystal ball.

Filed Under: Podcasts Tagged With: Magnifying mirror, Technology

Gary Marcus on Rebooting AI: Building Artificial Intelligence We Can Trust

September 9, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/678116274-singularity1on1-gary-marcus-rebooting-ai.mp3


It’s been 7 years since my first interview with Gary Marcus and I felt it was time to catch up. Gary is the youngest Professor Emeritus at NYU and I wanted to get his contrarian views on the major things that have happened in AI, as well as those that haven’t. Prof. Marcus is an interesting interviewee not only because he is an expert in the field but also because he is a skeptic about the current approaches and progress towards Artificial General Intelligence, yet an optimist that we will eventually figure it all out. I can honestly say that I have learned a lot from Gary and hope that you will too.

During this 90 min interview with Gary Marcus we cover a variety of interesting topics such as: Gary’s interest in the human mind, natural and artificial intelligence; Deep Mind’s victory in Go and what it does and doesn’t mean for AGI; the need for Rebooting AI; trusting AI and the AI chasms; Asimov’s Laws and Bostrom’s paper-clip-maximizing AI; the Turing Test and Ray Kurzweil’s singularity timeline; Mastering Go Without Human Knowledge; closed vs open systems; Chomsky, Minsky and Ferrucci on AGI; the limits of deep learning and the myth of the master algorithm; the problem of defining (artificial) intelligence; human and machine consciousness; the team behind and the mission of Robust AI.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Gary Marcus?

 

Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times bestseller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader.

Gary Marcus has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, co-authored with Ernest Davis, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Gary Marcus, Rebooting AI

Prof. Steve Fuller on Transhumanism: Ask yourself what is human?

August 25, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/670734185-singularity1on1-steve-fuller.mp3


Prof. Steve Fuller is the author of 25 books, including a trilogy relating to the idea of a ‘post-’ or ‘trans-’ human future and, most recently, Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Age. He has incredibly broad knowledge spanning a diversity of disciplines, and I have to admit that I had a total blast interviewing him. In fact, I feel we could have easily gone on for another 2 hours while still having fun. And so there is a great chance I will ask Prof. Fuller for another interview very soon indeed.

During this 2h 15 min interview with Steve Fuller we cover a variety of interesting topics such as: the social foundations of knowledge and our shared love of books; Transhumanism as a scientistic way of understanding who we are; the proactionary vs the precautionary principle; Pierre Teilhard de Chardin and the Omega Point; Julian and Aldous Huxley’s diverging takes on Transhumanism; David Pearce’s Hedonistic Imperative as a concept straight out of Brave New World; the concept and meaning of being human, transhuman and posthuman; humanity’s special place in the cosmos; my Socratic Test of (Artificial) Intelligence; Transhumanism as a materialist theology; Elon Musk, cosmism and populating Mars; de-extinction, genetics and the sociological elements of a given species; the greatest issues that humanity is facing today; AI, the Singularity and armed conflict; morphological freedom and becoming human; longevity and the Death is Wrong argument; Zoltan Istvan and the Transhumanist Wager; Transhumanism as a way of entrenching rather than transcending one’s original views…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Steve Fuller?

Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of Sociology at the University of Warwick, UK.

Originally trained in history, philosophy and sociology of science at Columbia, Cambridge and Pittsburgh, Fuller is best known for his foundational work in the field of ‘social epistemology’, which is the name of a quarterly journal that he founded in 1987 as well as the first of his nearly 25 books. From 2011 to 2014 he published a trilogy of books relating to the idea of a ‘post-’ or ‘trans-‘ human future, all published with Palgrave Macmillan: Humanity 2.0: What It Means to Be Human Past, Present and Future (2011), Preparing for Life in Humanity 2.0 (2012) and (with Veronika Lipinska) The Proactionary Imperative: A Foundation for Transhumanism (2014).

Prof. Fuller’s most recent books include Knowledge: The Philosophical Quest in History (Routledge 2015), The Academic Caesar (Sage 2016), Post-Truth: Knowledge as a Power Game (Anthem 2018) and most recently, Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Age (Schwabe 2019). His works have been translated into around thirty languages. He was awarded a D.Litt. by the University of Warwick in 2007 for sustained lifelong contributions to scholarship. He is also a Fellow of the Royal Society of Arts, the UK Academy of Social Sciences, and the European Academy of Sciences and Arts.

Filed Under: Podcasts Tagged With: Nietzschean Meditations, Steve Fuller, transhuman, transhumanism

Cory Doctorow on Walkaway: This will all be so great if we don’t screw it up

August 16, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/666768938-singularity1on1-cory-doctorow-walkaway.mp3


photo by Jonathan Worth

Cory Doctorow is probably my all-time favorite science fiction writer. The reason is simple: Doctorow is not only a great storyteller but also an activist. To paraphrase Karl Marx, writers have long tried to capture and describe the world; the point, however, is to change it. And Cory is a fantastic example of an author who doesn’t spend his life in solitude or at writing retreats. No. Doctorow jumps right into the trenches and is not afraid to get his hands dirty doing what is necessary and what is right. Needless to say, I was elated to have him back on my podcast but, if you haven’t seen his first interview, you may want to start here: Cory Doctorow on AI.

During today’s 90-minute interview with Cory Doctorow, we cover a variety of interesting topics such as: why Walkaway is an optimistic disaster novel; the history and concept of walkaway; elite panic and A Paradise Built in Hell; the purpose, function and necessity of the nation-state; modern monetary theory and the new green deal; exponential technology, post scarcity and abundance; the Economic Possibilities for our Grandchildren; Resisting Reduction, Transhumanism and immortality; Radicalized and our present moment; the biggest issues that our civilization is facing; AI, the singularity and technological unemployment; Ada Palmer, human agency, the past and the future; polarization and the scientific method; Karl Schroeder‘s tremendous impact on both Cory and me…

My two favorite quotes that I will take away from this interview with Cory Doctorow are:

Multiplicity is better than a singularity.

The reason to care about the destiny of technology and our civilization is not merely because getting it wrong will be terrible but also because getting it right will be amazing. There is so much more at stake than averting apocalypse. There is ushering in utopia.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Cory Doctorow?

Cory Doctorow (craphound.com) is a science fiction novelist, blogger and technology activist. He is the co-editor of the popular weblog Boing Boing (boingboing.net) and a contributor to many magazines, websites and newspapers. He is a special consultant to the Electronic Frontier Foundation (eff.org), a non-profit civil liberties group that defends freedom in technology law, policy, standards and treaties. He holds an honorary doctorate in computer science from the Open University (UK), where he is a Visiting Professor; he is also an MIT Media Lab Research Affiliate and a Visiting Professor of Practice at the University of South Carolina’s School of Library and Information Science. In 2007, he served as the Fulbright Chair at the Annenberg Center for Public Diplomacy at the University of Southern California.

His novels have been translated into dozens of languages and are published by Tor Books, Head of Zeus (UK), Titan Books (UK) and HarperCollins (UK). He has won the Locus, Prometheus, Copper Cylinder, White Pine and Sunburst Awards, and been nominated for the Hugo, Nebula and British Science Fiction Awards.

His recent books include RADICALIZED (2019) and WALKAWAY (2017), science fiction for adults; IN REAL LIFE, a young adult graphic novel created with Jen Wang (2014); and INFORMATION DOESN’T WANT TO BE FREE, a business book about creativity in the Internet age (2014).

His latest young adult novel is HOMELAND, the bestselling sequel to 2008’s LITTLE BROTHER. His New York Times Bestseller LITTLE BROTHER was published in 2008. His latest short story collection is WITH A LITTLE HELP, available in paperback, ebook, audiobook and limited edition hardcover. In 2011, Tachyon Books published a collection of his essays, called CONTEXT: FURTHER SELECTED ESSAYS ON PRODUCTIVITY, CREATIVITY, PARENTING, AND POLITICS IN THE 21ST CENTURY (with an introduction by Tim O’Reilly) and IDW published a collection of comic books inspired by his short fiction called CORY DOCTOROW’S FUTURISTIC TALES OF THE HERE AND NOW. THE GREAT BIG BEAUTIFUL TOMORROW, a PM Press Outspoken Authors chapbook, was also published in 2011.

LITTLE BROTHER was nominated for the 2008 Hugo, Nebula, Sunburst and Locus Awards. It won the Ontario Library White Pine Award, the Prometheus Award as well as the Indienet Award for bestselling young adult novel in America’s top 1000 independent bookstores in 2008; it was the San Francisco Public Library’s One City/One Book choice for 2013. It has also been adapted for stage by Josh Costello.

He co-founded the open source peer-to-peer software company OpenCola, and serves on the boards and advisory boards of the Participatory Culture Foundation, the Clarion Foundation, the Open Technology Fund and the Metabrainz Foundation.

Filed Under: Podcasts Tagged With: Cory Doctorow, Radicalized, transhumanism, Walkaway

Ex-Google Design Ethicist Tristan Harris on Technology and Human Downgrading

June 16, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/637450962-singularity1on1-tristan-harris.mp3


Tristan Harris is one of my heroes. And I don’t know about you, but I am much more demanding of and harder on my heroes. I simply expect them to hold themselves to a higher standard: to know more, to do more, to be more and, perhaps most of all, to live and breathe their own message every moment. So when a hero of mine falls short of my [unrealistic?!] hopes and expectations, I am rather disappointed, even heartbroken.

It starts with simple things, like having a fantastic podcast presciently titled Your Undivided Attention yet failing to walk your talk and provide your own undivided attention when being interviewed yourself: getting so distracted by the technology and people around you that I ended up repeating unanswered questions multiple times and had to edit a number of moments out of the final video. How can you ask people to give you their “undivided attention” if you are not willing or able to give your own when it’s your turn?

It proceeds with what seems to me a clear lack of understanding of what ethics means or stands for. [No, it is not about the extreme, niche case of the trolley dilemma, which predominantly computer scientists are concerned with and mistakenly believe ethics is about.] It peaks with a common Silicon Valley misconception about the very definition of technology, which in my view is often at the root of our consequent problems. And it winds down with a Ted Kaczynski misquote while admitting to never having read him.

It is inconsistencies and gaps like these that say a lot, in my view. And, unfortunately, I can keep going with a number of other examples. But my interview was never intended to diminish Tristan Harris or his work, especially since I completely agree with him on both the urgency and the importance of our current technological crisis. I also believe that Tristan Harris is a genuine, honest, humble, smart, eloquent and well-intentioned guy who has identified a huge problem and decided to devote his life to solving it. He has also managed to garner more public attention and bring more focus to the issue than almost anyone I know. Those are all commendable things, as well as a great foundation to build on. But, after dealing with Tristan’s team for nearly a year and after doing this interview, it seems to me that neither Tristan Harris nor his colleagues are yet the people they need to be in order to make the difference that they want to make. Of course, none of us is perfect, me least of all, so I remain hopeful that Tristan and the Center for Humane Technology will, in time, become the people they need to be to solve the fundamental problem they want to solve. Or else we may all be screwed.

It is also very likely that I simply did an extremely poor job not only of conducting this interview but also of connecting with, and especially reading, Tristan Harris. That is why I recommend you start by watching one of Tristan’s TED talks, together with his most recent Humane: A New Agenda for Tech presentation, which I have attached below, before you watch my interview with him. I believe those are much better examples of what he represents and stands for.

Who is Tristan Harris?

Called the “closest thing Silicon Valley has to a conscience” by The Atlantic magazine, Tristan Harris is a former Design Ethicist at Google. He is a world expert on how technology steers us all, having left Google to engage the issue publicly. Tristan spent over a decade understanding subtle psychological forces, from his childhood as a magician to working with the Stanford Persuasive Technology Lab, to his role as CEO of Apture, which was acquired by Google. He has been featured on 60 Minutes, TED, The Atlantic, the PBS NewsHour, and more. He has worked with major technology CEOs and briefed heads of state and other political leaders.

Tristan Harris on Singularity.FM

During my 70-minute interview with Tristan Harris, we cover a variety of interesting topics such as: Tristan’s magician background and the universal hackability of human nature; his studies at Stanford’s Persuasive Tech Lab; his journey to founding the Center for Humane Technology; high tech’s race down our brain stems and human downgrading; the definition and ethics of [persuasive] technology; Tristan’s biggest fear that tech is destroying our ability to see reality in shared ways, agree on the facts, coordinate and take action; why he believes that Silicon Valley is an existential threat; the dangers of being exponential; the possible solutions to our technological problems.

My favorite quote that I will take away from our conversation with Tristan Harris is this:

I want people to understand what’s happening and going wrong with technology as an interconnected system of harms. That we don’t have addiction or isolation happening separately from people believing in more conspiracy theories. There’s a relationship between people being more isolated and being more vulnerable to conspiracy theories on YouTube that are maximizing their attention. There’s a relationship between shorter attention spans and people only being able to say short brief things about an increasingly complex world that leads to more polarization. So there’s an interconnected system of harms that’s equivalent to social climate change that’s tilting the social fabric.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Filed Under: Podcasts Tagged With: Center for Humane Technology, Human Downgrading, Time Well Spent, Tristan Harris

Andreas Antonopoulos: Just because you don’t need bitcoin, doesn’t mean it’s not needed.

April 17, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/607331712-singularity1on1-andreas-antonopoulos-2.mp3


I first met Andreas Antonopoulos at the 2014 Bitcoin Expo conference in Toronto, Canada. At that time, Andreas was already established as the most publicly recognized expert in the field of cryptocurrency, not least due to his impressive capacity to take the geek out of bitcoin and make it relevant to, and understandable by, everyone. To top it off, Antonopoulos is extremely eloquent, has an impressively broad spectrum of knowledge and is an admitted disruptarian. No wonder my first interview with Andreas was so popular. Unfortunately, it took five years before I finally managed to get him back on my show, but I hope you enjoy it as much as I did because he is as brilliantly illuminating as ever.

During my 60-minute interview with Andreas Antonopoulos, we cover a variety of interesting topics such as: why he is first and foremost an educator and an author; his recent books Mastering Bitcoin and The Internet of Money; why we need stronger privacy in the bitcoin protocol; blockchain vs bitcoin; Mike Hearn’s claim that bitcoin has failed; power, influence and governance; crypto-exchanges, price manipulation and regulation; the Ethereum DAO hard fork; whether proof of stake is the future leading consensus mechanism; bitcoin’s energy consumption…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Andreas Antonopoulos?


Andreas Antonopoulos is a best-selling author, speaker, educator, and one of the world’s foremost bitcoin and open blockchain experts. He is known for delivering electric talks that combine economics, psychology, technology, and game theory with current events, personal anecdotes, and historical precedent, effortlessly translating the complex issues of blockchain technology out of the abstract and into the real world.

In 2014, Antonopoulos authored the groundbreaking book Mastering Bitcoin (O’Reilly Media), widely considered the best technical guide ever written about the technology. His second book, The Internet of Money, unveiled the “why” of bitcoin, became a bestseller on Amazon and led to the wildly successful follow-up, The Internet of Money Volume Two. His fourth book, Mastering Ethereum (O’Reilly Media), was published in December 2018.

He is a teaching fellow with the University of Nicosia, serves on the Oversight Committee for the Bitcoin Reference Rate at the Chicago Mercantile Exchange, and has appeared as an expert witness in hearings around the world, including the Australian Senate Banking Committee and the Canadian Senate Commerce, Banking and Finance Committee.

Filed Under: Podcasts

Nikola Danaylov at Devolutions HQ: Artificial Intelligence and the Future of Technology

March 23, 2019 by Socrates

Last week I was interviewed by Devolutions HQ. I enjoyed the interview a lot and thought Yann did a fantastic job editing it, so I decided to share it here in the hope that you can enjoy it too. But don’t shy away from criticism 😉

Finally, towards the end of the interview, we explain how you can be one of 3 people to receive a free copy of my book Conversations with the Future.

Show Notes:

In today’s episode, I have the pleasure of welcoming Nikola Danaylov, aka Socrates, to Devolutions HQ. He is an international bestselling author, keynote speaker, futurist, strategic adviser and philosopher. His book Conversations with the Future: 21 Visions for the 21st Century is a #1 Amazon bestseller, and his Singularity.FM is one of the most sought-after podcasts in the niche. During the show, we discuss all sorts of interesting IT-related topics, from a futuristic toothbrush to the importance of having a personal code of ethics when developing software. It’s a bit longer than usual, so feel free to minimize the screen and just listen to the audio.

Just so everyone is clear: we know that our audience comes from a wide variety of backgrounds, both religious and political, so the views, information or opinions expressed in this video are solely those of the individuals involved and do not necessarily represent those of Devolutions Inc. or its employees.

Timestamps:

Introduction – What Socrates was all about [2:17]

Question #1 – Explain what you mean by Technology is the How, not the Why or the What? [4:05]

Question #2 – Are there any dangers in relying too much on technology? [5:28]

Question #3 – What advice or encouragement would you like to give to young people going into the tech field? [10:12]

Question #4 – How do you think AI will affect or influence the IT workplace? [12:54]

Question #5 – What is the basic concept of the singularity? [20:11] – 17 Definitions of the Technological Singularity: https://www.singularityweblog.com/17-… [21:08]

Question #6 – What can we do to stay human during these technological advances? [27:02]

Book Giveaway – Tell us what you thought about the show below and we will randomly pick three winners to get a copy of Nik’s book! [32:37]

Filed Under: Video

Sir Martin Rees on the Future: Prospects for Humanity

March 12, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/588844359-singularity1on1-martin-rees.mp3


Today my guest is world-renowned cosmologist and Astronomer Royal Sir Martin Rees. Martin has written 11 books and more than 500 scientific papers on topics ranging from the Big Bang and cosmology to technology and the future of humanity. Sir Martin has also been concerned with the threats stemming from humanity’s ever-heavier ‘footprint’ on the global environment and with the runaway consequences of ever more powerful technologies. His new book, On the Future: Prospects for Humanity, addresses these issues.

During our 90-minute interview with Martin Rees, we cover a variety of interesting topics such as: why he is a scientist and teacher first and foremost; his interest in existential risks and policy; his journey from math to astronomy and cosmology; his environmental and nuclear concerns; the necessity for ethics in science; my interview with Lawrence Krauss; his greatest fear and biggest dream; AI and the Singularity; technological unemployment, UBI and taxation; the future of space exploration; the problem of consciousness; his bet with and differences from Steven Pinker; the major issues humanity is facing in the 21st century; the limits of science and a theory of everything.

My favorite quote that I will take away from Martin Rees’ book is:

“We need to think globally, we need to think rationally, we need to think long term, empowered by 21st-century technology but guided by values that science alone can’t provide.”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

On the Future: Prospects for Humanity [Book Trailers]

Humanity has reached a critical moment. Our world is unsettled and rapidly changing, and we face existential risks over the next century. Various prospects for the future—good and bad—are possible. Yet our approach to the future is characterized by short-term thinking, polarizing debates, alarmist rhetoric, and pessimism. In this short, exhilarating book, renowned scientist and bestselling author Martin Rees argues that humanity’s future depends on our taking a very different approach to thinking about and planning for tomorrow.

Who is Martin Rees?

Martin Rees is a cosmologist and space scientist. He is based in Cambridge, where he has been Director of the Institute of Astronomy, a Research Professor, and Master of Trinity College. He was President of the Royal Society (the academy of science for the UK and Commonwealth) from 2005 to 2010. In 2005 he was appointed to the UK’s House of Lords. He belongs to numerous foreign academies, including those of the US, Russia, Japan and the Vatican, and has received many international awards for his research, including the Balzan, Crafoord, Gruber and Templeton prizes. He writes and lectures extensively for general audiences and is the author of nine books. In addition to his involvement in international science and policy, he has been concerned with the threats stemming from humanity’s ever-heavier ‘footprint’ on the global environment and with the runaway consequences of ever more powerful technologies. His new book, On the Future: Prospects for Humanity, addresses these issues.

Speaking as both an astronomer and “a concerned member of the human race,” Sir Martin Rees examines our planet and its future from a cosmic perspective. He urges action to prevent dark consequences from our scientific and technological development.

A post-apocalyptic Earth, emptied of humans, seems like the stuff of science fiction TV and movies. But in this short, surprising talk, Lord Martin Rees asks us to think about our real existential risks — natural and human-made threats that could wipe out humanity. As a concerned member of the human race, he asks: What’s the worst thing that could possibly happen?

Filed Under: Podcasts, Profiles Tagged With: Martin Rees, On the Future, Prospects for Humanity
