

Katy Cook on the Psychology of Silicon Valley

December 8, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/724153912-singularity1on1-katy-cook.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

Katy Cook’s recent book The Psychology of Silicon Valley: Ethical Threats and Emotional Unintelligence in the Tech Industry is a must-read for anyone interested in understanding the conflicting motivations, mythologies, identities and inherent tensions within Silicon Valley. It offers a unique understanding of why we have seen the magic, manic and monstrous trajectory of Big Tech, it catalogs what the impact has been, and it offers a way forward. All in all, I learned a ton from Katy Cook and loved having her on my podcast. In fact, I honestly feel that I didn’t do justice to how absolutely fantastic her book is, so I highly recommend that you simply go and grab a free copy of The Psychology of Silicon Valley and judge for yourselves.

During this 1 hour 40 min interview with Katy Cook, we cover a variety of interesting topics such as: her original interest in the mental health effects of tech; her journey from being a counselor to studying psychology, sociology and the psychology of progress, and ending up in tech ethics; the relationship between power and empathy; her fantastic book The Psychology of Silicon Valley; the importance of socializing oneself without the mediation of a computer; emotional intelligence as a foundation for ethics; how origin stories and culture shape tech companies; why intelligence is a gift but compassion is a choice; why Katy decided to give away the electronic version of her book as free open access; the treatment of workers by Big Tech companies such as Amazon; inequality as the best predictor of revolution; the importance of diversity; why Instagram is the most depressed and depressing platform; the vulnerability of young adults and children to social media; the Centre for Technology Awareness that Katy co-founded.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Katy Cook?

Katy Cook is the author of The Psychology of Silicon Valley: Ethical Threats and Emotional Unintelligence in the Tech Industry, founder of the nonprofit Centre for Technology Awareness, and a consultant and speaker on ethics and technology. Katy holds a Ph.D. in Clinical, Educational, and Health Psychology from University College London, master’s degrees in English and Psychology, and a BA in English Literature.

Filed Under: Podcasts Tagged With: Katy Cook, Psychology of Silicon Valley, Silicon Valley

Cathy O’Neil on Weapons of Math Destruction: How Big Data Threatens Democracy

September 25, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/686578525-singularity1on1-weapons-of-math-destruction.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

Cathy O’Neil is a math Ph.D. from Harvard and a data scientist who hopes to someday have a better answer to the question, “what can a non-academic mathematician do that makes the world a better place?” In the meantime, she wrote a seminal book titled Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. In my view, this is a must-read book for anyone who thinks that algorithms are by definition a fair and unbiased way to produce a given result. As O’Neil notes in her TED Talk: “the era of blind faith in big data must end.” (Yuval Harari calls this belief a new techno-religion – aka dataism.)

During this 90 min interview with Cathy O’Neil, we cover a variety of interesting topics such as: Cathy’s path to and love of math; Manifest Destiny, American Exceptionalism and why we don’t count the dead With God On Our Side; how and why she became a hedge-fund quant; trusting and fearing the authority of math; why her book is titled Weapons of Math Destruction; Andrew “Boz” Bosworth’s ugly memo claiming that Facebook’s actions were ‘de facto good’ – even if they led to deaths; Mark Zuckerberg’s “good for the world but not good for Facebook” email; the inherent biases and flaws of PredPol and other Minority Report-style predictive software; AI and the singularity; why intelligence is more than information retrieval; techno-solutionism and why technology is not enough; ethics and accountability; a Hippocratic oath for data scientists and engineers; why I believe that Instagram is among the worst weapons of math destruction; why technology is a magnifying mirror.

My favorite quotes that I will take away from Cathy O’Neil’s Weapons of Math Destruction are:

“Algorithms are opinions embedded in code”

“Big Data processes codify the past. They do not invent the future. Doing that requires moral imagination. And that’s something only humans can provide.”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Cathy O’Neil?

Cathy O’Neil earned a Ph.D. in math from Harvard, was a postdoc at the MIT math department, and a professor at Barnard College where she published a number of research papers in arithmetic algebraic geometry. She then switched over to the private sector, working as a quant for the hedge fund D.E. Shaw in the middle of the credit crisis, and then for RiskMetrics, a risk software company that assesses risk for the holdings of hedge funds and banks. She left finance in 2011 and started working as a data scientist in the New York start-up scene, building models that predicted people’s purchases and clicks. She wrote Doing Data Science in 2013 and launched the Lede Program in Data Journalism at Columbia in 2014. She is a regular contributor to Bloomberg View and wrote the book Weapons of Math Destruction: how big data increases inequality and threatens democracy. She recently founded ORCAA, an algorithmic auditing company.

Filed Under: Podcasts Tagged With: Big Data, Cathy O'Neil, Weapons of Math Destruction

Technology is a Magnifying Mirror, Not a Crystal Ball

September 19, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/684356345-singularity1on1-magnifying-mirror.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

“Mirror, mirror on the wall, who’s the smartest species of them all?”
“You, oh Homo Sapiens, are smart, it is true. But AI will be smarter even than you.”
***

The most popular myth about technology is perhaps the myth that technology is a crystal ball. A crystal ball because it allegedly allows us to see the future. And to evaluate if that is indeed true, or not, we have to understand the etymology of the word technology – what it means and stands for, or at least what it used to mean and stand for.

The word technology comes from two Greek words – techne and logos. Techne means art, skill, craft, or the way, manner, or means by which a thing is gained. Logos means word, the utterance by which inward thought is expressed, a saying, or an expression. So, literally, technology means words or discourse about the way things are gained. In other words, technology is merely “how” we do things and not “why” we do them or “what” we should be doing. Because it is not an end in itself but rather merely a means to an end.

So technology is not a crystal ball because it does not help us see the future. Instead, technology is a magnifying mirror because it merely reflects our present and, more importantly, who we are.

Technology is a mirror because it reflects the engineers, designers, and programmers who make it. But it is also a mirror to humanity in general and all of our collective dreams, hopes and fears, our knowledge and our ignorance, our strengths and weaknesses, our good and our evil. But it is not a normal kind of mirror, because technology magnifies and amplifies things – so it always has unforeseen consequences. And the key point here is that technology doesn’t have an essence of its own because it merely reflects our own essence.

So, instead of focusing exclusively on polishing the mirror – i.e. improving technology, we might want to invest some time and resources on improving the image we ourselves project in it – i.e. who we are being, what we are doing and why we are doing it.

Therefore, ultimately, it is not about technology. It’s about us.

Because, as I’ve said many times before, you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.

And there are many historical examples of how better technology did not make our lives better but worse. For example, historian Yuval Noah Harari called the Agrarian Revolution “history’s greatest fraud.” [Because in every measurable way – i.e. health, longevity, work hours per week, nutrition, infant mortality, etc. – we were better off as hunter-gatherers.] And today, if we are not careful, we are running the risk that our current technological revolution may also turn out to be our epoch’s greatest fraud. And you can see that nowhere better than in Silicon Valley and Facebook.

Why Facebook? Because Facebook started as magic, then it became manic and, with the Cambridge Analytica revelations, we realized it has become monstrous. And it is not hard to see that most technologies we have invented since the industrial revolution either already follow a similar path from magic through manic to monstrous, or are in danger of doing that. Because humanity is magic, manic and monstrous. And technology reflects us. Examples abound but I can’t think of anything better than plastic.

You see, in the early 20th century plastic was literally marketed as the magic material, because you could make almost anything out of plastic cheaper, faster and easier. And so we quickly became manically obsessed with plastic and built almost everything out of it. But today it is not hard to see that we are neck-deep in the monstrous stage, because whole areas of our oceans contain more plastic pieces than fish. And, to give you a tiny example of just how bad it has become, check this out:

we now produce 1,000,000 plastic water bottles per minute on our planet.

What is worse is that, at best, only 9% ever get recycled. The other 910,000 plastic bottles per minute end up in the environment. And, of course, water bottles are but a tiny fraction of the total plastic production on our planet. So it is no surprise that we are literally drowning in this originally magic, then manic and now monstrous technology. [Why would AI be any different?!]
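For what it’s worth, the bottle arithmetic above is easy to check. A minimal back-of-the-envelope sketch (the per-minute rate and 9% recycling share are the figures quoted in the text; the yearly extrapolation is my own assumption):

```python
# Back-of-the-envelope check of the plastic bottle figures quoted above.
bottles_per_minute = 1_000_000   # global production rate cited in the text
recycled_share = 0.09            # at best, 9% ever get recycled

recycled = int(bottles_per_minute * recycled_share)
unrecycled = bottles_per_minute - recycled

print(recycled)    # 90000 bottles per minute recycled
print(unrecycled)  # 910000 bottles per minute end up in the environment

# Extrapolated to a full year (my own assumption, not a figure from the text):
minutes_per_year = 60 * 24 * 365
print(unrecycled * minutes_per_year)  # 478296000000 – roughly 478 billion a year
```

Even granting the most optimistic recycling rate, the unrecycled remainder dwarfs the recycled fraction by an order of magnitude.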

So technology doesn’t help us see the future. It only helps us see ourselves. And if we put garbage in, we are going to get garbage out. Only this time it’s exponential. Ditto with stupidity, prejudice or evil.

Therefore, we can’t really fix technology unless we fix ourselves first. Because technology is a magnifying mirror, not a crystal ball.

Filed Under: Podcasts Tagged With: Magnifying mirror, Technology

Gary Marcus on Rebooting AI: Building Artificial Intelligence We Can Trust

September 9, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/678116274-singularity1on1-gary-marcus-rebooting-ai.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

It’s been 7 years since my first interview with Gary Marcus and I felt it was time to catch up. Gary is perhaps the youngest Professor Emeritus at NYU and I wanted to get his contrarian views on the major things that have happened in AI as well as those that haven’t happened. Prof. Marcus is an interesting interviewee not only because he is an expert in the field but also because he is a skeptic of the current approaches and progress towards Artificial General Intelligence, yet an optimist that we will eventually figure it all out. I can honestly say that I have learned a lot from Gary and hope that you will too.

During this 90 min interview with Gary Marcus we cover a variety of interesting topics such as: Gary’s interest in the human mind, natural and artificial intelligence; Deep Mind’s victory in Go and what it does and doesn’t mean for AGI; the need for Rebooting AI; trusting AI and the AI chasms; Asimov’s Laws and Bostrom’s paper-clip-maximizing AI; the Turing Test and Ray Kurzweil’s singularity timeline; Mastering Go Without Human Knowledge; closed vs open systems; Chomsky, Minsky and Ferrucci on AGI; the limits of deep learning and the myth of the master algorithm; the problem of defining (artificial) intelligence; human and machine consciousness; the team behind and the mission of Robust AI.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Gary Marcus?

Gary Marcus is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times bestseller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader.

Gary Marcus has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, co-authored with Ernest Davis, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Gary Marcus, Rebooting AI

Prof. Steve Fuller on Transhumanism: Ask yourself what is human?

August 25, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/670734185-singularity1on1-steve-fuller.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

Prof. Steve Fuller is the author of 25 books, including a trilogy relating to the idea of a ‘post-’ or ‘trans-’ human future and, most recently, Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Age. He has incredibly broad knowledge spanning a diversity of disciplines, and I have to admit that I had a total blast interviewing him. In fact, I feel we could have easily gone for another 2 hours while still having fun. And so there is a great chance I will ask Prof. Fuller for another interview very soon indeed.

During this 2h 15 min interview with Steve Fuller we cover a variety of interesting topics such as: the social foundations of knowledge and our shared love of books; Transhumanism as a scientistic way of understanding who we are; the proactionary vs the precautionary principle; Pierre Teilhard de Chardin and the Omega Point; Julian and Aldous Huxley’s diverging takes on Transhumanism; David Pearce’s Hedonistic Imperative as a concept straight out of Brave New World; the concept and meaning of being human, transhuman and posthuman; humanity’s special place in the cosmos; my Socratic Test of (Artificial) Intelligence; Transhumanism as a materialist theology; Elon Musk, cosmism and populating Mars; de-extinction, genetics and the sociological elements of a given species; the greatest issues that humanity is facing today; AI, the Singularity and armed conflict; morphological freedom and becoming human; longevity and the Death is Wrong argument; Zoltan Istvan and the Transhumanist Wager; Transhumanism as a way of entrenching rather than transcending one’s original views…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Steve Fuller?

Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of Sociology at the University of Warwick, UK.

Originally trained in history, philosophy and sociology of science at Columbia, Cambridge and Pittsburgh, Fuller is best known for his foundational work in the field of ‘social epistemology’, which is the name of a quarterly journal that he founded in 1987 as well as the first of his nearly 25 books. From 2011 to 2014 he published a trilogy of books relating to the idea of a ‘post-’ or ‘trans-‘ human future, all published with Palgrave Macmillan: Humanity 2.0: What It Means to Be Human Past, Present and Future (2011), Preparing for Life in Humanity 2.0 (2012) and (with Veronika Lipinska) The Proactionary Imperative: A Foundation for Transhumanism (2014).

Prof. Fuller’s most recent books include Knowledge: The Philosophical Quest in History (Routledge 2015), The Academic Caesar (Sage 2016), Post-Truth: Knowledge as a Power Game (Anthem 2018) and most recently, Nietzschean Meditations: Untimely Thoughts at the Dawn of the Transhuman Age (Schwabe 2019). His works have been translated into around thirty languages. He was awarded a D.Litt. by the University of Warwick in 2007 for sustained lifelong contributions to scholarship. He is also a Fellow of the Royal Society of Arts, the UK Academy of Social Sciences, and the European Academy of Sciences and Arts.

Filed Under: Podcasts Tagged With: Nietzschean Meditations, Steve Fuller, transhuman, transhumanism

Cory Doctorow on Walkaway: This will all be so great if we don’t screw it up

August 16, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/666768938-singularity1on1-cory-doctorow-walkaway.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

photo by Jonathan Worth

Cory Doctorow is probably my all-time favorite science fiction writer. The reason for that is simple – Doctorow is not only a great storyteller but also an activist. To paraphrase Karl Marx, writers have tried to capture and describe the world; the point, however, is to change it. And Cory is a fantastic example of an author who doesn’t spend his life in solitude or on writing retreats. No. Doctorow jumps right into the trenches and is not afraid to get his hands dirty doing what is necessary and what is right. Needless to say, I was elated to have him back on my podcast but, if you haven’t seen his 1st interview, you may want to start here: Cory Doctorow on AI.

During today’s 90-minute interview with Cory Doctorow, we cover a variety of interesting topics such as: why Walkaway is an optimistic disaster novel; the history and concept of walkaway; elite panic and A Paradise Built in Hell; the purpose, function and necessity of the nation-state; modern monetary theory and the new green deal; exponential technology, post scarcity and abundance; the Economic Possibilities for our Grandchildren; Resisting Reduction, Transhumanism and immortality; Radicalized and our present moment; the biggest issues that our civilization is facing; AI, the singularity and technological unemployment; Ada Palmer, human agency, the past and the future; polarization and the scientific method; Karl Schroeder‘s tremendous impact on both Cory and me…

My 2 favorite quotes that I will take away from this interview with Cory Doctorow are:

Multiplicity is better than a singularity.

The reason to care about the destiny of technology and our civilization is not merely because getting it wrong will be terrible but also because getting it right will be amazing. There is so much more at stake than averting apocalypse. There is ushering in utopia.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Cory Doctorow?

Cory Doctorow (craphound.com) is a science fiction novelist, blogger and technology activist. He is the co-editor of the popular weblog Boing Boing (boingboing.net), and a contributor to many magazines, websites and newspapers. He is a special consultant to the Electronic Frontier Foundation (eff.org), a non-profit civil liberties group that defends freedom in technology law, policy, standards and treaties. He holds an honorary doctorate in computer science from the Open University (UK), where he is a Visiting Professor; he is also an MIT Media Lab Research Affiliate and a Visiting Professor of Practice at the University of South Carolina’s School of Library and Information Science. In 2007, he served as the Fulbright Chair at the Annenberg Center for Public Diplomacy at the University of Southern California.

His novels have been translated into dozens of languages and are published by Tor Books, Head of Zeus (UK), Titan Books (UK) and HarperCollins (UK). He has won the Locus, Prometheus, Copper Cylinder, White Pine and Sunburst Awards, and been nominated for the Hugo, Nebula and British Science Fiction Awards.

His recent books include RADICALIZED (2019) and WALKAWAY (2017), science fiction for adults; IN REAL LIFE, a young adult graphic novel created with Jen Wang (2014); and INFORMATION DOESN’T WANT TO BE FREE, a business book about creativity in the Internet age (2014).

His latest young adult novel is HOMELAND, the bestselling sequel to 2008’s LITTLE BROTHER. His New York Times Bestseller LITTLE BROTHER was published in 2008. His latest short story collection is WITH A LITTLE HELP, available in paperback, ebook, audiobook and limited edition hardcover. In 2011, Tachyon Books published a collection of his essays, called CONTEXT: FURTHER SELECTED ESSAYS ON PRODUCTIVITY, CREATIVITY, PARENTING, AND POLITICS IN THE 21ST CENTURY (with an introduction by Tim O’Reilly) and IDW published a collection of comic books inspired by his short fiction called CORY DOCTOROW’S FUTURISTIC TALES OF THE HERE AND NOW. THE GREAT BIG BEAUTIFUL TOMORROW, a PM Press Outspoken Authors chapbook, was also published in 2011.

LITTLE BROTHER was nominated for the 2008 Hugo, Nebula, Sunburst and Locus Awards. It won the Ontario Library White Pine Award, the Prometheus Award as well as the Indienet Award for bestselling young adult novel in America’s top 1000 independent bookstores in 2008; it was the San Francisco Public Library’s One City/One Book choice for 2013. It has also been adapted for stage by Josh Costello.

He co-founded the open source peer-to-peer software company OpenCola, and serves on the boards and advisory boards of the Participatory Culture Foundation, the Clarion Foundation, the Open Technology Fund and the Metabrainz Foundation.

Filed Under: Podcasts Tagged With: Cory Doctorow, Radicalized, transhumanism, Walkaway

Nikola Danaylov on Universal Grammar, Language and AI

June 26, 2019 by Socrates

This is a 2016 interview I did for Tobias Martens discussing a variety of topics including Universal Grammar, language, AI, and the singularity. While my ideas have evolved since I did this interview, I don’t think that there have been any fundamental changes. So, given that I believe there is still merit in the ideas Tobias and I discussed, I thought it would be good to share this publicly for your feedback.

Here is Tobias Martens’ original write-up for this interview:

Language singularity: Make Alexa and Siri talk with each other!

By Tobias Martens tm@whoelse.ai

For my master’s thesis on Jeremy Rifkin’s theory of near-zero marginal cost societies, I contacted Nikola in 2016 to interview him about an idea: a brand that is self-explanatory to every kind of Internet user through a universally understood grammar.

“Internet in a child-like language”, “universal language for AI”, and “a simplified programming principle for human language” were my early attempts at explaining my thoughts about the concept of a file format based on “who else?” relationships in language.

My theory: if Internet services can be explained to users more easily in a simplified grammar based on “who else?” questions, e.g. “Who else needs a ride-share?” (=UBER), “Who else looks for an apartment rental?” (=AirBnB), “Who else wants to go on a date?” (=Tinder), then maybe AIs could also communicate better among each other via a standardized vocabulary serving as a language protocol.
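The question-to-service mapping in that theory can be sketched as a toy resolver. To be clear, this is purely illustrative and my own assumption of what such a “who else?” vocabulary might look like in code; it does not reflect the actual whoelse.ai protocol or the DIN Spec draft:

```python
# Toy sketch of the "who else?" grammar idea: resolve a question phrased
# in the simplified grammar to a known service. Purely illustrative; the
# mapping mirrors the examples in the text, and the function name and
# data structure are hypothetical, not part of whoelse.ai or the DIN Spec.

WHO_ELSE_INTENTS = {
    "needs a ride-share": "UBER",
    "looks for an apartment rental": "AirBnB",
    "wants to go on a date": "Tinder",
}

def who_else(question):
    """Map a 'Who else ...?' question to a service name, or None."""
    q = question.strip().rstrip("?")
    prefix = "Who else "
    if not q.startswith(prefix):
        return None  # not phrased in the "who else?" grammar
    return WHO_ELSE_INTENTS.get(q[len(prefix):])

print(who_else("Who else needs a ride-share?"))  # UBER
print(who_else("What time is it?"))              # None
```

The point of such a sketch is only that a constrained, standardized grammar makes intent resolution trivial, for humans and machines alike.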

Nikola suggested publishing our conversation, but I decided to wait. Some of the considerations and theories felt too spontaneously thought up.

In 2018 we turned the idea into the whoelse.ai project. Currently, we are working together with the German Institute for Standardization (DIN) to formulate the first-ever standardization proposal for language explainability and NLP compatibility.

We believe the DIN Spec consortium is an exciting opportunity for AI developers and IoT manufacturers alike. The increasing number of use cases for voice-based interfaces makes Voice Internet interoperability and address protocols not only viable but necessary.

Over the next couple of months, we will consolidate input from the AI research and industry user communities. Today we start by re-posting the original conversation between me and Nikola.

We plan to publish a draft version of the DIN Spec at O’Reilly AI Europe in October 2019. This post is an invitation to collaborate. Join us at whoelse.ai and reach out: tm@whoelse.ai

Filed Under: What if?

Ex-Google Design Ethicist Tristan Harris on Technology and Human Downgrading

June 16, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/637450962-singularity1on1-tristan-harris.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

Tristan Harris is one of my heroes. And I don’t know about you, but I am much more demanding of and harder on my heroes. I just expect them to hold themselves to a higher standard, to know more, to do more, to be more and, perhaps most of all, to live and breathe their own message every moment. So when a hero of mine falls short of my [unrealistic?!] hopes and expectations, I am rather disappointed, even heartbroken.

It starts with simple things, like having a fantastic podcast presciently titled Your Undivided Attention yet failing to walk your talk and provide your own undivided attention when being interviewed yourself. Getting so distracted by the technology and people around you that I ended up repeating unanswered questions multiple times and having to edit a number of moments out of the final video. How can you ask people to give you their “undivided attention” if you are not willing or able to give your own “undivided” attention when it’s your turn?

It proceeds with what seems to me a clear lack of understanding of what ethics means or stands for. [No, it is not about the extreme, niche case of the trolley dilemma, which predominantly computer scientists are concerned with and mistakenly believe ethics is about.] It peaks with a common Silicon Valley misconception about the very definition of technology, which in my view is often at the root of our consequent problems. And it winds down with a Ted Kaczynski misquote alongside an admission of never having read him.

It is inconsistencies and gaps like that which say a lot in my view. And, unfortunately, I can keep going with a number of other examples. But my interview was never intended to diminish Tristan Harris or his work. Especially since I completely agree with him on both the urgency and the importance of our current technological crisis. I also believe that Tristan Harris is a genuine, honest, humble, smart, eloquent and well-intentioned guy who has identified a huge problem and decided to devote his life to solving it. He has also managed to garner more public attention and bring more focus to the issue than almost anyone I know. And those are all commendable things, as well as a great foundation to build on. But, after dealing with Tristan’s team for nearly a year and after doing this interview, it seems to me that neither Tristan Harris nor his colleagues are yet the people they have to be in order to make the difference that they want to make. Of course, none of us is perfect, me least of all, and so I remain hopeful that Tristan and the Center for Humane Technology will, in time, become the people that they have to be to solve the fundamental problem they want to solve. Or else we may all be screwed.

It is also very likely that I simply did an extremely poor job not only at conducting this interview but also at connecting with and especially reading Tristan Harris. That is why I recommend that you start by watching one of Tristan’s TED talks together with his most recent Humane: A New Agenda for Tech presentation, which I have attached below, before you watch my interview with him. I believe that those are much better examples of what he represents and stands for.

Who is Tristan Harris?

Called the “closest thing Silicon Valley has to a conscience” by The Atlantic, Tristan Harris is a former Design Ethicist at Google. He is a world expert on how technology steers us all, having left Google to engage the issue publicly. Tristan spent over a decade understanding subtle psychological forces, from his childhood as a magician, to working with the Stanford Persuasive Technology Lab, to his role as CEO of Apture, which was acquired by Google. He has been featured on 60 Minutes, TED, the PBS NewsHour, and more. He has worked with major technology CEOs and briefed Heads of State and other political leaders.

Tristan Harris on Singularity.FM

During my 70-minute interview with Tristan Harris, we cover a variety of interesting topics such as: Tristan’s magician background and the universal hackability of human nature; his studies at Stanford’s Persuasive Tech Lab; his journey to founding the Center for Humane Technology; high tech’s race down our brain stems and human downgrading; the definition and ethics of [persuasive] technology; Tristan’s biggest fear that tech is destroying our ability to see reality in shared ways, agree on the facts, coordinate and take action; why he believes that Silicon Valley is an existential threat; the dangers of being exponential; the possible solutions to our technological problems.

My favorite quote that I will take away from our conversation with Tristan Harris is this:

I want people to understand what’s happening and going wrong with technology as an interconnected system of harms. That we don’t have addiction or isolation happening separately from people believing in more conspiracy theories. There’s a relationship between people being more isolated and being more vulnerable to conspiracy theories on YouTube that are maximizing their attention. There’s a relationship between shorter attention spans and people only being able to say short brief things about an increasingly complex world that leads to more polarization. So there’s an interconnected system of harms that’s equivalent to social climate change that’s tilting the social fabric.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Filed Under: Podcasts Tagged With: Center for Humane Technology, Human Downgrading, Time Well Spent, Tristan Harris

Andreas Antonopoulos: Just because you don’t need bitcoin, doesn’t mean it’s not needed.

April 17, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/607331712-singularity1on1-andreas-antonopoulos-2.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

I first met Andreas Antonopoulos at the 2014 Bitcoin Expo conference in Toronto, Canada. At that time Andreas was already established as the most publicly recognized expert in the field of cryptocurrency, not least due to his impressive capacity to take the geek out of bitcoin and make it relevant to and understandable by everyone. To top it off, Antonopoulos is extremely eloquent, has an impressively broad spectrum of knowledge and is an admitted disruptarian. No wonder that my first interview with Andreas was so popular. Unfortunately, it took five years before I finally managed to get him back on my show, but I hope you enjoy it as much as I did because he is as brilliantly illuminating as ever.

During my 60-minute interview with Andreas Antonopoulos, we cover a variety of interesting topics such as: why he is first and foremost an educator and an author; his recent books Mastering Bitcoin and The Internet of Money; why we need stronger privacy in the bitcoin protocol; blockchain vs bitcoin; Mike Hearn’s claim that bitcoin has failed; power, influence and governance; crypto-exchanges, price manipulation and regulation; the Ethereum DAO hard fork; whether proof of stake is the future leading consensus mechanism; bitcoin’s energy consumption…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Andreas Antonopoulos?

 

Andreas Antonopoulos is a best-selling author, speaker, educator, and one of the world’s foremost bitcoin and open blockchain experts. He is known for delivering electric talks that combine economics, psychology, technology, and game theory with current events, personal anecdote, and historical precedent, effortlessly translating the complex issues of blockchain technology out of the abstract and into the real world.

In 2014, Antonopoulos authored the groundbreaking book, Mastering Bitcoin (O’Reilly Media), widely considered to be the best technical guide ever written about the technology. His second book, The Internet of Money, unveiled the “why” of bitcoin—and became a bestseller on Amazon— and led to the wildly successful follow-up The Internet of Money Volume Two. His fourth book, Mastering Ethereum (O’Reilly Media) was published in December of 2018.

He is a teaching fellow with the University of Nicosia, serves on the Oversight Committee for the Bitcoin Reference Rate at the Chicago Mercantile Exchange, and has appeared as an expert witness in hearings around the world, including the Australian Senate Banking Committee and the Canadian Senate Commerce, Banking and Finance Committee.

Filed Under: Podcasts

Nikola Danaylov at Devolutions HQ: Artificial Intelligence and the Future of Technology

March 23, 2019 by Socrates

Last week I got interviewed by Devolutions HQ. I enjoyed the interview a lot and thought Yann did a fantastic job editing it. So I decided to share it with you. This way, hopefully, you can enjoy it too. But don’t shy away from criticism 😉

Finally, towards the end of the interview, we explain how you can be one of 3 people to receive a free copy of my book Conversations with the Future.

Show Notes:

In today’s episode, I have the pleasure of welcoming Nikola Danaylov, aka Socrates, to Devolutions HQ. He’s an international bestselling author, keynote speaker, futurist, strategic adviser, and philosopher. His book Conversations with the Future: 21 Visions for the 21st Century is a #1 Amazon Bestseller and his Singularity.FM is one of the most widely sought-out podcasts in the niche. During the show, we discuss all sorts of interesting IT-related topics, whether it is a futuristic toothbrush or the importance of having a personal code of ethics when developing software. It’s a bit longer than usual, so feel free to minimize the screen and just listen to the audio.

Just so everyone is clear: we know that our audience comes from a wide variety of backgrounds, both religious and political, so the views, information, or opinions expressed in this video are solely those of the individuals involved and do not necessarily represent those of Devolutions Inc. or its employees.

Timestamps:

Introduction – What Socrates was all about [2:17]

Question #1 – Explain what you mean by Technology is the How, not the Why or the What? [4:05]

Question #2 – Are there any dangers in relying too much on technology? [5:28]

Question #3 – What advice or encouragement would you like to give to young people going into the tech field? [10:12]

Question #4 – How do you think AI will affect or influence the IT workplace? [12:54]

Question #5 – What is the basic concept of the singularity? [20:11] – 17 Definitions of the Technological Singularity: https://www.singularityweblog.com/17-… [21:08]

Question #6 – What can we do to stay human during these technological advances? [27:02]

Book Giveaway – Tell us what you thought about the show below and we will randomly pick three winners to get a copy of Nik’s book! [32:37]

Filed Under: Video

Sir Martin Rees on the Future: Prospects for Humanity

March 12, 2019 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/588844359-singularity1on1-martin-rees.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | Android | RSS

Today my guest is world-renowned cosmologist and Astronomer Royal Sir Martin Rees. Martin has written 11 books and more than 500 scientific papers on topics ranging from the Big Bang and cosmology to technology and the future of humanity. Sir Martin has also been concerned with the threats stemming from humanity’s ever-heavier ‘footprint’ on the global environment and with the runaway consequences of ever more powerful technologies. His new book On the Future: Prospects for Humanity addresses these issues.

During my 90-minute interview with Martin Rees, we cover a variety of interesting topics such as: why he is a scientist and teacher first and foremost; his interest in existential risks and policy; his journey from math to astronomy and cosmology; his environmental and nuclear concerns; the necessity for ethics in science; my interview with Lawrence Krauss; his greatest fear and biggest dream; AI and the Singularity; technological unemployment, UBI and taxation; the future of space exploration; the problem of consciousness; his bet with and differences from Steven Pinker; the major issues humanity is facing in the 21st century; the limits of science and a theory of everything.

My favorite quote that I will take away from Martin Rees’ book is:

“We need to think globally, we need to think rationally, we need to think long term, empowered by 21st-century technology but guided by values that science alone can’t provide.”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

On the Future: Prospects for Humanity [Book Trailers]

Humanity has reached a critical moment. Our world is unsettled and rapidly changing, and we face existential risks over the next century. Various prospects for the future—good and bad—are possible. Yet our approach to the future is characterized by short-term thinking, polarizing debates, alarmist rhetoric, and pessimism. In this short, exhilarating book, renowned scientist and bestselling author Martin Rees argues that humanity’s future depends on our taking a very different approach to thinking about and planning for tomorrow.

Who is Martin Rees?

Martin Rees is a cosmologist and space scientist. He is based in Cambridge, where he has been Director of the Institute of Astronomy, a Research Professor, and Master of Trinity College. He was President of the Royal Society (the academy of science for the UK and Commonwealth) from 2005 to 2010. In 2005 he was appointed to the UK’s House of Lords. He belongs to numerous foreign academies, including those of the US, Russia, Japan and the Vatican, and has received many international awards for his research, including the Balzan, Crafoord, Gruber and Templeton prizes. He writes and lectures extensively for general audiences and is the author of nine books. In addition to his involvement in international science and policy, he has been concerned with the threats stemming from humanity’s ever-heavier ‘footprint’ on the global environment, and with the runaway consequences of ever more powerful technologies. His new book On the Future: Prospects for Humanity addresses these issues.

Speaking as both an astronomer and “a concerned member of the human race,” Sir Martin Rees examines our planet and its future from a cosmic perspective. He urges action to prevent dark consequences from our scientific and technological development.

A post-apocalyptic Earth, emptied of humans, seems like the stuff of science fiction TV and movies. But in this short, surprising talk, Lord Martin Rees asks us to think about our real existential risks — natural and human-made threats that could wipe out humanity. As a concerned member of the human race, he asks: What’s the worst thing that could possibly happen?

Filed Under: Podcasts, Profiles Tagged With: Martin Rees, On the Future, Prospects for Humanity

Chapter 5: The Importance of Story [Narratives and MTPs]

March 9, 2019 by Socrates

ReWriting the Human Story: How Our Story Determines Our Future

an alternative thought experiment by Nikola Danaylov

 

Chapter 5: The Importance of Story

“It’s like everyone tells a story about themselves inside their own head. Always. All the time. That story makes you what you are. We build ourselves out of that story.” Patrick Rothfuss

Stories are not just stories. Stories matter. Because, to paraphrase Friedrich Nietzsche, if one has a sufficiently strong “why” one can endure any “how.” And the “why” comes not from facts or events. It comes from the story we attach to them. This desire for meaning is secondary only to the desire to survive: as soon as survival is not at stake, meaning becomes the primary motivation. And sometimes, even when survival is at stake, meaning is what provides the motivation to survive. That’s how important story is.

For example, if one is suffering greatly one can decide that it is meaningless to go on and give up on life. Or, like Viktor Frankl, one can choose a story that attaches positive meaning to one’s suffering and thus be motivated to endure even the living hell of Auschwitz. And this holds for individuals as well as for larger groups of people, such as corporations, religions or nations. As Frankl said:

“Between stimulus and response there is a space. In that space is our power to choose our response. In our response lies our growth and our freedom.”

So, we are free to choose the story even when we are not free to choose our circumstances. Thus, the story is our “why” while the circumstances are merely our “how.” And the story is what ultimately makes the difference. Because a story is something that helps us feel connected to a reason and, more importantly, to a purpose.

For example, in the case of larger groups of people – such as corporations – story is the glue that brings everyone together and motivates them to cooperate and overcome obstacles. And so, in the past couple of decades, the most successful organizations have come up with what has been called a Massively Transformative Purpose [MTP]. [Salim Ismail, Exponential Organizations, page 53]

An MTP is the distilled essence of a story that captures who the organization is for and what its mission or purpose is. For example, Google’s MTP is “organize the world’s information.” TED’s MTP is “ideas worth spreading.” DeepMind’s MTP is “Solve intelligence. Use it to make the world a better place.” Calico’s MTP is “solve death.” Mark Zuckerberg’s new charity foundation’s MTP is “cure all disease.” Doctors Without Borders’ MTP is “medical aid where it is needed most.” [Having an MTP is particularly important for Millennials, for whom the story of money is often not a sufficient “why.”]

Larger groupings of people such as religions and nations also utilize the power of story to forge their respective religious or national identity. Thus Judaism, Christianity, Islam, and Hinduism all tell their own mythical stories. And the more we believe in those stories, the more we identify ourselves as Jewish, Christian, Muslim or Hindu. But nations and ideologies such as Liberalism, Humanism, Feminism, and Capitalism do it too. So, a Japanese person identifies with the story that Japan – i.e., Nippon – is the “land of the rising Sun” – i.e., “the land of the Gods.” An American identifies with the story of “the land of opportunity” where everyone is free to pursue the “American dream.” A capitalist identifies with the “invisible hand” of the “free market.” A humanist identifies with the story that humanity is “the pinnacle of evolution,” “the supreme intelligence” and “the ultimate authority.”

Filed Under: ReWriting the Human Story Tagged With: ReWriting the Human Story, story
