Johan Steyn Interviews Nikola Danaylov on Artificial Intelligence

July 18, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/860317633-singularity1on1-nikola-danaylov-johan-steyn.mp3


Last month I did an interview for Johan Steyn. It was a great 45-minute conversation where we covered a variety of topics such as: the definition of the singularity; whether we are making progress towards Artificial General Intelligence (AGI); open vs. closed systems; the importance of consciousness; my Amazon bestseller Conversations with the Future; how I started blogging and podcasting; the process of preparing for each interview that I do; and ReWriting the Human Story: How Our Story Determines Our Future.

I enjoyed talking to Johan and I believe he has created an interesting podcast with a number of great episodes that are very much worth watching. Furthermore, thanks to him I have already recorded one Singularity.FM interview with a fantastic guest and booked a second. So check out Johan Steyn’s website and subscribe to Johan’s YouTube channel.

Filed Under: Podcasts Tagged With: AGI, AI, Johan Steyn, Nikola Danaylov, singularity

Prof. Massimo Pigliucci: Accompany science and technology with a good dose of philosophy

May 2, 2020 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/810780325-singularity1on1-massimo-pigliucci.mp3


I have previously interviewed a few fantastic scientists and philosophers, but rare are those strange birds who manage to combine deep academic training in separate disciplines with the living ethos of each. Prof. Massimo Pigliucci is one of those very rare and strange people. He has 3 Ph.D.’s – Genetics, Evolutionary Biology, and Philosophy – and is also the author of 165 technical papers in both science and philosophy as well as a number of books on Stoic philosophy, including the bestselling How to Be A Stoic: Using Ancient Philosophy to Live a Modern Life.

During this 80 min interview with Massimo Pigliucci, we cover a variety of interesting topics such as: why Massimo is first and foremost a philosopher and not a scientist; the midlife crisis that pushed him to switch careers; stoicism, [virtue] ethics and becoming a better person; moral relativism vs moral realism; the meaning of being human; what are the biggest issues humanity is facing today; why technology is not enough; consciousness, mind uploading and the technological singularity; why technology is the how not the why or what; teleology, transhumanism and Ray Kurzweil’s six epochs of the singularity; scientism and the philosophy of the Big Bang Theory.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Massimo Pigliucci?

Prof. Pigliucci has a Ph.D. in Evolutionary Biology from the University of Connecticut and a Ph.D. in Philosophy from the University of Tennessee. He is currently the K.D. Irani Professor of Philosophy at the City College of New York. His research interests include the philosophy of science, the relationship between science and philosophy, the nature of pseudoscience, and the practical philosophy of Stoicism.

Prof. Pigliucci has been elected fellow of the American Association for the Advancement of Science “for fundamental studies of genotype by environmental interactions and for public defense of evolutionary biology from pseudoscientific attack.”

In the area of public outreach, Prof. Pigliucci has published in national and international outlets such as the New York Times, Washington Post, and The Wall Street Journal, among others. He is a Fellow of the Committee for Skeptical Inquiry and a Contributing Editor to Skeptical Inquirer. He blogs on practical philosophy at Patreon and Medium.

At last count, Prof. Pigliucci has published 165 technical papers in science and philosophy. He is also the author or editor of 13 books, including the best selling How to Be A Stoic: Using Ancient Philosophy to Live a Modern Life (Basic Books). Other titles include Nonsense on Stilts: How to Tell Science from Bunk (University of Chicago Press), and How to Live a Good Life: A Guide to Choosing Your Personal Philosophy (co-edited with Skye Cleary and Daniel Kaufman, Penguin/Random House).

 

Filed Under: Podcasts Tagged With: AI, Massimo Pigliucci, mind uploading, singularity, Stoic, Stoicism, Technology

Nikola Danaylov on the Dissenter: The Singularity, Futurism, and Humanity

January 31, 2019 by Socrates

A few weeks ago I got interviewed by Ricardo Lopes for the Dissenter. The interview just came out and I thought I’d share it with you to enjoy or critique. Here is Ricardo’s original description:

#131 Nikola Danaylov: The Singularity, Doing Futurism, and the Human Element

In this episode, we talk about what is meant by the term “Singularity”, and its technological, social, economic, and scientific implications. We consider the technological and human aspects of the equation of economic and technologic growth, and human and moral progress. We also deal with more specific issues, like transhumanism, the ethics of enhancement, AI, and Big Data.

Time Links:

00:58 What is the Singularity?

02:51 Exponential growth

04:42 What would it mean to have reached the Singularity?

10:29 The trouble with futurism

15:35 The technological and the human aspects

20:20 What we get from technology depends on how we use it

23:16 Transhumanism, enhancement, and ethics

26:26 AI and economics

31:53 Eliminating boring tasks, and living more meaningful lives

36:37 Big Data, and the risk of exploitation

43:04 The example of self-driving cars

51:32 The human element in the equation

52:20 Follow Mr. Danaylov’s work!

Filed Under: Profiles, Video Tagged With: AI, Futurism, Nikola Danaylov, singularity

Stuart Russell on Artificial Intelligence: What if we succeed?

September 13, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/499489077-singularity1on1-stuart-russell.mp3


Stuart Russell is a professor of Computer Science at UC Berkeley as well as co-author of the most popular textbook in the field – Artificial Intelligence: A Modern Approach. Given that it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries, I can hardly think of anyone more qualified or more appropriate to discuss issues related to AI or the technological singularity. Unfortunately, we had problems with our internet connection and, consequently, the video recording is among the worst I have ever published. Thus this episode may be a good candidate to listen to as an audio file only. However, given how prominent Prof. Russell is and how generous he was with his time, I thought it would be a sad loss if I didn’t publish the video also, poor quality as it is.

During our 90 min conversation with Stuart Russell we cover a variety of interesting topics such as: his love for physics and computer science; human preferences, expected utility and decision making; why his textbook on AI was “unreasonably successful”; his dream that AI will contribute to a Golden Age of Humanity; aligning human and AI objectives; the proper definition of Artificial Intelligence; Machine Learning vs Deep Learning; debugging and the King Midas problem; the control problem and Russell’s 3 Laws; provably safe mathematical systems and the nature of intelligence; the technological singularity; Artificial General Intelligence and consciousness…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Stuart Russell?

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at University of California at Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.

He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, singularity, Stuart Russell

Physicist Max Tegmark on Life 3.0: What We Do Makes a Difference

June 15, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/458425038-singularity1on1-max-tegmark.mp3


Some people say that renowned MIT physicist Max Tegmark is totally bonkers and refer to him as “Mad Max”. But, to quote Lewis Carroll from Alice in Wonderland, “All the best people are.” Furthermore, I am not sure if Tegmark is “mad” but I am pretty sure he is very much “fun” because I had a total blast interviewing him on my Singularity.FM podcast.

During our 90 min conversation with Max Tegmark we cover a variety of interesting topics such as: curiosity and being a scientist; reality and math; intelligence, AI and AGI; the technological singularity; Life 3.0: Being Human in the Age of Artificial Intelligence; populating the universe; Frank J. Tipler’s Omega Point; the Age of Em and the inevitability of our future; why both Max and I went vegan; the Future of Life Institute; human stupidity and nuclear war; technological unemployment.

My favorite quote that I will take away from this conversation with Max Tegmark is:

“It is not our universe giving meaning to us, it is us giving meaning to our universe.”

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Max Tegmark?

 

Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.

Max Tegmark is an MIT professor who loves thinking about life’s big questions. He’s written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: “In my spare time, I’m president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially.”

 

Previous Singularity.FM episodes mentioned during this interview:

Robin Hanson (part 2): Social Science or Extremist Politics in Disguise?!

Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable!

Skype co-founder Jaan Tallinn on AI and the Singularity

Lawrence Krauss on Singularity.FM: Keep on Asking Questions

Filed Under: Featured Podcasts, Podcasts Tagged With: AI, Artificial Intelligence, Life 3.0, Max Tegmark, singularity, Technological Singularity, The Future of Life Institute

Entrepreneurial Activist Joi Ito on Whiplash and the MIT Media Lab

May 5, 2018 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/439757550-singularity1on1-joi-ito.mp3


Joi Ito is just one of those people who simply don’t fit a mold. Any mold. He is an entrepreneur who is an activist. He is an academic without a degree. He is a leader who follows. He is a teacher who listens. And an interlocutor who wants you to disagree with him. Overall, I hate to say it, but I must acknowledge my own biases by admitting that this was probably the most fun interview I have ever done. Ever. So, either I let all my personal biases run free on this one or it was truly a gem of an interview. You be the judge of which it was and please don’t hesitate to let me know.

During our 90 min conversation with Joi Ito we cover a variety of interesting topics such as: being an entrepreneurial activist; becoming head of the MIT Media Lab even without an undergraduate degree; the impact of Kenichi Fukui, Timothy Leary and his other mentors; my transhumanist manifesto; my definitions of the singularity and transhumanism; why technology is not enough; the dangers of being exponential; self-awareness and meditation; complexity and systems thinking; our global prisoner’s dilemma; what the MIT Media Lab is all about; the importance of ethics, art and media; Whiplash and his PhD thesis on change; learning over education; why technology is the future of politics…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Joi Ito?


Joi Ito has been recognized for his work as an activist, entrepreneur, venture capitalist and advocate of emergent democracy, privacy, and Internet freedom. As director of the MIT Media Lab and a Professor of the Practice in Media Arts and Sciences, he is currently exploring how radical new approaches to science and technology can transform society in substantial and positive ways. Ito is listed among TIME Magazine’s “Cyber-Elite” and was named one of the “Global Leaders for Tomorrow” by the World Economic Forum. He is co-author with Jeff Howe of Whiplash: How to Survive Our Faster Future and writes a monthly column for WIRED.

Filed Under: Podcasts Tagged With: AI, Joi Ito, MIT Media Lab, singularity

Is the Singularity Steering us Toward the Greatest Inequality in History?

December 13, 2017 by Jared Leidich

The basic idea of the technological singularity is simple: the rate at which technology progresses increases as time moves forward. If we believe the technological singularity is happening, then we as a species should inspect its impact on human equality. This phenomenon is pushing our human ship toward a waterfall of technological innovation. Is it also pushing open the gaps between people, classes and whole societies?

The basic premise of the singularity can be seen using a graph of major technological advancements throughout the history of our species on Earth. Check out the now famous graph below, most often credited to Theodore Modis, showing major turning points in “canonical milestones.” On the x-axis is the amount of time that has gone by since an event has occurred, and on the y-axis is the amount of time separating that event from the one before it.

The thing that is really shocking about this graph is somewhat hiding in plain sight: it’s plotted on a log-log scale, meaning both axes are logarithmic or increasing by factors of ten at each interval. So, what looks like a nearly straight line is a shockingly explosive exponential progression. If plotted on a standard graph, it would look like virtually everything important occurred within the last fraction of the graph after a relative eternity of almost nothing happening.

The “canonical milestones” Modis described are the critical learning points for all humankind. However, we can apply this same philosophy to how technological advancements affect each human, on an individual level.

Some simple math: exponential progressions change at a changing rate. Depending on the progression (and certainly in the case of human knowledge) this tends to lead toward explosive growth at some point. A simple exponential curve that represents this phenomenon is a doubling function. For example, say you had a pond with lilies in it and the number of lilies doubled every day, regardless of the boundaries of the pond or any nutrient needs. On the first day there would be one lily. On the second day there would be 2 lilies. After a week there would be 64 lilies. After 10 days the pond would be full. After 54 days the entire earth (that’s 197 million square miles) would be covered in lilies.
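The lily-pond arithmetic above is easy to check in a few lines of Python. This is just a sketch of the thought experiment: the pond capacity of 512 lilies is an assumption implied by the ten-day figure, and the day-54 claim is the article’s own estimate.

```python
def lilies(day: int) -> int:
    """Number of lilies on a given day, doubling daily from one lily on day 1."""
    return 2 ** (day - 1)

print(lilies(1))   # day 1: 1 lily
print(lilies(2))   # day 2: 2 lilies
print(lilies(7))   # after a week: 64 lilies
print(lilies(10))  # day 10: 512 lilies -- a pond holding that many is now full
print(lilies(54))  # day 54: ~9 quadrillion lilies (the article's Earth-covering estimate)
```

The point of the sketch is how late the explosion arrives: more than half of all the lilies that will ever exist appear on the final day.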

Similarly, technology appears through many verifiable metrics to be on a doubling schedule; the amount of knowledge or capability in a field doubles in a fixed and repeatable amount of time. In the most famous case of Moore’s law, the price performance of a computer chip doubles about every two years.

If we graph a simple doubling function it looks like the one below. It’s explosive. This curve would be the actual shape of any one of several technological correlations that have been studied, minus the nicks and bumps. Without special scaling it’s clear that the line looks unchanging until right at the end where it breaks upwards.

If another curve is added to this graph with just a small difference in starting point, the disparity created by small differences after an explosive growth surge becomes apparent. The blue curve starts with the number “1”, and shows the resulting values as it is doubled 100 times (1, 2, 4, 8, etc.). The orange curve starts with “2” and is doubled in the same fashion (2, 4, 8, 16, etc.). After 100 doublings, the difference between the two final numbers (blue vs. orange) is a mind-boggling 1.27 x 10^30. That’s more than a million million millions. Tiny changes at the beginning of an explosive progression equate to gargantuan differences at the explosive end of the progression.
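That disparity can be verified directly; the starting values 1 and 2 and the 100 doublings are the article’s own hypothetical setup:

```python
# Two doubling progressions that start one step apart.
blue, orange = 1, 2
for _ in range(100):  # double each value 100 times
    blue *= 2
    orange *= 2

# The gap equals 2**101 - 2**100 = 2**100: the later starter's entire head start
# compounds into one extra doubling's worth of difference.
gap = orange - blue
print(f"{gap:.2e}")  # ~1.27e+30
```

Note that the gap is exactly as large as the blue curve’s entire final value: starting one step ahead is equivalent to finishing one full doubling ahead.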

If technology and human knowledge are on an exponential growth cycle resembling a doubling function, and we are in the explosive part of that cycle, then tiny variations in human equality now are liable to turn into big variations in human equality soon.

You, the reader, are probably decades ahead of most of the people in the world in your personal technological progression, but no doubt well behind some people too.

For me personally, this resonates. I can feel my advantage growing over the underprivileged, while the gap between me and the advantaged grows too. I have access to the internet all the time, carrying all human knowledge in my pocket. I soak in information at a voracious rate, literally double the old rate, as I listen to podcasts about breaking news at 2X playback speed. At the same time, however, I feel overwhelmed. For everything that I learn, something new happens that I can’t grasp or access. The tech elite is amassing databases about me that I don’t have access to. As they gather data, the algorithms get smarter, collect data faster and organize it better. I fall behind.

As a specific example, I’ll put some concrete (albeit hypothetical) numbers to this problem. If we assume the most advanced technologists on the planet are 200 years ahead of the most primitive, and spread that difference out amongst all the nearly 8 billion people in the world, the technological gap between any person and their closest technological peer would be very small (about 0.8 seconds, to be specific). If it is assumed that a person’s technological state is doubling every two years, like Moore’s law, then in one lifetime of 80 years (doubling 40 times) the technological difference between those two people will grow to equal more than 1,000 of today’s years.
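Those hypothetical numbers can be reproduced directly. To be clear, the 200-year spread, the 8 billion people, and the two-year doubling period are the article’s assumptions, not measurements:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# 200 years of technological spread divided evenly among ~8 billion people.
initial_gap_years = 200 / 8e9
initial_gap_seconds = initial_gap_years * SECONDS_PER_YEAR
print(f"{initial_gap_seconds:.2f} s")   # ~0.79 s between closest technological peers

# Doubling every 2 years over an 80-year lifetime = 40 doublings.
final_gap_years = initial_gap_years * 2 ** 40
print(f"{final_gap_years:,.0f} years")  # ~27,500 years -- comfortably "more than 1,000"
```

The sub-second starting gap compounds into tens of thousands of years, which is why the article can safely claim the gap exceeds 1,000 of today’s years.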

Like the dots on a polka-dot balloon spreading away from each other as it inflates, exponential growth should cause the gaps between all of us to grow. The difference between now and the past is that we are in the explosive part of the progression where one could theorize a “make or break” moment is coming for individuals; the math seems to be telling us that most of the world isn’t coming into the black hole with the techies and the machines they’re creating. Am I going to make it?

It’s undeniable that people in the developing world are being exposed to technological advancements later than those in the developed world. What isn’t intuitive, but may be markedly more impactful, is that those gaps in technological adoption are liable to explode in size in the coming years if we don’t act. People in sub-Saharan Africa are a decade behind the developed world in their ubiquitous adoption of internet-enabled smartphones. What aren’t they learning and knowing now that will slow their adoption of information in the future?

We as a species need to act. Explosive growth explosively amplifies disparities. Of course, no one knows what is going to happen in the coming years. Whatever happens though, we should work to bring technology to those who don’t have it. We should work to keep information free. We should work to keep our brothers and sisters on the boat.

About the Author:

Jared Leidich is an aerospace engineer and author. He has flown on NASA’s microgravity aircraft, built and tested space suits, and sent parts to orbit and the stratosphere. He led the suit team that brought Alan Eustace on the highest balloon flight and skydive of all time and wrote the preeminent account of that project, The Wild Black Yonder. He works for World View Enterprises developing their stratospheric descent systems.

Filed Under: Op Ed Tagged With: inequality, singularity

Make or Break the Singularity 1on1: Crowdfunding Campaign is Live

August 25, 2016 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/279868434-singularity1on1-singularity-1on1-crowdfunding.mp3


 

Hi,

I wanted to speak to you 1on1 today. So this is not a message for everybody but just for you. Yes you – my audience, my podcast listener, my YouTube viewer, my donor and moral supporter, my fellow geek and friend.

And my reason for doing what I do.

I want to share 3 things with you.

But before I do that let me start by saying: “Thank you for your support!” I have sacrificed a lot for the past 6 years. But this effort has not been in vain. So far I have published about 1,000 articles and produced 200 podcast episodes. Singularity 1on1 has had 4 million views on YouTube and iTunes and has been republished by major media outlets such as the BBC, TVJapan and many others. Today I reach well over 100,000 people per month and some have called me the “Larry King of the Singularity.” So once again thank you for your support because I couldn’t do what I do without you.

Now, the 3 things I want to share with you are: 1. Why am I doing a crowdfunding campaign? 2. What do you get at different funding levels of my podcast? And 3. How can you help?

So, first of all, the main reason why I’m crowdfunding is the simple fact that as the podcast has gotten more and more successful it has become much more expensive to produce and sustain. Success and good quality come at a price. And, while it is very important to me that Singularity 1on1 is, and will always remain, both free of charge and independent of any commercial agenda, it is unfortunately not free of cost or independent of material resources. The reality is that good things cost money to produce and my podcast is no different. And those costs have risen to the point where I simply cannot sustain this on my own anymore.

Secondly, what are my funding goals and what do you get for each of them? Well, depending on how much money we raise, I can deliver different things in different formats and at different levels of quality. If we fail to meet the minimum goal of $50,000 there is a good chance I will simply have to stop blogging and podcasting and seek another way to make a living. With $50,000 I will be able to focus exclusively on audio podcasting: I will have to cut out all non-essential costs and stop traveling for expensive in-person, high-production interviews, and this podcast will become a true-to-format, audio-only endeavor. If, however, we raise $100,000 I will be able to continue with my current format, where I can travel and produce high-quality, high-definition, in-person interviews. If we reach $200,000 I will be able to raise the bar even further by not only going to a 4K, 3-camera professional setup but also by releasing all of my past episodes – i.e. 250 hours of video – under a Creative Commons license. Anyone will then be able not only to watch but also to use, edit, mix and remix all of my content for free and without any restrictions. Lastly, $300,000 will guarantee that I can interview the future for the next several years and travel anywhere to interview anyone, at any time and any place in the world. I will also release all future episodes under a Creative Commons license for as long as the podcast exists. Because the future belongs to us all.

The 3rd and final question is: How can you help? If you are already on my Konoz profile page then don’t delay and make a donation now. If you are watching this video elsewhere then just type InterviewTheFuture.com and you will land on my fundraising page. Once you have donated, then, you can help even more by spreading the word about it. So, if you have an email list, email your list. If you have a Facebook or Twitter account share and tell your followers to come and donate also. If you need a quick way to show what Singularity 1on1 is all about then share my highlights video. If you need social proof – share my testimonials video. And, again, to help me keep producing more Singularity 1on1 episodes please donate what you can now: https://konoz.io/nikola.danaylov

Help me interview the future. So that you can find your mission and make your dent in the universe.

Thanks for listening. Thanks for letting me do what I love. And thank you very much for your support!

Filed Under: Articles Tagged With: singularity, Singularity 1on1, singularity podcast, singularity weblog

Nature Is Not Your Friend

May 17, 2016 by David Filmore

It’s the start of the third act and explosions tear through the city as the final battle rages with unrelenting mayhem. CGI robots and genetic monsters rampage through buildings, hunting down the short-sighted humans that dared to create them. If only the scientists had listened to those wholesome everyday folks in the first act who pleaded for reason, and begged them not to meddle with the forces of nature. Who will save the world from these ungodly bloodthirsty abominations? Probably that badass guy who plays by his own rules, has a score to settle, and has nothing but contempt for “eggheads.”

We’ve all seen that same movie a million times. That tired story doesn’t just make movies look bad, it makes science look bad too. It’s an anti-science viewpoint that encourages people to fear the future and be wary of technology. This common narrative isn’t just found in movies, it’s a prevalent belief that is left over from the industrial revolution. Over a short period of time, people went from quiet farm life to living in cities with blaring traffic, and working in factories with enormous and terrifying machinery. The idea that nature is good and safe, and that technology is bad and dangerous, was deeply ingrained in our collective psyches and is still very much with us today.

You see it anytime someone suggests that it is somehow more virtuous to “unplug” and walk barefoot along the beach, than it is to watch a movie, play a video game, or work on your computer. Some of the most valuable things I’ve ever learned have come from watching documentaries and researching topics online. I love hiking as much as the next guy, but staring at a tree gets old pretty fast. People have this notion that nature is healing, and that technology, while useful, will probably end up giving you cancer sometime down the line.

This general fear that people have, that the future will be full of really powerful machines that they will never be able to understand, is the main reason why they are so wary of The Singularity. Nature seems like a safer bet. You can look at a tree and be perfectly okay with not fully understanding how it works. Because even on its best day, you know a tree isn’t going to band together with all the other trees and have a decent chance of taking over the world and enslaving humans.

But the real threat to humans isn’t from technology, it’s from nature. Our genomes are riddled with errors and predispositions to countless diseases. Most creatures on this planet see you as nothing but a lovely source of protein for them to eat. Mosquito-borne diseases alone gravely sicken 700 million people a year. Not to mention all the viruses, bacteria, parasites, floods, earthquakes, tornadoes, you name it, that want a piece of you. We should be far more scared of nature than technology.

The only reason why we have been successful in extending human life expectancy is because of the gains we’ve made in technology. If we stripped every form of technology from our lives and all went to live in the forest, our population numbers would drop like a rock. Not because we lacked the necessary survival skills, but because the human body just didn’t evolve to live very long. I’ve lost count of how many times antibiotics have saved my life, and it’s the same for each of us. Sure, we have pollution, plastic, radiation, climate change, and mountains of garbage, but if technology and modern life were so hazardous to humans we would be living shorter lives not longer.

Technology isn’t an intrusion upon an otherwise pristine Garden of Eden, it is the only reason we as a species are alive today. And it isn’t new either, we’ve been using technology since the first caveman prevented himself from getting sick by cooking food over a fire. That is the narrative we should be focused on as we discuss how to deal with the challenges of The Technological Singularity. People need to be reminded that rejecting science in favor of nostalgia for “the good old days” won’t keep them safe. There are over 7 billion people alive on Earth today because of the health and sanitation systems we’ve put in place. History proves to us that the greater we integrate technology into our lives, the safer we are and the longer we live. It’s as simple as that.

But if you ask any random person on the street about artificial intelligence, robots, or nanotechnology, chances are the first word out of their mouths will be “Skynet”. The dastardly machine that unleashed killer robots to extinguish the human race in the Terminator movies. Mention “genetics”, and you’re likely to hear a response involving killer dinosaurs resurrected from DNA trapped in amber, or a mutant plague that spun out of control and created a zombie apocalypse.

Now, no one loves blockbuster movies more than me! But the movies we need to be watching are the ones where the products of science aren’t seen as the enemy, but are the tools that lead to humanity’s salvation from poverty, disease, and death.

Nature programmed each of us with an expiration date built into our DNA, and stocked our planet with hostile weather, and hungry creatures with a taste for humans. Understanding the urgency for humans to get over their bias for all things “natural”, and to meld with technology as soon as possible, will be the difference between The Singularity being a utopia and just another disaster movie. It’s the only chance we have to write the happy ending we deserve. The one where science saves us from nature.

 

About the Author:

David Filmore is a screenwriter, producer, film director, and author. His latest book is Nanobots for Dinner: Preparing for the Technological Singularity

Filed Under: Op Ed Tagged With: nature, singularity, Technological Singularity, Technology

Skype co-founder Jaan Tallinn on AI and the Singularity

April 17, 2016 by Socrates

http://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/259553886-singularity1on1-jaan-tallinn.mp3

Podcast: Play in new window | Download | Embed

Subscribe: Apple Podcasts | RSS

Jaan Tallinn, co-founder of Skype and Kazaa, got so famous in his homeland of Estonia that people named the biggest city after him. Well, that latter part may not be exactly true, but there are few people today who have not used, or at least heard of, Skype or Kazaa. What is much less known, however, is that for the past 10 years Jaan Tallinn has spent a lot of time and money as an evangelist for the dangers of existential risks, as well as a generous financial supporter of organizations doing research in the field. And so I was very happy to do an interview with Tallinn.

During our 75 min discussion with Jaan Tallinn we cover a variety of interesting topics such as: a few quirky ways he sometimes introduces himself; the conspiracy of physicists to save the world; how and why he got interested in AI and the singularity; the top existential risks we are facing today; quantifying the downsides of artificial intelligence and all-out nuclear war; Noam Chomsky‘s and Marvin Minsky‘s doubts that we are making progress in AGI; how DeepMind’s AlphaGo is different from both Watson and Deep Blue; my recurring problems with Skype for podcasting; soft vs hard take-off scenarios and our chances of surviving the technological singularity; the importance of philosophy…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Jaan Tallinn?

Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for Existential Risk, Future of Life Institute, and philanthropically supports other existential risk research organizations. He is also a partner at Ambient Sound Investments, an active angel investor, and has served on the Estonian President’s Academic Advisory Board.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Jaan Tallinn, singularity, Technological Singularity

The Singularity Must Be Decentralized

February 18, 2016 by Andy E. Williams


The research community is beginning to understand that motivations are not a human “artifact” of consciousness but part of the essential glue that binds consciousness together. Without motivations, nothing holds us to this vessel and ensures that we continue to eat, pay our rent, and do the other things necessary for our survival. Conscious machines will, for this reason, have motivations as well; otherwise they simply wouldn’t function. This is an important point because talk of the singularity often conjures visions of a single integrated “machine” that will inevitably enslave humanity. A better question is:

“Will AI be used to gain immense advantage for a single party (whether that party is the AI itself or the human that controls it), or will AI be used to maximize benefit for us all?”

Even if the AIs have interfaces that allow them to share information more rapidly than humans can through reading or watching media, separate AIs will have motivations separate from those of a single centralized AI. Given that a signature of consciousness is motivation, any consciousness will be motivated to secure the resources it needs to ensure its survival. In some cases the most efficient way to secure resources is sharing; in others it is competition. AIs might share resources, but they might also compete.

When and if an artificial consciousness is created, there will almost certainly be multiple instances of it. Because a consciousness cannot exist without motivation, and because the motivations of each consciousness differ, requiring what might be great effort to get on the same page, it may well be that multiple consciousnesses cannot “merge” in a way that truly threatens humans unless one subsumes all the others. Anything else would merely be a co-location of minds with different objectives, negotiating a sharing of resources.

One AI with far fewer resources than another would probably fear that the more powerful AI might simply erase it and take over its resources. Think of your several-generations-out-of-date home computer trying to hold its own against Big Blue. Rather than humans needing to fear AI, an AI might more likely need to fear humans not protecting it against other AIs.

Centralization, rather than technological advance, is the real danger for ANY conscious entity. Yet when you consider the competitive advantage technology confers, the near-infinite rate of change the technological singularity introduces raises the possibility of a future in which the technology arms race concentrates power and resources to a degree never seen before. Could it put a few into positions of unimaginable power from which they may never be unseated? If so, nothing will stop those few from becoming unimaginable despots to whom the rest of humanity is merely a disposable commodity whose suffering means nothing.

Think of what you would do if you had infinite power over everyone and there were no consequences for your actions. Think of what would happen if you needed a kidney and that child over there had one that would fit just fine. Think of what would happen if some man with unimaginable power wanted that woman, or the next, or the next thousand. Think of what would happen if you wanted to buy something and you could just flip a switch and empty out the world’s bank accounts, then watch with casual detachment as millions fight like animals for food and water. Think of what would happen if that one man in control just happened to wake up one morning to the conclusion that there were several billion people on the earth too many.

The technological singularity, if it exists, is a kind of Armageddon.

In my upcoming book “The Technology Gravity Well” I delve into these and other issues, including how a new breed of massively collaborative software could usher in the singularity in the next 5 years. This may be one of the most important books you come across this year. Read more here:

http://igg.me/at/technology-gravity-well

 

About the Author:

Andy E. Williams is Executive Director of the Nobeah Foundation, a not-for-profit organization focused on raising funds to distribute technology with the potential for transformative social impact. Andy has an undergraduate degree in physics from the University of Toronto. His graduate studies centered on quantum effects in nano-devices.

Filed Under: Op Ed Tagged With: AI, singularity

Does Evolution lead to Singularity?

November 12, 2015 by Marco Alpini

The most spectacular manifestation of an accelerating trend is when its progression becomes exponential or faster.

An exponential progression is clearly unsustainable in the real world; it very quickly reaches a collapse point for the underlying process.

In the case of accelerating technological development, the collapse point is generally identified with the so-called Singularity, caused by the rise of self-improving Artificial Intelligence.
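
The collapse-point claim is easy to make concrete: a quantity that doubles at a fixed interval exceeds any finite limit in a number of steps that grows only logarithmically with the limit. A minimal sketch, with made-up numbers and purely for illustration:

```python
def steps_to_exceed(start, limit, factor=2.0):
    """Count the factor-fold steps (doublings by default) before `start` exceeds `limit`."""
    steps = 0
    value = start
    while value <= limit:
        value *= factor
        steps += 1
    return steps

# However generous the limit, the step count barely moves:
for limit in (1e3, 1e6, 1e9):
    print(limit, steps_to_exceed(1.0, limit))  # 10, 20, 30 doublings
```

A millionfold increase in the ceiling buys only ten more doublings, which is why exponential processes run into real-world limits so abruptly.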

This is well known and widely debated, but it is only part of a bigger story.

It is now emerging that there are many other accelerating trends we should worry about.

Taking a wider look at what is going on with us and our planet, we could say that various “singularities” are lining up and coming our way. This is not good news and, beyond the intrinsic risk of accelerating trends, the significance of what is about to happen is very profound.

If we accept a generalized definition of Singularity as a point in time where control is lost as a consequence of processes breaking down under an excessive rate of change, then we could say that we are approaching at least five Singularities.

They are all linked to one another, and their progression is often faster than exponential.

Evolutionary Singularity

Besides the classical Technological Singularity triggered by self-improving AI, there is another singularity in which humans take center stage.

The evolution of intelligent life led to technology, and technology is leading to the ability to intervene in the evolutionary process, modifying our own characteristics by design.

The old, lengthy natural process of waiting for random changes to be tested by natural selection before becoming permanent features of living beings will soon be replaced, in human beings, by technology: genetic modification and technological augmentation of our bodies and minds.

Changes will no longer be random; they will be planned to serve a purpose, and the process will become proactive rather than reactive, making it billions of times faster and more efficient. As technology accelerates, dragging everything with it, we too will have to change in order to keep up.

This process constitutes an accelerating feedback loop: the more technology improves, the more we improve our own capabilities to create better technology, which in turn will be used to improve us even more.

Technology is incompatible with the way we have lived until now, and as it accelerates we will have to adapt faster and faster to the new environment. Failure will mean extinction.

We are on the verge of an epochal transition: we are passing from an era driven by Natural Evolution to an era driven by Artificial Evolution, and at the transition point we will encounter a Singularity.

[Figure: evolution curve]

Ecological Singularity

The Ecological Singularity is caused by ecosystem degradation. The main accelerating trends degrading the natural world are the following:

  • World population growth and rising standards of living, especially in Asia
  • Overconsumption of resources
  • Deforestation and land conversion
  • Accumulation of nutrients and reactive nitrogen in the environment
  • Loss of biodiversity and ecosystems
  • Rising greenhouse gas concentrations in the atmosphere

When plotted at the appropriate scale, all the parameters that describe these processes show accelerating trends, often exceeding exponential growth.

In some cases a new curve had to be named, such as the “hockey stick,” a term coined by climatologist Jerry Mahlman for the reconstruction of the Northern Hemisphere’s mean temperature over the past 1,000 years, which combines a variety of measures into a graph that turns sharply upward at the start of the industrial revolution.

[Figures: hockey-stick temperature reconstructions]

World population growth, seen on a scale of a few hundred years, has the same worrisome shape. Resource consumption is even more pronounced, driven by the West, where per capita consumption is many times that of the rest of the world.

[Figure: world population growth]

The loss of primary tropical forests, which are the richest in biodiversity, is staggering: we have already lost an unimaginable quantity of natural habitat, including much of Madagascar and Borneo, and by 2050 most of the remaining primary forests will have been converted to cropland or unproductive wasteland.

[Figure: land conversion]

The loss of biodiversity is reaching an unstoppable and unbelievable rate, showing that we are in the middle of a mass extinction. It has already been named the Holocene extinction: the sixth global mass extinction in the history of our planet, and potentially one of the most severe.

[Figure: extinction rate]

But perhaps the most dramatic trend of all is the rising greenhouse gas concentration in the atmosphere. Seen at the appropriate scale of a few hundred thousand years, it shows a sudden spike that towers above all the fluctuations of the last 600,000 years. By 2050 it will be roughly two and a half times the highest level of that entire period.

[Figure: atmospheric CO2 concentration]

Considering that those fluctuations are correlated with the ice-age cycle, the obvious implication is that the impact on climate will be two to three times the difference between an ice age and a warm age. We have no idea what this means; we don’t know whether it will be a world we can live in.

These trends are the vital signs of the natural world and, shown next to each other, they read like the monitors of a terminal patient with no hope of recovery.

We are approaching a collapse point of the ecosystem beyond which we cannot predict what will happen. Many feedbacks will trigger self-reinforcing loops impossible to control. At that point we will hit the Ecological Singularity.

[Figure: ecosystem collapse]

Carbon Singularity

Cheap access to light, sweet conventional crude oil fueled the world’s growth and prosperity for over 100 years. Our economic system has been built around the availability of cheap oil, on the assumption that it will always be there.

The current and foreseeable production of energy from renewable sources is nowhere near enough to satisfy our thirst for liquid fuels.

Renewable energy production cannot replace liquid fuels, for the following reasons:

  • Wind and solar energy can contribute only partially to electricity generation
  • Little electricity is generated from oil, so wind and solar power will not reduce oil dependency
  • Biofuels will hold only a very limited share because they compete with food production

According to the IEA, new renewable energy sources will account for only 2.3% of total world primary energy production by 2030.

The powerhouse of our civilization is still the old, beloved carbon atom. We are, and will long remain, a hydrocarbon-powered civilization.

It is oil that made possible the spectacular rise of modern society and the mechanized mass production of food, and with it the population exploded.

Oil production is linked to food production by a relationship that has held constant for decades: for each ton of grain produced in the world, an average of 13 barrels of oil has been consumed.

The continuing growth of the world population and the improving living conditions in Asia will require more food, which in turn requires more land conversion and more oil.

The quantity of grain per capita has risen steadily to the current 350 kg/year, and an acceleration of this trend, led by China, is expected.

However, oil and land availability are not infinite.

Crude oil production is probably peaking now, and it will begin a relentless decline in the near future.

As production reaches its peak, we are witnessing a shift toward “unconventional” sources such as tar sands and shale gas. These are harder to extract, often with a poor energy return on energy invested. Their typical bell-shaped production curves are much sharper than conventional crude’s, and they will reach their own peaks very quickly. These unconventional sources can provide only temporary relief to growing global demand.
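
The bell-shaped production profile referred to above is commonly modeled with Hubbert’s logistic curve, where cumulative production follows a logistic function and the production rate is its derivative. A minimal sketch with illustrative, made-up parameters, not a forecast:

```python
import math

def hubbert_rate(t, urr, k, t_peak):
    """Annual production rate from Hubbert's logistic model.

    urr   : ultimately recoverable resource (total, e.g. Gb)
    k     : steepness of the curve (1/years)
    t_peak: year of peak production
    """
    e = math.exp(-k * (t - t_peak))
    return urr * k * e / (1.0 + e) ** 2

# Illustrative parameters only: 2,000 Gb recoverable, peak in 2010.
URR, K, T_PEAK = 2000.0, 0.05, 2010

for year in (1970, 2010, 2050):
    print(year, round(hubbert_rate(year, URR, K, T_PEAK), 1))
```

The curve is symmetric, so production in 2050 falls back to the 1970 level, and the peak rate equals urr * k / 4. A larger k gives the sharper, faster-peaking bell that the article ascribes to unconventional sources.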

Our dependency on oil makes it extremely dangerous to run into a situation of limited supply and rising demand without a viable alternative. We risk an unprecedented crisis of food production combined with the collapse of road transport, with unimaginable consequences.

We are therefore approaching uncharted territory where, for the first time, we will have to deal with an accelerating trend of fuel starvation and very high prices.

Our complete dependency on oil is astonishing.

We have taken risks on a global scale that no reasonable person would ever take in their own life or business.

We broke the most basic rules of rational management:

  1. We didn’t diversify our main source of energy for transportation and food production.
  2. We over consumed our most precious resource accelerating its consumption as reserves decrease.
  3. We didn’t plan for the future.

For over a century we didn’t give a second thought to an alternative to oil. We built everything around it; we became addicted, and totally carbon-dependent.

A world without oil will be a very different one.

Air travel will shrink significantly and disappear as a means of mass transportation. The model of our modern cities, based on sprawling suburbia served by shopping malls and long-range commuting, will no longer be viable. We will have to completely rethink the way we produce and distribute food and goods.

Does this mean that the global population will have to retreat to numbers closer to those of the pre-oil era?

The oil era carries a sinister irony for humankind. Oil made our technological world possible but, in return, we have fallen into a vicious feedback loop.

Oil was created 180 million years ago by a mass extinction caused by global warming. Organic matter accumulated on the ocean floor and over millions of years turned into oil, and with it the excess carbon was sequestered underground.

We discovered that oil 180 million years later, extracted it, and burned it, releasing the ancient CO2 back into the atmosphere and resuming the global warming that is causing mass extinctions once again.

We have recreated the same cycle backwards, but at a speed one hundred thousand times faster, leading to a new cycle of oil formation. We may become the fuel that new intelligent species, evolved in the wake of the current mass extinction, will use 180 million years from now. [See a short slide presentation on this oil cycle here: https://drive.google.com/open?id=0Bw2Y7zjijLaTZHdoU21uZUt2VFU]

What we did with oil was not the smartest thing to do; it may well be the stupidest thing in human history. A time bomb was set by the discovery of oil and by our ignorance of its consequences, in terms of both emissions and resource dependency.

The transition from the hydrocarbon economy to the next one will by no means be smooth and gradual; it will be a global shock of unimaginable proportions.

The oil era, and the way it is going to end, will have unpredictable consequences for the planet and for us as a civilization, with the connotations of a “singularity”: the Carbon Singularity.

Economic Singularity

The orthodox economic model at the foundation of modern society is based on continuous, indefinite growth and on an ever-increasing supply of energy and resources. As a matter of fact, the world economy has been growing at a steady average of about 3% per year.

A constant 3% growth rate may not seem like much, but this impression is wrong: growing at this rate, we will need about five planets to support our civilization by 2050.
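
The arithmetic of compounding makes the point concrete. A minimal sketch, assuming steady 3% growth from 2015 and a starting footprint of roughly 1.7 planets (the Global Footprint Network’s oft-cited estimate); both the start year and the assumption that footprint scales with economic output are mine, not the article’s:

```python
# Compound growth: after n years at rate r, scale grows by (1 + r) ** n.
YEARS = 2050 - 2015
GROWTH = 1.03

scale = GROWTH ** YEARS    # ~2.8x the 2015 economy by 2050
planets = 1.7 * scale      # ~4.8 planets, if footprint scales with output

print(f"economy x{scale:.1f}, ~{planets:.1f} planets by 2050")
```

Under those assumptions the result lands close to the article’s “about five planets” figure.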

The classical economic model is clearly unsustainable and will hit hard constraints in the near future: limited resources, the obvious limit on the number of planets available, and collapsing ecosystems.

Besides these hard limits, many other disruptive forces are at work that risk destabilizing the entire economic model. One of the most significant is the rise of a new economy based on zero marginal cost, enabled by new technologies and the internet.

Various industries have already been revolutionized, with massive corporations crippled because they couldn’t adapt to changes occurring too fast. From music to photography and telecommunications, we have already seen disruptive revolutions with costs approaching zero for the end consumer.

The next step will be the sharing of goods, properties, and assets, such as self-driving cars and the distributed generation of electricity.

In parallel, virtual currencies are making their way onto the global scene, with the potential to replace conventional currencies and revolutionize the economy from within.

Technological unemployment will be another powerful disruptor of our economic model, considering the enormous possibilities of narrow AI and robotics. The continuing increase in lifespan, and the consequent number of aging people, combined with technological unemployment, will bring about the collapse of social welfare systems across the world.

All of these elements influence one another and will occur simultaneously, causing an accelerating rate of change of great complexity and leading to a singularity: the Economic Singularity.

Where are we heading?

We have been inebriated by a few decades of opulence made possible by oil, after thousands of years of suffering and misery. We are confused by the unbelievable acceleration and power of our technologies and their impact on our future and our planet.

We have not evolved mechanisms, either biological or cultural, for managing global risks. Our instinct for self-preservation has been shaped by long experience with local risks: dangerous animals, hostile people, storms, droughts, famines, diseases. These problems have occurred many times, and we have evolved instincts that alert us to such risks, while we remain insensitive to global threats.

As tragic as these events are for the people immediately affected, in the big picture of things, from the perspective of mankind as a whole, even the worst of them is a mere ripple on the surface of the great sea of life.

Our approach to global existential risks cannot be one of trial and error, because there is no opportunity to learn from errors. The reactive approach, i.e. see what happens, limit the damage, and learn from experience, cannot work in an accelerating environment posing existential threats.

The main reason to be careful when you walk down the stairs is not that you might slip and have to retrace one step, but that the first slip might cause a second one, and so on, until you fall dozens of steps and break your neck.

Similarly, the concern about our civilization’s survival relates not only to the effect of any one specific cause of disruption but to our vulnerability to the combined effect of many of them, i.e. the chain reactions that could lead to total collapse.

The intricacy and mutual feedback of these forces make the system extremely complex and dangerous.

We are clearly not on top of this dynamic. We are in the passenger seat, and we are not even fastening our seat belts and bracing for impact.

But who is driving this car? Evolution is the driver; it drives everything.

We are approaching a fundamental step in the evolution of a civilization, an evolutionary jump that probably only a few civilizations in the universe manage to survive.

Evolution itself is reaching a singularity point and going exponential.

The evolutionary path of a civilization can be represented by a curve with a very slow rate of increase for hundreds of thousands of years, until the civilization manages to develop science and technology. At that point the trend suddenly changes course, with a tremendous spike upwards.

[Figure: evolution curve]

With technology come huge disruptions and singularity thresholds caused by accelerating trends. Only the civilizations that manage to harness the tremendous power of their own technology survive. Only civilizations that develop sufficient wisdom keep evolving to the next stage.

We don’t know where this adventure will take us, but one thing is certain: with a business-as-usual approach we will go nowhere. If we do manage to get through it, however, we will become gods.

[Figure: business as usual]

 

About the Author:

Marco Alpini is an engineer and a manager running the Australian operations of an international construction company. While Marco is currently involved in major infrastructure projects, his professional background is in energy generation and related emerging technologies. He has developed a keen interest in the technological singularity, as well as other accelerating trends, which he correlates with evolutionary processes leading to what he calls “The Cosmic Intellect.”

Filed Under: Op Ed Tagged With: AI, Evolution, singularity


Primary Sidebar

Recent Posts

  • ReWriting the Human Story: How Our Story Determines Our Future
  • Nikola Danaylov @ Frankfurt AI Meetup
  • Gus Hosein on Privacy: We’ve been well-meaning but stupid
  • Francesca Ferrando on Philosophical Posthumanism
  • Kim Stanley Robinson on Climate Change and the Ministry for the Future

Categories

  • Articles
  • Best Of
  • Featured
  • Featured Podcasts
  • Funny
  • Gadgets
  • Lists
  • Music
  • News
  • Op Ed
  • Podcasts
  • Profiles
  • Reviews
  • ReWriting the Human Story
  • Survey
  • Tips
  • Uncategorized
  • Video
  • What if?

Join SingularityWeblog

Over 4,000 super smart people have subscribed to my newsletter in order to:

Discover the Trends

See the full spectrum of dangers and opportunities in a future of endless possibilities.

Discover the Tools

Locate the tools and resources you need to create a better future, better business and better you.

Discover the People

Identify the major change agents creating the future. Hear their dreams and their fears.

Discover Yourself

Get inspired. Give birth to your own ideas. Create the future. Live long and prosper.


Sign up for my weekly newsletter.


Ethos: “Technology is the How, not the Why or What. So you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.” — Nikola Danaylov

Copyright © 2009-2021 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy