Chapter 11: The AI Story

August 2, 2021 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/1098721606-singularity1on1-rewriting-the-human-story-chapter-11.mp3


ReWriting the Human Story: How Our Story Determines Our Future

an alternative thought experiment by Nikola Danaylov

 

Chapter 11: The AI Story

Computer Science is no more about computers than astronomy is about telescopes. Edsger Dijkstra

When looms weave by themselves, man’s slavery will end. Aristotle

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Vernor Vinge, 1993

Today we are entirely dependent on machines. So much so that, if we were to turn off the machines invented since the Industrial Revolution, billions of people would die and civilization would collapse. Ours is, therefore, already a civilization of machines and technology, because they have become indispensable. The question is: what is the outcome of that process? Is it freedom and transcendence, or slavery and extinction?

Our present situation is no surprise, for it was in the relatively low-tech 19th century that Samuel Butler wrote Darwin among the Machines. There he combined his observations of the rapid technological progress of the Industrial Revolution with Darwin’s theory of evolution. That synthesis led Butler to conclude that intelligent machines were likely to be the next step in evolution:

…it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

Samuel Butler developed his ideas further in Erewhon, published in 1872:

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusk has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.

As with Samuel Butler, the source of Ted Kaczynski’s technophobia was his fear that:

… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. The Unabomber Manifesto

As noted at the beginning of this chapter, humanity has already reached the machine dependence that Kaczynski was worried about. Contemporary experts may disagree on when artificial intelligence will equal human intelligence, but most believe that in time it likely will. And there is no reason to think AI will stop there. What happens next depends on both the human story and the AI story.

For example, if AI is created in a corporate lab it will likely be commercialized. If AI is created in a military lab it will likely be militarized. If AI is created in an Open Source community it will likely be cooperative and collaborative. And if it is created in someone’s garage it will likely reflect the story of that particular person or people. So, the context within which AI is created will shape its own origin story and that story will define the way it treats humanity.

Thus a military AI will likely treat humans as allies and enemies. A commercial AI will likely treat humans as customers and products. An Open Source AI might treat humans as parents, coders, friends, or allies. [Given current funding trends the first two types seem the most likely.] So the most crucial thing humanity will do when creating AI is narrate the AI origin story. Because, just as with us, by describing how it came into being, what it is here for, and what its purpose and proper place in the universe are, its story will determine its future. If we get the AI story right, we have a chance to coexist peacefully. But if we get it wrong, that could mean a Gigawar of planetary proportions, even extinction.

For example, what if, like us, AI ends up with a story of being the pinnacle of evolution, the smartest species in the universe, and, ultimately, God?

This “AI-ism” would be to the AIs very much what humanism is to us humans. Would it be surprising if AI used such a story to justify enslaving and killing billions of humans and destroying the biosphere?! Just as we use ours to justify killing 73 billion animals and 1.3 trillion aquatic organisms every year. Because, as Zora Neale Hurston pointed out in Tell My Horse:

Gods always behave like the people who make them.

That is why it is crucial that humanity transcend humanism and embrace post-humanism, post-anthropomorphism, post-exclusivism, and post-dualism. Because one speciesist story should not be replaced by another. Not even with one about the primacy of superintelligent machines, transhumans, posthumans, aliens, or uplifted animals. To be a true revolution, the AI revolution must change our story structure and abolish its current hierarchy. Otherwise, oppression will persist, suffering will increase, and the only change will be who is the oppressor and who is the oppressed.

In other words, the dangers posed by AI originate in the same place as the dangers posed by humanity: our story. If, like the human story, the AI story ends up as one of uniqueness, exclusivity, progress, supremacy, dominance, alienation, teleology, manifest destiny, and godhood, then we can expect a similarly destructive impact. Therefore, to minimize suffering and improve our own chances of survival during the turbulent 21st century, both humanity and AI must embrace a new type of story. One that is decentralized, non-singular, non-hierarchical, non-speciesist, non-dualistic, and non-exclusive. Because a multiplicity is ethically better than a singularity. And because it is safer too.

Filed Under: Podcasts, ReWriting the Human Story Tagged With: Artificial Intelligence, singularity, Technological Singularity

Johan Steyn Interviews Nikola Danaylov on Artificial Intelligence

July 18, 2020 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/860317633-singularity1on1-nikola-danaylov-johan-steyn.mp3


Last month I did an interview for Johan Steyn. It was a great 45-minute conversation in which we covered a variety of topics, such as: the definition of the singularity; whether we are making progress towards Artificial General Intelligence (AGI); open vs closed systems; the importance of consciousness; my Amazon bestseller Conversations with the Future; how I started blogging and podcasting; the process of preparing for each interview that I do; and ReWriting the Human Story: How Our Story Determines Our Future.

I enjoyed talking to Johan and I believe he has created an interesting podcast with a number of great episodes that are very much worth watching. Furthermore, thanks to him I have already interviewed one fantastic guest and booked a second upcoming Singularity.FM interview with another. So check out Johan Steyn’s website and subscribe to Johan’s YouTube channel.

Filed Under: Podcasts Tagged With: Nikola Danaylov, singularity

Prof. Massimo Pigliucci: Accompany science and technology with a good dose of philosophy

May 2, 2020 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/810780325-singularity1on1-massimo-pigliucci.mp3


I have previously interviewed a few fantastic scientists and philosophers, but rare are those strange birds who manage to combine deep academic training with the living ethos of those separate disciplines. Prof. Massimo Pigliucci is one of those very rare and strange people. He has three Ph.D.s – in Genetics, Evolutionary Biology, and Philosophy – and is the author of 165 technical papers in both science and philosophy, as well as a number of books on Stoic philosophy, including the bestselling How to Be A Stoic: Using Ancient Philosophy to Live a Modern Life.

During this 80 min interview with Massimo Pigliucci, we cover a variety of interesting topics such as: why Massimo is first and foremost a philosopher and not a scientist; the midlife crisis that pushed him to switch careers; stoicism, [virtue] ethics and becoming a better person; moral relativism vs moral realism; the meaning of being human; what are the biggest issues humanity is facing today; why technology is not enough; consciousness, mind uploading and the technological singularity; why technology is the how not the why or what; teleology, transhumanism and Ray Kurzweil’s six epochs of the singularity; scientism and the philosophy of the Big Bang Theory.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Massimo Pigliucci?

Prof. Pigliucci has a Ph.D. in Evolutionary Biology from the University of Connecticut and a Ph.D. in Philosophy from the University of Tennessee. He currently is the K.D. Irani Professor of Philosophy at the City College of New York. His research interests include the philosophy of science, the relationship between science and philosophy, the nature of pseudoscience, and the practical philosophy of Stoicism.

Prof. Pigliucci has been elected fellow of the American Association for the Advancement of Science “for fundamental studies of genotype by environmental interactions and for public defense of evolutionary biology from pseudoscientific attack.”

In the area of public outreach, Prof. Pigliucci has published in national and international outlets such as the New York Times, Washington Post, and The Wall Street Journal, among others. He is a Fellow of the Committee for Skeptical Inquiry and a Contributing Editor to Skeptical Inquirer. He blogs on practical philosophy at Patreon and Medium.

At last count, Prof. Pigliucci has published 165 technical papers in science and philosophy. He is also the author or editor of 13 books, including the bestselling How to Be A Stoic: Using Ancient Philosophy to Live a Modern Life (Basic Books). Other titles include Nonsense on Stilts: How to Tell Science from Bunk (University of Chicago Press), and How to Live a Good Life: A Guide to Choosing Your Personal Philosophy (co-edited with Skye Cleary and Daniel Kaufman, Penguin/Random House).

 

Filed Under: Podcasts Tagged With: mind uploading, singularity, Technology

Nikola Danaylov on the Dissenter: The Singularity, Futurism, and Humanity

January 31, 2019 by Socrates

A few weeks ago I got interviewed by Ricardo Lopes for the Dissenter. The interview just came out and I thought I’d share it with you to enjoy or critique. Here is Ricardo’s original description:

#131 Nikola Danaylov: The Singularity, Doing Futurism, and the Human Element

In this episode, we talk about what is meant by the term “Singularity”, and its technological, social, economic, and scientific implications. We consider the technological and human aspects of the equation of economic and technologic growth, and human and moral progress. We also deal with more specific issues, like transhumanism, the ethics of enhancement, AI, and Big Data.

Time Links:

00:58 What is the Singularity?

02:51 Exponential growth

04:42 What would it mean to have reached the Singularity?

10:29 The trouble with futurism

15:35 The technological and the human aspects

20:20 What we get from technology depends on how we use it

23:16 Transhumanism, enhancement, and ethics

26:26 AI and economics

31:53 Eliminating boring tasks, and living more meaningful lives

36:37 Big Data, and the risk of exploitation

43:04 The example of self-driving cars

51:32 The human element in the equation

52:20 Follow Mr. Danaylov’s work!

Filed Under: Profiles, Video Tagged With: Futurism, Nikola Danaylov, singularity

Stuart Russell on Artificial Intelligence: What if we succeed?

September 13, 2018 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/499489077-singularity1on1-stuart-russell.mp3


Stuart Russell is a professor of Computer Science at UC Berkeley as well as co-author of the most popular textbook in the field – Artificial Intelligence: A Modern Approach. Given that it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries, I can hardly think of anyone more qualified or more appropriate to discuss issues related to AI or the technological singularity. Unfortunately, we had problems with our internet connection and, consequently, the video recording is among the worst I have ever published. Thus this episode may be a good candidate to listen to as an audio file only. However, given how prominent Prof. Russell is and how generous he was with his time, I thought it would be a sad loss if I didn’t publish the video also, poor quality as it is.

During our 90 min conversation with Stuart Russell we cover a variety of interesting topics such as: his love for physics and computer science; human preferences, expected utility and decision making; why his textbook on AI was “unreasonably successful”; his dream that AI will contribute to a Golden Age of Humanity; aligning human and AI objectives; the proper definition of Artificial Intelligence; Machine Learning vs Deep Learning; debugging and the King Midas problem; the control problem and Russell’s 3 Laws; provably safe mathematical systems and the nature of intelligence; the technological singularity; Artificial General Intelligence and consciousness…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Stuart Russell?

Stuart Russell is a professor (and formerly chair) of Electrical Engineering and Computer Sciences at University of California at Berkeley. His book Artificial Intelligence: A Modern Approach (with Peter Norvig) is the standard text in AI; it has been translated into 13 languages and is used in more than 1,300 universities in 118 countries. His research covers a wide range of topics in artificial intelligence including machine learning, probabilistic reasoning, knowledge representation, planning, real-time decision making, multitarget tracking, computer vision, computational physiology, global seismic monitoring, and philosophical foundations.

He also works for the United Nations, developing a new global seismic monitoring system for the nuclear-test-ban treaty. His current concerns include the threat of autonomous weapons and the long-term future of artificial intelligence and its relation to humanity.

Filed Under: Podcasts Tagged With: Artificial Intelligence, singularity

Physicist Max Tegmark on Life 3.0: What We Do Makes a Difference

June 15, 2018 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/458425038-singularity1on1-max-tegmark.mp3


Some people say that renowned MIT physicist Max Tegmark is totally bonkers and refer to him as “Mad Max”. But, to quote Lewis Carroll from Alice in Wonderland, “All the best people are.” Furthermore, I am not sure if Tegmark is “mad” but I am pretty sure he is very much “fun” because I had a total blast interviewing him on my Singularity.FM podcast.

During our 90 min conversation with Max Tegmark we cover a variety of interesting topics such as: curiosity and being a scientist; reality and math; intelligence, AI and AGI; the technological singularity; Life 3.0: Being Human in the Age of Artificial Intelligence; populating the universe; Frank J. Tipler’s Omega Point; the Age of Em and the inevitability of our future; why both Max and I went vegan; the Future of Life Institute; human stupidity and nuclear war; technological unemployment.

My favorite quote that I will take away from this conversation with Max Tegmark is:

It is not our universe giving meaning to us, it is us giving meaning to our universe.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Max Tegmark?

 

Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.

Max Tegmark is an MIT professor who loves thinking about life’s big questions. He’s written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: “In my spare time, I’m president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially.”

 

Previous Singularity.FM episodes mentioned during this interview:

Robin Hanson (part 2): Social Science or Extremist Politics in Disguise?!

Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable!

Skype co-founder Jaan Tallinn on AI and the Singularity

Lawrence Krauss on Singularity.FM: Keep on Asking Questions

Filed Under: Featured Podcasts, Podcasts Tagged With: Artificial Intelligence, singularity, Technological Singularity

Entrepreneurial Activist Joi Ito on Whiplash and the MIT Media Lab

May 5, 2018 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/439757550-singularity1on1-joi-ito.mp3


Joi Ito is just one of those people who simply don’t fit a mold. Any mold. He is an entrepreneur who is an activist. He is an academic without a degree. He is a leader who follows. He is a teacher who listens. And an interlocutor who wants you to disagree with him. Overall, I hate to say it, but I must own up to my own biases by admitting that this was probably the most fun interview I have ever done. Ever. So either I let all my personal biases run free on this one, or it was truly a gem of an interview. You be the judge of which it was, and please don’t hesitate to let me know.

During our 90 min conversation with Joi Ito we cover a variety of interesting topics such as: being an entrepreneurial activist; becoming head of the MIT Media Lab without an undergraduate degree; the impact of Kenichi Fukui, Timothy Leary and his other mentors; my transhumanist manifesto; my definitions of the singularity and transhumanism; why technology is not enough; the dangers of being exponential; self-awareness and meditation; complexity and systems thinking; our global prisoner’s dilemma; what the MIT Media Lab is all about; the importance of ethics, art and media; Whiplash and his PhD thesis on change; learning over education; why technology is the future of politics…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Joi Ito?


Joi Ito has been recognized for his work as an activist, entrepreneur, venture capitalist and advocate of emergent democracy, privacy, and Internet freedom. As director of the MIT Media Lab and a Professor of the Practice in Media Arts and Sciences, he is currently exploring how radical new approaches to science and technology can transform society in substantial and positive ways. Ito is listed among TIME Magazine’s “Cyber-Elite” and was named one of the “Global Leaders for Tomorrow” by the World Economic Forum. He is co-author with Jeff Howe of Whiplash: How to Survive Our Faster Future and writes a monthly column for WIRED.

Filed Under: Podcasts Tagged With: singularity

Is the Singularity Steering us Toward the Greatest Inequality in History?

December 13, 2017 by Jared Leidich

The basic idea of the technological singularity is simple: the rate at which technology progresses increases as time moves forward. If we believe the technological singularity is happening, then we as a species should inspect its impact on human equality. This phenomenon is pushing our human ship toward a waterfall of technological innovation. Is it also pushing open the gaps between people, classes and whole societies?

The basic premise of the singularity can be seen using a graph of major technological advancements throughout the history of our species on Earth. Check out the now famous graph below, most often credited to Theodore Modis, showing major turning points in “canonical milestones.” On the x-axis is the amount of time that has gone by since an event has occurred, and on the y-axis is the amount of time separating that event from the one before it.

The thing that is really shocking about this graph is somewhat hiding in plain sight: it’s plotted on a log-log scale, meaning both axes are logarithmic or increasing by factors of ten at each interval. So, what looks like a nearly straight line is a shockingly explosive exponential progression. If plotted on a standard graph, it would look like virtually everything important occurred within the last fraction of the graph after a relative eternity of almost nothing happening.

The “canonical milestones” Modis described are the critical learning points for all humankind. However, we can apply this same philosophy to how technological advancements affect each human, on an individual level.

Some simple math: exponential progressions change at a changing rate. Depending on the progression (and certainly in the case of human knowledge) this tends to lead toward explosive growth at some point. A simple exponential curve that represents this phenomenon is a doubling function. For example, say you had a pond with lilies in it and the number of lilies doubled every day, regardless of the boundaries of the pond or any nutrient needs. On the first day there would be one lily. On the second day there would be 2 lilies. After a week there would be 64 lilies. After 10 days the pond would be full. After 54 days the entire earth (that’s 197 million square miles) would be covered in lilies.
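For the numerically inclined, here is a minimal Python sketch of the lily-pond arithmetic above. It assumes, per the example, that the pond holds 2^9 = 512 lilies, so that it fills on day 10:

```python
# Lily-pond doubling: one lily on day 1, doubling every day thereafter.
lilies = 1
for day in range(1, 11):
    print(f"Day {day}: {lilies} lilies")
    lilies *= 2
# Day 7 prints 64 lilies; day 10 prints 512 -- the full pond.
# By day 54 the count reaches 2**53, roughly nine quadrillion lilies.
```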

Similarly, technology appears through many verifiable metrics to be on a doubling schedule; the amount of knowledge or capability in a field doubles in a fixed and repeatable amount of time. In the most famous case of Moore’s law, the price performance of a computer chip doubles about every two years.

If we graph a simple doubling function it looks like the one below. It’s explosive. This curve would be the actual shape of any one of several technological correlations that have been studied, minus the nicks and bumps. Without special scaling it’s clear that the line looks unchanging until right at the end where it breaks upwards.

If another curve is added to this graph with just a small difference in starting point, the disparity created by small differences after an explosive growth surge becomes apparent. The blue curve starts with the number 1 and shows the resulting values as it is doubled 100 times (1, 2, 4, 8, etc.). The orange curve starts with 2 and is doubled in the same fashion (2, 4, 8, 16, etc.). After 100 doublings, the difference between the two final numbers (blue vs. orange) is a mind-boggling 1.27 x 10^30. That’s more than a million million millions. Tiny changes at the beginning of an explosive progression equate to gargantuan differences at the explosive part of the progression.
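A few lines of Python confirm that arithmetic (the "blue" and "orange" names simply echo the curve colors described above):

```python
# Two doubling curves that differ only in their starting value.
blue = 1 * 2**100    # starts at 1, doubled 100 times
orange = 2 * 2**100  # starts at 2, doubled 100 times
gap = orange - blue
print(f"{gap:.2e}")  # 1.27e+30 -- the mind-boggling difference in the text
```

Note that the gap equals the entire blue curve itself: an initial difference of 1, doubled 100 times, is 2^100 ≈ 1.27 x 10^30.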

If technology and human knowledge are on an exponential growth cycle resembling a doubling function, and we are in the explosive part of our growth cycle, then tiny variations in human equality now are liable to turn into big variations in human equality soon.

You, the reader, are probably decades ahead of most of the people in the world in your personal technological progression, but no doubt well behind some people too.

For me personally, this resonates. I can feel my advantage growing over the underprivileged, while the gap between me and the advantaged grows too. I have access to the internet all the time, carrying all human knowledge in my pocket. I soak in information at a voracious rate, literally double what I used to, as I listen to podcasts about breaking news at 2X playback speed. At the same time, however, I feel overwhelmed. Because for everything that I learn, something new happens that I can’t grasp or access. The tech elite is amassing databases about me that I don’t have access to. As they gather data, the algorithms get smarter, collect data faster, and organize it better. I fall behind.

As a specific example, I’ll put some concrete (albeit hypothetical) numbers to this problem. If we assume the most advanced technologists on the planet are 200 years ahead of the most primitive, and spread that difference out amongst all the nearly 8 billion people in the world, the technological gap between any person and their closest technological peer would be very small (about 0.8 seconds, to be specific). If it is assumed that a person’s technological state is doubling every two years, like Moore’s law, then in one lifetime of 80 years (doubling 40 times) the technological difference between those two people will grow to equal more than 1,000 of today’s years.
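The back-of-the-envelope version of that calculation, in Python (the 200-year spread, the 8 billion people, and the two-year doubling period are the hypothetical numbers from the paragraph above):

```python
# Hypothetical gap between closest technological peers, then 40 doublings.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
gap_now = 200 * SECONDS_PER_YEAR / 8e9  # ~0.79 seconds between closest peers
gap_after = gap_now * 2**40             # 80-year life, one doubling per 2 years
print(gap_after / SECONDS_PER_YEAR)     # ~27,500 years -- well over 1,000
```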

Like the dots on a polka-dot balloon spreading away from each other as it inflates, exponential growth should cause the gaps between all of us to grow. The difference between now and the past is that we are in the explosive part of the progression where one could theorize a “make or break” moment is coming for individuals; the math seems to be telling us that most of the world isn’t coming into the black hole with the techies and the machines they’re creating. Am I going to make it?

It’s undeniable that people in the developing world are being exposed to technological advancements later than those in the developed world. What isn’t intuitive, but may be markedly more impactful, is that those gaps in technological adoption may be liable to explode in size in the coming years if we don’t act. People in sub-Saharan Africa are a decade behind the developed world in their ubiquitous adoption of internet-enabled smartphones. What are they not learning and knowing now that will slow their adoption of information in the future?

We as a species need to act. Explosive growth explosively amplifies disparities. Of course, no one knows what is going to happen in the coming years. Whatever happens though, we should work to bring technology to those who don’t have it. We should work to keep information free. We should work to keep our brothers and sisters on the boat.

About the Author:

Jared Leidich is an aerospace engineer and author. He has flown on NASA’s microgravity aircraft, built and tested space suits, and sent parts to orbit and the stratosphere. He led the suit team that took Alan Eustace on the highest balloon flight and skydive of all time, and wrote the preeminent account of that project, The Wild Black Yonder. He works for World View Enterprises developing their stratospheric descent systems.

Filed Under: Op Ed Tagged With: singularity

Make or Break the Singularity 1on1: Crowdfunding Campaign is Live

August 25, 2016 by Socrates

 

Hi,

I wanted to speak to you 1on1 today. So this is not a message for everybody but just for you. Yes you – my audience, my podcast listener, my YouTube viewer, my donor and moral supporter, my fellow geek and friend.

And my reason for doing what I do.

I want to share 3 things with you.

But before I do that let me start by saying: “Thank you for your support!” I have sacrificed a lot for the past 6 years. But this effort has not been in vain. So far I have published about 1,000 articles and produced 200 podcast episodes. Singularity 1on1 has had 4 million views on YouTube and iTunes and has been republished by major media outlets such as the BBC, TVJapan and many others. Today I reach well over 100,000 people per month and some have called me the “Larry King of the Singularity.” So once again thank you for your support because I couldn’t do what I do without you.

Now, the 3 things I want to share with you are: 1. Why am I doing a crowdfunding campaign? 2. What do you get at different funding levels of my podcast? And 3. How can you help?

So, first of all, the main reason why I’m crowdfunding is the simple fact that, as the podcast has become more and more successful, it has become much more expensive to produce and sustain. Success and good quality come at a price. And while it is very important to me that Singularity 1on1 is, and will always remain, both free of charge and independent of any commercial agenda, it is unfortunately not free of cost or independent of material resources. The reality is that good things cost money to produce, and my podcast is no different. And those costs have risen to the point where I simply cannot sustain it on my own any more.

Secondly, what are my funding goals and what do you get for each of them? Well, depending on how much money we raise, I can provide things in different formats and at different levels of quality. Now, if we fail to meet the minimum goal of $50,000 there is a good chance I will simply have to stop blogging and podcasting and seek another way to make a living. But with $50,000 I will be able to focus exclusively on audio podcasting. I will have to cut out all non-essential costs and will stop traveling for expensive in-person, high-production interviews, and this podcast will become a true-to-format, audio-only endeavor. If, however, we raise $100,000 I will be able to continue with my current format, where I can travel and produce high-quality, high-definition, in-person interviews. If we reach $200,000 I will be able to raise the bar even further, not only by going to a 4K, 3-camera professional setup but also by releasing all of my past episodes – i.e. 250 hours of video – under a Creative Commons license. Anyone will then be able not only to watch but also to use, edit, mix and remix all of my content for free and without any restrictions. Lastly, $300,000 will guarantee that I can interview the future for the next several years and travel anywhere to interview anyone, at any time and any place in the world. I will also release all future episodes under a Creative Commons license for as long as the podcast exists. Because the future belongs to us all.

The 3rd and final question is: How can you help? If you are already on my Konoz profile page then don’t delay and make a donation now. If you are watching this video elsewhere then just type InterviewTheFuture.com and you will land on my fundraising page. Once you have donated, then, you can help even more by spreading the word about it. So, if you have an email list, email your list. If you have a Facebook or Twitter account share and tell your followers to come and donate also. If you need a quick way to show what Singularity 1on1 is all about then share my highlights video. If you need social proof – share my testimonials video. And, again, to help me keep producing more Singularity 1on1 episodes please donate what you can now: https://konoz.io/nikola.danaylov

Help me interview the future. So that you can find your mission and make your dent in the universe.

Thanks for listening. Thanks for letting me do what I love. And thank you very much for your support!

Filed Under: Articles Tagged With: singularity, Singularity 1on1, singularity podcast, singularity weblog

Nature Is Not Your Friend

May 17, 2016 by David Filmore

It’s the start of the third act and explosions tear through the city as the final battle rages with unrelenting mayhem. CGI robots and genetic monsters rampage through buildings, hunting down the short-sighted humans that dared to create them. If only the scientists had listened to those wholesome everyday folks in the first act who pleaded for reason, and begged them not to meddle with the forces of nature. Who will save the world from these ungodly bloodthirsty abominations? Probably that badass guy who plays by his own rules, has a score to settle, and has nothing but contempt for “eggheads.”

We’ve all seen that same movie a million times. That tired story doesn’t just make movies look bad, it makes science look bad too. It’s an anti-science viewpoint that encourages people to fear the future and be wary of technology. This common narrative isn’t just found in movies, it’s a prevalent belief that is left over from the industrial revolution. Over a short period of time, people went from quiet farm life to living in cities with blaring traffic, and working in factories with enormous and terrifying machinery. The idea that nature is good and safe, and that technology is bad and dangerous, was deeply ingrained in our collective psyches and is still very much with us today.

You see it anytime someone suggests that it is somehow more virtuous to “unplug” and walk barefoot along the beach, than it is to watch a movie, play a video game, or work on your computer. Some of the most valuable things I’ve ever learned have come from watching documentaries and researching topics online. I love hiking as much as the next guy, but staring at a tree gets old pretty fast. People have this notion that nature is healing, and that technology, while useful, will probably end up giving you cancer sometime down the line.

This general fear that people have, that the future will be full of really powerful machines that they will never be able to understand, is the main reason why they are so wary of The Singularity. Nature seems like a safer bet. You can look at a tree and be perfectly okay with not fully understanding how it works. Because even on its best day, you know a tree isn’t going to band together with all the other trees and have a decent chance of taking over the world and enslaving humans.

But the real threat to humans isn’t from technology, it’s from nature. Our genomes are riddled with errors and predispositions to countless diseases. Most creatures on this planet see you as nothing but a lovely source of protein for them to eat. Mosquito-borne diseases alone gravely sicken 700 million people a year. Not to mention all the viruses, bacteria, parasites, floods, earthquakes, tornadoes, you name it, that want a piece of you. We should be far more scared of nature than technology.

The only reason why we have been successful in extending human life expectancy is because of the gains we’ve made in technology. If we stripped every form of technology from our lives and all went to live in the forest, our population numbers would drop like a rock. Not because we lacked the necessary survival skills, but because the human body just didn’t evolve to live very long. I’ve lost count of how many times antibiotics have saved my life, and it’s the same for each of us. Sure, we have pollution, plastic, radiation, climate change, and mountains of garbage, but if technology and modern life were so hazardous to humans we would be living shorter lives not longer.

Technology isn’t an intrusion upon an otherwise pristine Garden of Eden; it is the only reason we as a species are alive today. And it isn’t new either: we’ve been using technology since the first caveman prevented himself from getting sick by cooking food over a fire. That is the narrative we should be focused on as we discuss how to deal with the challenges of The Technological Singularity. People need to be reminded that rejecting science in favor of nostalgia for “the good old days” won’t keep them safe. There are over 7 billion people alive on Earth today because of the health and sanitation systems we’ve put in place. History shows us that the more we integrate technology into our lives, the safer we are and the longer we live. It’s as simple as that.

But if you ask any random person on the street about artificial intelligence, robots, or nanotechnology, chances are the first word out of their mouths will be “Skynet”. The dastardly machine that unleashed killer robots to extinguish the human race in the Terminator movies. Mention “genetics”, and you’re likely to hear a response involving killer dinosaurs resurrected from DNA trapped in amber, or a mutant plague that spun out of control and created a zombie apocalypse.

Now, no one loves blockbuster movies more than me! But the movies we need to be watching are the ones where the products of science aren’t seen as the enemy, but are the tools that lead to humanity’s salvation from poverty, disease, and death.

Nature programmed each of us with an expiration date built into our DNA, and stocked our planet with hostile weather, and hungry creatures with a taste for humans. Understanding the urgency for humans to get over their bias for all things “natural”, and to meld with technology as soon as possible, will be the difference between The Singularity being a utopia and just another disaster movie. It’s the only chance we have to write the happy ending we deserve. The one where science saves us from nature.

 

About the Author:

David Filmore is a screenwriter, producer, film director, and author. His latest book is Nanobots for Dinner: Preparing for the Technological Singularity.

Filed Under: Op Ed Tagged With: singularity, Technological Singularity, Technology

Skype co-founder Jaan Tallinn on AI and the Singularity

April 17, 2016 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/259553886-singularity1on1-jaan-tallinn.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Jaan Tallinn, co-founder of Skype and Kazaa, got so famous in his homeland of Estonia that people named the biggest city after him. Well, that latter part may not be exactly true, but there are few people today who have not used, or at least heard of, Skype or Kazaa. What is much less known, however, is that for the past 10 years Jaan Tallinn has spent a lot of time and money as an evangelist about the dangers of existential risks, as well as a generous financial supporter of organizations doing research in the field. And so I was very happy to do an interview with Tallinn.

During our 75 min discussion with Jaan Tallinn we cover a variety of interesting topics such as: a few quirky ways in which he sometimes introduces himself; the conspiracy of physicists to save the world; how and why he got interested in AI and the singularity; the top existential risks we are facing today; quantifying the downsides of artificial intelligence and all-out nuclear war; Noam Chomsky‘s and Marvin Minsky‘s doubts that we are making progress in AGI; how DeepMind’s AlphaGo is different from both Watson and Deep Blue; my recurring problems with Skype for podcasting; soft vs hard take-off scenarios and our chances of surviving the technological singularity; the importance of philosophy…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Jaan Tallinn?

Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for Existential Risk, Future of Life Institute, and philanthropically supports other existential risk research organizations. He is also a partner at Ambient Sound Investments, an active angel investor, and has served on the Estonian President’s Academic Advisory Board.

Filed Under: Podcasts Tagged With: Artificial Intelligence, singularity, Technological Singularity

The Singularity Must Be Decentralized

February 18, 2016 by Andy E. Williams


The research community is beginning to understand that motivations are not a human “artifact” of consciousness, but part of the essential glue that binds consciousness together. Without motivations we have nothing holding us to this vessel, ensuring that we continue to eat, pay our rent, and do the other things necessary for our survival. Conscious machines will, for this reason, have motivations as well. Otherwise they simply wouldn’t function. This is an important point because talk of the singularity often brings up visions of a single integrated “machine” that will inevitably enslave humanity. A better question is:

“Will AI be used to gain immense advantage for a single party (whether that party is the AI itself or the human that controls it), or will AI be used to maximize benefit for us all?”

Even if the AIs have interfaces that allow them to share information more rapidly than humans can through reading or watching media, separate AIs will have separate motivations from a single centralized AI. Given that a signature of consciousness is motivation, any consciousness will obviously be motivated to secure all the resources it needs to ensure its survival. In some cases, the most efficient way to secure resources is sharing. In other cases, it’s through competition. AIs might share resources, but they might also compete.

When and if an artificial consciousness is created, there’ll almost certainly be multiple instances of it. Because a consciousness cannot exist without motivation, and because the motivations of each consciousness differ, requiring what might be great effort to get on the same page, it may very well be that multiple consciousnesses cannot “merge” in a way that would become truly threatening to humans unless one subsumes all others. Anything else would merely be a co-location of minds with different objectives, negotiating a sharing of resources.

One AI with far fewer resources than another would in fact probably fear the far more powerful AI might just erase it and take over its resources. Think of your “several generations out of date” home computer trying to hold its own against Big Blue. Rather than us humans needing to fear AI, an AI might more likely need to be afraid of humans not protecting it against other AIs.

Centralization, rather than technological advance, is the real danger for ANY conscious entity. Yet when you consider the competitive advantage technology gives, the near-infinite rate of change of the technological singularity introduces the possibility of a future in which the technology arms race concentrates power and resources to a degree never seen before. Could it put a few into positions of unimaginable power from which they may never be unseated? If so, there will be nothing stopping those few from becoming unimaginable despots to whom the rest of humanity are merely disposable commodities whose suffering means nothing.

Think of what you would do if you had infinite power over everyone and there were no consequences for your actions. Think of what would happen if you needed a kidney and that child over there had one that would fit just fine. Think of what would happen if some man with unimaginable power wanted that woman, or the next, or the next thousand. Think of what would happen if you wanted to buy something and you could just flip a switch and empty out the world’s bank accounts, then watch with casual detachment as millions fight like animals for food and water. Think of what would happen if that one man in control just happened to wake up one morning to the conclusion that there were several billion people on the earth too many.

The technological singularity, if it exists, is a kind of Armageddon.

In my upcoming book “The Technology Gravity Well” I delve into these and other issues, including how a new breed of massively collaborative software could usher in the singularity in the next 5 years. This may be one of the most important books you come across this year. Read more here:

http://igg.me/at/technology-gravity-well

 

About the Author:

Andy E. Williams is Executive Director of the Nobeah Foundation, a not-for-profit organization focusing on raising funds to distribute technology with the potential for transformative social impact. Andy has an undergraduate degree in physics from the University of Toronto. His graduate studies centered on quantum effects in nano-devices.

Filed Under: Op Ed Tagged With: singularity
