
Technological Singularity

Chapter 11: The AI Story

August 2, 2021 by Socrates

https://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/1098721606-singularity1on1-rewriting-the-human-story-chapter-11.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

ReWriting the Human Story: How Our Story Determines Our Future

an alternative thought experiment by Nikola Danaylov

 

Chapter 11: The AI Story

Computer Science is no more about computers than astronomy is about telescopes. Edsger Dijkstra

When looms weave by themselves, man’s slavery will end. Aristotle

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Vernor Vinge, 1993

Today we are entirely dependent on machines. So much so that, if we were to turn off the machines invented since the Industrial Revolution, billions of people would die and civilization would collapse. Ours is therefore already a civilization of machines and technology, because they have become indispensable. The question is: what is the outcome of that process? Is it freedom and transcendence or slavery and extinction?

Our present situation is no surprise, for it was in the relatively low-tech 19th century that Samuel Butler wrote Darwin among the Machines. There he combined his observations of the rapid technological progress of the Industrial Revolution with Darwin’s theory of evolution. That synthesis led Butler to conclude that intelligent machines are likely to be the next step in evolution:

…it appears to us that we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

Samuel Butler developed his ideas further in Erewhon, published in 1872:

There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusk has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.

Like Samuel Butler, Ted Kaczynski rooted his technophobia in the fear that:

… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide. The Unabomber Manifesto

As noted at the beginning of this chapter, humanity has already reached the machine dependence that Kaczynski was worried about. Contemporary experts may disagree on when artificial intelligence will equal human intelligence, but most believe that in time it likely will. And there is no reason to think AI will stop there. What happens next depends on both the human story and the AI story.

For example, if AI is created in a corporate lab it will likely be commercialized. If AI is created in a military lab it will likely be militarized. If AI is created in an Open Source community it will likely be cooperative and collaborative. And if it is created in someone’s garage it will likely reflect the story of that particular person or people. So, the context within which AI is created will shape its own origin story and that story will define the way it treats humanity.

Thus a military AI will likely treat humans as allies and enemies. A commercial AI will likely treat humans as customers and products. An Open Source AI might treat humans as parents, coders, friends, or allies. [Given current funding trends the first two types seem the most likely.] So the most crucial thing humanity will do when creating AI is narrate the AI origin story. Because, just as with us, the story of how it came into being, what it is here for, and what its purpose and proper place in the universe are will determine its future. If we get the AI story right, we have a chance to coexist peacefully. But if we get it wrong, that could mean a Gigawar of planetary proportions, even extinction.

For example, what if, like us, AI ends up with a story of being the pinnacle of evolution, the smartest species in the universe, and, ultimately, God?

This “AI-ism” will be to AIs very much what humanism is to us humans. Would it be surprising if they used this AI story to justify enslaving and killing billions of humans and destroying the biosphere?! Just as we use ours to justify killing 73 billion animals and 1.3 trillion aquatic organisms every year. Because, as Zora Neale Hurston pointed out in Tell My Horse:

Gods always behave like the people who make them.

That is why it is crucial that humanity transcend humanism to embrace post-humanism, post-anthropomorphism, post-exclusivism, and post-dualism. Because one speciesist story should not be replaced by another. Not even with one about the primacy of superintelligent machines, transhumans, posthumans, aliens, or uplifted animals. To be a true revolution, the AI revolution must change our story structure and abolish its current hierarchy. Otherwise, oppression will persist, suffering will increase, and the only change will be who is the oppressor and who is the oppressed.

In other words, the dangers posed by AI originate in the same place as the dangers posed by humanity: our story. If, like the human story, the AI story ends up as one of uniqueness, exclusivity, progress, supremacy, dominance, alienation, teleology, manifest destiny, and godhood, then we can expect a similarly destructive impact. Therefore, to minimize suffering and improve our own chances of survival during the turbulent 21st century, both humanity and AI must embrace a new type of story. One that is decentralized, non-singular, non-hierarchical, non-speciesist, non-dualistic, and non-exclusive. Because a multiplicity is ethically better than a singularity. And because it is safer too.

Filed Under: Podcasts, ReWriting the Human Story Tagged With: AI, AI Story, Artificial Intelligence, singularity, Singularity Story, Technological Singularity

Physicist Max Tegmark on Life 3.0: What We Do Makes a Difference

June 15, 2018 by Socrates

https://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/458425038-singularity1on1-max-tegmark.mp3


Some people say that renowned MIT physicist Max Tegmark is totally bonkers and refer to him as “Mad Max”. But, to quote Alice in Wonderland, “All the best people are.” Furthermore, I am not sure that Tegmark is “mad”, but I am pretty sure he is very much “fun”, because I had a total blast interviewing him on my Singularity.FM podcast.

During our 90 min conversation with Max Tegmark we cover a variety of interesting topics such as: curiosity and being a scientist; reality and math; intelligence, AI and AGI; the technological singularity; Life 3.0: Being Human in the Age of Artificial Intelligence; populating the universe; Frank J. Tipler’s Omega Point; the Age of Em and the inevitability of our future; why both Max and I went vegan; the Future of Life Institute; human stupidity and nuclear war; technological unemployment.

My favorite quote that I will take away from this conversation with Max Tegmark is:

It is not our universe giving meaning to us, it is us giving meaning to our universe.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Who is Max Tegmark?

 

Max Tegmark is driven by curiosity, both about how our universe works and about how we can use the science and technology we discover to help humanity flourish rather than flounder.

Max Tegmark is an MIT professor who loves thinking about life’s big questions. He’s written two popular books, Our Mathematical Universe: My Quest for the Ultimate Nature of Reality and the recently published Life 3.0: Being Human in the Age of Artificial Intelligence, as well as more than 200 nerdy technical papers on topics from cosmology to AI.

He writes: “In my spare time, I’m president of the Future of Life Institute, which aims to ensure that we develop not only technology but also the wisdom required to use it beneficially.”

 

Previous Singularity.FM episodes mentioned during this interview:

Robin Hanson (part 2): Social Science or Extremist Politics in Disguise?!

Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable!

Skype co-founder Jaan Tallinn on AI and the Singularity

Lawrence Krauss on Singularity.FM: Keep on Asking Questions

Filed Under: Featured Podcasts, Podcasts Tagged With: AI, Artificial Intelligence, Life 3.0, Max Tegmark, singularity, Technological Singularity, The Future of Life Institute

Nature Is Not Your Friend

May 17, 2016 by David Filmore

It’s the start of the third act and explosions tear through the city as the final battle rages with unrelenting mayhem. CGI robots and genetic monsters rampage through buildings, hunting down the short-sighted humans that dared to create them. If only the scientists had listened to those wholesome everyday folks in the first act who pleaded for reason and begged them not to meddle with the forces of nature. Who will save the world from these ungodly bloodthirsty abominations? Probably that badass guy who plays by his own rules, has a score to settle, and has nothing but contempt for “eggheads.”

We’ve all seen that same movie a million times. That tired story doesn’t just make movies look bad, it makes science look bad too. It’s an anti-science viewpoint that encourages people to fear the future and be wary of technology. This common narrative isn’t just found in movies, it’s a prevalent belief that is left over from the industrial revolution. Over a short period of time, people went from quiet farm life to living in cities with blaring traffic, and working in factories with enormous and terrifying machinery. The idea that nature is good and safe, and that technology is bad and dangerous, was deeply ingrained in our collective psyches and is still very much with us today.

You see it anytime someone suggests that it is somehow more virtuous to “unplug” and walk barefoot along the beach, than it is to watch a movie, play a video game, or work on your computer. Some of the most valuable things I’ve ever learned have come from watching documentaries and researching topics online. I love hiking as much as the next guy, but staring at a tree gets old pretty fast. People have this notion that nature is healing, and that technology, while useful, will probably end up giving you cancer sometime down the line.

This general fear that people have, that the future will be full of really powerful machines that they will never be able to understand, is the main reason why they are so wary of The Singularity. Nature seems like a safer bet. You can look at a tree and be perfectly okay with not fully understanding how it works. Because even on its best day, you know a tree isn’t going to band together with all the other trees and have a decent chance of taking over the world and enslaving humans.

But the real threat to humans isn’t from technology, it’s from nature. Our genomes are riddled with errors and predispositions to countless diseases. Most creatures on this planet see you as nothing but a lovely source of protein for them to eat. Mosquito-borne diseases alone gravely sicken 700 million people a year. Not to mention all the viruses, bacteria, parasites, floods, earthquakes, tornadoes, you name it, that want a piece of you. We should be far more scared of nature than technology.

The only reason we have been successful in extending human life expectancy is the gains we’ve made in technology. If we stripped every form of technology from our lives and all went to live in the forest, our population numbers would drop like a rock. Not because we lacked the necessary survival skills, but because the human body just didn’t evolve to live very long. I’ve lost count of how many times antibiotics have saved my life, and it’s the same for each of us. Sure, we have pollution, plastic, radiation, climate change, and mountains of garbage, but if technology and modern life were so hazardous to humans we would be living shorter lives, not longer.

Technology isn’t an intrusion upon an otherwise pristine Garden of Eden; it is the only reason we as a species are alive today. And it isn’t new either: we’ve been using technology since the first caveman prevented himself from getting sick by cooking food over a fire. That is the narrative we should be focused on as we discuss how to deal with the challenges of The Technological Singularity. People need to be reminded that rejecting science in favor of nostalgia for “the good old days” won’t keep them safe. There are over 7 billion people alive on Earth today because of the health and sanitation systems we’ve put in place. History proves to us that the more we integrate technology into our lives, the safer we are and the longer we live. It’s as simple as that.

But if you ask any random person on the street about artificial intelligence, robots, or nanotechnology, chances are the first word out of their mouths will be “Skynet” – the dastardly machine that unleashed killer robots to extinguish the human race in the Terminator movies. Mention “genetics”, and you’re likely to hear a response involving killer dinosaurs resurrected from DNA trapped in amber, or a mutant plague that spun out of control and created a zombie apocalypse.

Now, no one loves blockbuster movies more than me! But the movies we need to be watching are the ones where the products of science aren’t seen as the enemy, but are the tools that lead to humanity’s salvation from poverty, disease, and death.

Nature programmed each of us with an expiration date built into our DNA, and stocked our planet with hostile weather, and hungry creatures with a taste for humans. Understanding the urgency for humans to get over their bias for all things “natural”, and to meld with technology as soon as possible, will be the difference between The Singularity being a utopia and just another disaster movie. It’s the only chance we have to write the happy ending we deserve. The one where science saves us from nature.

 

About the Author:

David Filmore is a screenwriter, producer, film director, and author. His latest book is Nanobots for Dinner: Preparing for the Technological Singularity.

Filed Under: Op Ed Tagged With: nature, singularity, Technological Singularity, Technology

Skype co-founder Jaan Tallinn on AI and the Singularity

April 17, 2016 by Socrates

https://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/259553886-singularity1on1-jaan-tallinn.mp3


Jaan Tallinn, co-founder of Skype and Kazaa, got so famous in his homeland of Estonia that people named the biggest city after him. Well, that latter part may not be exactly true, but there are few people today who have not used, or at least heard of, Skype or Kazaa. What is much less known, however, is that for the past 10 years Jaan Tallinn has spent a lot of time and money as an evangelist for the dangers of existential risks and as a generous financial supporter of organizations doing research in the field. And so I was very happy to do an interview with Tallinn.

During our 75 min discussion with Jaan Tallinn we cover a variety of interesting topics such as: a few quirky ways he sometimes introduces himself; the conspiracy of physicists to save the world; how and why he got interested in AI and the singularity; the top existential risks we are facing today; quantifying the downsides of artificial intelligence and all-out nuclear war; Noam Chomsky‘s and Marvin Minsky‘s doubts that we are making progress in AGI; how DeepMind’s AlphaGo is different from both Watson and Deep Blue; my recurring problems with Skype for podcasting; soft vs hard take-off scenarios and our chances of surviving the technological singularity; the importance of philosophy…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Jaan Tallinn?

Jaan Tallinn is a founding engineer of Skype and Kazaa. He is a co-founder of the Cambridge Centre for Existential Risk, Future of Life Institute, and philanthropically supports other existential risk research organizations. He is also a partner at Ambient Sound Investments, an active angel investor, and has served on the Estonian President’s Academic Advisory Board.

Filed Under: Podcasts Tagged With: AI, Artificial Intelligence, Jaan Tallinn, singularity, Technological Singularity

Top 10 Reasons We Should Fear The Singularity [Infographic]

January 21, 2015 by Socrates

“I think the development of full artificial intelligence could spell the end of the human race,” said Stephen Hawking.

“With artificial intelligence we are summoning the demon…” said Elon Musk.

So why do some of the world’s greatest minds and some of the world’s best entrepreneurs consider the potential rise of super-smart artificial intelligence – aka the technological singularity – one of the world’s greatest threats?!

I have previously published a list of what I believe are the Top 10 Reasons We Should Fear the Singularity and it is one of the all-time most popular posts on Singularity Weblog. Today I want to share this neat new infographic that Michael Dedrick designed based on the content of the original article.

Have a look and don’t fear letting me know what you think:

Do you fear the singularity?! Why?…



Related articles
  • Worrying about Artificial Intelligence: CBC on the Singularity
  • Top 10 Reasons We Should Fear The Singularity
  • Top 10 Reasons We Should NOT Fear The Singularity

Filed Under: Funny, Op Ed, What if? Tagged With: AI, Artificial Intelligence, singularity, Technological Singularity

Greg Bear, Ramez Naam and William Hertling on the Singularity

October 2, 2014 by Socrates

https://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/210486496-singularity1on1-greg-bear-ramez-naam-william-hertling.mp3


This is the concluding sci fi round-table discussion of my Seattle 1-on-1 interviews with Ramez Naam, William Hertling and Greg Bear. The video was recorded last November and was produced by Richard Sundvall, shot by Ian Sun and generously hosted by Greg and Astrid Bear. (Special note of thanks to Agah Bahari who did the interview audio re-mix and basically saved the footage.)

During our 30 minute discussion with Greg Bear, Ramez Naam and William Hertling we cover a variety of interesting topics such as: what is science fiction; the technological singularity and whether it could or would happen; the potential of conflict between humans and AI; the definition of the singularity; emerging AI and evolution; the differences between thinking and computation; whether the singularity is a religion or not…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Filed Under: Featured, Podcasts Tagged With: Greg Bear, Ramez Naam, singularity, Technological Singularity, William Hertling

SciFi Master Greg Bear: The Singularity is the Secular Apotheosis

September 26, 2014 by Socrates

https://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/210304625-singularity1on1-greg-bear.mp3


Greg Bear is truly one of the masters of classic science fiction – he has written over 35 books that have sold millions of copies and have been translated into 22 languages. No wonder I was not only immensely excited but also pretty nervous while preparing to interview him face to face. As it turned out, I was worrying for nothing: Greg Bear is a really affable fellow with a fantastic sense of humor and, together with his wife Astrid, endlessly generous hospitality. Bear is also a passionate macro-photographer with the most stunning collection of dragonfly pictures that I have seen in my life. Thus it is a total understatement to say that I had an absolute blast spending a full day shooting 4 podcast episodes at his house, including the attached 90 min interview.

During our conversation with Greg Bear we cover a variety of interesting topics such as: how he got inspired to write; what is science fiction; the role of photography and visual imagery; the merging of philosophy and science; sci-fi as the jazz of literature; religion, mysticism and his take on Jesus; the birth of Comic-Con; whether science fiction inspires science or vice versa; the singularity, transhumanism and “the brick wall of philosophy”…

This interview was so packed with intellectual gems that I almost feel like shying away from listing any. But here are just two of my favorites, and feel free to share yours in the comment section below:

…The people who are changing the world read science fiction.

In a sense, science fiction is history in reverse…

Greg-Bear's House

This is the third in a series of three sci-fi round-table interviews with Ramez Naam, William Hertling and Greg Bear that I did last November in Seattle. It was produced by Richard and Tatyana Sundvall, shot by Ian Sun and generously hosted by Greg and Astrid Bear. (Special note of thanks to Agah Bahari who did the interview audio re-mix and Josh Glover who did the video editing.)

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Greg Bear?

Greg Bear is the author of more than thirty books, spanning thrillers, science fiction, and fantasy, including Blood Music, Eon, The Forge of God, Darwin’s Radio, City at the End of Time, and Hull Zero Three. His books have won numerous international prizes, have been translated into more than twenty-two languages, and have sold millions of copies worldwide. Over the last twenty-eight years, he has also served as a consultant for NASA, the U.S. Army, the State Department, the International Food Protection Association, and Homeland Security on matters ranging from privatizing space to food safety, the frontiers of microbiology and genetics, and biological security.

Filed Under: Podcasts Tagged With: Greg Bear, sci fi, Science Fiction, singularity, Technological Singularity

Rap News covers the Singularity [feat. Ray Kurzweil & Alex Jones]

September 23, 2014 by Socrates


Rap News takes a fun trip into what they call “the pure world of sci-fi to investigate the much vaunted, mysterious potential future event known as The Singularity.”

What will a machine consciousness mean for humanity? What are the ethical, political, military and philosophical implications of strong A.I.? And what would an AI sound like when spitting rhymes over a dope beat?

All this and more shall be revealed in Rap News 28: The Singularity – featuring a special appearance from famed technocrat, futurist and inventor, Ray Kurzweil, in full TED talk mode; everyone’s favorite warmonger, General Baxter; and we welcome back the dauntless info warrior Alex Jones, who last made an appearance in RN6. Join Robert Foster on this epic Sci-Fi quest into the future/past of humanity.

Written & created by Giordano Nanni & Hugo Farrant in a suburban backyard home studio in Melbourne, Australia, on Wurundjeri Land.

 

Filed Under: Funny Tagged With: rap news, Ray Kurzweil, singularity, Technological Singularity

Physicist Michio Kaku: Science is the Engine of Prosperity!

June 6, 2014 by Socrates

https://media.blubrry.com/singularity/p/feeds.soundcloud.com/stream/208594810-singularity1on1-michio-kaku.mp3



Dr. Michio Kaku is a theoretical physicist, bestselling author, acclaimed public speaker, renowned futurist, and popularizer of science. As co-creator of string field theory, Dr. Kaku carries on Einstein’s quest to unite the four fundamental forces of nature into a single grand unified theory of everything. You will not be surprised to hear that Michio Kaku has been on my guest dream-list since I started podcasting, and I was beyond ecstatic to finally have an opportunity to speak to him.

During our 90 min conversation with Dr. Michio Kaku we cover a variety of interesting topics such as: why he shifted his focus from the universe to the human mind; his definition, classification and ranking of consciousness; his take on the Penrose-Hameroff Orch OR model; Newton, Einstein, determinism and free will; whether the brain is a classical computer or not; Norman Doidge’s work on neuro-plasticity and The Brain That Changes Itself; the underlying reality of everything; his dream to finish what Einstein started and know the mind of God; The Future of the Mind; mind-uploading and space travel at the speed of light; Moore’s Law and D-Wave’s quantum computer; the Human Brain Project and whole brain simulation; alternative paths to AI and the Turing Test as a way of judging progress; cryonics and what is possible and impossible…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Michio Kaku?

Dr. Michio Kaku has starred in a myriad of science programming for television including Discovery, Science Channel, BBC, ABC, and History Channel. Beyond his numerous bestselling books, he has also been a featured columnist for top popular science publications such as Popular Mechanics, Discover, COSMOS, WIRED, New Scientist, Newsweek, and many others. Dr. Kaku was also one of the subjects of the award-winning documentary, ME & ISAAC NEWTON by Michael Apted.

He is a news contributor to CBS This Morning and is a regular guest on news programs around the world including CBS, Fox News, CNBC, MSNBC, CNN, and RT. He has also made guest appearances on all major talk shows including The Daily Show with Jon Stewart, The Colbert Report with Stephen Colbert, The Late Show with David Letterman, The Tonight Show with Jay Leno, Conan on TBS, and others.

Michio Kaku hosts two weekly radio programs heard on stations around the country and podcast around the world. He is the co-creator of string field theory, a branch of string theory. He received a B.S. (summa cum laude) from Harvard University in 1968 where he came first in his physics class. He went on to the Berkeley Radiation Laboratory at the University of California, Berkeley and received a Ph.D. in 1972. In 1973, he held a lectureship at Princeton University.

Michio continues Einstein’s search for a “Theory of Everything,” seeking to unify the four fundamental forces of the universe—the strong force, the weak force, gravity, and electromagnetism.

He is the author of several scholarly, Ph.D. level textbooks and has had more than 70 articles published in physics journals, covering topics such as superstring theory, supergravity, supersymmetry, and hadronic physics.

Dr. Kaku holds the Henry Semat Chair and Professorship in theoretical physics at the City College of New York (CUNY), where he has taught for over 25 years. He has also been a visiting professor at the Institute for Advanced Study at Princeton, as well as New York University (NYU).

Filed Under: Featured Podcasts, Podcasts Tagged With: AI, Artificial Intelligence, Michio Kaku, Technological Singularity, the future of the mind

What Is the Singularity?

March 17, 2014 by Ted Chu

I have always had mixed feelings about Singularity becoming the buzz word for the transhumanist movement. I am glad that this catchy word is spreading the idea of the post-human future, opening the eyes of people who cannot imagine a time when human beings are no longer the most intelligent and most powerful in the world. On the other hand, Singularity seems to represent a future when technologies completely overwhelm humans and bring unprecedented changes and risks. For most people, this is a scary future, a future that we cannot understand at all, let alone be a part of. Although technological enthusiasts have tried to convince people that we could become immortal and enjoy material abundance after the Singularity, nobody can deny the existential risks and dangers of unintended consequences.

That is why in my book, Human Purpose and Transhuman Potential, the concept of Singularity is only mentioned in passing. I would like to thank Nikola Danaylov for giving me this opportunity to briefly discuss a couple of issues related to Singularity.

First, I am not sure Singularity as popularly defined by mathematician and science fiction writer Vernor Vinge is even remotely close. During his Singularity 1 on 1 interview, Vinge states again that the Singularity is a time when our understanding of the new reality is like a goldfish’s understanding of modern civilization. If this is the case, I cannot believe that Singularity is near. As a reference, Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.

It is true that with accelerating technological change and a possible intelligence “explosion”, human beings will understand less and less about technology. But this is nothing radically new. Who knows the technical details of the smartphone or the automobile? At the conceptual level, relativity, quantum mechanics, and the number 10 to the power of 100 are a few things so counter-intuitive that few can truly grasp them. But there is something powerful called metaphor, and as long as we can map a complex reality to something we are familiar with, we manage to make sense of it.

I tend to agree with Michio Kaku that with luck and conscious efforts we can attain the Kardashev Type I civilization status within 100–200 years, which means, among other things, complete mastery of energy resources on Earth. If we can describe nuclear power and supercomputers to hunter-gatherers in remote corners of the Earth, why won’t future advanced transhumans be able to describe the Type I civilization to us?
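The Kardashev levels referred to here have a standard continuous form due to Carl Sagan: a civilization consuming P watts rates K = (log10 P − 6) / 10, which puts Type I at 10^16 W, Type II at 10^26 W, and Type III at 10^36 W. A minimal sketch of that arithmetic (the ~2×10^13 W figure for present-day humanity is an illustrative assumption, not a claim from the text):

```python
import math

def kardashev_rating(power_watts: float) -> float:
    """Sagan's continuous interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I ~ 1e16 W (planetary), Type II ~ 1e26 W (stellar),
    Type III ~ 1e36 W (galactic)."""
    return (math.log10(power_watts) - 6) / 10

# Humanity's current power use is roughly 2e13 W (illustrative
# assumption), placing us at about Type 0.73 -- still short of
# the Type I status Kaku expects within 100-200 years.
print(round(kardashev_rating(2e13), 2))  # ≈ 0.73
print(kardashev_rating(1e16))            # Type I: 1.0
```

On this formulation, reaching Type I is a gain of roughly a factor of 500 in energy mastery, which is why a 100–200 year horizon is plausible rather than fanciful.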

In Chapter 12 of my book, there is a short description of a possible Kardashev Type III scenario, when future intelligent beings obtain the ability to harness the energy of an entire galaxy. I wrote about A Second Axial Age: “With new knowledge and technologies, CoBe (Cosmic Being) may manage to travel at an ever faster speed, maybe even faster than the speed of light. But unless CoBe can discover or invent instantaneous communication and information processing techniques such as so-called wormholes, space will be an absolute barrier to communication and interaction among distant stars and galaxies. In other words, we only know the history—not the current reality—of others living at great distances in space. The future might then be a return to a ‘tribal environment’ in the sense that the instant communications we have established on Earth might no longer be possible. CoBe would be isolated by space, each ‘tribe’ taking time to learn and to evolve.”

If the human species were kept alive then (at a cost infinitely smaller than what it costs us to keep a goldfish alive today), I am sure that super-smart CoBe would be able to use some kind of metaphor (much better than my Axial Age metaphor) to communicate with humans about the new reality. Since we have already gotten a glimpse of the entire universe and its history (what I call the Cosmic View), something truly incomprehensible must be something beyond this universe.

If we define the Singularity simply as accelerating technological change and an intelligence explosion, then this is nothing new. The pace of evolution has been accelerating from the beginning, in both natural and cultural evolution. This trend should continue with what I call "conscious evolution". We have been living in a world of information explosion, and that has been just fine: each of us takes just the slice of information we like and blissfully ignores the rest. There will be much more intelligence developing, and we will deal with it in more or less the same way. Emphasizing the fact that this is nothing new could go a long way toward lessening the fear of the post-human future.

We should not only point out that this unprecedented future has a rich history, but also make it clear that this future is highly desirable, and making it happen should be our mission. This is the second point I would like to address: the concept of Singularity fails to provide a positive and transcendental value for us. Modern science has demonstrated that a literal reading of religious scriptures is no longer tenable. However, the secular world has also thrown out the ancient wisdom of transcendental faith and narrowly focused our goal to maximize the well-being of humanity.

As I discussed in Chapter 8, this goal of maximizing human happiness is not only unattainable but also runs against the “will” of the universe for us to complete the transitory role of our species. As I argued in detail in Chapter 4 of my book, science is not value-free, nor is (or should be) the concept of Singularity.

I have not seen a good argument that the Singularity should be something we need to focus our efforts on. I find it very difficult to inject value into this concept as it is popularly defined now. I am glad that Singularity has created a high level of awareness of the posthuman future, but we must move on from its "neutral" and technological nature. We know technology is a double-edged sword. We need a weapon, but a flag is more important.

 

About the Author:

Formerly the chief economist at General Motors, Ted Chu was also chief economist for the Abu Dhabi Investment Authority. For the last 15 years, he has spent his second career conducting research on the philosophical question of humanity's place in the universe, with special reference to our "posthuman" future. Born and raised in China, Chu earned his Ph.D. in economics at Georgetown University. He is currently a clinical professor of economics at New York University Abu Dhabi.

 

Filed Under: Op Ed Tagged With: Human Purpose and Transhuman Potential, singularity, Technological Singularity, Ted Chu

Transcendence: Johnny Depp’s New Singularity Film [Almost] Transcends Technophobia

January 7, 2014 by Socrates

Transcendence is Johnny Depp’s new singularity film that comes out today.


I saw the film last night during its first available time-slot and, in order to avoid big spoilers, I want to write no more than a couple of vague sentences:

Let me admit that, after seeing the teasers and trailers below, I was very negatively predisposed and did not expect anything good. But I was pleasantly surprised because it was rather refreshing to see a mainstream Hollywood movie that was not made by sensationalist Luddites trying to make an easy buck by going for the lowest common denominator.

And so while the movie is by no means outstanding [or anything close to Her], overall it was pretty decent.

As my friend with whom I saw the movie observed: “The main idea was carried through rather well.”

And while the idea that advanced technologies, though often scary, are not necessarily bad is by no means unique or Earth-shattering, it is one worth sending out into the mainstream of public consciousness. And this simple exercise is a considerable step up from the simplistic Terminator/hubris/end-of-the-world scenarios that have dominated science fiction since Frankenstein…

 

Synopsis: Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions.  His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him.

However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed – to be a participant in his own transcendence.  For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they can… but if they should.

Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown.  The only thing that is becoming terrifyingly clear is there may be no way to stop him.

Transcendence co-stars Morgan Freeman, Cillian Murphy, Kate Mara and Paul Bettany.

 

I call it Transcendence:

 

Transcendence: Humanity’s Next Evolution

 

Revolutionary Independence From Technology [RIFT]

 

Official Trailer 1:

 

Official Trailer 2:

 

Filed Under: Reviews, Video, What if? Tagged With: Johnny Depp, singularity, Technological Singularity, Transcendence

Singularity Defined and Refined

October 29, 2013 by Singularity Utopia

The meanings of words change. Meanings evolve. Definitions of words are not set in stone; they aren't unalterable commandments from God. Words are merely concepts humans have invented. The original definition of "awful" was apparently "full of awe": worthy of respect.

Anyone can invent a word. Successful inventions enter common usage. All inventions are typically refined. The invention of words isn’t immune to refinement.

Misunderstanding often occurs regarding the word Singularity because this word is still being refined, which is common for new concepts. I think the Singularity is a colossal intelligence explosion, limitless intelligence, which creates utopia. It is not about mind-uploading or unpredictability.

Post-Scarcity is a clearer way to define the Singularity. Scarce intelligence is the source of all scarcity. Lifespan-scarcity, food-scarcity, or spaceship-scarcity all highlight how intellectual insufficiency is the obstacle to utopia. A resource called “intelligence” is the source of all technology. Technology is essentially intelligence, which means explosive intelligence is an explosion of resources.

Image by the artist Hugh C Fathers. All rights reserved ©

Our brainpower has been essential for our progress. Our minds erode scarcity. James Miller, in his book Singularity Rising, wrote: "Economic prosperity comes from human intelligence." Ramez Naam also highlights the power of our brains in his book The Infinite Resource; regarding innovation, he stated on his website: "Throughout human history we have learned to overcome scarcity and adversity through the application of innovation — the only resource that is expanded, not depleted, the more we use it."

You could say we’re approaching an explosion of innovation. Technology conquers scarcity, technology liberates us from scarcity, but the power of technology (intelligence) is currently limited, scarce. We are suffering from a scarcity of ultra-sophisticated technology (intelligence) thus all resources are somewhat scarce. Human-level AI is extremely scarce, it is non-existent in the year 2013. When human-level AI is created we will start quickly eradicating all forms of scarcity, we will be rapidly approaching a colossal explosion of intelligence – the Singularity. [I was inspired to write about the definition of the Singularity after a G+ post by Mark Bruce. Mark wrote about the meaning of egregious, and wondered why the meaning had changed.]

The word egregious immediately caused me to think about the word gregarious, which is a logical connection to make. Both words are based on the Latin grex, gregis, which means “flock.” Gregarious means sociable, companionable; being part of the flock. Currently egregious means outstandingly bad, but the original meaning was merely outstanding, a shining example of awesomeness. Egregious is all about standing out from the flock, but interpretations could differ because standing out from the crowd can be good or bad. Farmers for example might not appreciate rebellious sheep.

The concept of the "black sheep" is a notorious idiom regarding non-conformity (standing out from the flock). Mark Bruce thought the meaning of egregious could have changed due to sarcasm, but I think it's merely a change based on obedience and conformity. The evolution of civilization has temporarily led to greater regimentation; mediocrity has been valued because it maintains social equilibrium, which I suspect is the reason why egregious (outstanding illustriousness) became bad. Blending into the flock became desirable while nonconformity became shockingly wrong. During the early stages of civilization, when populations were small and thus less of a drain on resources, authoritarian control was less obvious or less needed, which could be why egregious originally described the valuable nonconformist trait of being "outstanding."

Dealing with extremes can cause a switch between the two poles; intense love can easily become intense hate if you are betrayed by a lover. Lovers can also become irrationally jealous, vengeful. Perhaps this is why the Singularity can be either utopia or dystopia, or why Snowden is either a hero or a traitor. Maybe it all depends on your viewpoint?

Insufficient intelligence causes humans to misunderstand situations. Fights over scarce resources occur, which causes civilization to emphasize the authoritarian disharmony of scarcity. Intelligent people via their foresight will think Snowden is a hero because they understand technology is eroding scarcity. Conversely Snowden has been deemed a traitor based on unawareness of the future. Snowden’s leaks represent decreasing scarcity but unawareness means people wrongly assume his actions threaten civilization.

Thankfully, despite the teething troubles of civilization, our collective intelligence is increasing; thus there is less need to blend into the flock, although we do remain locked into scarcity-based battles. Sometime around the year 2030 I think the authoritarian controls of civilization will be significantly abolished, but until then perhaps technology will be awful. Theoretically we could improve civilization much sooner, but humans do suffer from scarce intelligence, which makes it difficult to be aware of the future.

The Singularity is a theory not an unalterable prophecy. The Singularity isn’t comparable to the biblical word of God, which warns against change: “I warn everyone who hears the words of the prophecy of this book: if anyone adds to them, God will add to him the plagues described in this book, and if anyone takes away from the words of the book of this prophecy, God will take away his share in the tree of life and in the holy city, which are described in this book.”

If intelligence is the focus of the Singularity, then it is vital to refine our understanding of the theory; we need to improve how we define it. We need to consider what the purpose of intelligence is. Is it smart to become more intelligent? Is colossal intelligence really intelligent if it fails to create utopia? Is colossal intelligence painfully slow, or is it defined by an emancipatory quickness? Michael Anissimov has stated we should stick to the "original documents" (3m 29s) regarding the Singularity, but I think unyielding closure contradicts the openness of intelligence.

In addition to Michael’s biblical immutability, which focuses on the “original documents,” there is an issue with the way that Singularity University defines the meaning of the intelligence explosion. I often encounter people who think the Singularity has already happened. This kind of misunderstanding seems perpetuated by Singularity University because they suggest the Singularity is merely “dramatic technological change.”

If we are merely considering dramatic technological change then it is understandable for people to think the iPhone or Google Glass is the Singularity. Corruption of meanings can be frustrating, very confusing, but restricting our ability to change meanings isn’t the solution. The solution is openness whereby all meanings can be debated without any one individual or organisation imposing their authority to create absolute definitions.

Ray Kurzweil and Vernor Vinge have influenced my thinking, but from my viewpoint they don't fully comprehend the Singularity. Their biggest mistake is to think the Singularity is unknowable, unpredictable, beyond human comprehension. There is no rational reason to assume advanced intelligence would be unfathomable; in fact, unfathomableness is decidedly unintelligent, and thus more appropriate for a censorious religion than for explosive intelligence. Explosive intelligence should logically increase comprehension for everyone instead of decreasing it.

A naked singularity, which has no event horizon, is a better analogy for our technological Singularity than standard gravitational singularities. Naked singularities are theoretically more powerful than standard black holes; they are more singular, and thus metaphorically better descriptors of colossal intelligence. Standard singularities are comparatively boring.

Note also that, instead of obscurantism, recent black-hole research suggests that physics-ending singularities vanish, thereby creating bridges to alternate universes. This means that, with the help of loop quantum gravity, we might be able to deny the claim that the laws of physics break down inside standard black holes.

The black hole information paradox is a fascinating paradox. Perhaps information is not lost. Whatever the situation is regarding gravitational singularities, whether information is hidden or revealed, it should be noted obscurantism is not a facet of intelligence. [Although the CIA objecting to Snowden’s openness will probably disagree.] If gravitational singularities entail obscured information, if they are unfathomable or unknowable, then the metaphor is wrong because true intelligence is or should be opposed to [cosmic] censorship. Obscurantism is antithetical to intelligence.

Please note that despite the Singularity inevitably leading to widespread extrasolar, extragalactic and perhaps even multiverse colonisation, it is not a cosmological phenomenon. The Singularity is “only” metaphorically a stellar event.

Similar to how “egregious” had a different meaning at a different point in history, I think changing awareness will let people comprehend how the Singularity is opposed to obscurantism. In the future there will be no elitist restrictions and everyone will easily access explosive intelligence.

Maybe in year 2045 there will need to be Singularity whistle-blowers leaking classified intelligence from the core of the intelligence explosion? Obviously I jest when I state the intelligence explosion would need whistle-blowers. All restrictions upon knowledge will be explosively obliterated. The Singularity will be understood by everyone.

Incorrect definitions of the Singularity are mainly based on unawareness of how scarcity currently shapes our lives. Faulty predictions of the future fail to see how scarcity will be eradicated. The ramifications of scarcity ending are not appreciated. There is a failure to comprehend what the end of scarcity actually entails, namely how it relates to technology and/or information. Based on current circumstances people therefore envisage a future of scarce understanding – a future of restricted information where knowledge is limited to a minority of specialists. This entails incorrectly envisioned scenarios where the future is utterly unfathomable or robots kill humans then destroy the Earth: “…the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”

Finally, the Singularity is capitalized because it is a unique event distinct from gravitational singularities. It is similar to how the Big Bang, the Mesozoic Era, or the Industrial Revolution is capitalized.

 

About the Author:

Singularity Utopia writes for Singularity-2045, a Post-Scarcity oriented website dedicated to increasing awareness of the coming technological utopia. The goal is to make the Singularity happen sooner rather than later.

 

 

Related articles
  • Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable
  • 17 Definitions of the Technological Singularity

Filed Under: Op Ed, What if? Tagged With: post scarcity, singularity, singularity utopia, Technological Singularity



Ethos: “Technology is the How, not the Why or What. So you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.” Nikola Danaylov

Copyright © 2009-2022 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy