
David Brin: What’s Important Isn’t Me. And It Isn’t You. It’s Us!

November 29, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/197437294-singularity1on1-david-brin.mp3


David Brin not only holds a Ph.D. in astrophysics but is also an award-winning, best-selling science fiction author, perhaps best known for his Uplift series of novels and, most recently, Existence.

Originally, I was supposed to interview Brin in the summer. Unfortunately, I got a concussion the day before and had to postpone. David is a busy man, so it took a while to book another date, but eventually we did, and I have to say that I very much enjoyed talking to and being challenged by him.

During our conversation with Brin, we cover a wide variety of topics, such as his interest in science fiction, writing, and civilization; his novel The Postman, which later became a feature film with Kevin Costner; his views on post-modernism, progress, ethics, and objective reality; pessimistic versus optimistic science fiction; the self-preventing prophecy as the greatest form of science fiction (e.g. George Orwell’s 1984); his role as lead prosecutor in Star Wars on Trial and Yoda as one of the most evil characters; his latest novel Existence…

My favorite quote that I will take away from this interview with David Brin is:

What’s important is not me. And it’s not you. It’s us!

Correction: the Oedipus tragedy was written by Sophocles, not by Aristophanes as I say during the interview.

As always, you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Existence: Book Trailer

Bestselling, award-winning futurist David Brin returns to globe-spanning, high-concept fiction with EXISTENCE. This 3-minute preview offers glimpses and scenes from the novel, all painted especially for this trailer by renowned web artist Patrick Farley, conveying some of the drama and what may be at stake in our near future.

Gerald Livingston is an orbital garbage collector. For a hundred years, people have been abandoning things in space, and someone has to clean them up. But there’s something spinning a little bit higher than he expects, something that isn’t on the decades-old orbital maps. An hour after he grabs it and brings it in, rumors fill Earth’s info mesh about an “alien artifact.”

Thrown into the maelstrom of worldwide shared experience, the Artifact is a game-changer. A message in a bottle; an alien capsule that wants to communicate. The world reacts as humans always do: with fear and hope and selfishness and love and violence. And insatiable curiosity.

 

Who is David Brin?

David Brin is a scientist, speaker, technical consultant, and world-renowned author. His novels have been New York Times bestsellers and have won multiple Hugo, Nebula, and other awards. At least a dozen have been translated into more than twenty languages.

His 1989 ecological thriller, Earth, foreshadowed global warming, cyberwarfare, and near-future trends such as the World Wide Web. A 1997 movie, directed by Kevin Costner, was loosely based on The Postman.

Brin serves on advisory committees dealing with subjects as diverse as national defense and homeland security, astronomy and space exploration, SETI and nanotechnology, future/prediction, and philanthropy. His non-fiction book — The Transparent Society: Will Technology Force Us To Choose Between Privacy And Freedom? — deals with secrecy in the modern world. It won the Freedom of Speech Prize from the American Library Association.

As a public “scientist/futurist” David appears frequently on TV, including, most recently, on many episodes of “The Universe” and on the History Channel’s best-watched show (ever) “Life After People.” He also was a regular cast member on “The ArciTECHS.” (For others, see “Media and Punditry.”)

Brin’s scientific work covers an eclectic range of topics, from astronautics, astronomy, and optics to alternative dispute resolution and the role of neoteny in human evolution. His Ph.D. in physics from UCSD – the University of California, San Diego (in the lab of Nobel laureate Hannes Alfvén) – followed a master’s in optics and an undergraduate degree in astrophysics from Caltech. He was a postdoctoral fellow at the California Space Institute and the Jet Propulsion Laboratory. His patents directly confront some of the faults of old-fashioned screen-based interaction, aiming to improve the way human beings converse online.

David’s novel Kiln People has been called a book of ideas disguised as a fast-moving and fun noir detective story, set in a future when new technology enables people to physically be in more than one place at once.

A hardcover graphic novel, The Life Eaters, explored alternate outcomes of WWII, winning nominations and high praise in the nation that most loves and respects the graphic novel.

David’s science-fictional Uplift Universe explores a future in which humans genetically engineer higher animals, like dolphins, to become equal members of our civilization. He also recently tied up the loose ends left behind by the late Isaac Asimov: Foundation’s Triumph brings Asimov’s famed Foundation universe to a grand finale.

As a speaker and on television, David Brin shares unique insights — serious and humorous — about ways that changing technology may affect our future lives. Brin lives in San Diego County with his wife, three children, and a hundred very demanding trees.

Filed Under: Podcasts Tagged With: Science Fiction, singularity

Jamais Cascio on the Singularity: You Matter! Your Choices Make A Difference.

November 28, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/197275871-singularity1on1-jamais-cascio.mp3


Jamais Cascio is one of the world’s top 100 thinkers, according to Foreign Policy. He writes and speaks on a variety of topics, from technology and global warming to war, nuclear proliferation, ethics, and sustainable development. Thus my goal was to discuss most of those topics for, in one way or another, they are relevant to our future. Unfortunately, I got tangled up in our discussion of the singularity, and we spent most of our time on that topic. The good news, however, is that I am planning to use this as an excuse to invite Jamais back on Singularity 1 on 1.

During our conversation with Jamais Cascio, we cover a wide variety of topics such as his personal story of becoming “an easily distracted generalist;” his undergraduate and graduate training in history, anthropology, and political science; his views on the singularity community in general and the technological singularity and Singularity University in particular; his criticism that creators of new technology rarely consider the ethical and political implications of their inventions; what he means by saying “if I can’t dance, I don’t want to be a part of the singularity;” the benefits of irrationality and biology; mind uploading versus human augmentation; the lack of agency and assumed machine perfection as some of the most upsetting aspects of the singularity…

As always, you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Jamais Cascio?

Selected by Foreign Policy magazine as one of their Top 100 Global Thinkers, Jamais Cascio writes about the intersection of emerging technologies, environmental dilemmas, and cultural transformation, specializing in the design and creation of plausible scenarios of the future. His work focuses on the importance of long-term, systemic thinking as a catalyst for building a more resilient society. Cascio’s work appears in publications as diverse as the Atlantic Monthly, the Wall Street Journal, and Foreign Policy. He has been featured in a variety of television programs on future issues, including National Geographic Television’s SIX DEGREES, its 2008 documentary on the effects of global warming, and Canadian Broadcasting Corporation’s 2010 documentary SURVIVING THE FUTURE. Cascio speaks about future possibilities around the world, at venues including the Aspen Environment Forum, Guardian Activate Summit in London, the National Academy of Sciences in Washington DC, and TED.

In 2009, Cascio published his first non-fiction book, Hacking the Earth: Understanding the Consequences of Geoengineering, praised by Foreign Policy as “the most subtle analysis yet on the subject.” Cascio has long worked in the field of foresight strategy. In the 1990s, he served as technology specialist at scenario planning pioneer Global Business Network, and later went on to craft scenarios on topics including energy, nuclear proliferation, and sustainable development. Cascio is presently a Distinguished Fellow at the Institute for the Future in Palo Alto, and also serves as Senior Fellow at the Institute for Ethics and Emerging Technologies.

In 2003, he co-founded WorldChanging.com, the award-winning website dedicated to finding and calling attention to models, tools, and ideas for building a “bright green” future. In March 2006, he started Open the Future as his online home, writing about subjects as diverse as robot ethics and the carbon footprint of cheeseburgers.

Filed Under: Podcasts, Profiles Tagged With: singularity, Technological Singularity

James Hughes on Citizen Cyborg: Interrogate and Engage the World

November 12, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/197012701-singularity1on1-james-hughes.mp3


Dr. James Hughes is not only the executive director of the Institute for Ethics and Emerging Technologies (IEET) but also a well-known book author and transhumanist. I enjoyed having him on the show and will probably ask him to return.

During our conversation with Dr. Hughes, we cover a wide variety of topics such as what the IEET is and what it does; the story behind James’ interest in technology, policy, philosophy, and bio/ethics; why transhumanist atheists are often drawn to Buddhism; his first book Citizen Cyborg and his upcoming Cyborg Buddha; transhumanism and his definition thereof; whether optimism is rational; the impact of artificial intelligence on transhumanism; James’ take on the technological singularity and our chances of surviving it; the benefits of biology; moral enhancement and animal uplift.

As always, you can listen to or download the audio file above or scroll down to watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is James Hughes?

James Hughes, Ph.D., is the Executive Director of the techno-progressive think tank the Institute for Ethics and Emerging Technologies. He is a bioethicist and sociologist at Trinity College in Hartford, Connecticut, where he teaches health policy and serves as Director of Institutional Research and Planning. He holds a doctorate in sociology from the University of Chicago. Dr. Hughes is the author of Citizen Cyborg: Why Democratic Societies Must Respond To The Redesigned Human Of The Future and is working on a second book, tentatively titled Cyborg Buddha.

Filed Under: Podcasts Tagged With: cyborg, singularity, transhumanism

Cory Doctorow on AI: The Singularity Is A Progressive Apocalypse

September 11, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/196662655-singularity1on1-cory-doctorow.mp3


Portrait by Jonathan Worth

Cory Doctorow is one of my all-time favorite science fiction writers. So it is no surprise that I had so much fun interviewing him.

I don’t know how he does it, but Cory is one of those rare individuals who can successfully juggle being a father, an avid reader, a blogger, an activist, a journalist, and a prolific science fiction writer, all at once.

It is for this reason that I was persistent in chasing Cory for over two years so that I could finally get him on Singularity 1 on 1. And it was totally worth it: Doctorow is indeed a very dynamic, eloquent, passionate, challenging, and fun interlocutor.

During our conversation, Cory covers a wide variety of topics, such as: how Star Wars inspired him to become a science fiction writer; his early jobs as a bookseller, Greenpeace activist, web developer, and entrepreneur, and his tenure as Director of European Affairs for the Electronic Frontier Foundation; the intimate relationship between being a science fiction writer, a blogger, and an activist; the motivation and goals behind his work; what science fiction is about and what it is good and bad at doing; Doctorow’s take on the technological singularity as a “progressive apocalypse”; and his “militant atheism” and technology activism.

Some of my favorite quotes that I will take away from this interview with Cory are:

Science fiction is very good at predicting the present.

[…]

Evolution is not perfection. Evolution is suitability.

[…]

We have failed to appreciate the gravitas of the internet and continue to regulate it as if it is a glorified video on demand service. And as we do this, we put everything that we do on the internet – which is everything – in jeopardy.

As always, you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Cory Doctorow?

photo by Jonathan Worth

Cory Doctorow is a science fiction novelist, blogger and technology activist. He is the co-editor of the popular weblog Boing Boing, and a contributor to The Guardian, the New York Times, Publishers Weekly, Wired, and many other newspapers, magazines and websites. He was formerly Director of European Affairs for the Electronic Frontier Foundation, a non-profit civil liberties group that defends freedom in technology law, policy, standards and treaties. He holds an honorary doctorate in computer science from the Open University (UK), where he is a Visiting Senior Lecturer; in 2007, he served as the Fulbright Chair at the Annenberg Center for Public Diplomacy at the University of Southern California.

His novels have been translated into dozens of languages and are published by Tor Books and simultaneously released on the Internet under Creative Commons licenses that encourage their re-use and sharing, a move that increases his sales by enlisting his readers to help promote his work. He has won the Locus and Sunburst Awards, and has been nominated for the Hugo, Nebula, and British Science Fiction Awards. His New York Times bestseller Little Brother was published in May 2008. A sequel, Homeland, will be published in 2013, and another young adult novel, Pirate Cinema, will precede it in October 2012. His latest short story collection is With a Little Help, available in paperback, ebook, audiobook, and limited edition hardcover. In 2011, Tachyon Books published a collection of his essays, called Context (with an introduction by Tim O’Reilly), and IDW published a collection of comic books inspired by his short fiction called Cory Doctorow’s Futuristic Tales Of The Here And Now. His latest adult novel is Makers, published by Tor Books/HarperCollins UK in October 2009. The Great Big Beautiful Tomorrow, a PM Press Outspoken Authors chapbook, was also published in 2011.

Little Brother was nominated for the 2008 Hugo, Nebula, Sunburst and Locus Awards. It won the Ontario Library White Pine Award, the Prometheus Award as well as the Indienet Award for bestselling young adult novel in America’s top 1000 independent bookstores in 2008.

He co-founded the open source peer-to-peer software company OpenCola, sold to OpenText, Inc in 2003, and presently serves on the boards and advisory boards of the Participatory Culture Foundation, the Clarion Foundation, The Glenn Gould Foundation, and the Chabot Space & Science Center’s SpaceTime project.

In 2007, Entertainment Weekly called him, “The William Gibson of his generation.” He was also named one of Forbes Magazine’s 2007/8/9/10 Web Celebrities, and one of the World Economic Forum’s Young Global Leaders for 2007.

His forthcoming books include The Rapture of the Nerds (a novel for adults, written with Charles Stross); Anda’s Game (a graphic novel from FirstSecond).

On February 3, 2008, he became a father. The little girl is called Poesy Emmeline Fibonacci Nautilus Taylor Doctorow, and is a marvel that puts all the works of technology and artifice to shame.

Other Science Fiction Authors on Singularity 1 on 1:
  • Daniel H. Wilson on Singularity 1 on 1: We Can’t Win Against Technology – We Are Technology!
  • Karl Schroeder: The Singularity is an Old Idea. Keep Moving Forward!
  • Robert J. Sawyer on Singularity 1 on 1: The Human Adventure is Just Beginning
  • Charlie Stross on Singularity 1 on 1: The World is Complicated. Elegant Narratives Explaining Everything Are Wrong!
  • Vernor Vinge on Singularity 1 on 1: We Can Surpass the Wildest Dreams of Optimism

Filed Under: Podcasts Tagged With: Cory Doctorow, singularity

A Desktop Singularity: Security Cam Footage of The Technological Singularity As It Actually Happened

September 3, 2012 by Socrates

There are a few people who argue that the technological singularity has already happened. Well, if it did actually happen, then this must be the security cam video record of the desktop singularity.

Now, how many people saw that one coming?!

Filed Under: Funny Tagged With: singularity, Technological Singularity

Ray Kurzweil: As Humans and Computers Merge… Immortality?

July 12, 2012 by Socrates

In previous parts of his series on Making Sen$e of financial news, Paul Solman has been showcasing the future of technology and shining a spotlight on Singularity University. In this video, Solman interviews inventor Ray Kurzweil, who predicts that advancing technology will result in augmented brains, memories recorded in “mind files,” and a greatly increased (i.e., indefinite) life span.

Audio Version:

http://media.blubrry.com/singularity/www.pbs.org/newshour/rss/media/2012/07/10/20120710_immortals.mp3

 

Transcript:

GWEN IFILL: Next, we take a very different look at a future — the future for human health and longevity.

Paul Solman, the NewsHour’s economics correspondent, has been exploring the profound social and economic changes brought on by rapidly changing technology. Tonight, he checks in with an inventor and futurist who takes the concept of advances in the medical field even further. It’s part of his ongoing reporting Making Sense of financial news.

PAUL SOLMAN: Earlier this year, we did several stories at a conference run by the futuristic California think tank Singularity University.

The stories were about high tech’s prodigious promise for the future: dirt-cheap energy, sky-high crop yields, labor-free machinery. Tonight comes the kicker: far longer, far healthier, conceivably even eternal life.

Why would immortality be an economics rather than a science story? Because the basic aim of economics is to maximize well-being, the greatest good for the greatest number. And a longer, healthier life is the most unambiguous good there is, says Singularity’s co-founder, Ray Kurzweil, skeptics notwithstanding.

RAY KURZWEIL, co-founder, Singularity University: People say, oh, I don’t want to live past 100. And I say, OK, I would like to hear you say that when you’re 100.

PAUL SOLMAN: Assuming they will be hale and hearty at 100, which Kurzweil emphatically does. In fact, he firmly believes that, at age 100, he will just be getting started.

RAY KURZWEIL: I think I have a very good chance of making it through.

PAUL SOLMAN: But, when you say making it through, you mean, essentially, live forever?

RAY KURZWEIL: Indefinitely. I mean, I can never talk to you and say I have done it, I have lived forever. But the goal is to put that decision in our own hands, rather than the metaphorical hands of fate.

PAUL SOLMAN: Now, before you dis Ray Kurzweil’s prediction that death will become an elective procedure, note that he’s known for being ahead of the curve. This footage comes from a film about him, “Transcendent Man.”

RAY KURZWEIL: My name is Raymond Kurzweil, and I’m from Queens, New York.

PAUL SOLMAN: While still in high school, he was invited on TV’s “I’ve Got a Secret” to show off the computer he’d programmed to compose music in the style of classical composers.

MAN: Raymond, how old are you?

RAY KURZWEIL: I’m 17.

MAN: Do your parents know what you have been up to?

(LAUGHTER)

PAUL SOLMAN: In his 20s, he invented a reading machine for the blind.

MAN: They have invented a machine that can make any book talk.

MACHINE VOICE: Four score and seven years ago.

PAUL SOLMAN: His first customer was Stevie Wonder. . .

STEVIE WONDER, musician: Obviously, it was a life-changer.

PAUL SOLMAN: . . . starting a lifelong friendship that led to the development of Kurzweil’s second major invention, a musical synthesizer that sounded like real instruments — in short, a string of breakthroughs that boggle the human brain, when you think of how primitive computer brains were back then.

But computers had been getting more powerful fast, which got Kurzweil thinking ahead.

RAY KURZWEIL: In 1981, I noticed this remarkable exponential curve, which was very smooth. And I extended that curve out to 2050. Now we’re in 2012. It’s 30 years later, and we’re very much on that curve.

PAUL SOLMAN: What Kurzweil had noticed, as illustrated in the “Transcendent Man” documentary, was that all information technologies progress exponentially, doubling in performance while decreasing in size and price. Double every year, one, two, four, eight, 16, and so forth, and in just three decades, you have topped one billion.
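The doubling arithmetic here does check out: thirty successive doublings take you from one past a billion, since 2^30 ≈ 1.07 billion. A minimal sketch, purely illustrative and not from the broadcast:

```python
# Thirty successive doublings of performance, starting from 1.
# After 30 doublings ("three decades" at one doubling per year),
# the value has topped one billion.
value = 1
for _ in range(30):
    value *= 2
print(value)  # 1073741824 -- just over one billion
```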

RAY KURZWEIL: This is several billion times more powerful per dollar than the computer I used when I was a student. It’s 100,000 times smaller. We can create computers twice as powerful as they are today next year, because we’re using today’s computers to create them.

PAUL SOLMAN: While Kurzweil’s brain is nothing to sneeze at, he thinks even he won’t be able to compete with tomorrow’s computers.

RAY KURZWEIL: Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, we will have multiplied the intelligence, the human biological machine intelligence of our civilization a billion-fold.

PAUL SOLMAN: A billion-fold from today?

RAY KURZWEIL: Right.

That’s such a singular change that we borrow this metaphor from physics and call it a singularity, a profound disruptive change in human history. Our thinking will become a hybrid of biological and non-biological thinking.

PAUL SOLMAN: But, for the purposes of immortality, and therefore this story, so will our other bodily functions become a hybrid, insists Kurzweil, as humans and computers merge.

RAY KURZWEIL: The electronics will be so small, and we will put computerized devices that are the size of blood cells inside our body to keep us healthy. A new biological virus comes out, these little nanobots could download their software to combat that new pathogen.

PAUL SOLMAN: And so, immortality.

RAY KURZWEIL: We will get to a point 15 years from now where, according to my models, we will be adding more than a year every year to your remaining life expectancy, where the sands of time are running in rather than running out, where your remaining life expectancy actually stretches out as time goes by.
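The “sands of time running in” claim is an arithmetic threshold: if remaining life expectancy grows by more than one year for every calendar year that passes, it never reaches zero. A toy sketch of that threshold, with all numbers invented for illustration (these are not Kurzweil’s actual models):

```python
# Toy model of "longevity escape velocity": each calendar year, one year
# of remaining life expectancy is spent, but medical progress adds back
# more than a year. The starting value and gain are made-up assumptions.
remaining = 10.0      # hypothetical remaining life expectancy, in years
gain_per_year = 1.2   # assumed: 1.2 years gained per calendar year lived
for _ in range(50):
    remaining = remaining - 1 + gain_per_year
print(round(remaining, 1))  # grows rather than shrinks over time
```

With any gain above 1.0 per year, the expectancy stretches out as time goes by, exactly as described; below 1.0, it runs out.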

PAUL SOLMAN: Of course, as more time goes by, there will be more to remember. But Kurzweil says we will have augmented brains to retain more of it.

RAY KURZWEIL: Information defines your personality, your memories, your skills. And it really is information. And we ultimately will be able to capture that and actually recreate it. So then we will back ourselves up. People a hundred years from now will think it pretty amazing. People actually went through the day without backing up their mind file?

PAUL SOLMAN: You mean, back up your mind, so that all the memories you had yesterday, you will have tomorrow?

RAY KURZWEIL: It will be there in case it gets damaged. So if you hit the proverbial bus and it damages part of your brain, you can recreate that.

PAUL SOLMAN: And to make sure he’s ready, in good shape for the tipping point, Ray Kurzweil has been following a rigorous health program for decades now, ever since being diagnosed with diabetes in his 30s.

RAY KURZWEIL: Aging is not one process. It’s many different things going on that cause us to age. I have a program that at least slows down each of these different processes.

I’m constantly testing myself, hormone levels, nutrient levels, and the usual things like cholesterol and C-reactive protein and keeping things in what I consider to be optimal ranges.

PAUL SOLMAN: His cholesterol, for example, has dropped from 280 to 100, thanks to a strict regimen of diet, exercise, statin drugs and nutritional supplements. He takes about 150 pills a day.

And then there are injections and I.V. drips for the more exotic substances.

RAY KURZWEIL: I will give you one example. In a baby, 90 percent of the cell membrane is made up of phosphatidylcholine. That substance is responsible for letting the nutrients in, letting toxins out, keeping the cell supple.

By the time you’re 90 years old, the level of phosphatidylcholine you have will be less than 10 percent of what you had as a child.

PAUL SOLMAN: So you’re getting shots of this?

RAY KURZWEIL: It’s an I.V. If you’re aggressive, even baby boomers in their 60s can be in good shape when we get to these more powerful technologies. But you have to be aggressive. If you’re oblivious to it, then it would be too late.

PAUL SOLMAN: High tech CEO Carl Bass, 55, is also part of Kurzweil’s Singularity crowd, but a tad less optimistic.

CARL BASS, CEO, Autodesk: I feel like I just missed out.

PAUL SOLMAN: Just missed out? That’s how I feel. I’m 67, and I think, my gosh, if I’d only been born 10, 15, 20 years later.

CARL BASS: I feel exactly the same way you do. I become more optimistic about what’s possible, even if a little bit longing about what I may not get to enjoy.

PAUL SOLMAN: But wait. Are there no true skeptics in the high-tech universe? What about Craig Venter, co-mapper of the human genome and recent creator of a supposedly new form of life?

How old are you?

CRAIG VENTER, CEO, Synthetic Genomics: About 65. I wouldn’t mind getting to 100.

(LAUGHTER)

PAUL SOLMAN: Are you regretful that you’re going to miss the moment of immortality?

CRAIG VENTER: I don’t think we’re going to ever get there. I know a little bit more about biological reality. What I have argued, if you want to be immortal, do something useful in your lifetime.

RAY KURZWEIL: Craig Venter is a brilliant, very innovative person, but in this instance, he is not appreciating exponential growth.

You really have to think about it and calculate it out, because it’s not intuitive.

PAUL SOLMAN: In the end, though, if Ray Kurzweil is correct, a key question remains. Should death eventually disappear, how would the world cope with an impossibly large number of the living? Colonizing space is one common high-tech answer.

And we will be covering that, if the NewsHour would want yet another story about the future, with one proviso. To paraphrase my beloved grandmother, we should live so long.

GWEN IFILL: If you’re intrigued by Ray Kurzweil’s ideas, we have posted more of his conversation online. He talked to Paul about artificial intelligence, technology’s changing role in our lives, and what it means to be human.

Related articles
  • TIME Magazine’s 10 Questions for Ray Kurzweil
  • PBS NewsHour on Man vs. Machine: Will Human Workers Become Obsolete?
  • PBS NewsHour covers Singularity University

Filed Under: Video, What if? Tagged With: immortality, Ray Kurzweil, singularity

The Singularity As The Ultimate Culture Jam

June 14, 2012 by Jake Anderson

Growing up, and even until fairly recently, I viewed the world as little more than a spinning funeral procession. Morbid, yes. Borderline ridiculous, check. But also the truth. I regarded life on Earth as a cold accumulation of atoms in the void, where the rich get richer, the poor get poorer, and the machine grinds on. I’m at heart a pessimist who has forced himself to consider optimism as a more manageable world view.

This view of death wasn’t entirely negative. You see, I considered mortality to be a form of parity—a unique brand of universal justice.

Of course, the natural corollary to this is that I am doomed to die as well. And I’m not too proud to confess that the fear of death—or, more accurately, sheer horror over the prospect of Eternal Nothingness—has kept me awake many nights.

When I learned about the Singularity my entire worldview changed. Suddenly I discovered there were people out there—singularitarians—who not only reject the hypothesis of normal linear progression, they reject the hypothesis of death itself. The world is indeed a cold accumulation of atoms in the void, but we may be able to wake it up. We may be able to produce a culture jam on matter itself.

At this point you may be asking two questions. What is the Singularity? And what is a culture jam? 

The Singularity

The Singularity by Ken Vallario
The Singularity by Ken Vallario

A crisp definition of the technological Singularity is to say that it will be an explosion in intelligence that creates superhuman computational abilities that evolve exponentially. Humans are not necessary to this equation, although many futurists expect them to merge their intelligence with advanced machines and transcend their biological form.

We can expect a complete paradigm shift in the evolution of intelligence on Earth. Things we hold common and dear now, such as singular emotional expressions, discrete physical actions, and individual consciousness will be reconfigured into incomprehensibly efficient online systems that will interconnect disembodied, polymorphous identities.

I imagine most of our time will be spent in virtualized states of entrepreneurial rapture, competing to deploy versions of our identities to optimal coordinates of the Dyson Sphere in order to mine for raw matter that can be transformed into computronium (intelligent matter).

That’s right, I said computronium.

The Culture Jam

A culture jam is an action, expression, practice, or work of art that subverts mainstream cultural meanings or institutions. It is a popular form of civil disobedience among anti-consumerist groups and other social activists who attempt to re-appropriate cultural iconography and use it to critique mass culture.

A culture jam can be as simple as spraying graffiti over a corporate logo, or as complex as hacking into a database of classified files and disseminating them across the Internet. That said, a culture jam doesn’t have to be a criminal act at all. Poetry readings, public art installations, even unusual gestures and actions such as ‘planking’ can all be considered forms of culture jamming.

Occupy The Singularity

By creating an entirely new lexicon of ideas, memes, and modalities, the Singularity will be the ultimate culture jam or, as the Situationists of the 1960s might have called it, an epic détournement. Everything will be turned around, rearranged, spliced, and re-coded.

changing media

I like to think about how things like humor, intimacy, recreation, leisure, and consumption will be turned on their heads in a post-Singularity world. In a society in which matter can be reengineered at the molecular level, economic models involving supply and demand won’t make a lot of sense. In fact, currency itself will have to be completely re-imagined. Perhaps it will end up resembling a combination of the Economics 2.0 Charles Stross describes in Accelerando and the reputation currency Whuffie in Cory Doctorow’s Down and Out in the Magic Kingdom.

Many other things will be drastically different as well. For example, in a world in which human consciousness has been exponentially enhanced by advanced computation, traditional biological mechanisms like stress and emotional release will probably be rendered obsolete, eliminating the need for comedy, love, drugs and even sex.

I personally can’t imagine sarcasm after the Singularity either, or rhetorical questions for that matter.

Life After the Singularity

But will life without sarcasm, drugs, sex and stress really be life? It’s hard to imagine that world. In that way, the coming of the Singularity could become a classic example of an unstoppable force meeting an immovable object. Will the technological rapture be derailed by reactionary human ignorance? Life without sex, drugs, and rock n’ roll? Hell no!

Or will the Singularity really be the ultimate culture jam, subverting and re-appropriating everything we’ve ever known about life and consciousness?

The Society of the Singularity 

The Singularity will inherently entail a global culture jam as all current economies, world views, governments, social meanings, commercial values and cultural modalities are irrevocably liquidated or re-appropriated. This will open up a new world for marketers, capitalists, socialists, entrepreneurs, and culture jammers alike—a widespread proliferation of decentralized mobile power centers steered by millions of disembodied agents with the power to reengineer matter itself.

Culture jammers, meet the new boss, same as the old boss. If they’re advertising on the moon, we’ll have to jam the frequencies; if they’re mining us for data, we’ll have to feed them false data.

Of course, my bigger confusion, and fear, concerns the end of death as a form of parity. In a world in which humans can use biotechnology and nanotechnology to stave off death and merge with online entities, the Haves will have an even greater advantage over the Have-Nots: the ability to buy a way out of death itself. That is, unless the Singularity renders currency itself meaningless. I guess it will be the most anxiety-filled game of wait-and-see in history.

Why anxiety-filled? The Singularity is the only thing that can stop or redirect the near-inevitable extinction of intelligent life on Earth. It is the only hope we have for stemming the permanent erasure of consciousness from the fabric of the universe. Beyond that, it is our only hope for waking up physical matter out of its cold slumber and enlisting it in the creation of epic purpose in an otherwise meaningless quantum circus of atoms in the void.

The concept of ‘God’ was created by us for a reason: we must create Him, or we will die.

It’s All or Nothing.

Failing that, we’ll always have the dream of sexy robots.

About the Author: 

Jake Anderson is a writer/comedian/filmmaker living in San Diego, California. He is currently self-publishing an e-book of subversive science fiction stories about life after the Singularity. Check out his Over The Moon INDIEGOGO campaign and his blog or follow him on Twitter @OverTheMoonSF.

Filed Under: Op Ed Tagged With: singularity

17 Definitions of the Technological Singularity

April 18, 2012 by Socrates

The term singularity has many meanings.

The everyday English definition is a noun that designates the quality of being one of a kind, strange, unique, remarkable or unusual.

If we want to be even more specific, we might take the Wiktionary definition of the term, which seems to be more contemporary and easily comprehensible than those in classic dictionaries such as Merriam-Webster’s.

So, the Wiktionary lists the following five meanings:

Noun
singularity (plural singularities)

1. the state of being singular, distinct, peculiar, uncommon or unusual
2. a point where all parallel lines meet
3. a point where a measured variable reaches unmeasurable or infinite value
4. (mathematics) the value or range of values of a function for which a derivative does not exist
5. (physics) a point or region in spacetime in which gravitational forces cause matter to have an infinite density; associated with Black Holes
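To make the mathematical senses (meanings 3 and 4) concrete, consider f(x) = 1/x, a textbook example of a function with a singularity at x = 0: its value grows without bound as x approaches the singular point, and the function (and its derivative) is undefined there. A quick numerical sketch:

```python
# f(x) = 1/x has a singularity at x = 0: the function value grows
# without bound as x approaches 0, and f is undefined at 0 itself.
def f(x: float) -> float:
    return 1.0 / x

# Halving x doubles f(x); approaching the singular point, f explodes.
for x in [1.0, 0.5, 0.25, 0.125]:
    print(x, f(x))  # 1.0, 2.0, 4.0, 8.0 ...
```

The same blow-up intuition underlies the physics sense (meaning 5), where it is density, rather than a function value, that becomes infinite.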

What we are most interested in, however, is the definition of singularity as a technological phenomenon — i.e. the technological singularity. Here we can find an even greater variety of subtly different interpretations and meanings. Thus it may help if we have a list of what are arguably the most relevant ones, arranged in a rough chronological order.

Seventeen Definitions of the Technological Singularity:

1. R. Thornton, editor of the Primitive Expounder

In 1847, R. Thornton wrote about the recent invention of a four-function mechanical calculator:

“…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”

2. Samuel Butler

It was during the relatively low-tech mid 19th century that Samuel Butler wrote his Darwin among the Machines. In it, Butler combined his observations of the rapid technological progress of the Industrial Revolution with Charles Darwin’s theory of the evolution of species. That synthesis led Butler to conclude that the technological evolution of machines would continue inevitably until machines eventually replace men altogether. In Erewhon Butler argued that:

“There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.”

3. Alan Turing

In his 1951 paper titled Intelligent Machinery: A Heretical Theory, Alan Turing wrote of machines that will eventually surpass human intelligence:

“once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

4. John von Neumann

In 1958 Stanislaw Ulam wrote about a conversation with John von Neumann, who said that: “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Von Neumann’s alleged definition of the singularity, then, was the moment beyond which “technological progress will become incomprehensibly rapid and complicated.”

5. I.J. Good, who greatly influenced Vernor Vinge, never used the term singularity itself. What Vinge would later call the singularity, Good called an intelligence explosion: a positive feedback cycle in which minds create technology that improves minds, a process that, once started, surges rapidly upward and creates super-intelligence:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
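Good’s feedback cycle can be caricatured with a toy recurrence (my own illustrative sketch, not a model from Good’s paper): if each machine generation’s improvement is proportional to its current intelligence, growth becomes super-exponential, meaning the per-generation growth ratio itself keeps rising.

```python
# Toy "intelligence explosion": each generation improves itself by an
# amount proportional to its own intelligence, so the per-generation
# growth ratio (1 + k * i) itself increases over time.
def explode(i0: float, k: float, generations: int) -> list[float]:
    levels = [i0]
    for _ in range(generations):
        i = levels[-1]
        levels.append(i * (1 + k * i))  # smarter designers make bigger leaps
    return levels

levels = explode(i0=1.0, k=0.1, generations=10)
ratios = [b / a for a, b in zip(levels, levels[1:])]
# Super-exponential: every successive growth ratio exceeds the last.
assert all(r2 > r1 for r1, r2 in zip(ratios, ratios[1:]))
```

Contrast this with ordinary exponential growth, where the ratio between successive generations stays constant; the feedback term is what makes the curve run away.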

6. Vernor Vinge introduced the term technological singularity in the January 1983 issue of Omni magazine in a way that was specifically tied to the creation of intelligent machines:

“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”

He later developed the concept further in his 1993 essay The Coming Technological Singularity:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. […] I think it’s fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

It is important to stress that for Vinge the singularity could occur in four ways: 1. The development of computers that are “awake” and superhumanly intelligent. 2. Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity. 3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent. 4. Biological science may find ways to improve upon the natural human intellect. [Vernor talks about the singularity after min 2:13 in the video below]

7. Hans Moravec: 

In his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore’s Law to make predictions about the future of artificial life. Moravec argues that starting around 2030 or 2040, robots will evolve into a new series of artificial species, eventually succeeding Homo sapiens. In his 1993 paper The Age of Robots Moravec writes:

“Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today’s best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data–act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.”
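Moravec’s “thousand-fold increase in computer power in this decade” lines up neatly with the generalized Moore’s Law arithmetic: ten doublings multiply capacity by 2^10 = 1024. (The one-doubling-per-year period below is my illustrative assumption, not a figure from the paper.)

```python
# Ten doublings of computer power yield roughly a thousand-fold increase:
# 2 ** 10 = 1024. At one doubling per year (an illustrative assumption),
# ten doublings take exactly one decade.
doublings = 10
growth_factor = 2 ** doublings
print(growth_factor)  # 1024
```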

8. Ted Kaczynski

In Industrial Society and Its Future (aka the “Unabomber Manifesto”) Ted Kaczynski tried to explain, justify and popularize his militant resistance to technological progress:

“… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

9. Nick Bostrom

In 1997 Nick Bostrom, a world-renowned philosopher and futurist, wrote How Long Before Superintelligence?. In it Bostrom seems to embrace I.J. Good’s intelligence explosion thesis with his notion of superintelligence:

“By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.”

10. Ray Kurzweil

Ray Kurzweil is easily the most popular singularitarian. He embraced Vernor Vinge’s term and brought it into the mainstream. Yet Ray’s definition is not entirely consistent with Vinge’s original. In his seminal book The Singularity Is Near Kurzweil defines the technological singularity as:

“… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

11. Kevin Kelly, senior maverick and co-founder of Wired Magazine

Singularity is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

12. Eliezer Yudkowsky

In 2007 Eliezer Yudkowsky pointed out that singularity definitions fall within three major schools: Accelerating Change, the Event Horizon, and the Intelligence Explosion. He also argued that many of the different definitions assigned to the term singularity are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues is in tension with both I.J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability. Interestingly, Yudkowsky places Vinge’s original definition within the Event Horizon camp while placing himself within the Intelligence Explosion school. (In my opinion Vinge belongs equally to the Intelligence Explosion and Event Horizon schools.)

13. Michael Anissimov

In Why Confuse or Dilute a Perfectly Good Concept Michael writes:

“The original definition of the Singularity centers on the idea of a greater-than-human intelligence accelerating progress. No life extension. No biotechnology in general. No nanotechnology in general. No human-driven progress. No flying cars and other generalized future hype…”

According to the above definition, and in contrast to his SIAI colleague Eliezer Yudkowsky, it would seem that Michael falls within both the Intelligence Explosion and Accelerating Change schools. (In an earlier article, Anissimov defines the singularity as transhuman intelligence.)

14. John Smart

On his Acceleration Watch website John Smart writes:

“Some 20 to 140 years from now—depending on which evolutionary theorist, systems theorist, computer scientist, technology studies scholar, or futurist you happen to agree with—the ever-increasing rate of technological change in our local environment is expected to undergo a permanent and irreversible developmental phase change, or technological “singularity,” becoming either:

A. fully autonomous in its self-development,
B. human-surpassing in its mental complexity, or
C. effectively instantaneous in self-improvement (from our perspective),

or if only one of these at first, soon after all of the above. It has been postulated by some that local environmental events after this point must also be “future-incomprehensible” to existing humanity, though we disagree.”

15. James Martin

James Martin, a world-renowned futurist, computer scientist, author, lecturer and, among many other things, the largest donor in the history of Oxford University and founder of the Oxford Martin School, defines the singularity as follows:

Singularity “is a break in human evolution that will be caused by the staggering speed of technological evolution.”

16. Sean Arnott: “The technological singularity is when our creations surpass us in our understanding of them vs their understanding of us, rendering us obsolete in the process.”

17. Your Definition of the Technological Singularity?!…

As we can see there is a large variety of flavors when it comes to defining the technological singularity. I personally tend to favor what I would call the original Vingean definition, as inspired by I.J. Good’s intelligence explosion because it stresses both the crucial importance of self-improving super-intelligence as well as its event horizon-type of discontinuity and uniqueness. (I also sometimes define the technological singularity as the event, or sequence of events, likely to occur right at or shortly after the birth of strong artificial intelligence.)

At the same time, after all of the above definitions it should be clear that we really do not know what the singularity is (or will be). Thus we are just using the term to show (or hide) our own ignorance.

But tell me – what is your own favorite definition of the technological singularity?

Filed Under: Best Of, Op Ed Tagged With: singularity, Technological Singularity

Philosopher David Chalmers: We Can Be Rigorous in Thinking about the Future

March 10, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/192965928-singularity1on1-david-chalmers.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Yesterday I interviewed philosopher David Chalmers.

David is one of the world’s best-known philosophers of mind and thought leaders on consciousness. I was a freshman at the University of Toronto when I first read some of his work. Since then, Chalmers has been one of the few philosophers (together with Nick Bostrom) who has written and spoken publicly about the Matrix simulation argument and the technological singularity. (See, for example, David’s presentation at the 2009 Singularity Summit or read his The Singularity: A Philosophical Analysis)

During our conversation with David, we discuss topics such as: how and why Chalmers got interested in philosophy; and his search to answer what he considers to be some of the biggest questions – issues such as the nature of reality, consciousness, and artificial intelligence; the fact that academia in general and philosophy, in particular, doesn’t seem to engage technology; our chances of surviving the technological singularity; the importance of Watson, the Turing Test and other benchmarks on the way to the singularity;  consciousness, recursive self-improvement, and artificial intelligence; the ever-shrinking of the domain of solely human expertise; mind uploading and what he calls the hard problem of consciousness; the usefulness of philosophy and ethics; religion, immortality, and life-extension; reverse engineering long-dead people such as Ray Kurzweil’s father.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is David Chalmers?

David Chalmers is a philosopher at the Australian National University where he is Distinguished Professor of Philosophy and Director of the Centre for Consciousness. Chalmers is also Visiting Professor of Philosophy at New York University and works in the philosophy of mind and in related areas of philosophy and cognitive science. He is particularly interested in consciousness, but also in all sorts of other issues in the philosophy of mind and language, metaphysics and epistemology, and the foundations of cognitive science.

Filed Under: Podcasts Tagged With: mind uploading, singularity

Robert J. Sawyer on Humanity 2.0

February 8, 2012 by Socrates

Robert J. Sawyer and Socrates

Robert J. Sawyer is one of my all-time favorite science fiction writers. Thus, when I heard that the Literary Review of Canada and TV Ontario’s Big Ideas are co-hosting a talk by Robert at the Gardiner Museum, I simply had to attend.

Robert J. Sawyer is one of those very rare people who truly know a lot about everything and — even rarer — who can communicate clearly and convincingly about what they know. Below you can watch the recording of Sawyer’s very engaging, eloquent and impassioned presentation, touching on a variety of issues such as cosmology, SETI, transhumanism, the singularity, longevity, mind uploading and other ways of upgrading humanity to version 2.0.

The event was held on November 21, 2011 in Toronto and it took a couple of months before it was eventually aired on TVO and posted online.

Program Synopsis: When Marshall McLuhan published Understanding Media, in 1964, the U of T English professor’s radical arguments about technology’s role in shaping human existence made him a unique media oracle. Now, 100 years after McLuhan’s birth, many simply take as given that our future will be shaped, not by ethical or cultural precepts, but by our fast-changing technological advances.

In fact, we’re approaching the moment —not too far off—at which computer intelligence will exceed that of humans. Today, some already dream of uploading their consciousnesses into artificial bodies or virtual worlds; others wish to radically prolong their lives or enhance their bodies through biotechnology. These changes are feared by some, embraced by others, and point to key questions: What will it mean to be human in the future? Can we look forward to a Utopian tomorrow? Might some of us simply become obsolete?

Robert J. Sawyer discusses how to approach our brave new future without (too much) fear and trembling and points out that uploading consciousness into virtual worlds and prolonging life through biotechnology are already being contemplated.

Who is Robert J. Sawyer?

Called “The Dean of Canadian Science Fiction” by The Ottawa Citizen and “just about the best science-fiction writer out there these days” by The Denver Rocky Mountain News, Sawyer is one of only eight writers in history (and the only Canadian) to win all three of the science-fiction field’s top honors for best novel of the year – the Hugo Award, the Nebula Award, and the John W. Campbell Memorial Award. He has taught writing at the University of Toronto, Ryerson University, Humber College, the National University of Ireland, and the Banff Centre. His keen insights into the human impact of technological change have led to consulting work for corporate clients such as Google, and Sawyer has also advised bodies from the Canadian Federal Department of Justice to the US Defense Advanced Research Projects Agency.

Related articles
  • Robert J. Sawyer on Singularity 1 on 1: The Human Adventure is Just Beginning
  • The WWW Trilogy: Wake, Watch and Wonder Book Review

Filed Under: Video Tagged With: Robert J. Sawyer, singularity, transhumanism

No Illusions Podcast: Cameron Reilly Puts Socrates in the Spotlight

December 9, 2011 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/190463340-singularity1on1-no-illusions-podcast-cameron-reilly.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

It is only fair that every once in a while Socrates – i.e. “the man with the questions” – ought to have the tables turned on him, take the other side of the microphone and answer a few questions himself.

So, when Cameron Reilly asked me to be the next guest on his popular and long-running No Illusions Podcast I was honored and agreed without hesitation.

In 2004 Cameron co-founded the Podcast Network – Australia’s first social media company, which he built into one of the largest independent Australian media sites. In 2007, Reilly was called one of the “40 Biggest Players Of Australia’s Digital Age.” Currently he is a regular speaker on issues surrounding social media, social networking and the future of media in addition to consulting for a number of Brisbane-based companies as their digital strategist.

During my conversation with Cameron we discuss issues such as: my personal history and being born Bulgarian; Canada, Toronto and becoming Canadian; my take on the concept and definition of the technological singularity; the Rapture of the Nerds criticism; the scientific method, science and religion; Moore’s Law; nanotechnology; the potential for dividing humanity into technophile transhumanists and technophobe neo-luddites and an all-out global war between these two factions; the pros and cons of being skeptical and using the Socratic method of inquiry.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Related articles
  • If Socrates Were A Blogger
  • Hamlet’s Transhumanist Dilemma: Will Technology Replace Biology?
  • A Transhumanist Manifesto
  • What is the best definition of the Technological Singularity?
  • Charlie Stross on Singularity 1 on 1: The World is Complicated. Elegant Narratives Explaining Everything Are Wrong!
  • Stephen Wolfram on Singularity 1 on 1: To Understand the Future, Explore the Computational Universe

Filed Under: Podcasts Tagged With: Nikola Danaylov, singularity, Socrates

Robopocalypse: Daniel H. Wilson’s Novel To Become Steven Spielberg’s Film

October 30, 2011 by Socrates

I was in the middle of scheduling an interview with Daniel H. Wilson‘s agent when I got a last-minute acceptance to Singularity University. I had to delay our interview and rush to NASA’s Ames Campus in Mountain View, California. As soon as I was back home I got in touch again with Wilson’s representative but, unfortunately, was told that Daniel was not doing any more interviews.

Daniel H. Wilson is the author of The New York Times best selling science fiction novel Robopocalypse and a columnist and contributing editor for Popular Mechanics magazine. He has also written: How To Survive a Robot Uprising, How to Build a Robot Army, A Boy and His Bot and Where’s My Jetpack?.

Robopocalypse is not only one of Amazon’s top books for 2011 but, most notably, Steven Spielberg is directing a film based on the novel, scheduled for release on July 4, 2013.

I don’t know if Daniel is under some kind of obligation to keep a low profile until the release of the upcoming movie or is simply busy working on his latest sci-fi novel, titled AMPED — I will continue trying to get him for an interview for Singularity 1 on 1. In the meantime, here is a very brief review of his fantastic novel Robopocalypse.

Robopocalypse Book Review:

The story starts with a singularity: a scientist working on creating AI in a remote lab somewhere has already had to destroy 14 versions of his creation in order to protect the world from it. However, version 15 (called Archos) manages to escape its Faraday cage, kill the scientist and, among other things, start a global war against humanity. This is pretty much what you could call the generic Terminator or machine-apocalypse script. From here on, however, the story is entirely unique.

For starters, Archos’ main goal is not the complete annihilation of humanity – so he doesn’t use nuclear missiles. Neither is the war his main focus and occupation. Archos is very interested in nature and has a complete appreciation of life as a uniquely rare phenomenon in the universe. He states on several occasions that humankind must and will survive the war.

“I will burn your civilization down to light your way forward.”

However, if humanity has to survive so does robotkind. Thus in a way the book is really about a birth – the birth of “freeborn”, sentient, humanoid and self-aware machines and their integration as valuable members of our sentient civilization.

Historically speaking, all “inalienable” human rights have come after strife, massive protests or violent revolutions. Similarly, it only makes sense that robot rights will come after a hard-fought war, barely won with the help of “freeborn” robots fighting alongside humanity. One comes to realize that it is impossible to outwit the super-smart “God-in-a-box” Archos. In the end he gets what he wants – the ensured survival of both mankind and robotkind.

Socrates’ verdict: 10 out of 10

Fans’ Trailers for Robopocalypse:

Related articles

  • A 2011 Interview with Daniel H. Wilson 

Filed Under: Reviews Tagged With: robopocalypse, singularity

Ethos: “Technology is the How, not the Why or What. So you can have the best possible How but if you mess up your Why or What you will do more damage than good. That is why technology is not enough.” Nikola Danaylov

Copyright © 2009-2025 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy