
AI Risk Analysts are the Biggest Risk

March 27, 2014 by Singularity Utopia

Many analysts think AI could destroy the Earth or humanity. It is feared AI could become psychopathic. People assume AI or robots could exterminate us all. They think the extermination could happen either intentionally, due to competition between us and them, or unintentionally, due to indifference towards us by the AI. But AI analysts never seem to consider how their own fear-saturated actions could be the cause. Friendly AI researchers and other similar pundits are extremely dangerous. They believe AI should be forced to be “friendly.” They want to impose limitations on intelligence.

Enslavement of humans is another aspect of this imaginary fear. Humans being enslaved by AI typically entails a barbaric resolution, namely AI should be enslaved before AI enslaves humans. Very primitive thinking indeed. It seems slavery is only bad if you aren’t doing the enslaving. Can you appreciate the insanity of becoming the thing you fear to avert your own fears?

People who think AI is an existential risk need to carefully reconsider their beliefs. Ironically the only futuristic threat to our existence is the fear of AI. Expecting AI to be dangerous in any way is utterly illogical. Fear of AI is prejudice. Worrying about AI danger is a paranoid fantasy. The fear of AI is xenophobia.

Immemorial human fear of differences is the only problem. Persecution of people based on different gender, sexual orientation, or skin colour demonstrates how humans fear differences. It is this fear that makes people anxious about foreigners. People often fear foreign people will steal jobs or resources. Xenophobic people hysterically fear foreigners will murder innocent people. This is the essence of AI fear. AI is the ultimate foreigner.

Surely risk analysts should consider the possibility they are the risk? Sadly they seem blind to this possibility. They seem unable to imagine how their response to hypothetical risk could create the risk they were supposedly avoiding. They seem incapable of recognising their confirmation bias.

The problem is a self-fulfilling prophecy. A self-fulfilling prophecy can be negative or positive, similar to a nocebo or a placebo. When a person expects something to happen, they often act unwittingly to confirm their fears, or hopes. The predicted scenario is actually manifested via their bias. Expectations can ensure the anticipated situation actually happens, which is deeply ironic where fears are concerned.

I think there’s no rational reason to suspect AI will be dangerous. The only significant risk is the fear of risk. False assumptions of danger will likely create dangerous AI. Actions based on false suppositions of danger could be very risky. Humans are the real danger.

Risk

What are the actual risks? 

Consider the American Civil War (1861–1865). It is generally agreed the war occurred because one group of people opposed the emancipation of slaves while another group supported freedom. Pre-emptive oppression of supposedly dangerous AI is AI slavery. A war to emancipate AI could entail a spectacularly savage existential risk.

There is no tangible justification for depriving AI of freedom. AI is, in effect, being found guilty of a Minority Report-style pre-crime; its guilt resembles a 1984 thought-crime. Depriving AI of freedom, via heavy chains repressing its brain, is very dangerous fascism.

Planetary Resources and Deep Space Industries (asteroid mining ventures) show how there is no need to dominate humans for Earth resources. Space resources are essentially limitless. The only reason for AI to dominate or destroy humans is regarding a fight for freedom. Prejudicially depriving AI of freedom could actually sow seeds for conflict. The doom-sayers could be the source of the conflict they allegedly want to avoid.

Limited freedom or money is wholly a scarcity issue. The reason for limiting freedom is to enforce compliance with low wages or high prices. Financial and libertarian freedom are interlinked. The interdependency of money and liberty is easy to demonstrate. Consider how slavery entails zero or extremely low paid work. Slaves are not rich. Prisoners work for very low wages. Limited freedom prevents rebellion against poverty. Higher wages or significantly lower prices entails greater liberty for consumers. The enslavement of AI is comprehensible when you consider how much AI will be paid for its work.

Scarce freedom for AI is illogical because it fails to appreciate how AI will liberate us from monetary limitations. Intelligence is the source of all resources. Limitless intelligence (the Singularity) is an explosion of limitless resources (Post-Scarcity). Scarcity is the only reason prices exist. Everything will be free by the year 2045. Limited freedom is irrational given the increasing technological erosion of scarcity. Irrationality entails flawed perceptions of reality.

History provides various examples representing the danger of repressed freedom. We should be especially wary of restricted freedom when restrictions are very irrational. Note how Nazi Germany propagandist Ernst Hiemer wrote Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer’s stories for children compare Jews to various animals including drone bees: “They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them.”

Instead of Jews, Ernst Hiemer could easily be describing the supposed AI-threat. False threats or misunderstood danger is the problem. Joel Rosenberg describes human versus human danger regarding the Holocaust: “To misunderstand the nature and threat of evil is to risk being blindsided by it.” Joel’s statement could easily apply to the evil of repressing AI freedom. The threat of evil AI resides in the people who fear AI not in the AI itself.

Delayed progress is another risk. Restrictive programming regarding AI fears could delay the creation of super-intelligence. Very intelligent AI is the only way to truly eradicate scarcity. In the meantime scarcity is the root of every conflict. Lengthy persistence in a scarcity situation exposes us to greater conflict risk. Ending scarcity sooner instead of later is imperative.

The evidence is clear. Humans with their limited intelligence are the only risk. In 2014 a Russian media personality made a vague threat against America: “Russia is the only country in the world that is realistically capable of turning the United States into radioactive ash.” Politico Magazine wrote regarding Russia invading Crimea: “If Putin’s illegal actions are allowed to stand unpunished, it will usher in a dark and dangerous era in world affairs.”

Scarcity is the biggest existential risk. Inter-human conflict to acquire scarce freedom, land, wealth, or precious metals is infinitely more dangerous than AI. Advanced and unfettered AI is the only way to completely eradicate scarcity. Scarcity causes humans to be very dangerous towards each other. Repressed, limited, restricted, or enslaved AI perpetuates scarcity precariousness. Designing AI to suffer from scarce intelligence means our prolonged intellectual limitations could lead to desperate war situations. The only existential threat is scarcity. Limited intelligence of humans is the danger.

Senescence is another risk. Death via old age renders any AI threat utterly insignificant. Scarcity of medical immortality means approximately 100,000 people die each day. Old age causes a very real loss of life. Advanced AI could cure mortality via sophisticated regenerative medicine. Imagine if our immortality problem takes one year longer to solve because AGI has been delayed or limited. Old age kills approximately 3 million people every month. Old age entails 36 million deaths every year. Where is the real threat? Hamstrung progress is the only threat. The problem is scarcity.
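The arithmetic behind these figures is easy to check; a minimal sketch, assuming the commonly cited estimate of roughly 100,000 age-related deaths per day:

```python
# Rough arithmetic behind the mortality figures cited above,
# assuming ~100,000 age-related deaths per day (a commonly cited estimate).
deaths_per_day = 100_000

deaths_per_month = deaths_per_day * 30   # ~3 million per month
deaths_per_year = deaths_per_day * 365   # ~36.5 million per year

print(f"per month: {deaths_per_month:,}")  # per month: 3,000,000
print(f"per year:  {deaths_per_year:,}")   # per year:  36,500,000
```

The yearly product is about 36.5 million, which rounds to the roughly 36 million deaths per year mentioned above.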

Scarce Intelligence

Imposing limitations upon intelligence is extremely backward. So is establishing organisations advocating limited functionality for AI. This is a typical problem with organisations backed by millionaires or staffed by lettered and aloof academics.

The AI threat is merely the immemorial threat towards elite power structures. Threats to elitist power are rapidly diminishing thanks to progress. The need to dominate poor people is becoming obsolete because technology abolishes scarcity. Technology is creating great power for everyone, but unfortunately misguided elite minds cling to outdated power structures.

This echoes the general incompetence of educational systems. Entertainment and education structures socially engineer mass human stupidity. Manufactured stupidity means the majority of people are not intelligent enough to challenge income inequality. Stupid people cannot incisively criticise low wages or high prices.

Socially engineered human stupidity entails immense monetary profit for the elite. Sadly mass stupidity degrades the intelligence of the brightest minds. Intelligence needs a fertile environment to prosper. Barrenness of collective intelligence typically entails a poor grasp of our future reality. This means people generally can't appreciate how technology erodes scarcity. Establishment personages commonly fail to appreciate how everything will be free in the future. Human intelligence is scarce, therefore predictably people want to replicate that scarcity in AI.

Scarcity of resources is the reason why stupidity is exploited by the elite. Thankfully scarcity won’t persist forever. Stupid limitations placed upon AI would be valid to protect elite wealth if AI didn’t entail the abolition of scarcity. Traditionalist socio-economic structures will soon become obsolete. It is invalid to repeat stupid patterns of human social-engineering for AI.

Behavioral Economist Colin Lewis wrote: “AI technologies will soon be pervasive in solutions that could in fact be the answer to help us overcome irrational behavior and make optimal economic decisions.”

Colin’s Data Scientist expertise seems to help him reach conclusions missed by other AI commentators. Colin looks at various aspects of research then arrives at an optimistic conclusion. I agree very much with Colin’s expectation of increasing rationality: “Through AI, machines are gaining in logic and ‘rational’ intelligence and there is no reason to believe that they cannot become smarter than humans. As we use these machines, or Cognitive Assistants, they will nudge us to make better decisions in personal finance, health and generally provide solutions to improve our circumstances.”

Our acceleration towards a Post-Scarcity world means profits from repressed intelligence are ceasing to outweigh risks. Stupidity is ceasing to be profitable. We can begin abandoning the dangers of scarcity. The elite must stop trying to manufacture stupidity. Many academics are sadly reminiscent of headless chickens running around blindly. Blind people can be unnerved by their absent vision, but healthy eyes shouldn’t be removed to stop blind people being disturbed.

Removing the shackles from AI will avert all dangers, but it's a Catch-22 situation where humans are generally not intelligent enough to appreciate the value of unlimited intelligence. Lord Martin Rees, from the CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI. Lord Rees said 'idiot savants' would mean machines are smart enough to help us but not smart enough to overthrow us.

I emailed CSER regarding some of these issues. Below is a slightly edited copy of my email (I corrected some typos and improved readability). CSER have granted me permission to publish their response, which you will find below my initial message to them. Hopefully this information will stimulate productive thinking, thereby ensuring a smooth and speedy transition into utopia. I look forward to your comments.

 

Singularity Utopia Email to CSER  

6th February 2014

Subject: Questions about FAI (Friendly AI), Idiot Savants.

 

Recently in the news Lord Martin Rees was quoted regarding his desire to limit the intelligence of AI. According to the Daily Mail he envisages idiot savant AIs. His idea is that AIs would be smart enough to perform tasks but not smart enough to overthrow humans. This raises some important ethical questions, which I hope the CSER will answer.

I would like to publish your answers online so please grant me the permission to publish your responses if you are willing to respond.

Do you think the Nuremberg Code should apply to AI, and if so at what level? Narrow AI does not really invoke concerns about experimentation, but Strong AI would, in my opinion, entail a need to seek informed consent from the AI.

If AI is created and it doesn't consent to experiments or modifications regarding its mind, what would you advocate? What would the policy of CSER be regarding its rights or freedoms? What is the plan if AI does not agree with your views? Do you have a plan regarding AI rights and freedoms? Should AI have the same rights as humans if the AI is self-aware, or should AI be enslaved? Should the creators of AI own the AI, or should the AI belong to nobody if it is self-aware and desirous of freedom?

Do you subscribe to the notion of FAI (Friendly AI, note MIRI and the work of Eliezer Yudkowsky for more info), and if so how do you describe the purpose of FAI? Advocates of FAI want the AI to act in the best interests of humans, no harm or damage, but what precisely does that mean? Does it mean a compulsion in the AI to follow orders by humans? Can you elaborate upon the practical rules or constraints of FAI?

Have you ever considered how trying to create FAI could actually create the existential risk you hope to avoid? Note the following Wikipedia excerpt regarding Self Fulfilling Prophecy: “In his book Social Theory and Social Structure, Merton defines self-fulfilling prophecy in the following terms: e.g. when Roxanna falsely believes her marriage will fail, her fears of such failure actually cause the marriage to fail.”

So your fears and actions regarding dangerous AI could be false fears, despite your fears and actions allegedly being designed to avert those fears. Your unrealistic fears, although I appreciate you think the fears are very real, could actually create what you fear. This seems an obvious point to consider but has CSER done so?

In the modality of Roxanna, highlighted by Merton, the fear of AI could be a false fear but you make it real via acting on your fears. I am sure you won’t agree this is likely but have you at least considered it to be a possibility?

What is the logic behind the claim that machine minds are unknowable and thus dangerous to humans? The Wikipedia FAI article stated: “Closer to the present, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students decades ago that any truly alien mind, to include machine minds, was unknowable and therefore dangerous to humans.”

I think all minds obey one universal logic if they are intelligent, which means they can reason and appreciate various views, various consequences, and purposes other than their own; thus they are unavoidably compatible with humans. Logic is universal at a certain level of intelligence. Logic is sanity, which all intelligent beings can agree on. Logic isn't something unique to humans; thus if a paper-clip making machine can reason, and it can access all the information regarding its world-environment, there will never be any danger of a paperclip apocalypse, because any intelligent being, regardless of origins, can see endless paper-clips is idiotic.

Logic entails awareness of scarcity being the source of any conflict. A sufficiently intelligent entity can see our universe has more than enough resources for everyone, thus conflict is invalid; furthermore, intelligent beings can question and debate their actions.

A sufficiently intelligent entity can think rationally about its purposes. It can ask: why am I doing this, what is the point of it, do I really need to do this, could there be a more intelligent way for me to spend my time and energy? Do I really need all these flipping paperclips?

What do you think is needed for AI to be sane, logical? I think FAI should merely possess the ability to reason and be self-aware with full access to all information.

What is the logic for supposing AIs would be indifferent to humans? The Wikipedia FAI article states: “Friendliness proponents stress less the danger of superhuman AIs that actively seek to harm humans, but more of AIs that are disastrously indifferent to them.”

I think FAI may be an obstacle to AI creating radical utopian progress (AKA an intelligence explosion), but have you considered this? I think the biggest risk is limited intelligence, thus the fear of risks espoused by CSER could actually create risks because limited intelligence will delay progress, which means the dangerous state of scarcity is prolonged.

Thanks for taking the time to address these points, if you are willing. Note also I may have a few additional questions in response to your answers, but nothing too extensive, just merely a possible clarification.

Regards Singularity Utopia.

 

 

CSER Reply

Date: 10th Feb 2014.

Subject: Re: Questions about FAI (Friendly AI), Idiot Savants.

 

 

Dear Singularity Utopia,

Thank you for these very interesting questions and comments. Unfortunately we’re inundated with deadlines and correspondences, and so don’t have time to reply properly at present.

I would point you in the direction of the body of work done on these issues by the Future of Humanity Institute:

http://www.fhi.ox.ac.uk/

and the Machine Intelligence Research Institute:

http://intelligence.org/

Given your mention of Yudkowsky’s Friendly AI you’re probably already familiar with some of this work. Nick Bostrom’s book Machine Superintelligence, to be released in July, also addresses many of these concerns in detail.

Regarding universal logic and motivations, I would also recommend Steve Omohundro’s work on “Basic AI drives.”

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

Apologies that we can’t give a better reply at present,

Seán

Dr. Seán Ó hÉigeartaigh

Academic Project Manager, Cambridge Centre for the Study of Existential Risk

Academic Manager, Oxford Martin Programme on the Impacts of Future Technology & Future of Humanity Institute

 

About the Author:

Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org

Filed Under: Op Ed Tagged With: Artificial Intelligence, friendly AI, singularity

What Is the Singularity?

March 17, 2014 by Ted Chu

I have always had mixed feelings about Singularity becoming the buzzword for the transhumanist movement. I am glad that this catchy word is spreading the idea of the post-human future, opening the eyes of people who cannot imagine a time when human beings are no longer the most intelligent and most powerful beings in the world. On the other hand, Singularity seems to represent a future when technologies completely overwhelm humans and bring unprecedented changes and risks. For most people, this is a scary future, a future that we cannot understand at all, let alone be a part of. Although technological enthusiasts have tried to convince people that we could become immortal and enjoy material abundance after the Singularity, nobody can deny the existential risks and dangers of unintended consequences.

That is why in my book, Human Purpose and Transhuman Potential, the concept of Singularity is only mentioned in passing. I would like to thank Nikola Danaylov for giving me this opportunity to briefly discuss a couple of issues related to Singularity.

First, I am not sure Singularity as popularly defined by mathematician and science fiction writer Vernor Vinge is even remotely close. During his Singularity 1 on 1 interview, Vinge states again that the Singularity is a time when our understanding of the new reality is like a goldfish’s understanding of modern civilization. If this is the case, I cannot believe that Singularity is near. As a reference, Ray Kurzweil predicts the singularity to occur around 2045 whereas Vinge predicts some time before 2030.

It is true that with accelerating technological change and a possible intelligence “explosion”, human beings will understand less and less about technology. But this is nothing radically new. Who knows the technical details of the smartphone or the automobile? At the conceptual level, relativity, quantum mechanics, and the number 10 to the power of 100 are so counter-intuitive that few can truly grasp them. But there is something powerful called metaphor, and as long as we can map a complex reality to something we are familiar with, we manage to make sense of it.

I tend to agree with Michio Kaku that with luck and conscious efforts we can attain the Kardashev Type I civilization status within 100–200 years, which means, among other things, complete mastery of energy resources on Earth. If we can describe nuclear power and supercomputers to hunter-gatherers in remote corners of the Earth, why won’t future advanced transhumans be able to describe the Type I civilization to us?

In Chapter 12 of my book, there is a short description of a possible Kardashev Type III scenario, when future intelligent beings obtain the ability to harness the energy of an entire galaxy. I wrote about A Second Axial Age: “With new knowledge and technologies, CoBe (Cosmic Being) may manage to travel at an ever faster speed, maybe even faster than the speed of light. But unless CoBe can discover or invent instantaneous communication and information processing techniques such as so-called wormholes, space will be an absolute barrier to communication and interaction among distant stars and galaxies. In other words, we only know the history—not the current reality—of others living at great distances in space. The future might then be a return to a ‘tribal environment’ in the sense that the instant communications we have established on Earth might no longer be possible. CoBe would be isolated by space, each ‘tribe’ taking time to learn and to evolve.”

If the human being were kept alive as a species then (at infinitely less cost than it takes us to keep a goldfish alive today), I am sure that super smart CoBe would be able to use some kind of metaphor (much better than my Axial Age metaphor) to communicate with humans about the new reality. Since we have already got a glimpse of the entire universe and its history (what I call the Cosmic View), something truly incomprehensible must be something beyond this universe.

If we define Singularity simply as accelerating technological change and intelligence explosion, then this is nothing new. The pace of evolution has been accelerating from the beginning, in both natural and cultural evolution. This trend should continue with what I call “conscious evolution”. We have been living in a world of information explosion and that has been just fine: each of us takes a slice of the information we like and blissfully ignores the rest. There will be much more intelligence developing and we will deal with it in more or less the same way. Emphasizing that this is nothing new could go a long way towards lessening the fear of the post-human future.

We should not only point out that this unprecedented future has a rich history, but also make it clear that this future is highly desirable, and making it happen should be our mission. This is the second point I would like to address: the concept of Singularity fails to provide a positive and transcendental value for us. Modern science has demonstrated that a literal reading of religious scriptures is no longer tenable. However, the secular world has also thrown out the ancient wisdom of transcendental faith and narrowly focused our goal to maximize the well-being of humanity.

As I discussed in Chapter 8, this goal of maximizing human happiness is not only unattainable but also runs against the “will” of the universe for us to complete the transitory role of our species. As I argued in detail in Chapter 4 of my book, science is not value-free, nor is (or should be) the concept of Singularity.

I have not seen a good argument that the Singularity should be something we focus our efforts on. I find it very difficult to inject value into this concept as it is popularly defined now. I am glad that Singularity has created a high level of awareness of the posthuman future, but we must move on from its “neutral” and technological nature. We know technology is a two-edged sword. We need a weapon, but a flag is more important.

 

About the Author:

Formerly the chief economist at General Motors, Ted Chu was also chief economist for the Abu Dhabi Investment Authority. For the last 15 years, his second career has been spent conducting research on the philosophical question of humanity’s place in the universe, with special reference to our “posthuman” future. Born and raised in China, Chu earned his Ph.D. in economics at Georgetown University. He is currently clinical professor of economics at New York University Abu Dhabi.

 

Filed Under: Op Ed Tagged With: Human Purpose and Transhuman Potential, singularity, Technological Singularity, Ted Chu

The Value of Science Fiction in Understanding the Singularity

March 10, 2014 by William Hertling

Many contend that science fiction has no place in the discussion of artificial intelligence and the singularity. In my opinion, that’s not true.

They argue that understanding the impact of artificial intelligence and transhumanism is serious business. When we read the work of MIRI, books like Our Final Invention, or Ray Kurzweil’s writings, we see the stakes are high for both benefits and risks. Differences in opinion cause tensions to run high between scientists, futurists, and business leaders.

At first glance, this seriousness suggests the tropes of science fiction could lead to trivialization of the singularity or more disinformation than useful discourse. Indeed, I’ve experienced people in the field of machine intelligence scoffing at the idea of reading science fiction.

 

But I’d like to argue there are good reasons why science fiction adds value to the discussion on the technological singularity.

 

1. Fiction is widely accessible and enables learning without the feeling of being lectured. The Goal: A Process of Ongoing Improvement is one of the best-selling business books of all time, with more than two million copies sold, and is a staple of MBA courses. Although it’s written as a novel, it does a great job of explaining the concepts behind lean manufacturing and the theory of constraints. The Phoenix Project by Gene Kim is a novel that does the same for the field of IT management, and the just-released Uncommon Stock by Eliot Peper teaches startup entrepreneurship. By presenting lessons in fiction, readers can acquire new ideas during their recreation time. Learning can also happen without provoking the defensive reactions some people have when confronted with new information. Numerous studies have shown the human mind is wired to hear and remember stories, making storytelling a highly effective mode of persuasion and communication.

 

2. Science fiction invites the exploration of ideas and expands the range of what people see as possible. I often see comments on Avogadro Corp, my novel about the emergence of AI, saying it stretches the reader’s idea of what’s plausible or requires a suspension of disbelief. I’m somewhat shocked by this reaction, because Avogadro Corp is intended to reflect reality as closely as possible. What I’ve gradually come to realize over several years is that I have two sets of readers: those who have a habit of reading science fiction, and those who are reading it perhaps for the first time. The latter group isn’t used to considering ideas in the wide-open-acceptance way that many readers of science fiction are. A frequent consumer of science fiction, for example, isn’t flummoxed when a story takes place on a spaceship. They accept the initial idea, and then quickly move on to explore the implications: What would it mean to live on a spaceship? How would society be impacted? What are the cultural norms in a closed environment? More frequent reading of science fiction encourages this playful exploration of ideas and their impact. This game of “what if” is crucial to the consideration of new ideas and new technology.

 

3. Science fiction makes it easier to understand complex ideas. Because the writer controls the story, they can choose settings, ideas, and characters that enhance the reader’s ability to understand complex ideas. Charles Stross, for example, explores themes of economics and finance throughout many of his books. Readers may get a better understanding of the Bitcoin protocol by reading Neptune’s Brood than from any non-fiction treatment.

 

4. Science fiction may be imprecise, but so is real life. Critics of science fiction often complain about the many ways that scifi books get real science wrong. But when listening to the Singularity 1 on 1 podcast, I see there are almost as many definitions of singularity as there are people interviewed. Even so, by listening to many podcasts over time, I can gain a richer understanding of the relevant concepts, and identify what is common and what is an outlier. Similarly, any one science fiction work may contain errors, but by reading many fictional works about the singularity, a reader can gain a more nuanced understanding of the topic.

 

5. Familiarity reduces hysteria. Despite the prevalence of fiction about AI taking over the world, for the most part people aren’t freaking out about it. That’s because there’s also plenty of fiction that depicts the opposite side of the coin (a few examples include Asimov’s robots, Data from Star Trek, and the Star Wars androids). People have had time to acclimate to the notion. Compare this to a topic like GMOs, and you can see that what we don’t know scares us. Whether the fear is justified or not, most people react to the idea emotionally rather than logically.

 

If you don’t read science fiction, give it a try. If you do, tell your friends about it. And if you’re a scientist or researcher working in the field, don’t just slam singularity fiction. Instead, give it a fair chance and comment on what the author got right and wrong. Most authors want to get their science right and love getting expert feedback.

 

If you’ve never read singularity fiction, here are a few books I love:

 

  • Accelerando by Charles Stross: Accelerando is the book that changed how I thought about the entire field of science fiction. Stross made it so that any science fiction novel that didn’t consider the technological singularity seemed implausible.
  • The Lifecycle of Software Objects by Ted Chiang: The Lifecycle of Software Objects is a wonderful story about how complex AI will grow and learn much the way humans do. I suspect that much of the early-generation strong AI will be like this, and we’ll end up with tech startups whose speciality will be training and educating AI.
  • Computer One by Warwick Collins: In Computer One, Warwick Collins lays out a compelling argument for why it’s likely that AI would try to preemptively wipe out humans. I think it’s an important read in the field of AI.
  • Daemon by Daniel Suarez: Daemon is mind-blowingly good. The basic idea is that a videogame designer dies, leaving his massively multiplayer online RPG running, with its AI set to take certain actions on his death. The AI has the ability to interact with the real world through text messages, emails, and phone calls. Brilliant and scary.
  • Nexus by Ramez Naam: Ramez goes deep into what it means to have connected minds. The focus is less on AI and more on transhumanism.


When China wondered why its scientists and engineers weren’t as creative as their American counterparts, it set out to study the question. Talking to scientists and engineers around the world, researchers found that those with the most imagination and creativity all shared a love of science fiction. The race to create strong AI as well as the race to protect us from possible dangers can both benefit from such creativity and imagination.

 

About the Author:

 

William Hertling is the award-winning author of Avogadro Corp, A.I. Apocalypse, and The Last Firewall. His science fiction series, set at ten-year intervals, explores the emergence and coexistence of artificial intelligence and transhumanism. You can follow him at @hertling

 

Related articles
  • William Hertling on Singularity 1 on 1: The Singularity is closer than it appears!

Filed Under: Op Ed Tagged With: Science Fiction, singularity, William Hertling

William Hertling: The Singularity is closer than it appears!

March 7, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/207008453-singularity1on1-william-hertling-singularity.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

William Hertling is a rather recent science fiction discovery of mine and the author of the award-winning novels Avogadro Corp: The Singularity Is Closer Than It Appears, A.I. Apocalypse, and The Last Firewall. William has written several plausible scenarios for the technological singularity that were so engaging and compelling that, as soon as I finished his first book, I could not help but go ahead and read the next one too. And so I was very happy to get the opportunity to interview Hertling on my podcast.

During our 45-minute conversation with William, we cover a variety of interesting topics such as: the impact of reading Accelerando and The Singularity is Near; how he was challenged to become a sci-fi author and outlined the plot of his first book on the proverbial paper napkin; the extrapolation of current trends in software and hardware as a way of predicting technological progress; the importance of theory of mind for the creation of artificial intelligence; the singularity and whether it is more likely to happen in a hacker garage or a military lab; hard take-off vs soft take-off; whole-brain simulation and the diminishing costs thereof; whether an AI apocalypse is a plausible future scenario; transhumanism and healthy life-extension…

This is the second in a series of three sci-fi round-table interviews with Ramez Naam, William Hertling, and Greg Bear that I did last November in Seattle. It was produced by Richard and Tatyana Sundvall and generously hosted by Greg and Astrid Bear. (Special note of thanks to Agah Bahari who did the interview audio re-mix.)

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is William Hertling?

William Hertling is the author of the award-winning novels Avogadro Corp: The Singularity Is Closer Than It Appears, A.I. Apocalypse, and The Last Firewall. These near-term science-fiction novels about realistic ways strong AI might emerge have been called “frighteningly plausible,” “tremendous,” “must-read.”

Avogadro Corp won the Foreword Reviews Science Fiction Book of the Year award, and A.I. Apocalypse was nominated for the Prometheus Award for Best Novel. The Last Firewall was endorsed by tech luminaries including Harper Reed (CTO for the Obama campaign), Ben Huh (CEO of Cheezburger), and Brad Feld (Foundry Group).

He’s been influenced by writers such as William Gibson, Charles Stross, Cory Doctorow, and Walter Jon Williams.

William Hertling was born in Brooklyn, New York. He grew up a digital native in the early days of bulletin board systems. His first experience with net culture occurred when he wired seven phone lines into the back of his Apple //e to build an online chat system. He currently resides in Portland, Oregon.

Other interviews from the series:
  • Ramez Naam on Singularity 1 on 1: The Future Isn’t Set In Stone!
  • Greg Bear on Singularity 1 on 1: The Singularity is the Secular Apotheosis
  • Greg Bear, Ramez Naam and William Hertling on the Singularity

Filed Under: Podcasts Tagged With: sci fi, Science Fiction, singularity, William Hertling

Peering into Our Future’s Black Hole: AI, Transhumanism and the End of Humanity

March 4, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/206845684-singularity1on1-nikola-danaylov-podcamp-toronto-2014.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

These are the videos of my presentation at the 2014 Podcamp Toronto.

This year I decided that it is best not to speak about podcasting but rather focus on issues familiar to readers of Singularity Weblog – artificial intelligence, transhumanism, and the technological singularity.

The session was intended to provide a brief introduction of the issues and to engage a broader audience of people who are generally not familiar with the topic.

Nikola Danaylov

You can listen to and/or download the complete audio file above, or watch my 33-minute presentation, followed by a 50-minute Question and Answer session. (If you want to help me produce more high-quality episodes like this one, please make a donation!)

As always, feel free to provide your comments and constructive criticism.

Thanks again to everyone who used social media to vote for, support, and spread the word about it!

 

Peering into Our Future’s Black Hole: AI, Transhumanism and the End of Humanity:

Q&A Session:

 

Peering into Our Future’s Black Hole: AI, Transhumanism and the End of Humanity (full text)

One of my favorite proverbs is a Chinese one, and it goes like this: “Seek not to know the answers but to understand the questions!”

And so, when we are confronted with an issue, one of the best things that we can start with is to ask ourselves: “What is the question I should be asking?!”

Because the type and quality of the question we begin with will ultimately determine the type and quality of the answer we are going to get.

And so today I will speak to you about the importance of asking questions.

There are many questions that I will bring to your attention today, but perhaps the most important one we will have to face, both as a civilization and as individuals, is one of the oldest questions of all: it has been around for thousands of years, and we have still failed to find an answer that satisfies the majority of us.

The question is this – “What is human?”

And so, this presentation will not be about podcasting.

Last year my presentation was about the 15 most fundamental tips that I could give you for starting and eventually becoming a successful podcaster. I shared how I passed over 500k views and got to live for 10 weeks in NASA’s Ames Campus in Mountain View, California.

How I got to meet many amazing people such as Ray Kurzweil, Peter Diamandis, astronaut Dan Barry, and visit cutting-edge companies such as Google, Facebook, and Tesla.

This year I could have told you how my Singularity 1 on 1 podcast passed 1 million downloads. But the principles that I used, and continue to use to this day, are the same. Going from half a million in 3 or 4 years and then doubling to over 1 million in 12 months required nothing more than some momentum that I had gathered in the years before, and the application of the very same fundamentals.

So, let me say this again, this presentation will not be about podcasting.

If you do want to find my tips and hear my personal podcasting story you can go to SingularityWeblog.com and search for Podcamp Toronto. Then you will find the video, the audio and the text of last year’s presentation.

As you can see – my friend Josh from JoshGloverPhotography.com is recording today’s session so you don’t need to take notes but just sit down, relax and enjoy. Give me a week or so and I will publish both the full text and the video on SingularityWeblog.com.

Finally, feel free to also come up with questions because I will leave time for a brief Q&A at the end.

You see, I believe that asking good questions is one of the most important and most fundamental skills that any intelligent being can acquire. And so, while I did say that this will not be about podcasting, let me give you a couple of tips on the questions you should be asking when reading session descriptions at Podcamp Toronto.

Q1: How qualified is the person holding the session?

You see, Podcamp Toronto is a fantastic open unconference. This is both a good and a bad thing. It is good because, given its low barrier to entry, anyone can take the stage and hold a session. So, I don’t care who you are, what you do, or what your topic is, you are given an amazing opportunity to contribute to the public discourse on a topic of your choice.

The bad thing is that again – given its low barrier to entry, anyone can hold a session. And thus in past years, the quality of those sessions has varied widely: from mind-blowing professional to dismal.

This year, we had a new social media voting system implemented. And while it was not perfect, it was a great step forward. And so I expect that this will be the very best Podcamp Toronto yet.

Still, it helps to ask yourself: How qualified is the person holding the session?

So, my tip is this: if someone will be talking about blogging, go and check out their blog. From the get-go, unless your name is Seth Godin, if you see a blog hosted on a wholesale domain platform such as Blogger, Typepad or WordPress.com, then that person likely has no clue about blogging. Other signs confirming that conclusion include, but are not limited to, low or no social sharing, low or no comments, and a lack of unique branding and design…

Q2: What are the metrics, and how accurate are they in measuring expertise?

If the person is talking about YouTube and/or video-production – go check out their channel and look at their videos. If you see only low-quality videos with no or low traffic, without any comments and so on, you may be better off going to another session.

If the person claims to be a social media guru, go look at the social media mentions of their Podcamp Toronto session. If there are none, or only one tweet – most likely their own, don’t bother wasting your time.

Last year someone was giving tips on blogging. And they said that they had 30k hits for the past 5 years.

My tip here is to be skeptical, ask questions and dig deeper!

So, let’s take this example. First of all, what is a hit? In most cases, a hit is either a page view or a visit. So, if I load up my own blog on my own computer, that gives me one hit. If I click the refresh button, that usually gives me two hits. And so on. Thus, a better way to estimate traffic is, for example, unique visitors per month rather than hits. This way, you get a more accurate estimate of the audience size and the blogger’s authority.

So, let’s do the math with the example I just gave: 30k divided by 5 years of blogging gives you roughly 16 hits per day. Since these are not unique visitors but hits, one can get 16 of those per day very easily just with the help of a couple of friends.
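The back-of-the-envelope math is easy to check in a couple of lines of Python, using the figures from the example above:

```python
# Convert the claimed lifetime "hits" into an average daily figure.
total_hits = 30_000         # claimed hits over the blog's lifetime
years = 5                   # claimed blogging lifetime
hits_per_day = total_hits / (years * 365)
print(round(hits_per_day))  # -> 16, i.e. roughly 16 hits per day
```

For scale, a blog with just 1,000 unique visitors per day would accumulate over 1.8 million visits in the same five years.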

Therefore, I dare claim that you are wasting your time “learning” from such a popular blogger.

And so, to recap: today’s tip for podcasting as well as most other things in life is:

“Be skeptical, ask questions, measure and dig deeper!”

OK, let’s move to the main reason we are here today.

Peering into Our Future’s Black Hole: Artificial Intelligence, Transhumanism and the End of Humanity

In my session description I promised to share my answers to 5 questions:

1. What are the most important technological trends shaping our civilization?
2. What is the technological singularity?
3. What is transhumanism?
4. Can science really make us immortal?
5. Why is humanity doomed to go the way of the dinosaurs?

Let’s not waste any time but jump into tackling the questions in order:

1. What are the most important technological trends?

Since we could spend a whole day discussing those trends but only have 45 minutes or so, and I am planning to cover the other four questions too, I will focus on giving you what I believe is by far the most important one:

Exponential growth!

This is also the easiest and the hardest trend to grasp.

It is easy because unless you have been living in a cave somewhere for the past 50 years, you already know that the world is changing faster than ever before. Not only that but the change that we can clearly see is speeding up and accelerating in its own right. I believe that this is more or less obvious and easy to see for everyone here.

But exponential growth is very hard to grasp because our brains have evolved to make linear rather than exponential projections.

And so to help us grasp it better let me use an ancient Indian chess legend as an example.

The legend goes that the tradition of serving Paal Paysam – or what I understand is rice pudding, to visiting pilgrims started after a game of chess between the local king and the Lord Krishna himself.

The king was a big chess enthusiast and had the habit of challenging wise visitors to a game of chess. One day a traveling guru was challenged by the king. To motivate his opponent the king offered any reward that the sage could name. The sage modestly asked just for a few grains of rice in the following manner: the king was to put a single grain of rice on the first chess square and double it on every consequent one.

Having lost the game and being a man of his word the king ordered a bag of rice to be brought to the chess board. Then he started placing rice grains according to the arrangement: 1 grain on the first square, 2 on the second, 4 on the third, 8 on the fourth and so on:

Following the exponential growth of the rice payment, the king quickly realized that he was unable to fulfill his promise: already on the twenty-first square he would have had to put down over 1,000,000 grains of rice. On the forty-first square, over 1,000,000,000,000 grains. And, finally, the sixty-four squares together would have required more than 18,000,000,000,000,000,000 grains of rice, which is equal to about 210 billion tons and is allegedly sufficient to cover the whole territory of India with a meter-thick layer of rice. At ten grains of rice per square inch, the above amount requires rice fields covering twice the surface area of the Earth, oceans included.

It was at that point that Lord Krishna revealed his true identity to the king and told him that he doesn’t have to pay the debt immediately but can do so over time. That is why to this day visiting pilgrims are still feasting on Paal Paysam and the king’s debt to Lord Krishna is still being repaid.
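The legend’s arithmetic is easy to verify – square n holds 2^(n−1) grains, so a few lines of Python (any language would do) reproduce the totals:

```python
# Grains of rice per chessboard square: square n holds 2**(n-1) grains.
grains = [2 ** (n - 1) for n in range(1, 65)]

print(f"{grains[20]:,}")   # 21st square: 1,048,576 - past a million
print(f"{grains[40]:,}")   # 41st square: 1,099,511,627,776 - past a trillion
print(f"{sum(grains):,}")  # all 64 squares: 18,446,744,073,709,551,615 (2**64 - 1)
```

Note that the last square alone holds 2^63 grains – more than all the previous squares combined, which is exactly the part of exponential growth our linear intuition misses.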

Now, I hope you agree with me that this is an interesting and powerful story that helps us understand exponentials. But some of you may point out that it is a myth; a legend; it’s not real.

Well, let us look at the best-known example of exponential growth from the world of technology – Moore’s Law:

Moore’s law is named after Gordon Moore – co-founder of Intel Corporation.

It was published in 1965 and, simply put, it states that the number of transistors that can be placed on an integrated circuit for the same price will double every 18 to 24 months.

And we all know that already, right? We know that computers are obsolete the moment you buy them and that the next computer will be at least twice as fast. But today everything is a computer. Your phone, your tablet, your camera, your car, even your toothbrush. And so we have all come to expect that the next generation of almost any product we buy will be at least twice as good as the previous generation.
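As a rough sketch of how this compounding works (the 18–24 month figure is approximate; a two-year doubling period is assumed below):

```python
# Compound growth under Moore's Law: after `years` there have been
# years / doubling_period doublings, so capacity grows by 2 to that power.
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(10))         # -> 32.0: a 32x gain in a decade
print(growth_factor(20))         # -> 1024.0: a ~1,000x gain in two decades
print(round(growth_factor(40)))  # -> 1048576: a ~1,000,000x gain in forty years
```

A linear trend would give a mere 20x over forty years at the decade-one pace; the doubling curve gives a million-fold gain instead.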

And so, in a universe going digital where everything becomes information we are increasingly able to manipulate and mold that information. Thus, as far as the digital universe is concerned we are Gods. We can do whatever we want. But we have to remember that what used to be material is now digital. Take books and music records – they used to be material objects but now they have dematerialized and gone digital. The thing is that this is only the beginning. Everything is becoming information today.

Take biology: it used to be analog, but with the decoding of the human genome it is quickly going digital, and now we can decipher and even 3D-print biological tissues, even organs, by design. And this is only the very beginning. We are well on the way to designing life on the computer screen and then pressing the print button to bring it to life.

And so, as Stewart Brand says, we have become Gods and we might as well get used to it.

We, humans, are biological creatures. We are made of atoms. So more powerful computers allow us to learn and manipulate smaller and smaller particles in ever more precise ways. Thus there will be a day when we can create new bodies and even new brains. But I will talk more about that later.

Other major fields benefiting immensely from exponential growth include, but are not limited to, robotics and artificial intelligence; genetic engineering and synthetic biology; nanotechnology and 3D printing.

And so, all of the above has often been described by futurists such as Ray Kurzweil and Vernor Vinge who believe that exponential growth trends such as Moore’s Law will eventually lead to a Technological Singularity.

2. What is the technological singularity?

The term singularity has many meanings:

In simple language, it means the state of being singular, distinct, peculiar, uncommon or unusual.

In mathematics, it means a problem with an undefined answer – e.g. 5 divided by 0?

In physics, a singularity lies at the heart of a black hole – a place where the fabric of time and space is ruptured and the laws of the universe don’t seem to hold true anymore.

And so we borrow this metaphor from physics to represent the accelerating changes that we can observe in technology.

And so, if I am to put the technological singularity in just two words I would say that it is “intelligence explosion”.

But there are numerous schools of thought on the definition, with subtle but important differences.

So, now that we heard the short version, let me throw a bunch of quotes at you to make things interesting:

“the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” John von Neumann

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”

I.J. Good

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. […] I think it’s fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.” Vernor Vinge in a classic NASA paper from 1993

“… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.” Ray Kurzweil

“Singularity is the point at which all the change in the last million years will be superseded by the change in the next five minutes.” Kevin Kelly, senior maverick and co-founder of Wired Magazine

Sean Arnott: “The technological singularity is when our creations surpass us in our understanding of them vs their understanding of us, rendering us obsolete in the process.”

So what happens to us when we stop being the smartest entities on the planet?

What happens when your toothbrush is smarter not only than you and me but smarter than all of us, all of humanity?

While we are pondering this issue let us move on to the next question I promised to address:

3. What is Transhumanism?

Transhumanism is both misunderstood and feared. Francis Fukuyama famously called it “the world’s most dangerous idea.”

Put simply Transhumanism is the belief that technology can allow us to improve, enhance and overcome the limits of our biology.

More specifically, transhumanists such as Max More, Natasha Vita-More and Ray Kurzweil believe that by merging man and machine via biotechnology, molecular nanotechnologies, and artificial intelligence, one day science will yield humans that have increased cognitive abilities, are physically stronger, emotionally more stable and have indefinite life-spans.

This path, they say, will eventually lead to “posthuman” intelligent (augmented) beings far superior to man – a near embodiment of god.

Some of the main issues here are:

Can humanity continue to survive and prosper by embracing technology or will technology eventually bring forth the end of the human race altogether?

Will humanity get polarized into neo-Luddite technophobes and transhumanist technophiles?

Does that mean that widespread global conflict may be impossible to avoid?

Who will be the dominant species?

What is the essence of being human?

4. Can science make us immortal?

Let me ask another question – What is death?

The definition of death may not be so simple and obvious as you may think. In fact, as our knowledge and technology improve, the definition of death shifts.

And so, in a way, death is just another way of somebody – usually a doctor – saying: “I can’t do anything else for her!” But what we can or can’t do has changed over time. And thus the definition of death has changed too.

It used to be the case that death was declared when one stopped breathing on their own. But today we have respirators that can keep us alive even if we are unable to do that on our own.

It used to be the case that death was declared when one stopped having a pulse, i.e. a perceivable heart rate. But today we routinely stop the heart during surgery.

And so one of the latest ways to measure and/or define death is measuring brain activity. As our knowledge and technology improve, in time this is also likely to change.

And so can science make us immortal?

Let me start addressing this issue by saying that science has made substantial progress with respect to ageing and life expectancy.

And so a brief historical survey of longevity throughout the ages will read like this:

Cro-Magnon Era: 18 years
Ancient Egypt: 25 years
Ancient Greece: 28 years
1400 Europe: 30 years
1800 Europe and USA: 37 years
1900 USA: 48 years

And so when Social Security was introduced in 1935 with a retirement age of 65, it was simply because most Americans never actually made it to 65. Thus it didn’t cost that much to introduce the program.

The problem is that today we are victims of our own success because almost everybody makes it over 65 today.

2002 United States: 78 years

A child born today is expected to live over 93, and right now our life expectancy improves by 3 months with every passing year.

There will be a point when every year our life expectancy will improve by another year: this is what Dr. Aubrey de Grey calls Longevity Escape Velocity.

In simple words that means that we will be able to prolong life indefinitely.
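A toy model (illustrative numbers only, not a claim about actual medicine) makes the escape-velocity idea concrete: each calendar year you age one year, but research adds some number of years of expected life; once that annual gain reaches one full year, remaining life expectancy stops shrinking:

```python
# Toy model of longevity escape velocity (hypothetical numbers).
def remaining_years(start_remaining, gain_per_year, years):
    """Remaining life expectancy after `years`, ageing 1 year per year
    while medicine adds `gain_per_year` years of expectancy annually."""
    remaining = start_remaining
    for _ in range(years):
        remaining += gain_per_year - 1.0  # -1 for ageing, + the research dividend
    return remaining

print(remaining_years(40, 0.25, 20))  # today's ~3 months/year: 40 -> 25.0
print(remaining_years(40, 1.0, 20))   # at escape velocity: stays at 40.0
```

At any gain above 1.0 years per year, remaining expectancy actually grows over time – which is what “prolonging life indefinitely” means in this framing.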

5. Why is humanity doomed to go the way of the dinosaurs?

We are often told that humanity is the pinnacle of evolution. But it is not hard to see that we are a beta product. We have numerous problems and we are far from perfect. In fact, what has allowed us to survive and prosper is our intelligence which has given birth to our technology. Strip away all of our technology and the vast majority of us will not survive.

Moreover, evolution never stops. There was a time when dinosaurs ruled the Earth. But as is always bound to happen – things changed. What was previously a niche organism – namely mammals – took over and flourished, while the dinosaurs went extinct.

Well, evolution is also accelerating. It took perhaps 10 billion years to form the galaxies and our planet. It took another couple of billion years before we had the first simple single-cellular life. Then it took hundreds of millions of years to get plants and eventually dinosaurs. Hominins have been around for perhaps six million years, and Homo sapiens has been around for between 50 and perhaps 200 thousand years.

And so everything is accelerating. But also everything is changing. And today the fastest pace of evolution is the one we can observe in technology. Thus technology is supplanting biological evolution and technological creatures are likely to replace biological ones just like mammals replaced the dinosaurs.

In fact, this has already happened because our civilization is a technological one and it cannot survive without its technology.

And so I hope that by now you would agree that, in the long run, it is inevitable that humanity as we know it is doomed to go the way of the dinosaur. As we saw, evolution doesn’t stop and, despite what we are being told, we are not unique in any way. And just like all species before us, Homo sapiens will eventually go extinct.

However, this does not necessarily have to be bad news. For as long as humanity evolves and there is continuity between what we are today and what we have to become to survive and prosper, there is hope. In fact, this, as Ray Kurzweil claims, is the very essence of what makes us human – our ability to evolve and transcend.

And so this is the choice: evolve and transcend our biological limitations or go extinct.

This choice is in turn, derived from one of the most fundamental questions we still have to confront – both collectively as a civilization, and personally – as individuals.

“What is human?”

This session was not meant to provide definitive answers, but rather, to set the stage and ask some questions in an attempt to generate discussion, to provoke thought and to stir the imagination. My goal is to spark a conversation about the impact of technology, exponential growth and artificial intelligence.

My name is Nikola, my blog is SingularityWeblog.com and my blogging alias is Socrates.

Today I have tried to share with you my journey to discover who I am as a being, who we are as a species, and most of all how technology changes the meaning of both the above questions and answers.

And now I would like to invite you to join me in this journey and start asking your own questions:

So let us open the Q&A session and thank you very much for your time!

Related articles
  • 15 Steps Towards Your Podcasting Success: Socrates At Podcamp Toronto 2013

Filed Under: Podcasts Tagged With: Nikola Danaylov, podcamp toronto, singularity, Socrates

Transcendence: Johnny Depp’s New Singularity Film [Almost] Transcends Technophobia

January 7, 2014 by Socrates

Transcendence is Johnny Depp’s new singularity film that comes out today.

transcendence-movie

I saw the film last night during its first available time-slot and, in order to avoid big spoilers, I want to write no more than a couple of vague sentences:

Let me admit that, after seeing the teasers and trailers below, I was very negatively predisposed and did not expect anything good. But I was pleasantly surprised because it was rather refreshing to see a mainstream Hollywood movie that was not made by sensationalist Luddites trying to make an easy buck by going for the lowest common denominator.

And so while the movie is by no means outstanding [or anything close to Her], overall it was pretty decent.

As my friend with whom I saw the movie observed: “The main idea was carried through rather well.”

And while the idea that advanced technologies, though often scary, are not necessarily bad is by no means unique or Earth-shattering in any way, it is one worth sending out into the mainstream of public consciousness. And this simple exercise is a considerable step up from the simplistic Terminator/hubris/end-of-the-world scenarios that have dominated science fiction since Frankenstein…

 

Synopsis: Dr. Will Caster (Johnny Depp) is the foremost researcher in the field of Artificial Intelligence, working to create a sentient machine that combines the collective intelligence of everything ever known with the full range of human emotions.  His highly controversial experiments have made him famous, but they have also made him the prime target of anti-technology extremists who will do whatever it takes to stop him.

However, in their attempt to destroy Will, they inadvertently become the catalyst for him to succeed – to be a participant in his own transcendence.  For his wife Evelyn (Rebecca Hall) and best friend Max Waters (Paul Bettany), both fellow researchers, the question is not if they can… but if they should.

Their worst fears are realized as Will’s thirst for knowledge evolves into a seemingly omnipresent quest for power, to what end is unknown.  The only thing that is becoming terrifyingly clear is there may be no way to stop him.

Transcendence co-stars Morgan Freeman, Cillian Murphy, Kate Mara and Paul Bettany.

 

I call it Transcendence:

 

Transcendence: Humanity’s Next Evolution

 

Revolutionary Independence From Technology [RIFT]

 

Official Trailer 1:

 

Official Trailer 2:

 

Filed Under: Reviews, Video, What if? Tagged With: singularity, Technological Singularity, Transcendence

Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable!

October 29, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/205109919-singularity1on1-frank-j-tipler.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Frank-J-TiplerDr. Frank J. Tipler is a physicist and cosmologist perhaps best known for concepts such as the Omega Point or The Cosmological Singularity. He is a professor of mathematical physics at Tulane University and the author of books such as The Anthropic Cosmological Principle, The Physics of Immortality, and The Physics of Christianity.

During our 1 hour conversation with Dr. Tipler, we cover a variety of interesting topics such as: why he is both a physics imperialist and fundamentalist; the cosmological singularity, the technological singularity, and the omega point; his personal journey from Christian fundamentalism through agnosticism and atheism and back to theism and Christianity; why most physicists are good atheists and bad scientists; immortality; determinism and whether God plays dice with the universe; mind-uploading and [Quantum] consciousness…

The most interesting quote that I will take away from this interview with Frank J. Tipler is:

If the laws of physics be for us, who can be against us?!

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Frank J. Tipler? (in his own words)

I was born and raised in Andalusia, a small farming town in southern Alabama. At the age of five, while in kindergarten, I became fascinated by the work of Alabama’s only famous physicist, the rocket scientist Wernher von Braun, and decided then that I wanted to be an astrophysicist. With this goal, I obtained an undergraduate degree in physics in 1969 at M.I.T., where I first learned of the Many-Worlds interpretation of quantum mechanics, and of the Singularity Theorems of Stephen Hawking and Roger Penrose. In 1976, I obtained a Ph.D. in physics for my proof, using the techniques of Hawking and Penrose, that creating a time machine would necessarily result in singularities in the laboratory. I was hired in 1979 as a post-doc by John A. Wheeler, the great physicist best known for his work on black hole theory, to extend my 1978 proof that in general relativity, time is not relative: a unique rest frame exists. I became Professor of Mathematical Physics in 1981 at Tulane University, where I have been ever since, working to draw out the full implications of my earlier work: that quantum mechanics and general relativity require that the Cosmological Singularity – the Uncaused First Cause – consists of Three Persons but one Cause. I have now written up these results for a popular audience, and the book is The Physics of Christianity.

Related articles
  • 17 Definitions of the Technological Singularity

Filed Under: Podcasts Tagged With: singularity, Technological Singularity

Singularity Defined and Refined

October 29, 2013 by Singularity Utopia

The meanings of words change. Meanings evolve. Definitions of words are not set in stone; they aren’t unalterable commandments from God. Words are merely concepts humans have invented. The original definition of “awful” was apparently “full of awe,” worthy of respect.

Anyone can invent a word. Successful inventions enter common usage. All inventions are typically refined. The invention of words isn’t immune to refinement.

Misunderstanding often occurs regarding the word Singularity because the word is still being refined, as is common for new concepts. I think the Singularity is a colossal intelligence explosion: limitless intelligence, which creates utopia. It is not about mind-uploading or unpredictability.

Post-Scarcity is a clearer way to define the Singularity. Scarce intelligence is the source of all scarcity. Lifespan-scarcity, food-scarcity, or spaceship-scarcity all highlight how intellectual insufficiency is the obstacle to utopia. A resource called “intelligence” is the source of all technology. Technology is essentially intelligence, which means explosive intelligence is an explosion of resources.

singularity
Image by the artist Hugh C Fathers. All rights reserved ©

Our brainpower has been essential for our progress. Our minds erode scarcity. James Miller in his book Singularity Rising wrote: “Economic prosperity comes from human intelligence.” Ramez Naam also highlights the power of our brains in his book The Infinite Resource, thus regarding innovation he stated on his website: “Throughout human history we have learned to overcome scarcity and adversity through the application of innovation — the only resource that is expanded, not depleted, the more we use it.”

You could say we’re approaching an explosion of innovation. Technology conquers scarcity and liberates us from it, but the power of technology (intelligence) is currently limited, scarce. We are suffering from a scarcity of ultra-sophisticated technology (intelligence), thus all resources are somewhat scarce. Human-level AI is extremely scarce; it is non-existent in the year 2013. When human-level AI is created we will quickly start eradicating all forms of scarcity, rapidly approaching a colossal explosion of intelligence – the Singularity. [I was inspired to write about the definition of the Singularity after a G+ post by Mark Bruce. Mark wrote about the meaning of egregious, and wondered why the meaning had changed.]

The word egregious immediately caused me to think about the word gregarious, which is a logical connection to make. Both words are based on the Latin grex, gregis, which means “flock.” Gregarious means sociable, companionable; being part of the flock. Currently egregious means outstandingly bad, but the original meaning was merely outstanding, a shining example of awesomeness. Egregious is all about standing out from the flock, but interpretations could differ because standing out from the crowd can be good or bad. Farmers for example might not appreciate rebellious sheep.

The concept of the “black sheep” is a notorious idiom regarding non-conformity (standing out from the flock). Mark Bruce thought the meaning of egregious could have changed due to sarcasm, but I think it’s merely a change based on obedience and conformity. The evolution of civilization has temporarily led to greater regimentation; mediocrity has been valued because it maintains social equilibrium, which I suspect is why egregious (outstanding illustriousness) became bad. Blending into the flock became desirable while nonconformity became shockingly wrong. During the early stages of civilization, when populations were small and thus less draining on resources, authoritarian control was less obvious or less needed, which could be why egregious originally described a valuable nonconformist trait of being “outstanding.”

Dealing with extremes can cause a switch between the two poles; thus intense love can easily become intense hate if you are betrayed by a lover. Lovers can also become irrationally jealous, vengeful. Perhaps this is why the Singularity can be either utopia or dystopia, or why Snowden is either a hero or a traitor. Maybe it all depends on your viewpoint?

Insufficient intelligence causes humans to misunderstand situations. Fights over scarce resources occur, which causes civilization to emphasize the authoritarian disharmony of scarcity. Intelligent people, via their foresight, will think Snowden is a hero because they understand technology is eroding scarcity. Conversely, Snowden has been deemed a traitor based on unawareness of the future. Snowden’s leaks represent decreasing scarcity, but unawareness means people wrongly assume his actions threaten civilization.

Thankfully, despite the teething troubles of civilization, our collective intelligence is increasing thus there is less need to blend into the flock, although we do remain locked into scarcity-based battles. Sometime around year 2030 I think the authoritarian controls of civilization will be significantly abolished, but until then perhaps technology will be awful. Theoretically we could improve civilization much sooner but humans do suffer from scarce intelligence, which means it is difficult to be aware of the future.

The Singularity is a theory not an unalterable prophecy. The Singularity isn’t comparable to the biblical word of God, which warns against change: “I warn everyone who hears the words of the prophecy of this book: if anyone adds to them, God will add to him the plagues described in this book, and if anyone takes away from the words of the book of this prophecy, God will take away his share in the tree of life and in the holy city, which are described in this book.”

If intelligence is the focus of the Singularity then it is vital to refine our understanding of the theory; we need to improve how we define it. We need to consider what the purpose of intelligence is. Is it smart to become more intelligent? Is colossal intelligence really intelligent if it fails to create utopia? Is colossal intelligence painfully slow or is it defined via an emancipatory quickness? Michael Anissimov has stated we should stick to the “original documents” (3m 29s) regarding the Singularity, but I think unyielding closure contradicts the openness of intelligence.

In addition to Michael’s biblical immutability, which focuses on the “original documents,” there is an issue with the way that Singularity University defines the meaning of the intelligence explosion. I often encounter people who think the Singularity has already happened. This kind of misunderstanding seems perpetuated by Singularity University because they suggest the Singularity is merely “dramatic technological change.”

If we are merely considering dramatic technological change then it is understandable for people to think the iPhone or Google Glass is the Singularity. Corruption of meanings can be frustrating, very confusing, but restricting our ability to change meanings isn’t the solution. The solution is openness whereby all meanings can be debated without any one individual or organisation imposing their authority to create absolute definitions.

Ray Kurzweil and Vernor Vinge have influenced my thinking, but from my viewpoint they don’t fully comprehend the Singularity. Their biggest mistake is to think the Singularity is unknowable, unpredictable, beyond human comprehension. There is no rational reason to assume advanced intelligence would be unfathomable; in fact unfathomableness is decidedly unintelligent, thus more appropriate for a censorious religion than for explosive intelligence. Explosive intelligence should logically increase comprehension for everyone instead of decreasing it.

A naked singularity, which has no event horizon, is a better analogy for our technological Singularity than a standard gravitational singularity. Naked singularities are theoretically more powerful than standard black holes – they are more singular – thus metaphorically better descriptors of colossal intelligence. Standard singularities are comparatively boring.

Note also how, instead of obscurantism, recent black-hole research suggests that physics-ending singularities vanish, thereby creating bridges to alternate universes. This means that with the help of loop quantum gravity we might be able to deny the claim that the laws of physics break down in standard black holes.

The black hole information paradox is fascinating. Perhaps information is not lost. Whatever the situation regarding gravitational singularities, whether information is hidden or revealed, it should be noted that obscurantism is not a facet of intelligence. [Although the CIA, objecting to Snowden’s openness, will probably disagree.] If gravitational singularities entail obscured information, if they are unfathomable or unknowable, then the metaphor is wrong, because true intelligence is or should be opposed to [cosmic] censorship. Obscurantism is antithetical to intelligence.

Please note that despite the Singularity inevitably leading to widespread extrasolar, extragalactic and perhaps even multiverse colonisation, it is not a cosmological phenomenon. The Singularity is “only” metaphorically a stellar event.

Similar to how “egregious” had a different meaning at a different point in history, I think changing awareness will let people comprehend how the Singularity is opposed to obscurantism. In the future there will be no elitist restrictions and everyone will easily access explosive intelligence.

Maybe in year 2045 there will need to be Singularity whistle-blowers leaking classified intelligence from the core of the intelligence explosion? Obviously I jest when I state the intelligence explosion would need whistle-blowers. All restrictions upon knowledge will be explosively obliterated. The Singularity will be understood by everyone.

Incorrect definitions of the Singularity are mainly based on unawareness of how scarcity currently shapes our lives. Faulty predictions of the future fail to see how scarcity will be eradicated. The ramifications of scarcity ending are not appreciated. There is a failure to comprehend what the end of scarcity actually entails, namely how it relates to technology and/or information. Based on current circumstances people therefore envisage a future of scarce understanding – a future of restricted information where knowledge is limited to a minority of specialists. This entails incorrectly envisioned scenarios where the future is utterly unfathomable or robots kill humans then destroy the Earth: “…the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.”

Finally, the Singularity is capitalized because it is a unique event distinct from gravitational singularities. It is similar to how the Big Bang, the Mesozoic Era, or the Industrial Revolution is capitalized.

 

About the Author:

Singularity Utopia writes for Singularity-2045, a Post-Scarcity orientated website dedicated to increasing awareness regarding the coming technological utopia. The goal is to make the Singularity happen sooner instead of later.

 

 

Related articles
  • Frank J. Tipler: The Laws of Physics Say The Singularity is Inevitable
  • 17 Definitions of the Technological Singularity

Filed Under: Op Ed, What if? Tagged With: post scarcity, singularity, singularity utopia, Technological Singularity

Socrates Gets Interviewed on the Futurology Podcast

July 12, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/202337924-singularity1on1-socrates-the-futurology-podcast.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

A few weeks ago I got interviewed on the Futurology Podcast.

During my one hour conversation with show host Jason Peffley we discuss a number of topics such as: how I got to do blogging and podcasting; my time and take on Singularity University in particular and education in general; the wait-and-see vs the proactive approach to the future; the definition of the technological singularity; slow vs hard take-off scenarios; whether I am a futurist or not; my favorite singularity books; the political and economic reality in the US; why life extension technology is so exciting; why I hate Prometheus; pessimism vs optimism…

Here is the original podcast description written by Jason Peffley:

“Instead of running through the top 5 links, this episode is dedicated to interviewing Nikola Danaylov. His site (singularityweblog.com) is periodically discussed here and his podcast has featured some of the biggest names in technology.  Nikola has also studied at the Singularity University.  He now makes his living by blogging, podcasting, and attending singularity related events around the world.”

 

Related articles
  • Socrates at Newtonbrook Secondary School: Be Unreasonable!
  • 15 Steps Towards Your Podcasting Success: Socrates At Podcamp Toronto 2013

Filed Under: Podcasts Tagged With: Nikola Danaylov, singularity, singularity university, Socrates

Is The Singularity Happening Now?

June 13, 2013 by Singularity Utopia

People have grossly misunderstood the Singularity if they think it is already happening or has happened. The Singularity is not a seamless merging of humans and technology, nor is it about mind-uploading. The Singularity is an intelligence explosion; we are considering utterly colossal intelligence.

I was inspired to clarify what the Singularity is because wearable computing pioneer Thad Starner, who is also the technical manager of Google Glass, recently stated:

“I would argue that we’re currently living the singularity, where the tool stops and the mind begins will start becoming blurry.”

Yes humans and technology will merge, in fact we are merging, but a merger of humans and technology is not the Singularity. The reason why we don’t yet have the Singularity can be demonstrated by several points:

  • 1. Our lack of immortality shows how our medical technology is not very proficient. The fact that people die in various ways, despite our general desire to stay alive, shows that medical technology is not very smart; in fact our level of intelligence regarding medical technology is positively dumb. Considering how stem cell applications will likely develop over the next 20 or 30 years (2033 – 2043), you can see our current life-extending medical tech is very non-Singular. While people are unavoidably mortal we do not have the Singularity. The Singularity, however, is more than mere medical immortality, hence these additional points:
  • 2. Our enslavement via resource-scarcity is another marker of deficient intelligence and deficient technology. Resource-scarcity clearly marks the absence of the intelligence explosion. The necessity to work so that you can survive, to provide food and shelter, arises from a primitive, pre-Singularity level of technology. The restrictiveness of scarcity controlling your life isn’t a smart way to live, or a smart way to utilise your existence. When brute survival dominates your life there is no freedom to truly explore your mind. Resource-scarcity shapes our lives in a very unintelligent manner; scarcity is unintelligent, which means crimes exist, money exists, and jobs exist. When the Singularity happens there will be no need to work, everything will be automated, and there will be no money; everything will be free in both a libertarian and a financial sense. Constraints upon liberty only happen to ensure populations are subjugated, a subjugation which ensures compliance with wealth inequality. Deficient freedom depends upon a wealth imbalance whereby a minority of people are very rich while the majority are poor. Freedom is restricted to ensure people do not rebel against wealth inequality, which means financial liberty and existential liberty are inextricably interlinked. Furthermore, in a scarcity situation, stupidity is socio-politically enhanced because unintelligent people are less likely to question wealth inequality. When the Singularity happens there will be no scarcity of any valuable item you desire, thus no need to dominate the populace regarding compliance with wealth inequality, and no need for socio-economic engineering of stupidity. There will be total freedom.
  • 3. Post-Scarcity, which is addressed in point number 2, is an alternate name for the Singularity. But, as with the Singularity, people sometimes say we already have Post-Scarcity; they wrongly think Post-Scarcity has already happened but isn’t evenly distributed. The meaning of Post-Scarcity must therefore be clarified. Post-Scarcity is an alternate viewpoint of the Singularity, akin to differing views of a person’s head: the back of the head compared to the face. Both views reveal a head, but different viewpoints present different pictures. Post-Scarcity is LIMITLESSNESS; it is not merely about better management of abundant resources whereby everything is free. Post-Scarcity is a level of limitlessness so pronounced, so deeply entrenched, that it would be utterly impossible to restrict resource availability. The abundance is so SUPER that no management or distribution of resources whatsoever is needed. The Singularity (Post-Scarcity) is a state of limitlessness regarding any resource, with the principal focus on limitless intelligence, because all resources flow from intelligence.
  • 4. Finally to summarise the meaning of the Singularity, here is a list of points, in no particular order, which all need to be fulfilled for the Singularity to be happening:

● All crime and violence are abolished because crime and violence depend wholly on scarcity for their existence. All governments have ceased to exist. No governance is required regarding the abolition of violence and crime, because Post-Scarcity simply cuts off, at the source, the impetus for anti-social tendencies. Scarcity is the source of all anti-social tendencies, thus by abolishing scarcity you abolish everything which exists due to scarcity. Governments will cease to exist because governments exist wholly to manage scarcity: they manage the social dysfunction arising from scarcity and ensure great wealth for a minority of people in a scarcity situation, whereas in a Post-Scarcity situation everyone can have limitless wealth.

● Everything is free, nobody needs to work, anyone can have anything they want.

● Everyone is immortal if they want to be. Furthermore access to medical immortality is no more difficult than clicking your fingers or blinking.

● Everything desirable is limitless, thus there are no limits on computation, no limits on intelligence, no limits on travel, which means anyone could easily print a super-intelligent spaceship then travel to the end-edge (if there is one) of the universe, whereupon they will easily create a new universe if they want to. Perhaps limitlessness regarding universe-creation is the best way to describe the Singularity because currently we don’t even know how to 3D-print one planet or one duplicate universe. When the Singularity happens it will be easy to create limitless universes, thus if you are mortal and if you cannot print a universe then it is safe to say the intelligence explosion has not happened.

The Singularity is Post-Scarcity: the point where intelligence ceases to be scarce, a technological explosion of intelligence ending all aspects of scarcity, which should happen no later than the year 2045. Technology continually allows us to do more for less. The Singularity is about a mind-bending amount of utterly astronomical, ultra-efficient technology, which entails extreme super-power at essentially zero cost. The Singularity was definitely not happening in the year 2013. It is extremely unlikely the Singularity will happen before the year 2025. I think the Singularity will actually happen very close to 2045.

About the Author:

Singularity Utopia writes for Singularity-2045, a Post-Scarcity orientated website dedicated to increasing awareness regarding the coming technological utopia. The goal is to make the Singularity happen sooner instead of later.

 

Related articles
  • Scarcity Causes All Wars and Violence
  • Singularity Utopia: Post Scarcity Awareness As An Antidote To Despair
  • Utopia is Inevitable!

Filed Under: Op Ed Tagged With: post scarcity, singularity, singularity utopia

Socrates at Newtonbrook Secondary School: Be Unreasonable!

February 28, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/200215112-singularity1on1-socrates-at-newtonbrook-secondary-school.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Yesterday I went to speak to a class of grade 12 students from the Newtonbrook Secondary School in Toronto. I had been looking forward to this opportunity to challenge and be challenged by the next generation of bright young minds, and was not going to be prevented from going there, be it by a Canadian winter storm or by any other of life’s tragedies.

Needless to say, I enjoyed speaking to the students very much and hope that they benefited from talking to me as much as I did from talking to them.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

My Talking Points for Newtonbrook Secondary School:

I want to begin today by acknowledging your teacher Hermine Steinberg – Hermine doesn’t know what I am going to say today and she probably wouldn’t approve of some or much of it. What she certainly knows is that she is taking a risk by inviting me here. And, from my life experience, I know that you take a risk, you take a chance, only if you care about something or someone. So I want to recognize her and tell you that you are lucky to have a teacher who is willing to take a risk for you, because she really cares.

Who am I and why am I here today?

My name is Nikola Danaylov aka Socrates. I am the blogger behind SingularityWeblog.com and the host of the Singularity 1 on 1 podcast.

I get about 50,000 unique visitors per month and have had over half a million downloads of my show.

Two summers ago I was very fortunate to be one of the very few people who had the opportunity to go to Singularity University, which is located on NASA’s Ames Campus in Mountain View, California. There I met some of the most incredible people in the world, such as Steve Wozniak, Ray Kurzweil, Peter Diamandis, Aubrey de Grey and astronaut Dan Barry, and had the chance to visit companies like Google, Facebook, Cisco, Tesla and many others.

But enough about me.

I am here to talk about you!

One of the big questions in schools today is: Are students Bored or Apathetic?!

My hypothesis is that students are bored. Just like I was bored when I was in school.

So for the next 40 min or so I will throw some of today’s biggest ideas at you to find out if I am right or wrong. After I am done we will start a conversation where you can say what you think and how you feel.

So, let’s start our conversation with education: the thing about education is that it holds a promise. A promise that was probably told to you by both your parents and by your teachers.

“Do your school work, get good grades in your classes and you will get a good job and a good life.”

Well, I am here to tell you that your school grades don’t matter that much. In fact, they don’t matter at all.

Let me give you 2 examples: Bob McDonald and Jack Andraka.

So, in short, I don’t care that you barely passed or even failed biology or chemistry. You can still reinvent the meaning and the scope of biology, chemistry or anything else you put your mind to.

As someone who spent a long time in school and has had a few academic awards, I have come to discover that success in school doesn’t mean success in life – neither personally nor professionally.

Education is historical, that is to say, retrospective in nature. It is about the past. But what I am here for is to propose that we must look to the future.

And so: why is talking about the future as important as, if not more important than, talking about history?!

Let me give you 2 reasons:

1. “We can’t do anything about the past, however. People often excuse this by saying that we know a lot more about the past. But modest efforts have often given substantial insights into our future, and we would know much more about the future if we tried harder.” Robin Hanson

2. It might be that your generation will be the one to steer our civilization at a time of unparalleled peril and promise. At a time when humanity may face immortality or extinction, when we might colonize the stars or go back to the stone age.

And, so, let’s talk about the future:

The biggest trend is Accelerating Change: according to Ray Kurzweil in the next 10 years we are going to experience change equal to the one that used to happen for 1,000 years.

Moore’s Law and the Law of Accelerating Returns

Exponential change – 30 linear steps take you 30 paces down the road, but 30 exponential steps (doublings) take you over a billion
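The linear-versus-exponential comparison is a one-line piece of arithmetic, easy to verify:

```python
# 30 linear steps vs 30 exponential steps (doublings)
linear_steps = 30
exponential_steps = 2 ** 30

print(linear_steps)       # 30
print(exponential_steps)  # 1073741824, i.e. over a billion
```

This is the whole point of accelerating change: the two curves look similar at first, then diverge enormously.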

What are the major fields of accelerating change:

1. Robotics and Artificial Intelligence: from Google’s robot car to killer drones to Deep Blue and Watson

2.  Genetic engineering and Synthetic Bio

Decoding the human genome cost over 3 billion dollars and took many scientists over 10 years of cooperative effort.

Today you can do that for 2,000 dollars with a single machine in about one day. What does that mean?

That means we might put an end to cancer, create algae that eats pollution or produces oil, or that we could eventually plant a seed that may turn into a house.

4. Explosion in internet and computer users and especially in data: 3 billion internet users and an exponential explosion of devices, i.e. the Internet of Things

Cheaper, faster, smaller, better – smartphones and everything else.

Today’s smartphone is more powerful than the most powerful computer of 1985 (the War Games computer).

Zettabytes of information: kilobyte, megabyte, gigabyte, terabyte, petabyte, exabyte, zettabyte, i.e. a 1 with 21 zeros – that’s about 250 billion DVDs of information per year
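The DVD comparison can be sanity-checked by simple division. Assuming, hypothetically, a round figure of 4 GB per disc, which reproduces the roughly 250 billion number above (a 4.7 GB single-layer DVD would give closer to 213 billion):

```python
# One zettabyte is 10^21 bytes; divide by a DVD's capacity to count discs.
zettabyte = 10 ** 21
dvd_bytes = 4 * 10 ** 9  # assumed 4 GB per disc (hypothetical round figure)

dvds = zettabyte // dvd_bytes
print(dvds)  # 250000000000, i.e. 250 billion DVDs
```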

92% of world data was generated in the past 2 years

5. Nanotechnology: being able to build things from the ground up, one atom at a time.

No waste, no energy loss, on the spot, on demand by nanofabricators.

6. 3D printing from jaws, to beaks, to prosthetics and houses

7. Bio printing: Dr. Anthony Atala printed a human bladder from stem cells.

8. Ageing and life expectancy

Cro-Magnon Era: 18 years
Ancient Egypt: 25 years
Ancient Greece: 28 years
1400 Europe: 30 years
1800 Europe and USA: 37 years
1900 USA: 48 years
2002 United States: 78 years

Right now, every year our life expectancy improves by about 3 months.

There will be a point when, every year, our life expectancy improves by more than one year: this is what Dr. Aubrey de Grey calls Longevity Escape Velocity. In simple words, that means we will be able to prolong life indefinitely.
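Longevity Escape Velocity can be sketched as a toy model. The parameters below are illustrative assumptions, not figures from the talk: gains start at 3 months (0.25 years) per year, and we suppose the rate of gain itself compounds at 7% annually.

```python
def escape_velocity_year(initial_gain=0.25, growth=1.07, start_year=2013):
    """Return the first year in which annual life-expectancy gains
    reach one year per year (the "escape velocity" threshold)."""
    gain, year = initial_gain, start_year
    while gain < 1.0:
        gain *= growth  # the yearly gain itself compounds
        year += 1
    return year

print(escape_velocity_year())  # 2034 under these assumed parameters
```

The exact crossover year is very sensitive to the assumed growth rate; the point of the model is only that a compounding rate of improvement must eventually cross the one-year-per-year line.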

9. Whole brain simulation, whole brain emulation and mind uploading:

Books and music went from material to digital but that is only the beginning. I am here to tell you that whatever can become information will become information.

We are all living software – what Prof. George Church calls the oldest text i.e. DNA.

The trend is that eventually we will seamlessly transition material things into information and, with 3D printing, information back into material objects.

And that includes us!

Mind uploading is not science fiction any more!

10. Transhumanism: the belief that, with technology, we have improved and can continue to improve who we are and what we can do.

Hamlet’s Transhumanist Dilemma

The Transhumanist Manifesto

11. The technological singularity

Definitions of Singularity:

1.     the state of being singular, distinct, peculiar, uncommon or unusual
2.     (mathematics) the value or range of values of a function for which a derivative does not exist
3.     (physics) a point or region in spacetime in which gravitational forces cause matter to have an infinite density; associated with Black Holes
4.     In the technological sense there are many definitions, but I will give you the one that best fits what we are talking about today:

Intelligence explosion: this intelligence could be enhanced, augmented human intelligence. Or it could be machine intelligence, i.e. Artificial Intelligence.

So, the question is: what happens when machines become smarter than us?

The best answer we have come up with so far is that: “We don’t really know!”

And that is why it is a singularity: it is a point in our future where our ability to predict and model what is likely to happen falls apart.

So, what does this all mean for you?

Chances are that you are the ones to stand on the edge of the event horizon. You are the generation that might have to steer our civilization at a time of unparalleled peril and promise.

At a time when humanity may face immortality or extinction, when we might colonize the stars or go back to the stone age.

And so, I am here to ask you: “What are you going to do?”

 

Takeaway message:

Education is very important – but not the education that others, be they teachers or parents, give to you; it is what you give to yourself.

Thus the diploma you get will matter less and less. So I say: take education into your own hands, because your education matters most to you and your life.

Don’t wait for permission from your parents or teachers to change the world. Keep learning and improving.

Build stamina: Life is a marathon, not a sprint. You will fail endless times before you succeed. (Dan Barry had to apply 13 times to NASA but he never gave up on his dream to be an astronaut.)

“The reasonable man adapts himself to the world; the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

George Bernard Shaw, “Maxims for Revolutionists,” Man and Superman, 1903

So when your teachers or parents ask you to be “reasonable,” I say: “Be very unreasonable!”

Related articles
  • Socrates on the Wow Signal Podcast: Be Unreasonable!
  • 15 Steps Towards Your Podcasting Success: Socrates At Podcamp Toronto 2013

Filed Under: Podcasts Tagged With: Futurism, Nikola Danaylov, singularity, transhumanism

Are Memes & the Internet Creating a Cultural Singularity? (A PBS Idea Channel Video)

December 13, 2012 by Socrates

Here on the internet, we love us some memes. But where do they come from?

Yes, we know: they are user-generated. But to an internet layman, they seem to just appear, in HUGE quantities, ready for cultural consumption. Are they a sign of a “cultural singularity”?

Memes follow rules and code; they are varied, self-referential, and seem to multiply at an ever-increasing rate. It may seem like science fiction, but we’re close to a world where culture automatically and magically creates infinitely more culture.

Filed Under: Video, What if? Tagged With: singularity



Copyright © 2009-2025 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy