
Technological Singularity

Welcome to Life – The Singularity, Ruined By Lawyers (video)

June 7, 2012 by Socrates

Welcome to Life – The Singularity, Ruined by Lawyers is a video that is as funny as it is smart. For this reason, even though it has been making the rounds on the internet for quite some time now, I decided to repost it below for your viewing pleasure. Enjoy!

 

Welcome To Life: 

Hello, and welcome to Life. We regret to inform you that your previous existence ended on January 14, 2052 following a road traffic accident. However, your consciousness was successfully uploaded to the Life network by your primary care provider. You may be experiencing some confusion. Please remain calm. Life contains *ting sound* Your mental state is being temporarily adjusted in order to calm you. Please do not be alarmed. Life contains over thirty thousand unique activities, networking with millions of other digitized minds, and the ability to contact undigitized friends and family. Please accept these terms and conditions in order to continue Life. Your attention is particularly drawn to Section 2: Usage Rules and Limitations, Section 9: Privacy, and Section 11: Restricted Mental Activities. Thank you.

Please select a Life plan.

Terms and conditions

THE LEGAL AGREEMENTS SET OUT BELOW ARE BETWEEN YOU AND LIFE DIGITAL PERSONALITY MANAGEMENT INCORPORATED (HEREAFTER “THE PROVIDER”) AND GOVERN YOUR USE OF THE PROVIDER’S SYSTEMS, WHICH INCLUDE, BUT ARE NOT LIMITED TO, THE COMPILATION AND SIMULATION OF YOUR DIGITAL PERSONALITY UNDER THE DIGITAL PERSONALITIES (RIGHTS OF DECEASED PERSONS) ACT 2050.

THE AGREEMENT APPLIES WITHOUT PREJUDICE TO ANY PREVIOUS AGREEMENTS AND CONTRACTS THAT YOU MAY HAVE ENTERED INTO WITH THIRD PARTIES. IMPORTANT: ACCEPTING SIMULATION AS A DIGITAL PERSONALITY MEANS YOU WAIVE YOUR RIGHT TO POST-MORTEM RELEASE OF DEBTS AND OBLIGATIONS. YOUR LIFE MAY BE AT RISK IF YOU DO NOT KEEP UP REPAYMENTS ON A LOAN SECURED ON IT.

1. PAYMENTS AND REFUNDS POLICY

You agree that you will pay for services purchased from the Provider, as well as upgrades, enhancements and experiences (“Apps”) selected from third-party simulation enhancement entities (“App Providers”). The Provider accepts payment by direct transfer from bank accounts in the US, UK, France, Germany and Australia. In the event your payments become significantly in arrears, the Provider reserves the right to a) search your digital personality for the details, locations, and access requirements of assets that you owe in relation to its services (see: Section 9, Privacy) and/or b) terminate your simulation without notice (see: Section 13, Notice of Termination).

! Rejecting these terms will result in termination of your simulated personality.

Accept

Reject

Welcome To Life

Tier One is our premium offering, allowing full uninterrupted simulation of your pre-terminal state. It includes unlimited modification of your body plan, accelerated learning and recall, and full personal backup facilities. Tier Two is our advertiser-supported offering. It contains many of the features of Tier One, but at a significantly reduced cost. Some areas of the environment, such as the sky, may be replaced with targeted advertising, and your personal brand preferences may be altered to align with those of our sponsors. Tier Three is our value offering. Thanks to our commercial partners, your experience at this tier is unlimited. However, some activities, senses, and visual rendering options may be subject to a Fair Use Policy. More complicated mental processes, including subconscious thought, creativity and self-awareness, may be rate-limited or disabled at times of significant server load. Thank you.

Your stored mind contains one or more patterns that contravene the Prevention of Crime and Terrorism Act of 2050. Please stand by while we adjust these patterns. Your stored mind contains sections from 124,564 copyrighted works. In order to continue remembering these copyrighted works, a licensing fee of $18,000 per month is required. Would you like to continue remembering these works? Thank you.

Legal compliance
UNDER INTERNATIONAL TRADE AND COPYRIGHT LAWS, WE ARE UNABLE TO STORE WHOLE OR PART COPYRIGHTED WORKS AS PART OF A DIGITAL PERSONALITY WITHOUT THAT PERSONALITY TENDERING A LICENSING FEE DETERMINED BY THE COPYRIGHT HOLDER. THE FOLLOWING COPYRIGHTED WORKS ARE WHOLLY OR PARTLY CONTAINED WITHIN YOUR STORED PERSONALITY TO A DEGREE THAT CONTRAVENES THE RIGHTS OF INTELLECTUAL PROPERTY HOLDERS:

MUSICAL WORKS OR PERFORMANCES: 57,384
VISUAL WORKS OR PERFORMANCES: 43,586
OLFACTORY WORKS OR PERFORMANCES: 124
OTHER WORKS OR PERFORMANCES: 23,470

You have insufficient funds in any financial reserves to pay this licensing fee.

! Copyrighted works are being deleted.

Welcome to Life.

Please stand by. Welcome to Life. Do you wish to continue?

© Published By Enyay tomscott.com

Filed Under: Funny, Video, What if? Tagged With: copyright, intellectual property, mind uploading, Technological Singularity

Top 10 Reasons We Should Fear The Singularity

May 22, 2012 by Socrates

Why do we fear the technological singularity?

Well, let me give you what I believe are the top 10 most popular reasons:

1. Extinction

Extinction is by far the most feared as well as the most commonly predicted consequence of the singularity.

The global apocalypse for the human race comes in many flavors, but some of the most popular ones are: super-smart terminator AIs – a robopocalypse; nanotechnology gone rogue – the so-called grey goo scenario; home-made smart weapons of mass destruction used by terrorists and nihilists; genetic modifications or mutations turning us into living-dead zombies; science experiments gone wrong – the Large Hadron Collider creating a black hole that engulfs the planet…

In short, the fear is that, as Bill Joy notoriously put it, the future doesn’t need us.

2. Slavery

Perhaps the second most common reason for fearing the singularity is the potential slavery or subjugation of the entire human race. The argument is pretty straightforward:

Once we have super smart AIs we stop being the smartest entities on this planet. In other words, we have created Gods while remaining mere humans. So, if for whatever reason the machines decide not to exterminate us, then, chances are that, since they will be vastly superior to us, they will enslave us. This can be accomplished in a variety of ways: either explicitly – with us being aware of our bondage, or implicitly – without us realizing it (the Matrix/simulation scenarios).

3. World War III – Giga War

The third most common fear of the singularity is, of course, World War 3: a Giga War of unprecedented scale, sophistication and efficiency of death and destruction, which may result either from a clash between the human race and the AIs or from one between different factions of humans – e.g. the Artilect War of Terrans versus Cosmists foreseen by Hugo de Garis. Whatever the case may be, it will likely result in billions of deaths and a collapse or complete eradication of our civilization.

4. Economic Collapse

Some have argued that, if we somehow manage to avoid the previous three scenarios, then, we are likely to experience a complete economic collapse:

The complete robotization of our society is likely to lead to overproduction of goods and services. Yet, since it is claimed that most people will lose their jobs to the robots, there will be a global unemployment rate of unprecedented scale which in turn will collapse the demand for those robo-made goods and services. Combine this with a population explosion of 9 or even 10 billion mostly unemployed people who have no means to create income and buy anything, and we are looking at a global economic collapse.

5. Big Brother AI

This scenario is a milder version of the slavery/subjugation Matrix option because we are still under the complete control of an all-knowing Artificial Intelligence. The main difference here is that the AI is merely doing what is best for us, rather than what is best for it: we have a benevolent, omnipotent, absolute monarch protecting us from our worst enemies – our fellow human beings and our own selves. It is all done in the name of maximizing security, prosperity and overall happiness for all people across the planet. The only minor negative is a little bit of propaganda and ideological, political or religious brainwashing required to prop up “the cult of the AI,” but that’s OK since it is for our own good.

6. Alienation and Loss of Humanity

Following the “if you can’t beat them, join them” maxim, one way of potentially surviving the singularity is by merging with the machines. This idea – that we can and should improve on what mother nature has handed down to us – is often referred to as transhumanism. By merging man and machine via biotechnology, molecular nanotechnology and artificial intelligence, we would increase our cognitive abilities, physical strength, emotional stability and overall health and longevity.

The fear, of course, is that by doing so we are going to lose the very essence of being human – our human nature, our human souls, our human identity. Furthermore, at the collective level, the loss of humanity will also mean alienation and loss of community: the resulting variety of posthuman entities will be so far apart as to negate any connection whatsoever among different individuals. This in turn will mean that humanity did not in fact survive but succumbed to the machine invasion and went extinct.

7. Environmental Catastrophe

Our history shows that our environmental destructiveness grows in direct proportion to our technological prowess. Once we live in a global society where everything is mass-produced by robots, our manufactured civilization will sever the last connection to the natural world. We will lose the very last bit of respect for mother nature:

Why preserve the rain forest if we can create a “better” and “smarter” one? Why care about biodiversity, species’ extinction or environmental degradation if we can revive and mold those for our own purposes or pleasure?

Why care about anything if we are (technological) Gods?!

8. Loss of History, Knowledge and Spatial Resolution [aka Digital Dark Age]

The ever-accelerating process of digitization comes along with a certain loss or even destruction of data. This data can be in the form of history, cultural traditions, dead languages or important scientific information. For example, NASA recently admitted it has lost the ability to recover much of the computer data from some of the Apollo missions and the Moon landings. Thus certain kinds of vitally important and unique knowledge, as well as history and cultural traditions, are lost forever. To know whether we are getting a good deal or not, we must first quantify the data losses and compare them to the potential gains. Yet, at the breakneck speed we’re moving forward, few have time for such calculations.

It seems that we live in an analog universe with infinite resolution – both zooming in and out, as far as we can. The process of digitization captures a mere fraction of it. Just like a compressed .mp3 file captures only a part of the actual musical performance, this process creates symbols which are digital representations of the real thing. The fear is we may end up losing awareness that the digital realm is a realm of symbols – a mere reflection of the true analog universe, ending up in Plato’s Digital Cave of Illusions.
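The .mp3 analogy can be made concrete with a few lines of code. The sketch below is purely illustrative (the sample rate, frequency, and bit depth are numbers I picked for the example, not anything from the article): it samples a sine wave, quantizes it to 4 bits, and confirms that the analog-to-digital round trip necessarily discards information.

```python
import math

# Sample a 1 kHz sine wave at 8 kHz and quantize it to 4 bits,
# then measure how much of the original signal is lost.
SAMPLE_RATE = 8000      # samples per second
FREQ = 1000             # signal frequency in Hz
BITS = 4                # quantizer resolution
LEVELS = 2 ** BITS      # 16 discrete amplitude levels

def quantize(x, levels):
    """Map a value in [-1, 1] to the nearest of `levels` discrete steps."""
    step = 2.0 / (levels - 1)
    return round((x + 1.0) / step) * step - 1.0

samples = [math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE) for n in range(SAMPLE_RATE)]
digitized = [quantize(s, LEVELS) for s in samples]

# The round trip is not lossless: every sample carries a quantization
# error, bounded by half a step but almost never zero.
step = 2.0 / (LEVELS - 1)
max_error = max(abs(a - b) for a, b in zip(samples, digitized))
print(f"max quantization error: {max_error:.4f} (half-step bound: {step / 2:.4f})")
```

However many bits you add, the error shrinks but never vanishes – the digital copy remains a symbol for, not a duplicate of, the analog original.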

9. Computronium and Matrioshka Brains 

As far as we can tell it seems we live in a universe full of dumb matter. This, of course, makes for a pretty dumb universe too.

However, extrapolating from our own development, it would appear that as time goes by there is a movement from less towards more intelligence in the universe. Thus, given enough time, more and more of our planet and, eventually, our universe is likely to consist of intelligent matter. This process is likely to continue until Moore’s Law collapses and an equilibrium is reached. Such a theoretical arrangement of matter – the best possible configuration of any given amount of matter for building a perfectly optimal computing device – is the substrate also known as computronium.

A Matrioshka brain is a hypothetical megastructure of immense computational capacity. Based on the Dyson sphere, the concept derives its name from the Russian Matrioshka doll and is an example of a stellar-scale solar-powered computer, capturing the entire energy output of a star. To form the Matrioshka brain all planets of the solar system are dismantled and a vast computational device inhabited by uploaded or virtual minds, inconceivably more advanced and complex than us, is created.

So the idea is that eventually, one way or another, all matter in the universe will be smart. All dust will be smart dust, and all resources will be utilized to their optimum computing potential. There will be nothing else left but Matrioshka Brains and/or computronium…

“NASA are idiots. They want to send canned meat to Mars!” Manfred swallows a mouthful of beer, aggressively plonks his glass on the table. “Mars is just dumb mass at the bottom of a gravity well; there isn’t even a biosphere there. They should be working on uploading and solving the nanoassembly conformational problem instead. Then we could turn all the available dumb matter into computronium and use it for processing our thoughts. Long-term, it’s the only way to go. The solar system is a dead loss right now – dumb all over! Just measure the MIPS per milligram. If it isn’t thinking, it isn’t working.” (Accelerando by Charles Stross)
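To get a sense of the scale Stross is gesturing at, here is a back-of-envelope calculation of my own (the physical constants are standard; the scenario is, of course, speculative): a Matrioshka brain capturing the Sun's entire output and computing at the thermodynamic (Landauer) limit at room temperature.

```python
import math

# Back-of-envelope: bit operations per second for a Matrioshka brain
# running at the Landauer limit on the Sun's full energy output.
SOLAR_LUMINOSITY = 3.8e26   # watts, approximate total solar output
BOLTZMANN = 1.380649e-23    # J/K, Boltzmann constant
T = 300                     # kelvin, assumed operating temperature

# Landauer limit: minimum energy to erase one bit at temperature T.
landauer_j_per_bit = BOLTZMANN * T * math.log(2)

bit_ops_per_second = SOLAR_LUMINOSITY / landauer_j_per_bit
print(f"~{bit_ops_per_second:.1e} bit operations per second")
```

The result is on the order of 10^47 irreversible bit operations per second – incomprehensibly far beyond all of today's computers combined, which is exactly why "dumb mass at the bottom of a gravity well" looks like wasted substrate to Stross's character.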

10. Fear of Change

Fear of change and fear of the unknown are deeply embedded in the human psyche: We all want to be comfortable. Not knowing is very, very uncomfortable. Realizing that the coming change is radically unique – both in scale and unpredictability – is even more discomforting.

When it comes to survival, nobody likes surprises. So we take it as a matter of both personal and collective security to model and at least roughly foresee the future.

The singularity is a radical change of arguably cosmic proportions, which is by definition impossible to model, let alone predict. Thus it is no surprise that it evokes very deep insecurity and primal fear.

The question is: Are you afraid?! Are you not very, very afraid?!

Related articles
  • 17 Definitions of the Technological Singularity
  • Top 10 Reasons We Should NOT Fear The Singularity

Filed Under: Best Of, Op Ed, What if? Tagged With: Technological Singularity

When Vernor Vinge Coined the Technological Singularity

May 14, 2012 by Socrates

When Vernor Vinge coined the term technological singularity few foresaw it becoming the conceptual watershed that it is now.

Today, regardless of whether you are writing about sci-fi, futurism, artificial intelligence, technology or the future of humanity, the moment you embrace the longer-term, big-picture frame of reference is the moment you are writing about the singularity. And if that is not the case, then you must justify why not. So, in a way, you are still writing about the singularity.

Thanks to Josh Calder, who made the effort to dig out and scan the original article, I can now show you a copy of the actual page where the term was used for the very first time in its contemporary technological context: the January 1983 issue of Omni magazine.

Hope you enjoy this little digital piece of history as much as I do!

Courtesy of Josh Calder from www.FutureAtlas.com (click on image for high resolution version)

Video update:

Adam Ford’s H+ interview with Vernor Vinge where they discuss “topics ranging from the Technological Singularity itself, how the concept came to Vernor, the metaphor implied by the Singularity, Evolution, Humans as goal setting creatures, similarities between the rise of artificial intelligence and the rise of humans within the animal kingdom, definitions of the Singularity, biasing the odds of a beneficial Singularity, strategic forecasting, scenario planning, narratives, education, future studies, how possibility shapes the future, utopias and dystopias, what do we want from the future?, missed opportunities to achieve great things in the past and what may we be missing out on if we don’t make the right choices today.”

 

Related articles
  • 17 Definitions of the Technological Singularity

Filed Under: Op Ed, Profiles, What if? Tagged With: Technological Singularity, Vernor Vinge

17 Definitions of the Technological Singularity

April 18, 2012 by Socrates

The term singularity has many meanings.

The everyday English definition is a noun that designates the quality of being one of a kind, strange, unique, remarkable or unusual.

If we want to be even more specific, we might take the Wiktionary definition of the term, which seems to be more contemporary and easily comprehensible, as opposed to those in classic dictionaries such as the Merriam-Webster’s.

So, the Wiktionary lists the following five meanings:

Noun
singularity (plural singularities)

1. the state of being singular, distinct, peculiar, uncommon or unusual
2. a point where all parallel lines meet
3. a point where a measured variable reaches unmeasurable or infinite value
4. (mathematics) the value or range of values of a function for which a derivative does not exist
5. (physics) a point or region in spacetime in which gravitational forces cause matter to have an infinite density; associated with Black Holes

What we are most interested in, however, is the definition of singularity as a technological phenomenon — i.e. the technological singularity. Here we can find an even greater variety of subtly different interpretations and meanings. Thus it may help if we have a list of what are arguably the most relevant ones, arranged in a rough chronological order.

Seventeen Definitions of the Technological Singularity:

1. R. Thornton, editor of the Primitive Expounder

In 1847, R. Thornton wrote about the recent invention of a four function mechanical calculator:

“…such machines, by which the scholar may, by turning a crank, grind out the solution of a problem without the fatigue of mental application, would by its introduction into schools, do incalculable injury. But who knows that such machines when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”

2. Samuel Butler

It was during the relatively low-tech mid-19th century that Samuel Butler wrote his Darwin among the Machines. In it, Butler combined his observations of the rapid technological progress of the Industrial Revolution with Charles Darwin’s theory of the evolution of species. That synthesis led Butler to conclude that the technological evolution of the machines would continue inevitably until machines eventually replace men altogether. In Erewhon Butler argued that:

“There is no security against the ultimate development of mechanical consciousness, in the fact of machines possessing little consciousness now. A mollusc has not much consciousness. Reflect upon the extraordinary advance which machines have made during the last few hundred years, and note how slowly the animal and vegetable kingdoms are advancing. The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time.”

3. Alan Turing

In his 1951 paper titled Intelligent Machinery: A Heretical Theory, Alan Turing wrote of machines that will eventually surpass human intelligence:

“once the machine thinking method has started, it would not take long to outstrip our feeble powers. … At some stage therefore we should have to expect the machines to take control, in the way that is mentioned in Samuel Butler’s Erewhon.”

4. John von Neumann

In 1958 Stanislaw Ulam wrote about a conversation with John von Neumann, who said that “the ever accelerating progress of technology … gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.” Von Neumann’s alleged definition of the singularity was that it is the moment beyond which “technological progress will become incomprehensibly rapid and complicated.”

5. I.J. Good

I.J. Good, who greatly influenced Vernor Vinge, never used the term singularity itself. However, what Vinge later called the singularity, Good called an intelligence explosion: a positive feedback cycle within which minds make technology to improve minds, a process which, once started, rapidly surges upwards and creates super-intelligence:

“Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.”
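Good's feedback loop can be caricatured in a few lines of code. The model below is a toy of my own invention – the growth rule is an arbitrary assumption, not a real law – but it illustrates why a positive feedback of intelligence on intelligence produces faster-than-exponential growth rather than a steady climb.

```python
# Toy model of Good's "intelligence explosion": each generation of machine
# designs a successor, and smarter designers achieve proportionally larger
# improvements. Purely illustrative numbers and growth rule.
def explosion(start=1.0, human_level=1.0, generations=10):
    levels = [start]
    for _ in range(generations):
        current = levels[-1]
        # Crude positive-feedback assumption: the improvement factor
        # itself grows with current capability.
        improvement = 1.0 + 0.5 * (current / human_level)
        levels.append(current * improvement)
    return levels

trajectory = explosion()
ratios = [b / a for a, b in zip(trajectory, trajectory[1:])]

# Faster than exponential: the ratio between successive generations
# keeps increasing instead of staying constant.
print(f"generation-over-generation growth factors: {[round(r, 2) for r in ratios]}")
```

Under these toy assumptions the first few generations look unremarkable, and then the curve goes vertical – which is the intuition behind "the last invention that man need ever make."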

6. Vernor Vinge introduced the term technological singularity in the January 1983 issue of Omni magazine in a way that was specifically tied to the creation of intelligent machines:

“We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.”

He later developed the concept further in his 1993 essay The Coming Technological Singularity:

“Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. […] I think it’s fair to call this event a singularity. It is a point where our models must be discarded and a new reality rules. As we move closer and closer to this point, it will loom vaster and vaster over human affairs till the notion becomes a commonplace. Yet when it finally happens it may still be a great surprise and a greater unknown.”

It is important to stress that for Vinge the singularity could occur in four ways: 1. The development of computers that are “awake” and superhumanly intelligent. 2. Large computer networks (and their associated users) may “wake up” as a superhumanly intelligent entity. 3. Computer/human interfaces may become so intimate that users may reasonably be considered superhumanly intelligent. 4. Biological science may find ways to improve upon the natural human intellect. [Vernor talks about the singularity after min 2:13 in the video below]

7. Hans Moravec: 

In his 1988 book Mind Children, computer scientist and futurist Hans Moravec generalizes Moore’s Law to make predictions about the future of artificial life. Moravec argues that, starting around 2030 or 2040, robots will evolve into a new series of artificial species, eventually succeeding Homo sapiens. In his 1993 paper The Age of Robots, Moravec writes:

“Our artifacts are getting smarter, and a loose parallel with the evolution of animal intelligence suggests one future course for them. Computerless industrial machinery exhibits the behavioral flexibility of single-celled organisms. Today’s best computer-controlled robots are like the simpler invertebrates. A thousand-fold increase in computer power in this decade should make possible machines with reptile-like sensory and motor competence. Properly configured, such robots could do in the physical world what personal computers now do in the world of data–act on our behalf as literal-minded slaves. Growing computer power over the next half-century will allow this reptile stage to be surpassed, in stages producing robots that learn like mammals, model their world like primates and eventually reason like humans. Depending on your point of view, humanity will then have produced a worthy successor, or transcended inherited limitations and transformed itself into something quite new. No longer limited by the slow pace of human learning and even slower biological evolution, intelligent machinery will conduct its affairs on an ever faster, ever smaller scale, until coarse physical nature has been converted to fine-grained purposeful thought.”

8. Ted Kaczynski

In Industrial Society and Its Future (aka the “Unabomber Manifesto”) Ted Kaczynski tried to explain, justify and popularize his militant resistance to technological progress:

“… the human race might easily permit itself to drift into a position of such dependence on the machines that it would have no practical choice but to accept all of the machines’ decisions. As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

9. Nick Bostrom

In 1997 Nick Bostrom – a world-renowned philosopher and futurist – wrote How Long Before Superintelligence? In it, Bostrom seems to embrace I.J. Good’s intelligence explosion thesis with his notion of superintelligence:

“By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills. This definition leaves open how the superintelligence is implemented: it could be a digital computer, an ensemble of networked computers, cultured cortical tissue or what have you. It also leaves open whether the superintelligence is conscious and has subjective experiences.”

10. Ray Kurzweil

Ray Kurzweil is easily the most popular singularitarian. He embraced Vernor Vinge’s term and brought it into the mainstream. Yet Ray’s definition is not entirely consistent with Vinge’s original. In his seminal book The Singularity Is Near Kurzweil defines the technological singularity as:

“… a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

11. Kevin Kelly, senior maverick and co-founder of Wired Magazine

Singularity is the point at which “all the change in the last million years will be superseded by the change in the next five minutes.”

12. Eliezer Yudkowsky

In 2007 Eliezer Yudkowsky pointed out that singularity definitions fall within three major schools: Accelerating Change, the Event Horizon, and the Intelligence Explosion. He also argued that many of the different definitions assigned to the term singularity are mutually incompatible rather than mutually supporting. For example, Kurzweil extrapolates current technological trajectories past the arrival of self-improving AI or superhuman intelligence, which Yudkowsky argues represents a tension with both I.J. Good’s proposed discontinuous upswing in intelligence and Vinge’s thesis on unpredictability. Interestingly, Yudkowsky places Vinge’s original definition within the Event Horizon camp while placing himself within the Intelligence Explosion school. (In my opinion Vinge belongs equally to the Intelligence Explosion and Event Horizon schools.)

13. Michael Anissimov

In Why Confuse or Dilute a Perfectly Good Concept Michael writes:

“The original definition of the Singularity centers on the idea of a greater-than-human intelligence accelerating progress. No life extension. No biotechnology in general. No nanotechnology in general. No human-driven progress. No flying cars and other generalized future hype…”

According to the above definition, and in contrast to his SIAI colleague Eliezer Yudkowsky, it would seem that Michael falls within both the Intelligence Explosion and Accelerating Change schools. (In an earlier article, Anissimov defines the singularity as transhuman intelligence.)

14. John Smart

On his Acceleration Watch website John Smart writes:

“Some 20 to 140 years from now—depending on which evolutionary theorist, systems theorist, computer scientist, technology studies scholar, or futurist you happen to agree with—the ever-increasing rate of technological change in our local environment is expected to undergo a permanent and irreversible developmental phase change, or technological “singularity,” becoming either:

A. fully autonomous in its self-development,
B. human-surpassing in its mental complexity, or
C. effectively instantaneous in self-improvement (from our perspective),

or if only one of these at first, soon after all of the above. It has been postulated by some that local environmental events after this point must also be “future-incomprehensible” to existing humanity, though we disagree.”

15. James Martin

James Martin – a world-renowned futurist, computer scientist, author, lecturer and, among many other things, the largest donor in the history of Oxford University, whose gift founded the Oxford Martin School – defines the singularity as follows:

Singularity “is a break in human evolution that will be caused by the staggering speed of technological evolution.”

16. Sean Arnott: “The technological singularity is when our creations surpass us in our understanding of them vs their understanding of us, rendering us obsolete in the process.”

17. Your Definition of the Technological Singularity?!…

As we can see, there is a large variety of flavors when it comes to defining the technological singularity. I personally tend to favor what I would call the original Vingean definition, as inspired by I.J. Good’s intelligence explosion, because it stresses both the crucial importance of self-improving super-intelligence and its event-horizon type of discontinuity and uniqueness. (I also sometimes define the technological singularity as the event, or sequence of events, likely to occur right at or shortly after the birth of strong artificial intelligence.)

At the same time, after all of the above definitions, it should be clear that we really do not know what the singularity is (or will be). Thus we are just using the term to show (or hide) our own ignorance.

But tell me – what is your own favorite definition of the technological singularity?

Filed Under: Best Of, Op Ed Tagged With: singularity, Technological Singularity

Michael Shermer: Be Skeptical! (Even of Skeptics)

January 18, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/191317413-singularity1on1-michael-shermer-skeptic.mp3


A couple of days ago I interviewed Michael Shermer for Singularity 1 on 1.

I met Dr. Shermer at the recent Singularity Summit in New York where he was one of the most entertaining, engaging, and optimistic speakers. Since he calls himself a skeptic and not a singularitarian, I thought he would bring not only balance to my singularity podcast but also a healthy dose of skepticism, and I was not disappointed.

During our conversation we discuss a variety of topics such as his education at a Christian college and original interest in religion and theology; his eventual transition to atheism, skepticism, science, and the scientific method; SETI, the singularity and religion; scientific progress and the dots on the curve as precursors of big breakthroughs; life-extension, cloning and mind uploading; being a skeptic and an optimist at the same time; the “social singularity”; global warming; the tricky balance between being a skeptic while still being able to learn and make progress.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Michael Shermer’s Singularity Summit presentation: “Social Singularity: Transitioning from Civilization 1.0 to 2.0”

 

Who is Michael Shermer?

Dr. Michael Shermer is the Founding Publisher of Skeptic magazine (www.skeptic.com), the Executive Director of the Skeptics Society, a monthly columnist for Scientific American, the host of the Skeptics Distinguished Science Lecture Series at Caltech, and Adjunct Professor at Claremont Graduate University and Chapman University.

Dr. Shermer’s latest book is The Mind of the Market, on evolutionary economics. His last book was Why Darwin Matters: The Case Against Intelligent Design, and he is the author of Science Friction: Where the Known Meets the Unknown, about how the mind works and how thinking goes wrong. His book The Science of Good and Evil: Why People Cheat, Gossip, Care, Share, and Follow the Golden Rule, is on the evolutionary origins of morality and how to be good without God. He wrote a biography, In Darwin’s Shadow, about the life and science of the co-discoverer of natural selection, Alfred Russel Wallace. He also wrote The Borderlands of Science, about the fuzzy land between science and pseudoscience, and Denying History, on Holocaust denial and other forms of pseudohistory. His book How We Believe, presents his theory on the origins of religion and why people believe in God. He is also the author of Why People Believe Weird Things on pseudoscience, superstitions, and other confusions of our time.

According to the late Stephen Jay Gould (from his Foreword to Why People Believe Weird Things): “Michael Shermer, as head of one of America’s leading skeptic organizations, and as a powerful activist and essayist in the service of this operational form of reason, is an important figure in American public life.”

Dr. Shermer received his B.A. in psychology from Pepperdine University, M.A. in experimental psychology from California State University, Fullerton, and his Ph.D. in the history of science from Claremont Graduate University (1991). He was a college professor for 20 years (1979-1998), teaching psychology, evolution, and the history of science at Occidental College (1989-1998), California State University Los Angeles, and Glendale College. Since his creation of the Skeptics Society, Skeptic magazine, and the Skeptics Distinguished Science Lecture Series at Caltech, he has appeared on such shows as The Colbert Report, 20/20, Dateline, Charlie Rose, Larry King Live, Tom Snyder, Donahue, Oprah, Leeza, Unsolved Mysteries (but, proudly, never Jerry Springer!), and other shows as a skeptic of weird and extraordinary claims, as well as interviews in countless documentaries aired on PBS, A&E, Discovery, The History Channel, The Science Channel, and The Learning Channel. Shermer was the co-host and co-producer of the 13-hour Family Channel television series, Exploring the Unknown.

Related articles
  • Head-Transplantation: A Short Documentary about Dr. R. J. White’s Controversial Experiments
  • Randal Koene on Singularity 1 on 1: Mind Uploading is not Science Fiction
  • Singularity University Lectures: Dr. Alex Jadad on Making Longer Life Worth Living
  • The Complete 2011 Singularity Summit Video Collection

Filed Under: Podcasts Tagged With: Technological Singularity

Luke Muehlhauser: Superhuman AI is Coming This Century

January 16, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/191135033-singularity1on1-luke-muehlhauser-ai.mp3


Last week, I interviewed Luke Muehlhauser for Singularity 1 on 1.

Luke Muehlhauser is the Executive Director of the Singularity Institute, the author of many articles on AI safety and the cognitive science of rationality, and the host of the popular podcast “Conversations from the Pale Blue Dot.” His work is collected at lukeprog.com.

I have to say that despite his young age and lack of a university degree – a criticism which we discuss during our interview – Luke was one of the best and clearest-spoken guests on my show, and I really enjoyed talking to him. During our 56-minute conversation, we discuss a large variety of topics such as Luke's evangelical Christian background as the first-born son of a pastor in northern Minnesota; his fascinating transition from religion and theology to atheism and science; his personal motivation and desire to overcome our very human cognitive biases and help address existential risks to humanity; the Singularity Institute – its mission, members and fields of interest; the "religion for geeks" (or "rapture of the nerds") and other widespread criticisms and misconceptions; and our chances of surviving the technological singularity.

My favorite quote from the interview:

Superhuman AI is coming this century. By default, it will be disastrous for humanity. If you want to make AI a really good thing for humanity please donate to organizations already working on that or – if you are a researcher – help us solve particular problems in mathematics, decision theory, or cognitive science.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Related articles
  • The Complete 2011 Singularity Summit Video Collection
  • Spencer Greenberg on Singularity 1 on 1: To Become Better Thinkers – Study Our Cognitive Biases and Logical Fallacies
  • Facing the Singularity
  • 80,000 Hours
  • Video Q&A about Singularity Institute.
  • Robert J. Sawyer on Singularity 1 on 1: The Human Adventure is Just Beginning
  • So You Want to Save the World

Filed Under: Podcasts Tagged With: Artificial Intelligence, Technological Singularity

Vernor Vinge: We Can Surpass the Wildest Dreams of Optimism

April 16, 2011 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/188312812-singularity1on1-vernor-vinge.mp3


Today, my guest on Singularity 1 on 1 is Vernor Vinge, the person who coined the term technological singularity.

Currently, Vernor Vinge is putting the final touches on the sequel to A Fire Upon the Deep. The new book is titled The Children of the Sky and is already available for pre-order on Amazon, though it is not expected to ship until October 2011.

Despite his busy schedule Prof. Vinge still managed to give us over an hour of his time, and during our conversation, I asked him to discuss issues such as: his childhood and early interest in science fiction; his desire to make sense of the universe; his definition of the technological singularity and the story behind the term; his now classic 1993 NASA paper; his favorite science fiction books and authors; major milestones on the way towards the singularity and our chances to survive such an unprecedented event.

As always, you can listen to or download the audio file above or scroll down to watch the video interview in full. To show your support, you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

Who is Vernor Vinge?

Arguably the second most recognized singularitarian, Vernor Vinge spent most of his life in San Diego, California where he taught mathematics and computer science at San Diego State University for over thirty years and where he still lives today.

After retiring from teaching, Vernor became widely sought after as a public speaker and presenter for business, science, science fiction, and general audiences. He has won Hugo Awards for several of his works, including A Fire Upon the Deep (1992), A Deepness in the Sky (1999), and the novella Fast Times at Fairmont High (2001).

Known for his rigorous hard-science approach, Vinge first became an iconic figure among both cybernetic scientists and sci-fi fans with the publication of his 1981 novella True Names, widely considered to be the visionary work behind the internet revolution. Later he gained even more public attention for coining the term technological singularity and for writing and presenting about it.

For a collection of videos of Vernor Vinge see his profile page on SingularitySymposium.com

Related articles
  • Question Everything: Max More on Singularity 1 on 1
  • Under-predicting the Future

Filed Under: Podcasts Tagged With: singularity, singularity podcast, Technological Singularity, Vernor Vinge

The Singularity is Near! What’s Next?

April 13, 2011 by Nikki Olson

The Singularity is Near! What’s Next?

Those who look ahead three or four decades and see a technological singularity taking place confront a future in which time appears to stand still. However, few look beyond 2045 because it is near-impossible to foresee what happens post-Singularity. So although we may be certain that the Singularity is Near, we are uncertain about what’s next.

This reality differentiates Singularitarians from adherents of every other major worldview that has shaped human history: other groups, in looking out into the future, have always had some kind of 'destination' or 'endpoint' in mind.

Religious groups originating in the West have tended to envision ‘heaven’, or ‘hell’, as endpoints, while those in the East anticipate ‘rebirth’, and eventual nirvana. Some mythologies envision an ‘underworld’, while those not believing in an afterlife do their best to imagine death as their ultimate destination. Even those drinking Kool-Aid in hope of catching the next spaceship passing by have had a destination in mind.

After the Singularity, it is fairly certain that we will leave Earth, colonize space, and keep expanding outward. Transhumanists prefer to think of humanity as a process, a species without an endpoint, emphasizing the constant change that will take place as we continue to improve ourselves with technology and transcend our limitations.

But where are we transcending to? And when we reach a point where technology will allow us to do almost anything, how do we figure out what’s worth doing? From where do we get direction?

I propose that three potential sources will impact our direction.

A first source of direction comes from the particulars of human history. In an interview on Singularity 1 on 1, Stephen Wolfram emphasizes the importance of thinking about human purpose when trying to predict the future, and that our sense of purpose, in many ways, comes from our past. He points out that “the things we think are worth doing now arose from the history of our civilization”, and that different prongs of civilization have produced different value systems. We don’t need to ‘invent’ what we find valuable in the present; to a large extent, we inherit those values from our individual and collective past.

If Wolfram is correct that our sense of purpose is largely based on history, then what we think is worth doing now will shape the direction we take post-Singularity.

Second, I propose that a great deal of direction will come from what we have to do in order to survive. From the time we were single-celled organisms, the push to survive has largely determined the direction of individual and collective lives. The will to survive will shape our future too, since in the post-Singularity era there will be new threats to our existence, and our reactions to those threats will determine the paths we end up taking.

Third, we will get direction from incidental attributes of the physical world, in particular from what we find beyond our immediate surroundings, once we expand out into the galaxy.

The Interplanetary Transport Network (ITN) provides an example of how incidental attributes of the physical world can provide direction in future situations, where there would otherwise be an infinite number of options to choose from. The ITN is ‘a collection of gravitationally determined pathways through the solar system that require very little energy for an object to follow’. In travelling the solar system, the ITN marks the path of least resistance, and so in a strictly physical sense, ‘guides the way’. Just as winding rivers directed the paths of our ancestors, influencing where they ended up, and what they came into contact with, physical realities in outer space, such as the ITN, will play a role in where we go in the future, and how we get there.

The Interplanetary Superhighway (NASA)

We have a difficult time imagining what our sense of direction and purpose will be post-Singularity, since it’s difficult to imagine what the world will look like when we have augmented intelligence, and have removed the many limitations that enforce structure upon our lives now. We don’t have a destination in mind, but we will be far from directionless and aimless post-Singularity, despite there being many more options to choose from.

About the Author:

Nikki Olson is a writer/researcher working on an upcoming book about the Singularity with Dr. Kim Solez, as well as relevant educational material for the Lifeboat Foundation. She has a background in philosophy and sociology, and has been involved extensively in Singularity research for 3 years. You can reach Nikki via email at [email protected].

 

Related articles
  • Stephen Wolfram on Singularity 1 on 1: To Understand the Future, Explore the Computational Universe
  • A Transhumanist Manifesto
  • A Turing Test Point of View: Will the Singularity be Biased?

Filed Under: Op Ed Tagged With: singularity, Technological Singularity

Elisabeth Kübler-Ross’ Five Stages of the Singularity

March 10, 2011 by Socrates

After the release of a significant documentary such as Transcendent Man, our very public defeat at Jeopardy! by IBM's Watson, and growing mainstream coverage of the technological singularity, I started wondering about the potential stages of humanity's collective emotional reaction to the concept of the singularity.

Arthur Schopenhauer claims that “All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.”

So, does this relate well to the singularity?

Let’s see. Firstly, is the singularity often ridiculed?

It seems that so far, it has been predominantly ignored, though that observation is becoming increasingly inaccurate as the idea receives more coverage in the media.

Secondly, has it been violently opposed?

Well, we did have the Unabomber, even though Richard Clarke’s Breakpoint-type of violent resistance has not materialized yet and we are not yet divided into Luddites and Medievalists, or Terrans and Cosmists. This could easily change however, as artificial intelligence, genetics, robotics and nanotechnology become more and more advanced.

One thing is sure: the singularity hasn't been popularly embraced as self-evident.

So, within Schopenhauer’s framework, we are, at best, within or around the first stage.

But is it really so pure and simple?

Perhaps Elisabeth Kübler-Ross can provide a more subtle framework for examining our emotional attitude toward the singularity.

In her 1969 book On Death and Dying she argues that there are five stages of grief: denial, anger, bargaining, depression and acceptance.

Let’s see if those can present us with a better-adapted framework of reference:

Denial: “The singularity is not a big deal” or “Anyway, the singularity is not going to happen.”

Denial is usually temporary, especially in the face of a growing body of evidence, and is eventually replaced with a heightened awareness of the risks and the stakes (of the singularity).

Anger: “Why is the singularity happening? How can it happen to me (or to us)? Who is to blame?”

Once here, the person understands that denial cannot continue. But because of anger, she is very difficult to deal with in a rational manner. Any individual who embraces technology and progress, in general, is subject to projected resentment and jealousy.

Bargaining: “Just let me enjoy myself a little more.” “All I want is to remain human for a few more years.” “Can’t we wait (and hold progress) for just a little while?”

This stage involves the hope that one can somehow postpone or delay the singularity. Usually, the negotiation for an extended “timeout” is made with a higher power in exchange for a reformed lifestyle. Psychologically, the individual is saying, “I understand the singularity is near, but if I could just have a little more time…”

Depression: “The singularity sucks!”; “We are all going to die… What’s the point?”; “The machines are so much better at everything… resistance is futile… so, why go on?”

During the fourth stage, the person begins to understand the impending singularity. Because of this, she may become silent, refuse visitors, and spend much of her time crying or grieving. This process allows the depressed individual to overcome her sense that “biology is destiny” and embrace the fact that human is a process, not a defined entity. For this reason, it is not recommended to attempt to cheer up an individual in this stage. It is essential for a person to come to terms with the fact that change is the only certain thing.

Acceptance: “The singularity is cool.”; “I can’t fight progress; I may as well prepare for it.”; “If I can’t defeat the machines, I might as well join them.”; “I can live forever!”

In this last stage, the individual begins to come to terms with the potential upsides of the singularity and focuses on them rather than the negatives.

It is important to note that Elisabeth Kübler-Ross warned that these steps do not necessarily have to come in the above order. Nor are all steps experienced by all people. (Though she believes that one will always experience at least two.) Also, people often experience several of the above stages in a “roller coaster” effect – switching between two or more stages, returning to one or more several times before working through it entirely.

Finally, it is important that futurists and techno-experts do not force understanding on others. The process is highly personal and should not be rushed or lengthened. One should merely be aware that the stages can and most likely will be worked through, and the ultimate stage of “Acceptance” will be reached.

***

So, what do you think? Can we apply the Kübler-Ross framework to the singularity? If yes, then what stage are we at currently?

If not, why not? And how about any alternatives?

Filed Under: Op Ed Tagged With: singularity, Technological Singularity

Change of Plans: Kill All Humans

January 16, 2011 by Socrates

The singularity is often equated with a Terminator- or Matrix-type TechnoCalyps, based on the presumption that once artificial intelligence becomes sentient, the most likely action it will undertake is to exterminate us.

The following cartoon has been circulating for a while around the general singularity and transhumanist community, but because it is so funny, I thought I'd post it anyway. Even if you have seen it before, you may still find it funny again… I know I laugh every time I read it, and I've read it a dozen times by now 😉

Hat tip to Singularity 2045 for finding the cartoon first.


Related articles
  • Singularities Happen: Alan Watts explains the Singularity… (singularityblog.singularitysymposium.com)
  • The Best of Singularity Weblog 2010 (singularityblog.singularitysymposium.com)
  • Why I Am an Optimist (singularityblog.singularitysymposium.com)
  • A Transhumanist Manifesto (singularityblog.singularitysymposium.com)

Filed Under: Funny, What if? Tagged With: Artificial Intelligence, Technological Singularity

Why I Am an Optimist

January 13, 2011 by Socrates

People sometimes ask me why I am such an optimist about the progress of technology in general and the technological singularity in particular.

Well, my reply is simple.

I choose to focus on the upside. I choose to be a deliberate, conscious optimist.

That is not to say that I suggest we ought to ignore the many dangers that certainly lie ahead of us. What it means is that, once I've done my best and the die is cast, the only thing that is left for me is to enjoy the ride, focus on the bright side of life, and keep a little sense of humor on the way.

Tony Robbins says that, whether consciously or unconsciously, at any given moment in time we are always making the following decisions:

1. What do I focus on?

2. What does it mean for me?

3. How do I feel about it?

4. What am I going to do about it?

I choose to be very deliberate in those choices. Because not making a conscious choice is just another kind of choice – and, I believe, almost certainly a very bad one.

So, I prefer to be a conscious optimist, rather than an unconscious pessimist.

I have chosen to:

1. Focus on the evolution of technology as exhibited by the exponential growth of development in computer science, artificial intelligence, genetics, robotics and nanotechnologies.

2. Find the positive meaning and unparalleled opportunities of the above, not only for me but also for all of us.

3. Feel great about the future – both mine and that of the rest of humanity.

4. Start and host Singularity Symposium and Singularity Weblog – to popularize, discuss and shape our future, without forgetting or denying the equally great risks and responsibilities we are collectively carrying on the way.

Sometimes people interrupt me and say:

But you are an atheist! How can you be a true optimist if you don't believe in God? For only the Almighty can guarantee that in the end things will turn out for the best.

Well, would you allow me to be a pessimist about God? And, nevertheless, still insist I am an optimist?

Why do we need someone else (even God) to take responsibility for the outcome? Why can't we embrace the fact that with our exponentially growing power comes an equally growing responsibility? Should we blame Santa if our kids don't like their presents?

I accept there is no guarantee that in the 21st century things will turn out for the best. Yet this realization does not make me an immoral, evil or desperate nihilist. Just the opposite. It allows me to appreciate the time I spend here, the freedom I am presented with, and the consequent gravity of my personal, and our collective, decisions.

It is up to us to create the outcome, whatever it may turn out to be – heaven or hell, apocalypse or utopia. It is up to us to make a choice and take deliberate action towards accomplishing our goals. And even if it is not up to us, I prefer to err and take action rather than sit idly and observe from the sidelines of history.

No, I don’t need God’s existence or help – I know that I can be happy, prosperous and good without Him. Just like I can choose to be a miserable, evil, stupid and murderous killer of innocent people, all while shouting my particular version of God’s name.

It was not God who lifted us from the holes in the ground – we did, through our curiosity to explore and our intelligence to discover, channeled through the scientific method of inquiry. And it will not be His fault if we end up back there.

Thus, I choose to be guided by philosophy rather than superstition.

I choose to ask uncomfortable, skeptical questions rather than accept easy, convenient answers.

I embrace the scientific method and adore the Symphony of Science because there is real poetry in the real world and science is the poetry of reality. The same spiritual fulfillment that people seek in religion can be found everywhere in the universe.

***

Yes, in this century we are probably going to face the biggest challenges humanity has ever faced and we can approach that pivotal moment either as pessimists or optimists.

We could embrace Murphy's law and claim that if things can go wrong (isn't that always a possibility for anything worthwhile?!) then they surely will. Or we can embrace Moore's Law and say that things always get better, cheaper and faster, while our lives clearly become safer, more comfortable, longer and healthier.

We can proclaim that the TechnoCalyps is coming. Or that the Singularity is Near.

We can say that God has made us mortal. Or realize that it is us who made Him immortal.

It was not God, it was Science that changed the world and in this century it will help us change it more than ever: to transcend biology, go Beyond Human, build Human v2.0 and maybe even live forever.

Indeed, soon we’ll have to make a decision: Onwards to Utopia or backwards to the Dark Ages.

I believe that we not only can but actually will have "A Better Future" ahead of us (and a "Better You" too).

***

Let me finish off this personal manifesto with two great videos that may help you visualize the mixture of science and sense of humor at the cornerstone of my optimism.

The first video is from BBC Four. In it, statistics guru and program host Hans Rosling takes us through the last 200 years of global progress as measured in terms of life span and income. I believe it makes a very powerful argument, both visually and otherwise, as to why we all ought to be optimists.

The second video is from Monty Python's eternally funny and equally brilliant Life of Brian.

For me, a sense of humor, rather than belief in God, comes in rather handy when my very limited personal knowledge and logic fail to deliver enough optimism. I think that it can work for you too…

I see that humanity’s cup is already half full and we have the best chance ever to fill it up in this century.

That is why I choose to be an optimist!

And what about you?

Related articles
  • A Transhumanist Manifesto (singularityblog.singularitysymposium.com)
  • The Change Agent: Onwards to Utopia or backwards to the Dark Ages? (singularityblog.singularitysymposium.com)
  • Hans Rosling Shows You 200 Years of Global Growth in 4 Minutes (video) (singularityhub.com)

Filed Under: Funny, Op Ed Tagged With: Exponential growth, God, progress, Socrates, Technological Singularity

Is A Spiritual Singularity Near?

November 2, 2010 by Matt Swayne

The connection between human spirituality and advancing technology has proven to be deep and intimate.

As our understanding of technology grows, so does our understanding of the spiritual universe, it seems.

Over the eons, our conception of God has formed and reformed into the shape of our technology. Not only that, but spiritual figures have been seen as masters of current and future technology.

Early man saw gods and goddesses as hunters or warriors. Statues and artwork portrayed deities wielding the latest technology of destruction–bows and arrows, spears, and, sometimes, darting about in the new model year chariot.

The Middle East, where most of our current major religions fermented, saw the rise of a God who was more like a tribal chieftain.

As the Renaissance approached, God was no longer the angry warrior. He was a clockmaker, an expert, deterministic mathematician that only a Newtonian physicist could worship.

As the twentieth century dawned, the quantum mechanical revolution in physics appeared to change our watchmaker description of God. God became not just a computer scientist, but a quantum computer scientist.

Profound technological change and profound philosophical change are intermingled.

It’s easy to assume that technology produced the new philosophies. But, in true chicken and egg fashion, it’s harder to define the causal relationship. For instance, new optical technology could be used to verify the earth’s new (and not central) place in the solar system. Without the spirit of discovery that dared to seek answers in a climate when answer-seeking was punishable to death, the telescope would have been nothing more than a curiosity, or conversation piece.

Looking ahead, as greater and greater technological power appears on our horizon, we can speculate how this rapid change will influence our philosophies. Maybe it will be the machine age of spirits where human consciousness will use the fabric and machinery of reality to create their own version of reality? Or, maybe it’s an age where human consciousness ends.

Or, perhaps, the Singularity must wait on us. It must wait for our imagination to adapt to these new possibilities before change is even possible.

The Singularity, then, will ultimately become less about machines. And more about spirit.

About the Author:

Matt Swayne is a blogger and science writer. He is particularly interested in quantum computing and the development of businesses around new technologies. He writes at Quantum Quant.

Related articles
  • Jason Silva on Singularity Podcast: Let Your Ideas Be Noble, Poetic and Beautiful (singularityblog.singularitysymposium.com)

Filed Under: Op Ed Tagged With: God, Technological Singularity, Technology
