The Hawking Fallacy Argued, A Personal Opinion

Michelle Cameron

Posted on: May 19, 2014 / Last Modified: May 19, 2014

This article is a response to a piece written by Singularity Utopia (henceforth SU) entitled The Hawking Fallacy. Briefly, the Hawking Fallacy is SU’s attempt to characterise any negative or fearful reaction to strong artificial intelligence, also known as artificial general intelligence (AGI), as an irrational fear or a logical fallacy.

SU further holds the opinion that when AGI arrives it will almost certainly be benevolent, ushering in a period of abundance and super-intelligence that will, within a very short time, result in the technological singularity.

It all began with a short piece authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek – all notable in their fields of science. It was first published in the Huffington Post on 19 April 2014 under the title Transcending Complacency on Superintelligent Machines, and then in the Independent newspaper on 1 May 2014 as Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?

However, it was the latter, carrying a subtitle from the editors of the Independent – “Success in creating AI would be the biggest event in human history. Unfortunately it might also be the last, unless we learn how to avoid the risks, say a group of leading scientists.” – that seems to have most troubled some commentators, and which SU felt obliged to address.

Following the launch of the Hollywood film Transcendence, starring Johnny Depp and Morgan Freeman, and the article shortly after, much of the media has been abuzz with interpretations, opinions, sensationalist reporting, and scare tactics concerning the warning issued by the four scientists. In some cases commentators added more to the story than was initially there, perhaps causing quite a stir given the respect society tends to have for Hawking in particular.

On the heels of these and other reports of Hawking’s apparent belief in impending doom from AI came the response from Singularity Utopia, The Hawking Fallacy, published on 10 May 2014, in which the writer concludes that Hawking is irrationally afraid of AGI and should therefore be made an example of. The implication is that an otherwise brilliant man has succumbed to dystopian fears that are beneath his intelligence.

The use of the term fallacy is no accident, and neither is the use of Hawking’s name, as SU explains: “My emphasis of Stephen’s name will be an enduring warning against the folly of succumbing to prejudice regarding artificial intelligence.”

There are many articles online discussing Hawking’s warning about AI, and I don’t plan to rehash them here; instead I will focus my energies on Singularity Utopia’s Hawking Fallacy.

The idea that AGI is to be feared is of course not new; it has been a recurring theme in science fiction stories, films, and popular culture since the very concept of as-smart-as-human robots was first conceived.

Samuel Butler’s Erewhon, published in 1872, is certainly an example of what one might consider an irrational fear of intelligent machines. Once computers reached certain levels of processing power, authors substituted AI for robots, in examples such as Jack Williamson’s With Folded Hands, Dennis Feltham Jones’ Colossus, and James Cameron’s Terminator series.

Of course not all science fiction condemns AI or robots, or even considers that our world will change terribly much once they emerge. Perhaps the greatest advocate of robots in our society was Isaac Asimov, though it could be argued that even he was suspicious of robots, or he wouldn’t have created the Three Laws.

Singularity Utopia might encourage us to believe that AGI will necessarily be both benevolent and highly rational, and for that we could look to Iain M. Banks’ Culture novels, or to the philosopher AI Golem in Stanislaw Lem’s Golem XIV. These aren’t to be feared, at least not as they were written.

But must AI necessarily be more capable than humans and either antagonistic or friendly toward humans? Must AI even exist separately from humanity? Is our destiny to remain human?

Singularity Utopia makes the point that any AGI that emerges from any of these technologies would, by virtue of being super-intelligent, immediately follow the path that Stanislaw Lem’s Golem took.

Let’s take a moment and look at the issue through SU’s eyes. SU frequently comments that super-intelligence will be rational, and that rational beings will have no interest in eradicating humanity, or indeed in enslaving humanity as a source of energy. Why?

SU argues that higher orders of intelligence and rationality go hand in hand, and that AGI, when it emerges, would be more interested in forming strong bonds with humanity. I must confess to finding this thought appealing, but hoping for this outcome doesn’t alter the reality of what might actually happen.

This is where I think I disagree both with Singularity Utopia and with Stephen Hawking and his co-writers. You see, I don’t think this is a black-or-white discussion.

Ray Kurzweil advocates merging with AI by uploading our minds and possibly abandoning our corporeal bodies. We would then have nothing to fear from AI because we’d be part of the AI. Kurzweil naturally sees this as a positive step in our evolution, though one could argue that Captain Picard experienced something quite different.

Kurzweil’s suggestion could easily lead to widespread panic and, perversely, provoke exactly the fratricidal response from organic humans that Hawking et al. warn might come instead from rogue AI.

But there is a more plausible future possibility, one that Hawking himself is already experiencing, albeit with narrow AI: that we will drive our own evolution toward a human-AI future in which non-organic components containing narrow AI are initially used to augment our capabilities, then nanotech embeds AGI within our minds, and finally we design epigenetic modifications that fully merge human bodies with our creations. We could become super human, with AGI sharing the same corporeal body as us.

If this happens, why would we fear ourselves? Yes, SU is correct that such a fear would be irrational, but there is no fallacy to describe here. Hawking and colleagues do not fear what they don’t understand. Their letter doesn’t reveal a deep-seated hatred of AI; indeed, their warning is not based on a lack of understanding, as you might expect if they genuinely feared AGI.

In point of fact, Stephen Hawking himself has given a speech entitled Life in the Universe in which he suggests that our AI children may colonize the galaxy and beyond. In their most recent letter, Hawking and his co-writers are using their position to shine a light on an issue that others are also talking about, people like Ben Goertzel, Nick Bostrom, Eliezer Yudkowsky, and Luke Muehlhauser.

That issue is that we as a society are not talking enough about the ethics of creating AI, even if a few noted experts are. The UN discussion on banning autonomous weapons is really only a beginning.

At present, narrow AI is already in use in a multitude of scenarios from Apple’s Siri, to the tools Stephen Hawking uses to communicate, to the autonomous weapons being tested by various militaries and their contractors around the world.

Therein lies the warning from Hawking and his co-writers. Strong AI and autonomous robots are being researched by people whose business is war, finance, resource planning, and almost every other activity that makes our society function.

Google’s purchase of Boston Dynamics, and the withdrawal of that company from any further involvement in military robotics, settled my nerves somewhat. Similarly, the establishment of an ethics board when Google announced its purchase of DeepMind was welcome.

But Google are just one of many large corporations with active investments in AI research. Robotics and AI are also being developed in nations that believe they face existential threats from their neighbours; Israel, South Korea, Iran, and Taiwan are just a few that come to mind. Their track record of safeguarding technology, and of keeping it out of the hands of tyrannical states and terror groups, is not always encouraging.

This is why Hawking and his co-writers issued their warning.

Since popular culture is already well acquainted with robots and AI as enemies of humankind, naming a fallacy in Hawking’s honour, simply because he is one of the most well-known scientists on the planet to issue a warning, seems both spiteful and unnecessary. One might imagine that if psychologists were involved they’d coin the term Matrix Phobia, or Terminator Shock Syndrome. At least those terms would be recognisable, certainly more so than the “Hawking Fallacy”.

I’ll leave the final word to Hawking, from an editorial in the Guardian newspaper, when his thoughts about climate change received the same heated responses as his letter about AI has: “I don’t know the answer. That is why I asked the question.”

 

About the Author:

Michelle Cameron is an English teacher and career coach in Spain, and a Singularitarian. She is especially interested in transhumanism, the quantum world, and space exploration.
