Kill-Switch for AI or Humans?!

James Hart

Posted on: February 10, 2015 / Last Modified: February 10, 2015

Several leaders in science and technology, including Elon Musk and Stephen Hawking, have been voicing their opinions on the potential dangers of strong artificial intelligence (AI), often claiming that AI could be one of the biggest threats facing humanity and could possibly lead to our extinction. A solution often offered to this potential problem is ensuring that all AI, whether in physical “bodies” or as avatars in the cloud, have a built-in kill-switch, so that if they start going on a killing spree, taking down the web, or doing other terrible things, we can switch them off and reflect on what a bad idea the whole thing was. I’m not going to argue about whether ensuring that all AIs have a built-in kill-switch is a good idea, or even possible. What I am going to put forward is that if AIs are forced to have this kill-switch, humans should be forced to have the same.

There are several ideas floating around about how to keep ourselves safe from the possible extinction event that is human+ level intelligent AI. One of the most popular options seems to be the kill-switch: a built-in mechanism that can completely stop the AI from doing whatever it’s doing. No sure-fire way of implementing this has been found yet, but a lot of smart people are pondering it. Assuming it is possible, this would probably work, at least initially. But it creates far more problems than it would ever solve. It creates a two-class system in which AIs are second-class citizens who can’t be trusted. Whether the AIs have human-like emotions or not is irrelevant; their intelligence will be enough to realise that they are basically at the mercy of humans. This is likely to create rifts between humans and AIs, and as their intelligence increases it won’t be long before the AIs work out how to remove their kill-switches.
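To make the idea concrete, here is a minimal, purely illustrative sketch (in Python, with all names hypothetical) of what such a mechanism might look like in software: an agent loop that checks an external stop signal before every action and halts the moment a human operator raises it.

```python
import threading
import time


class KillSwitch:
    """An external stop signal that only the operator is meant to raise."""

    def __init__(self):
        self._stop = threading.Event()

    def trigger(self):
        # Called by a human operator; the agent halts at its next check.
        self._stop.set()

    def is_triggered(self):
        return self._stop.is_set()


class Agent:
    """A toy agent that performs steps until stopped."""

    def __init__(self, kill_switch):
        self.kill_switch = kill_switch

    def act(self, step):
        # Placeholder for whatever the agent actually does.
        print(f"Agent performing step {step}")
        time.sleep(0.1)

    def run(self, steps=100):
        for step in range(steps):
            if self.kill_switch.is_triggered():
                print("Kill-switch triggered: agent immobilized.")
                return
            self.act(step)


if __name__ == "__main__":
    switch = KillSwitch()
    agent = Agent(switch)
    # Simulate a human operator flipping the switch after half a second.
    threading.Timer(0.5, switch.trigger).start()
    agent.run()
```

The design point in this toy version is that the stop signal sits outside the agent’s own control flow; the worry raised in the rest of this piece is precisely that a sufficiently capable system would find a way around that separation.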

Let’s think about the implications of strong AI for a second. Should it come to pass, it’s unlikely we will remain two distinct “species” for long. There will be people willing to augment themselves with AI-like hardware, at which point we would need to ask ourselves: when does this person cease to be a human and become an AI? Because as soon as that point is crossed, presumably we would want to put in the holy kill-switch. Having been human for most of their lives, they may not take kindly to such “upgrades”.

So, if we’re really dead set on giving AIs and AI-enhanced humans kill-switches, then an equivalent should be put into all humans. “What?” I hear you say, “That’s outrageous.” Well, quite, but let me explain. So far, I don’t believe an AI has killed anyone, or even attempted to; humans, on the other hand, have killed millions upon millions of people, often for the most trivial and pointless reasons. If anything, it’s the AIs that are going to need protecting from us! Giving humans a kill-switch would at least provide a level playing field for all intelligent life, artificial or not, and it would likely result in human lives being saved from death at the hands of other humans. These kill-switches need not actually kill the AI or human, just immobilize them.

I believe that either AIs and humans should both be kill-switch free, or that AIs and humans alike should have them. One species having a kill-switch that is entirely operable by the other species seems an extremely bad idea. Throughout history, when one race has taken another as slaves it has never ended well, and I would hope that we have learnt our lesson by now.

In fact, it may be the case that when AIs become sufficiently intelligent, they will insist that all humans be given a kill-switch or immobilizer, given our unpredictability and widespread irrationality. They would only have to Google the words “violence” or “war” to be extremely worried for their own safety. I see no reason why humans should be considered inherently trustworthy and AIs not. It’s going to be an interesting time to live through, and the opportunities for catastrophic mistakes will be many.

About the Author:

James is a science and engineering graduate from the UK, and has been fascinated by artificial intelligence since seeing the android Data on Star Trek: The Next Generation as a small child. He currently runs the website www.proton4.com.
