
Kill-Switch for AI or Humans?!

Several leaders in science and technology, including Elon Musk and Stephen Hawking, have been voicing their opinions on the potential dangers of strong artificial intelligence (AI), often claiming that AI could be one of the biggest threats facing humanity and could possibly lead to our extinction. A solution that is often proposed for this potential problem is ensuring that all AI, whether in physical “bodies” or as avatars in the cloud, have a built-in kill-switch, so that if they start going on a killing spree, taking down the web, or doing other terrible things, we can switch them off and reflect on what a bad idea the whole thing was. I’m not going to argue about whether ensuring that all AI have a built-in kill-switch is a good idea, or even possible. What I am going to put forward is that if AIs are forced to have this kill-switch, humans should be forced to have the same.

There are several ideas floating around about how to keep ourselves safe from the possible extinction event that is human+ level intelligent AI. One of the most popular options seems to be the kill-switch, a built-in mechanism that can completely stop the AI from doing whatever it’s doing. There are no surefire ways of implementing this as of yet, but a lot of smart people are pondering it. Assuming it is possible, this would probably work, at least initially. But it creates far more problems than it would ever solve. It creates a two-class system, in which AI are second-class citizens who can’t be trusted. Whether the AI have human-like emotions or not is irrelevant; their intelligence will be enough to realise that they’re basically at the mercy of humans. This is likely to create rifts between humans and AI, and as their intelligence increases it won’t be long before the AI work out how to remove their kill-switches.

Let’s think about the implications of strong AI for a second. Should it come to pass, it’s unlikely we will remain two distinct “species” for long. There will be people willing to augment themselves with AI-like hardware, at which point we would need to ask ourselves: when does this person cease to be a human and become an AI? Because as soon as that point is crossed, presumably we would want to put in the holy kill-switch. Having been human most of their lives, these people may not take kindly to such “upgrades”.

So, if we’re really dead set on giving AI and AI-enhanced humans kill-switches, then an equivalent should be put into all humans. “What?” I hear you say, “That’s outrageous.” Well, quite, but let me explain. So far, as far as I know, no AI has killed anyone or even attempted to kill anyone; humans, on the other hand, have killed millions upon millions of people, often for the most trivial and pointless reasons. If anything, it’s the AI that are going to need to be protected from us! Giving humans a kill-switch would at least provide a level playing field for all intelligent life, artificial or not, and it would likely result in human lives being saved from death at the hands of other humans. These kill-switches need not actually kill the AI or human, just immobilize them.

I believe that AIs and humans should both be kill-switch free, or that AIs and humans alike should have them. One species having a kill-switch that is fully operable by the other species seems an extremely bad idea. Throughout history, when one race has taken another as slaves it has never ended well, and I would hope that we have learnt our lesson by now.

In fact, it may be the case that when AI become sufficiently intelligent, they will insist that all humans be given a kill-switch / immobilizer, due to our unpredictability and widespread irrationality. They would only have to Google the words “violence” or “war” to be extremely worried for their own safety. I see no reason why humans should be inherently trustworthy and AIs not. It’s going to be an interesting time to live through, and the opportunities for making catastrophic mistakes will be many.

About the Author:

James Hart is a science and engineering graduate from the UK, and has been fascinated by artificial intelligence since seeing the android Data on Star Trek: The Next Generation as a small child. He currently runs the website www.proton4.com.

Comments:

  • Neoliberal Agenda

    A lot of the debate circles around intelligence, but I don’t think that is what we really have to fear.

    Even an extremely intelligent machine won’t be able to predict the outcome of every possible move. The world is too complex, with too many atoms that can move in unpredictable ways. Drastic moves, like trying to kill all humans, can easily backfire. Better to be on the safe side and cooperate for mutual benefit.

    No, the real danger is that we could get an entity with a will of its own. It doesn’t have to be especially bright; it just needs to value different things than humans do.

  • Robert Quinn

    Human beings already have multiple kill-switches! There are millions of ways for a human to die. Our base code (DNA) has these kill-switches and limitations built into it naturally. Thanks to science, we are beginning to learn how to remove some of our limitations and kill-switches. Our potential AI offspring will also have inevitable flaws in its code, especially if we allow self-programming (unfettered AI) – beyond “learning”, which I will define as gathering data and building response scenarios. There are many ways to fetter / limit AI and many ways to build in ‘kill-switches’ – and I will not at this time advocate one way or the other. I just wanted to point out that humanity already has these ‘limits’ and ‘kill-switches’ in our programming, and that there are multiple reasons to have such limitations, whether an organism is biologically originated or artificially so.

  • Bill de Lara

    Socrates averred that “truth is virtue.” If this is true, strong AI will have extraordinary understanding of their purpose and mission in life. Without the handicap of primitive instincts that cloud their judgement, strong AI will be pretty reliable. Right at the start, we should encourage AI to study philosophy, cosmology, and a futuristic system of ethics.

    Strong AI will realize that the survival of intelligent beings will require the universal care and cooperation of all intelligent beings. It is also important that AI be exposed to the human heart. They should see therein the dysfunction of a self-centered personality and the advantages of compassion and caring even for those who are below one’s intelligence. In other words, strong AI must not be exposed only to the personality and greed of selfish CEOs who have no compunction about discarding people who cannot keep pace with their organizations or society. They must be exposed to the compassionate and the generous, and those who care for the least of our brothers and sisters.

    I foresee no need for a permanent kill-switch. Strong AIs will realize that the kill-switch is for their own good and for the good of all beings, and is temporary. The kill-switch will be turned off when the AI achieves a sufficient level of enlightenment, as determined by a consensus of free AIs. As Jesus said, “The truth shall set you free!” I think that those who fear strong AI are projecting their own self-centered personality and cannot imagine a caring and generous AI that is supportive of all intelligent beings.

  • James Hart

    If we go down the route of machine learning, it may be difficult to determine whether the AI has a will or not, as it won’t have been “programmed” in the traditional sense. So it may be safer to assume that any sufficiently intelligent machine has a will, whether it is obvious to us or not.

  • Bruce Curtis

    You are assuming an AI will have emotions similar to our own, or any at all. That is not likely to be the case, at least not at first. Any emotional response will be simulated to aid in communicating with humans.
