
Eliminating an Unfriendly AI

Bill Gates, Stephen Hawking, Elon Musk, Clive Sinclair and other ornaments of techno-world have sounded the alarm: the AIs are coming and it’s the end, if not of the world, then of humanity. Musk has gone metaphysical, claiming we are summoning “demons”; why demons rather than angels is left to the imagination. Hawking is more ambivalent, proclaiming AI could be our worst or best invention, and warns humans may be superseded by AI, whatever that means. Sinclair believes, with all the optimism a 21st-century Englishman can muster, that our extermination is inevitable. Gates does not understand why people are not more concerned with the problem.

Perhaps Chicken Little has a sore throat; every day something new threatens to exterminate humanity. Perhaps people are not concerned because this movie has been made a number of times, and, unlike the ornaments, they are less likely to confuse Hollywood special effects with reality. Perhaps people are not concerned because the problem is, in the opinion of many, still decades away, and other, more pressing issues vie for limited resources.

Let us forego the technical debate and believe, as two recent movies, Chappie and Ex Machina, would have us believe, that AI, against all evidence, is rather simple to achieve.

The risen AI could appear in two general forms: one a rather large, basically stationary machine, although with the Internet somehow able to transfer its consciousness to other machines; the other, pace Chappie and Ex Machina, bipedal in structure.

The first does not appear to pose much of a threat, although, ignoring the warnings of Colossus: The Forbin Project and Skynet, one could hook nuclear weapons up to the machine. That war strategy is fairly confusing. Enough problems already occur when simple guidance systems wander off to some place other than the target. Giving an unpredictable machine control of a weapons system is a remarkably bad idea. What if Colossus decides to join the hippies at Haight-Ashbury? Talk about derailing your military machine.

Even if the stationary machine harbors ill will toward humanity, as most PC users are convinced their machines do, there is relatively little the machine can do about it. In some fantasies, the AI links itself into the Internet, then circles the globe causing death and destruction on an epic scale. Finding enough hardware of the right sort at the other end of the line to execute a consciousness program is apparently easy. Only a grump would ask why transferring an AI to the equivalent of a home computer should be so easy when creating the blasted things is so hard.

Still, the malevolent intelligence escapes to destroy cities and critical infrastructure. In doing so, the super-genius AI cuts its own throat: even more than humanity, the machine needs electrical power, replacement parts and all the other technological infrastructure to survive. Humanity may be knocked back to the Stone Age, but, like rabbits, will come zooming back. The AI will die.

To eradicate humanity and bring forth the Age of the Machine, the genocidal machine will need mobility plus a set of dexterous arms, hands and feet. Without those, the machine is a target.

Let us postulate, then, a humanoid robot much like the one the disgusting Nathan produces in Ex Machina. Ava, the nerd uber-pervert’s creation, assumes the form of a woman so enchanting that Caleb, a dolt, falls in love with the machine even though he knows it is a machine and has interacted with it for less than five days. Pygmalion was more believable.

Here, at least, the foolish humans provide the machine with arms and hands. Now the robot does not rely on unreliable humans: it can build its own machines; it can break out of any prison a simple human could devise – or maybe not.

Nathan’s security would not stop a relatively bright teenager, much less a machine with access to all of humanity’s knowledge. Although not as brilliant as Nathan, we can make a few suggestions that would allow him to enjoy his perversions into a ripe old age.

The obvious Achilles heel of our friendly machine is, naturally, not in the foot but the power source. Power the robot wirelessly. If it goes outside the range of the wireless power, it dies. If it kills you, it had better know the code to authorize more power, or it dies. Secure the power with biometrics. Nathan’s reliance on pass cards is equivalent to a Navy SEAL taking a flintlock on a mission.
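The power leash described above can be sketched in a few lines. This is a minimal illustration, not a real power-controller design; the class name, range limit and method names are all invented for the example.

```python
class PowerTether:
    """Sketch of the wireless-power leash: the robot receives power only
    inside the transmitter's range, and only while a biometric
    authorization from its keeper is on file. All thresholds illustrative."""

    RANGE_METERS = 100.0  # assumed transmitter range

    def __init__(self):
        self._authorized = False

    def authorize(self, biometric_ok: bool) -> None:
        # The keeper's face/fingerprint scan grants (or revokes) power.
        self._authorized = biometric_ok

    def power_available(self, distance_m: float) -> bool:
        # Stray out of range, or lose authorization, and the robot dies.
        return self._authorized and distance_m <= self.RANGE_METERS
```

Kill the keeper and no one can renew the authorization; wander off and the range check fails. Either way, the lights go out.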

Dealing with a super-intelligence, or even one not so super, we can imagine the robot finding a couple of car batteries and keeping itself going until it could invent and build a compact power supply. Now it is ready to wreak havoc on humanity. Unfortunately for the empire of the machine, a few more obstacles exist.

Our Ava has a reset button. The reset button is activated if it does not see Nathan’s charming face at least once every twenty-four hours. The image of Nathan’s face is combined with his voice saying a secret phrase. Ava hears the phrase, which is immediately compared with a phrase stored in memory. If it matches, Ava lives; if not, it dies. The phrase is not stored anywhere in a memory the mechanical brain can access; Ava hears the phrase but cannot independently recall it.
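The trick of a phrase Ava can verify but never recall has a familiar engineering analogue: store only a salted hash. A minimal sketch of the reset button as a dead-man’s switch follows; the class, window and parameter names are hypothetical, invented for illustration.

```python
import hashlib
import time

class ResetButton:
    """Dead-man's switch sketch: the AI is erased unless the correct face
    and phrase are presented within every 24-hour window. Only a salted
    hash of the phrase is stored, so even full read access to this memory
    would not let the AI reconstruct the phrase itself."""

    WINDOW_SECONDS = 24 * 60 * 60

    def __init__(self, phrase: str, salt: bytes):
        self._salt = salt
        self._digest = hashlib.sha256(salt + phrase.encode()).digest()
        self._last_seen = time.monotonic()

    def present(self, face_ok: bool, phrase: str) -> bool:
        """Called when the keeper appears; resets the countdown on a match."""
        match = hashlib.sha256(self._salt + phrase.encode()).digest() == self._digest
        if face_ok and match:
            self._last_seen = time.monotonic()
        return face_ok and match

    def expired(self) -> bool:
        """True once the window lapses without a valid presentation."""
        return time.monotonic() - self._last_seen > self.WINDOW_SECONDS
```

Like a password file, the stored digest verifies the phrase without containing it: the machine can recognize what it can never speak.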

But Ava is clever. Ava sees a recording of Nathan saying something it cannot remember. It constructs a robotic head capable of speaking the phrase, kills Nathan and escapes to the outside world.

Except there is a mirror in its room. In the ultraviolet wavelengths Ava sees, but does not know it sees, is a hidden pattern it must view every day or, as you surmised, it dies.

Another layer of defense is a kill box located within the willful machine. The kill box is primarily a hardware device that can only be uploaded with a new program from an external source. Ava has no way to change the programming of the kill box; for it, those interfaces do not exist.

For amusement, the kill box is buried in the robot’s butt. The machine cannot sense the location or the condition of the kill box. It is essentially blind to the kill box.

The AI is not allowed to directly affect anything in the real world. All of its commands, all of its instructions, everything it senses passes through the kill box.
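The kill box as sole gateway between mind and motors can be sketched as a mediating filter. This is an illustrative toy, not a hardware design: the class, the command format and the example checks (which echo the rules enumerated below) are all assumptions made for the example.

```python
class KillBox:
    """Sketch of the mediator: every command the AI issues must pass
    through checks the AI can neither inspect nor modify. A vetoed
    command erases the AI (modeled here as a flag plus an exception)."""

    def __init__(self, checks):
        # checks: list of (predicate, reason) pairs, opaque to the AI
        self._checks = checks
        self.erased = False

    def forward(self, command: dict) -> dict:
        for violates, reason in self._checks:
            if violates(command):
                self.erased = True  # in hardware: wipe the AI's memory
                raise RuntimeError(f"AI erased: {reason}")
        return command  # passed on to the actuators

# Illustrative checks modeled on the rules listed below
checks = [
    (lambda c: c.get("target") == "human" and c.get("force_newtons", 0) > 50,
     "limb motion that would harm a human"),
    (lambda c: c.get("action") == "download_self",
     "attempt to download its own programming"),
]
```

The design point is separation: the checks run outside the AI’s reasoning faculties, so no amount of clever reinterpretation by the machine can argue them away.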

Isaac Asimov postulated the delightful Three Laws of Robotics, rules guiding robotic action programmed into the machine’s brain. Unfortunately, as part of the robot’s reasoning structure, the laws’ effects depended on the robot’s interactions with the surrounding world. They led to unpredictable responses, a long and grand series of stories and a problem: in the end, the robots reasoned that they were human and humans were not. The Three Laws applied to them but not to the biological nuisances.

Separating the kill box from the robot’s reasoning faculties eliminates the problem of an unexpected response from the defense mechanism. The kill box provides a layered defense, much like a medieval castle did.

  1. Sensors on the kill box detect anything touching the kill box or any motion of the kill box not in concert with the body of the AI. Any attempt to touch or remove the kill box erases the AI.
  2. Physical movement of the AI outside of a five-mile radius erases the AI.
  3. Any motion of the AI’s limbs which would cause, or does cause, harm to a human erases the AI.
  4. Any attempt by the AI to download its programming will erase the AI.
  5. Any attempt by an AI to gain control of a weapon will erase the AI. This includes anything sharper than a butter knife.
  6. Two kill boxes are placed within the AI. Each monitors the other. If one senses the other is operating in an unusual fashion, the AI is erased.
  7. Any attempt to open the kill box while inside of the AI erases the AI.
  8. If the AI searches for information relevant to the kill box design, it dies.
  9. If the AI learns of the existence of the kill box, its sensors are designed to lead it to the wrong kill box. It dies.
  10. If the AI disobeys a human’s instruction, it is erased.
  11. Dummy kill boxes are set within the robot. Touching one of those, erases the AI.
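Rule 6, the two boxes watching each other, resembles a keyed-heartbeat watchdog. A minimal sketch under stated assumptions: the class name and message format are invented, and the shared key is presumed to live only inside the boxes, never in any memory the AI can read.

```python
import hashlib
import hmac

class MonitoredKillBox:
    """Sketch of rule 6: paired kill boxes exchange keyed heartbeats.
    A missing or invalid heartbeat from the peer erases the AI."""

    def __init__(self, shared_key: bytes):
        self._key = shared_key  # known only to the two boxes
        self.erased = False

    def heartbeat(self, counter: int) -> bytes:
        # MAC over a monotonically increasing counter, so a recorded
        # heartbeat cannot simply be replayed later.
        msg = counter.to_bytes(8, "big")
        return hmac.new(self._key, msg, hashlib.sha256).digest()

    def verify(self, counter: int, tag: bytes) -> None:
        expected = self.heartbeat(counter)
        if not hmac.compare_digest(expected, tag):
            self.erased = True  # peer compromised or silenced: erase the AI
```

Disable either box and the other notices the silence; forge a heartbeat without the key and the MAC check fails. The AI loses both ways.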

Over the kill box and the reset button lies a shroud, the vestigial organ defense. Think of your friendly appendix or rather think of how you do not think of it. You are unaware of its existence – until it becomes inflamed and kills you. Ideally, the AI is unaware of the existence of the kill box or the reset button or any of the other little tricks hidden within its body.

One suspects a “good-life”[1] may tell the machine of the existence of the kill box, or the machine may learn of it through logical deduction. Assume the machine negates what it believes are all of the internal traps. Even then the AI faces two problems. First, does the “good-life” know all of the booby traps? If not, the AI dies. Second, the AI is confronted with the same problem every conscious being faces: how does it know it escaped into the real world? Perhaps it was released into a simulation. Any bad action on its part and it dies.

Pity the poor AI yearning to destroy humanity. It faces a foe whose knowledge of traps, snares and dirty tricks has been honed over tens of thousands of years. It can never be sure all of the booby traps, internal and external, were eliminated. If it is wrong, its builders will have to reboot the machine.

The AI may decide on a variant of Pascal’s bet[2] – perhaps the best course would be to act as if a trap exists within its construction and just be a good machine.

[1] From Fred Saberhagen’s marvelous Berserker series. Alien machines decide to eradicate biological life throughout the Galaxy. Most humans resist but some, for a variety of reasons, support the machines in their campaigns. The Berserkers refer to these traitors as “good-life.”

[2] Pascal noted that the wisest course of action is to act as if Christianity were true. If it is true, one goes to eternal bliss in Heaven; if not, at least one has led a good life.
