
Turing test

The Turing Test Trail to Turmoil

November 4, 2015 by Charles Edward Culpepper

“Turing Test version 3” by Bilby – Own work. Licensed under Public Domain via Commons

One grotesque error in an otherwise outstanding film, Ex Machina (2015), is when the character Nathan Bateman (Oscar Isaac) asks the character Caleb Smith (Domhnall Gleeson) if he knows what the Turing Test is, despite the fact that both Nathan and Caleb are programming geeks. In fact, Nathan brought Caleb to his home/research facility expressly because of Caleb's computer background and solitary status. What tech-head doesn't know about the Turing Test?

Knowledge of the Turing Test is a general phenomenon in the tech world, but far fewer people know all the details, and fewer still understand them. The paper Turing wrote, Computing Machinery and Intelligence (published 1950, Mind 59: 433-460), is divided into seven sections. In the first section Turing poses a question: "Can machines think?" And he immediately insists that answering it requires defining both terms, "machine" and "think".

He emphatically warns against trying to define the words based on common usage, comparing such an approach to taking a statistical survey like a Gallup poll. In effect he implies that it would be like voting for particular meanings, a methodology he calls absurd. Therefore, he suggests replacing the original question with another question that he believes is closely related to it and that can be expressed in relatively unambiguous words.

To accomplish the goal of finding proper terminology, he creates what he calls "the imitation game". We now refer to it as the Turing Test. As most people know, the test involves an interrogator, or interrogators, who try to determine by asking questions of a player whether the player is human. If the player succeeds in convincing the interrogators that it is human, then the presumption is that the machine possesses some kind of intelligence, i.e., something related to thinking. The test and the idea it embodies have become nearly a cult.
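Turing's setup boils down to a blind question-and-answer loop followed by a judgment. The following toy harness is only an illustrative sketch of that structure; the players, the single question and the naive interrogator are all invented for the example, not anything Turing specified:

```python
import random

def imitation_game(players, interrogate, rounds=1000):
    """Each round a hidden player is questioned and the interrogator
    guesses 'human' or 'machine'; return the fraction of wrong guesses."""
    fooled = 0
    for _ in range(rounds):
        player = random.choice(players)  # identity is hidden from the judge
        if interrogate(player) != player["kind"]:
            fooled += 1
    return fooled / rounds

# Invented players that happen to answer identically:
players = [
    {"kind": "human",   "answer": lambda q: "I believe I can."},
    {"kind": "machine", "answer": lambda q: "I believe I can."},
]

def naive_interrogator(player):
    # A judge with only one question and one heuristic.
    reply = player["answer"]("Can you think?")
    return "human" if "believe" in reply else "machine"

rate = imitation_game(players, naive_interrogator)
print(f"fooled on {rate:.0%} of rounds")  # around 50%: the judge is at chance
```

When the machine's answers are indistinguishable from the human's, the judge is reduced to guessing, which is exactly the "pass" condition the article goes on to criticize.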

Hugh Loebner started a contest that offers the Loebner Prize (approximately $3,000). The contest has attracted a bevy of crazies and charlatans. However, two grand luminaries of the digital vista, Ray Kurzweil and Mitch Kapor, have a running bet of $20,000, with Ray wagering that the Turing Test will be passed by 2029 and Mitch that it will not. This wager is administered by the Long Now Foundation, and the winnings will be donated to a charitable organization determined by the winner.

There are many significant supporters and detractors of the test. One notable critic, George Dvorsky, wrote an article, Why the Turing Test is Bullshit (io9, 2014). A more prominent critic – particularly in the realm of academic AI researchers – is Marvin Minsky, who commented on Singularity 1 on 1 (12 Jul 2013): "The Turing Test is a joke, sort of saying a machine would be intelligent if it does things that an observer would say, must be being done by a human. So it was suggested by Alan Turing as one way to evaluate a machine. But he had never intended it as a way to decide whether a machine was really intelligent." He has also referred to it as a publicity stunt that does not benefit AI research.

Aristotle said that men have more teeth than women. He was not stupid and could count. He was a genius, but capable of a multitude of errors. The same could be said of Alan Turing. But let's think about how absurd – to co-opt Alan's use of the term – is the idea that convincing an interrogator, even a highly intelligent one, could substitute for understanding the principles of intelligence. Dear Alan – WRONG!

Not only was Turing wrong, but he was wrong in a particularly egregious way. Convincing people is what politicians do, utilizing a small number of not especially intelligent tactics. Advertisers produce buy-in behavior, often in people who have not actually made a rational decision about the product. My point is that "canned behavior" can substitute for intelligence very effectively. The "imitation game" almost demands trickery. It certainly incentivizes it immensely.

But the real gunk in the goop here is that the interrogator functions primarily as a voter. Or to put it in starker terms, what Turing did was to replace something he explicitly said should not be voted on with something that is, quintessentially, nothing more than a vote. He effectively defined "think" or "intelligence" as "believable". This reduces AI to a magic act.

The Turing Test tries to skirt around the meaning of "thought". The history of the definitions of thought and/or intelligence has been a circus of semantics. People – including extremely educated ones – have described both as everything from "if-then" rules to spiritual privilege, and everything imaginable in between. And then there are the fatalistic few who espouse the idea that you simply cannot define thought or intelligence – that it is impossible to do so.

It seems to me that Shane Legg has cleared the big junk from the field of AI by characterizing the four major questions of intelligence in such a way that there is less confusion about what is being talked about. He differentiates the big questions that concern intelligence: (1) the internal workings of a human mind (thoughts); (2) the external workings of the human mind (behavior); (3) the internal workings of ideal intelligence (logic); (4) the external behavior of ideal intelligence (actions).

4 major questions of intelligence

The Turing Test is a way of defining human intelligence based solely on behavior. John Searle’s thought experiment, The Chinese Room, is a way of defining human intelligence based solely on the internal activity of the human mind. Aristotle’s logic is a way of defining what is true for any intelligence based solely on the internal properties of the analysis. Marcus Hutter’s assessment of universal intelligence is based solely on the external behavior of the intelligent entity.

People confuse themselves by trying to articulate internal properties by referring to external properties, or vice versa. This is the mistake Turing made, because he did not know how to define the inner workings of intelligence. And people confuse themselves by speaking of universal intelligence as if it could only be a version of human intelligence or vice versa. The best example is the anthropomorphizing that people do when talking about all intelligence as if it were either inferior or superior to human intelligence.

In section six of his 1950 paper, Turing writes, "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion." What I believe Turing means is that no one knows what thinking is and therefore it is not possible to determine whether machines are doing it. We of 2015 may not have a detailed, or even adequate, definition of thinking; but we are getting better at detecting things that are part of thinking and things that are not. The Turing Test is not totally useless, because it is another thing that computers can be good, mediocre or bad at. But the Turing Test is, in the broad context of serious AI research, a triviality. And its value is diminishing with time.

The best word on technological unemployment that I know of is Carl Benedikt Frey and Michael Osborne's The Future of Employment: How Susceptible Are Jobs to Computerisation? (17 Sep 2013). On page four they write: "While the computer substitution for both cognitive and manual routine tasks is evident, non-routine tasks involve everything from legal writing, truck driving and medical diagnoses, to persuading and selling. In the present study, we will argue that legal writing and truck driving will soon be automated, while persuading, for instance, will not."

The assumption that Frey and Osborne make regarding persuasion is based on the inability of machines to do creative thinking. They are basing the internal workings of machine intelligence on the external behavior of human beings. I would caution them not to base convictions about inner abilities on observations of outer behavior. I would also remind them that most people are only superficially creative, and that there are ways of getting computer programs to produce novel output by utilizing recursive stochastic routines that get culled by standards of valuation.
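"Recursive stochastic routines culled by standards of valuation" describes, in essence, random variation plus selection. A minimal sketch of that loop, in which an invented target string stands in for the "standard of valuation":

```python
import random

TARGET = "novelty"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def score(candidate):
    # The valuation standard: count characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # The stochastic routine: randomly perturb one character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

# Start from pure noise, then repeatedly cull and re-expand.
population = ["".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
              for _ in range(50)]
for _ in range(200):
    population = sorted(population, key=score, reverse=True)[:10]   # cull
    population += [mutate(random.choice(population)) for _ in range(40)]

print(max(population, key=score))  # very likely converges to "novelty"
```

The output is "novel" only relative to its random starting point, of course; the point is merely that variation-plus-culling produces output no one typed in directly.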

 

About the Author:

Charles Edward Culpepper, III is a Poet, Philosopher and Futurist who regards employment as a necessary nuisance…

Filed Under: Op Ed Tagged With: Turing test

Marvin Minsky on AI: The Turing Test is a Joke!

July 12, 2013 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/202482822-singularity1on1-marvin-minsky.mp3


Marvin Minsky is often called the Father of Artificial Intelligence and I have been looking for an opportunity to interview him for years. I was hoping that I would finally get my chance at the GF2045 conference in New York City. Unfortunately, Prof. Minsky had bronchitis and consequently had to speak via video. A week later, though still recovering, Marvin generously gave me a 30-minute interview while attending the ISTAS13 Veillance conference in Toronto. I hope that you enjoy this brief but rare opportunity as much as I did!

During our conversation with Marvin Minsky we cover a variety of interesting topics such as: how he moved from biology and mathematics to Artificial Intelligence; his personal motivation and most proud accomplishment; the importance of science fiction – in general, and his take on Mary Shelley’s Frankenstein – in particular; the Turing Test; the importance of theory of mind; the Human Brain Project; the technological singularity and why he thinks that progress in AI has stalled; his personal advice to young AI researchers…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Marvin Minsky?

Marvin Minsky has made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics. In recent years he has worked chiefly on imparting to machines the human capacity for commonsense reasoning. His conception of human intellectual structure and function is presented in two books: The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind and The Society of Mind (which is also the title of the course he teaches at MIT).

He received his BA in mathematics from Harvard (1950) and his Ph.D. from Princeton (1954). In 1951 he built the SNARC, the first neural network simulator. His other inventions include mechanical arms, hands, and other robotic devices, the Confocal Scanning Microscope, the "Muse" synthesizer for musical variations (with E. Fredkin), and one of the first LOGO "turtles". A member of the NAS, NAE, and Argentine NAS, he has received the ACM Turing Award, the MIT Killian Award, the Japan Prize, the IJCAI Research Excellence Award, the Rank Prize, the Robert Wood Prize for Optoelectronics, and the Benjamin Franklin Medal.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Marvin Minsky, Turing test

A Turing Test Point of View: Will the Singularity be Biased?

March 25, 2011 by Nikki Olson

Computers, by their very nature, don't need to have a point of view. However, for our purposes, it is often preferred that they do.

In the days before natural language processing, this manifested as a bias towards other computers. For example, Macintosh hardware didn't run Windows software until 2006, and printers weren't recognized by PC hardware without deliberate driver installation until Windows 7 came out in 2009.

But as of late, computers have become capable of holding a new kind of 'bias': a 'biased' opinion about human beings, and about the world at large.

This past year computers began working as journalists, writing articles about data-intensive topics such as weather and sports.

For articles generated by the software program Statsheet, sports readers cannot tell over 80% of the time whether a computer or a human wrote the article. Say what you will about sports fans, but a large part of this software's success has to do with the successful incorporation of 'bias' into the articles.

In contemporary society, a major portion of the journalism industry is devoted to the production of 'biased' articles. Sports fans, for instance, like to read articles that favor their home team rather than those that provide an objective assessment of the situation. As demonstrated by Statsheet, computer-generated articles that sympathize with the shortcomings of the local team, and overemphasize the team's successes, are more likely to fool readers into thinking that 'someone', rather than 'something', wrote the article.

As is often emphasized, part of being human and not being a computer, at least in 2011, is being 'conscious'. With consciousness comes subjectivity: a point of view, or a knowledge gap between how things look to you and how things really are.

It has long been realized that in order for a computer to pass the Turing Test it will have to be able to imitate human strengths as well as human weaknesses. So in 2029, or when the first computer passes the Turing Test, we will still want computers to have a ‘point of view’.

But will the first computer that exceeds human intelligence have a point of view?

Despite the incompatibility of "subjectivity" and "objectivity" in human reality, perhaps a conscious computer smarter than we are will become the first real entity to possess both at once. The closest analogy to this, though not quite exemplifying the notion, might be the Orwellian 1984 concept of Doublethink: holding two conflicting ideas in mind at once and accepting them both.

Empirical inquiry tells us that a Singularity will likely happen, but it can do little to tell us about the likely ‘subjectivity’ of that Singularity. If it is indeed conscious, will subjectivity restrict a computer as it restricts the human mind?

In many ways, computers will become more than we are, and be capable of more than we can even imagine – literally. This is just one more way in which this might be true, because when computers are as smart as we are, we will not be able to think like they do. They will likely have modes of approach to questions that are completely foreign to the human mind.

About the Author:

Nikki Olson is a writer/researcher working on an upcoming book about the Singularity with Dr. Kim Solez, as well as relevant educational material for the Lifeboat Foundation. She has a background in philosophy and sociology, and has been involved extensively in Singularity research for 3 years. You can reach Nikki via email at [email protected].

Filed Under: Op Ed Tagged With: Turing test

Top 10 Singularitarians of All Time

January 23, 2010 by Socrates

The technological singularity is the event, or sequence of events, likely to occur at or after the birth of AI, especially when artificial intelligence surpasses biological, i.e. human, intelligence.

Since it is human intelligence which, in one way or another, is still the primary cause and ultimate mover behind AI, there are a number of people who either had or continue to have enormous impact on the singularity.

Some of those are scientists who work diligently in fields as varied as Genetics, Robotics, Nanotechnology or Artificial Intelligence. Others are theorists and science fiction writers who have been the inspiration behind both the concept and the science, and have shaped the popular perception of what the singularity will, could or should be. Still others have been vehement critics who have either argued powerfully against the singularity or have taken direct action to prevent it. It is all those people whom, because of their lasting impact, I consider to be, broadly speaking, the top singularitarians.

So, who are the top 10 singularitarians of all time?

Singularity Weblog’s Top 10 Singularitarians of All Time

1. Ray Kurzweil

Without any doubt Ray Kurzweil is the most famous and widely recognized singularitarian. He is the person most responsible for the popularization of the concept of the technological singularity and is sometimes referred to as “the singularity prophet” (both in the positive and the negative sense of the word).

Ray is an inventor and well-published futurist who, among other things, is famous for: predicting the collapse of the Soviet Union and the rise of the Internet; believing that he can live forever; claiming that he will eventually bring his late father back from the dead; writing persuasively about the Law of Accelerating Returns; starting up a number of successful tech companies; and being one of the founders of Singularity University.

Some of Ray Kurzweil's most famous books include The Age of Intelligent Machines, The Singularity is Near and Transcend: Nine Steps to Living Well Forever. Ray has appeared in a number of documentaries about the singularity or technology in general, most notable of which are Transcendent Man and The Singularity is Near, which he wrote and produced himself.

2. Vernor Vinge

It was Hugo Award winner Vernor Vinge who coined the term technological singularity, an idea he explored in his science fiction novel Marooned in Realtime.

Arguably the second most recognized singularitarian, Vernor Vinge spent most of his life in San Diego, California, where he still lives today. There he taught mathematics and computer science at San Diego State University for over thirty years. Today Vinge is widely sought as a public speaker and presenter for business, science, science fiction and general audiences.

Vernor Vinge has won Hugo Awards for several of his works, such as A Fire Upon the Deep (1992), A Deepness in the Sky (1999) and the novella Fast Times at Fairmont High (2001). Known for his rigorous hard-science approach, Vinge first became an iconic figure among both cybernetic scientists and sci fi fans with the publication of his 1981 novella True Names, widely considered to be the visionary work behind the internet revolution. Later he gained even more public attention for coining the term, and for writing and presenting about, the technological singularity.

3. Karel Čapek

Karel Čapek earns his spot as a top 10 singularitarian for popularizing the term robot, which the Czech writer introduced in the 1920s in his play R.U.R. (Rossum's Universal Robots).

The play is set in an island factory for "artificial people" that Čapek called robots. Čapek's robots looked like normal people and could think for themselves, yet, at least for a while, they seemed happy serving their human masters. Eventually, however, the robots rebelled, exterminated all humans and took over the world.

4. Isaac Asimov

Isaac Asimov was a prolific writer who wrote or edited over 500 books but is most famous for his collection of robot short stories which were eventually published together under the common name I, Robot.

Following in Čapek's footsteps, Isaac Asimov earns his place among the top 10 singularitarians for coining the Three Laws of Robotics in his 1942 short story Runaround. The three laws state that:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

5. Samuel Butler

It was during the relatively low-tech mid-19th century that Samuel Butler wrote his Darwin among the Machines. In it, Butler combined his observations of the rapid technological progress of the Industrial Revolution with Charles Darwin's theory of the evolution of species. That synthesis led Butler to conclude that the technological evolution of machines would continue inevitably until machines eventually replaced men altogether.

In other words, Samuel Butler was the first to claim that it was the race of intelligent machines (AI), and not the race of men, that would be the next step in evolution. He further developed that and other subsequent ideas in "The Book of the Machines", three chapters of his book Erewhon, which was published anonymously in 1872.

In Erewhon Samuel Butler argued: “…that the machines were ultimately destined to supplant the race of man, and to become instinct with a vitality as different from, and superior to, that of animals, as animal to vegetable life.”

The above conclusion led Butler to call for the complete destruction of all machines invented after the end of the 17th century.

6. Alan Turing

Alan Turing was a brilliant British mathematician often considered to be the father of modern computer science.

During the Second World War Turing worked for the British government at Bletchley Park and is the man largely credited with breaking the German Enigma machine's cryptographic code. He was also a crucial figure behind the development of the so-called Turing-Welchman Bombe, an electro-mechanical computing machine.

After the war Turing famously predicted that computers would one day play better chess than people, and in 1950 he published an article titled Computing Machinery and Intelligence in which he introduced what he believed to be a practical test for assessing computer intelligence (aka the Turing Test).

Alan Turing was a closeted homosexual and, unfortunately, was convicted of indecency in 1952 because homosexual relations were illegal in Britain at the time. He was forced to undergo chemical castration and, as a side effect, grew breasts. After his conviction, his security clearance was revoked and his reputation was destroyed.

Unable to bear any more humiliation, Alan Turing committed suicide on June 8, 1954, allegedly by biting an apple he had laced with cyanide. (A popular, though disputed, legend holds that Apple's half-bitten logo is in his honor.)

7. Aubrey de Grey

Aubrey de Grey was born in London, England in 1963. He is a controversial author and theoretician in the field of gerontology and is currently serving as chief science officer at the Strategies for Engineered Negligible Senescence (SENS) Foundation.

Dr. de Grey started out as a computer scientist, completing a BA at Cambridge University in 1985. By his own account, he married a biologist and decided to switch fields in the mid-1990s.

In the year 2000 Cambridge University awarded him a PhD for his book on a specific aspect of aging, The Mitochondrial Free Radical Theory of Aging. He is also the author of another popular and highly controversial book called Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging in Our Lifetime.

It is in his second book that Aubrey de Grey proposes a road-map aimed at defeating aging by reaching what he calls longevity escape velocity — i.e. the point where humanity possesses the medical technology to extend healthy human life by a given period of time (e.g. a decade or two), during which time we will come up with even better technology, allowing us to extend life even more. Thus, by repeating this process over and over again, we can stay one step ahead of the problem of aging and eventually reach a point where we can extend healthy human life indefinitely.
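The arithmetic behind the idea is simple: if each decade that passes adds more than a decade of remaining healthy life expectancy, the total never runs out. A toy model of that claim (the numbers are purely illustrative, not de Grey's):

```python
def remaining_expectancy(initial=30, gain_per_decade=12, decades=10):
    """Track remaining healthy life expectancy as decades pass,
    with each decade of research adding `gain_per_decade` years."""
    remaining = initial
    trajectory = []
    for _ in range(decades):
        remaining += gain_per_decade - 10  # ten years elapse, therapy adds more
        trajectory.append(remaining)
    return trajectory

# Above escape velocity: remaining expectancy grows without bound.
print(remaining_expectancy(gain_per_decade=12))  # 32, 34, ..., 50
# Below it: expectancy steadily shrinks.
print(remaining_expectancy(gain_per_decade=8))   # 28, 26, ..., 10
```

The threshold sits exactly where the gain per decade equals the decade itself; everything in de Grey's argument hangs on staying above that line.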

Aubrey de Grey is an eccentric, controversial and highly recognizable figure. He has been featured by numerous programs and outlets such as CBS 60 Minutes, the BBC, the New York Times, Fortune Magazine, the Washington Post, TED, Popular Science and The Colbert Report. Most recently Aubrey de Grey was the subject of Do You Want to Live Forever?, a documentary film directed by Christopher Sykes. It is his highly controversial quest for immortality that earns Aubrey de Grey his top 10 spot on our singularitarian list.

8. Ted Kaczynski (aka the Unabomber)

Ted Kaczynski was a genius child prodigy, brilliant mathematician, social critic, technophobic neo-Luddite, extreme environmentalist and murderous terrorist who was responsible for a series of bombings targeting universities and airline companies. His nickname the Unabomber originates from the FBI acronym “UNABOM” which stands for “UNiversity and Airline BOMber.”

Kaczynski’s notorious bombing campaign lasted from 1978 until 1995. During that period he blew up 16 bombs and was responsible for the death of three people and the injuring of 23.

In Industrial Society and Its Future (aka the "Unabomber Manifesto") he tried to explain, justify and popularize his militant resistance to technological progress. In essence, Kaczynski embraced the ideas of Samuel Butler but was not satisfied to simply write about the dangers of technology. Thus, even though the Unabomber didn't think that the technological singularity would be a good thing, he believed in it so much that he had to try to prevent it by any means possible. It is for this reason that Kaczynski takes number 8 on our list.

9. Kevin Warwick

Kevin Warwick is a professor of Cybernetics at the University of Reading, England, where he carries out research in artificial intelligence, control, robotics and biomedical engineering. Most notably, he is the author of I, Cyborg, a book in which he documents how he became the world's first cyborg in a series of ground-breaking scientific experiments.

Kevin’s research was selected by National Geographic International for a 1 hour documentary, entitled “I,Human” which was broadcast in 143 countries and translated into 23 different languages.

 

10. Charles Stross

Charles Stross is a contemporary science fiction writer based in Edinburgh, Scotland. Some of his most famous sci fi novels include titles such as Accelerando (Singularity), Singularity Sky, Iron Sunrise (Singularity) and Saturn's Children.

It is his book Accelerando – with its broad plot horizon (spanning time and space across the whole universe), its dazzling imagination (fed by the latest and greatest bleeding edge of science and science fiction), and its deep implications for the whole of humanity – that lets Charlie Stross beat the other honorable candidates and earn the last spot on our top 10 Singularitarians of all time.

Other honorable mentions who could have made the above list but just didn’t quite make it are: Gordon Moore, John von Neumann, I.J. Good, Norbert Wiener, Manfred Clynes, Hans Moravec, Marvin Minsky, John McCarthy, Philip K. Dick, Edsger Dijkstra, Nick Bostrom, Kevin Kelly, Hugo de Garis, William Gibson, Eliezer Yudkowsky, Ben Goertzel and Michael Anissimov.

Filed Under: Uncategorized Tagged With: Ray Kurzweil, Raymond Kurzweil, singularity, singularity blog, Technological Singularity, transhumanism, Turing test

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 4)

December 18, 2009 by Socrates

Part 4: Military Turing Test — Can robots commit war-crimes?

Now that we have identified the trend of moving military robots to the forefront of military action – from their current largely secondary and supportive role to that of a primary, direct participant or (as Foster-Miller proudly calls its MAARS bots) "war fighter" – we also have to recognize the profound implications that such a process will have, not only for the future of warfare but potentially for the future of mankind. In order to do so we will have to briefly consider what for now are broad philosophical questions but which, as robot technology advances and becomes more prevalent, will eventually become highly political, legal and ethical issues:

Can robots be intelligent?

Can robots have a conscience?

Can robots commit war crimes?


In 1950 Alan Turing introduced what he believed was a practical test for computer intelligence, now commonly known as the Turing Test. The Turing Test is an interactive test involving three participants: a computer, a human interrogator and a human participant. It is a blind test in which the interrogator asks questions via keyboard and receives answers via a display screen, on the basis of which he or she has to determine which answers come from the computer and which from the human subject. According to Alan Turing, if a statistically sufficient number of different people play the roles of interrogator and human subject, and if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then the computer is considered an intelligent, thinking entity or AI.
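What "a statistically sufficient number" might mean can be made concrete with an ordinary significance test. The sketch below (the counts are invented for illustration; Turing himself prescribed no particular statistic) asks whether the interrogators' identification rate is distinguishable from coin-flipping:

```python
from math import comb

def binomial_p_value(k, n, p=0.5):
    """One-sided tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Suppose 60 of 100 interrogator judgments correctly pick out the machine.
p = binomial_p_value(60, 100)
print(f"p-value: {p:.3f}")  # ~0.028: detectably better than chance
```

If the p-value stays large as the number of trials grows, the interrogators are effectively guessing, which is the "pass" condition of the test.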

If we agree that AI is very likely to have substantial military applications, then there is clearly a need to develop an even more sophisticated and very practical military Turing Test, one which ought to ensure that autonomous armed robots abide by the rules of engagement and the laws of war. On the other hand, while the question of whether a robot or a computer is (or will be) a self-conscious, thinking entity is an important one, it is not necessarily a requirement for considering the issue of war crimes. Given that those MAARS-type bots have literally killer applications, the potential for a unit of them going "nuts on a shooting spree" or getting "hacked" ought to be considered carefully, and hopefully well in advance of any such eventualities.

Robots capable of shooting on their own are also a hotly debated legal issue. According to Gordon Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk (Virginia), “The lawyers tell me there are no prohibitions against robots making life-or-death decisions.” When asked about the potential for war crimes Johnson replied “I have been asked what happens if the robot destroys a school bus rather than a tank parked nearby. We will not entrust a robot with that decision until we are confident they can make it.” (Thus the decision really is not “if” robots will be entrusted with such decisions but “when.” Needless to say historically government confidence in different projects is hardly a guarantee that things will not go wrong.)

On the other hand, in complete opposition to Johnson's claims, according to barrister and engineer Chris Eliot it is currently illegal for any state to deploy a fully autonomous system. Eliot claims that "Weapons intrinsically incapable of distinguishing between civilian and military targets are illegal" and that only when war robots can successfully pass a "military Turing test" could they be legally used. At that point any autonomous system ought to be no worse than a human at taking military decisions about legitimate targets in any potential engagement. Thus, in contrast to the original Turing Test, this test would use decisions about legitimate targets, and Eliot believes that "Unless we reach that point, we are unable to [legally] deploy autonomous systems. Legality is a major barrier."
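Eliot's criterion amounts to a simple acceptance test: on the same set of engagements, the autonomous system's target classifications must be at least as accurate as a human's. A minimal sketch, with invented labels and decisions (no such formal test exists; this only illustrates the logic):

```python
def no_worse_than_human(robot, human, truth):
    """Pass if the robot's target-classification accuracy is at least
    as good as the human baseline on the same engagements."""
    def accuracy(decisions):
        return sum(d == t for d, t in zip(decisions, truth)) / len(truth)
    return accuracy(robot) >= accuracy(human)

truth = ["military", "civilian", "military", "civilian"]
robot = ["military", "civilian", "military", "military"]   # 3 of 4 correct
human = ["military", "civilian", "civilian", "military"]   # 2 of 4 correct
print(no_worse_than_human(robot, human, truth))  # True
```

Note that, unlike the original Turing Test, this criterion scores decisions against ground truth rather than against an interrogator's belief.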

The gravity of both the practical and the legal issues surrounding the kill-bots is not overlooked by the people directly engaged in their production, adoption and deployment in the field. In 2005, during his keynote address at the fifth annual RoboBusiness Conference in Pittsburgh, the program executive director of ground combat systems for the U.S. Army, Kevin Fahey, said: "Armed robots haven't yet been deployed because of the extensive testing involved […] When weapons are added to the equation, the robots must be fail-safe because if there's an accident, they'll be immediately relegated to the drawing board and may not see deployment again for another 10 years. […] You've got to do it right." (Note again that any such potential delays put into question "when" and not "if" such armed robots will be used on a large scale.)

Armed SWORDS robot modifications

While the risks of using armed robots may be apparent to anyone who has seen or read science fiction classics such as the Terminator and Matrix series, we should not underestimate the variety of political, military, economic and even ethical arguments supporting their usage. Some robotics researchers believe that robots could make the perfect warrior.

For example, according to Ronald Arkin, a robotics researcher at Georgia Tech, robots could make even more ethical soldiers than humans. Dr. Arkin is working with the Department of Defense to program ethics, including the rules of the Geneva Conventions, into the next generation of battle robots. He argues that robots will act more ethically than humans because they have no drive for self-preservation, no emotions, and no fear of disobeying their commanders' orders when those orders are illegitimate or in conflict with the laws of war. In addition, Dr. Arkin supports the somewhat ironic claim that robots will act more humanely than people, because stress and battle fatigue do not affect their judgment the way they affect a soldier's. It is for those reasons that Dr. Arkin is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics and the laws of war. In other words, he is working on an artificial conscience that could potentially pass a military Turing test. Further advantages supporting Dr. Arkin's view appear in Gordon Johnson's observation that
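One way to picture such an artificial conscience is as a veto layer that blocks lethal force unless every rule-of-engagement predicate holds. The sketch below is not Dr. Arkin's actual system (his published "ethical governor" architecture is far more elaborate); the rule names and the situation fields are invented examples loosely modeled on laws-of-war constraints such as distinction and proportionality.

```python
# Hypothetical sketch of an "ethical governor": lethal force is permitted
# only if no rule-of-engagement predicate is violated. The rules and the
# situation schema below are illustrative assumptions, not a real system.

RULES = [
    ("target_is_military",       lambda s: s["target_type"] == "military"),
    ("force_is_proportional",    lambda s: s["expected_collateral"] <= s["proportionality_limit"]),
    ("engagement_is_authorized", lambda s: s["authorized"]),
]

def ethical_governor(situation):
    """Return (permitted, violated_rules): permit force only if nothing is violated."""
    violated = [name for name, check in RULES if not check(situation)]
    return (len(violated) == 0, violated)

situation = {
    "target_type": "military",
    "expected_collateral": 3,     # civilians estimated near the target
    "proportionality_limit": 0,
    "authorized": True,
}
print(ethical_governor(situation))  # (False, ['force_is_proportional'])
```

The design choice worth noting is that the governor is purely restrictive: it can only withhold fire, never initiate it, which is also how Arkin frames the role of an artificial conscience.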

“A robot can shoot second. […] Unlike its human counterparts, the armed robot does not require food, clothing, training, motivation or a pension. […] They don’t get hungry […] (t)hey’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.”

In addition to the economic and military reasons behind the adoption of robot soldiers there is also the issue often referred to as the “politics of body bags.”

Using robots will make waging war less vulnerable to domestic politics, and hence will make it easier for both political and military leaders to gather domestic support for any conflict fought on foreign soil. As a prescient article in the Economist noted, "Nobody mourns a robot." On the other hand, the concern is that this very fact may make war-fighting more likely, since leaders face less pressure from mounting casualties. At any rate, the issues surrounding the usage of kill-bots have already moved beyond the merely theoretical or legal. Their practical importance comes to light in some rare but worryingly conflicting reports about the deployment of the SWORDS robots in Iraq and Afghanistan.

The first concern-raising report came from the blog of one of the writers for Popular Mechanics. In it Erik Sofge noted that in 2007 three armed ground bots, all SWORDS models produced by Foster-Miller Inc., were deployed to Iraq. Interestingly enough, all three units were almost immediately pulled out. According to Sofge, when Kevin Fahey (the Army's program executive officer for ground forces) was asked about the SWORDS pull-out, he gave a vague answer while stressing that the robots never opened unauthorized fire and no humans were hurt. Pressed to elaborate, Fahey said it was his understanding that "the gun [of one of the robots] started moving when it was not intended to move." In other words, the weapon of a SWORDS robot was swinging around randomly or pointing in the wrong direction. Sofge noted that, at the time, no specific reason for the withdrawal had been given by either the military or Foster-Miller Inc.

A similar though even stronger report came from Jason Mick, who was also present at the RoboBusiness Conference in Pittsburgh in 2008. In his blog on the DailyTech website Mick claimed that "First generation war-bots deployed in Iraq recalled after a wave of disobedience against their human operators."

It was just a few days later that Foster-Miller published the following statement:

“Contrary to what you may have read on other web sites, three SWORDS robots are still deployed in Iraq and have been there for more than a year of uninterrupted service. There have been no instances of uncommanded or unexpected movements by SWORDS robots during this period, whether in-theater or elsewhere.
[…] TALON Robot Operations has never "refused to comment" when asked about SWORDS. For the safety of our war fighters and due to the dictates of operational security, sometimes our only comment is, "We are unable to comment on operational details.""

Several days after that Jason Mick published a correction, though at least some of the details remain unclear and there is a persistent disharmony between the statements coming from Foster-Miller and the DoD. For example, Foster-Miller spokeswoman Cynthia Black claimed that "The whole thing is an urban legend," yet Kevin Fahey was reported as saying the robots had recently done something "very bad."

Two other similarly troubling cases, pertaining to the Predator and Reaper UAVs, were recently reported in the media.

The first one, Air Force Shoots Down Runaway Drone over Afghanistan, reported that:

"A drone pilot's nightmare came true when operators lost control of an armed MQ-9 Reaper flying a combat mission over Afghanistan on Sunday. That led a manned U.S. aircraft to shoot down the unresponsive drone before it flew beyond the edge of Afghanistan airspace."

The report goes on to point out that this is not the only accident of its kind and that some of those drones could certainly go "rogue." Judging by the reported accidents, this is not merely a potential problem but an actual one.

While on the topic of unmanned drones going "rogue," a more recent news report disclosed that Iraqi insurgents "Hacked into Video Feeds from US Drones."

Thus going rogue is not only an accident-related issue but also a network security one. Allegedly, the insurgents used a $26 piece of Russian software to tap into the video streamed live by the drones. What is even more amazing is that the video feed was not even encrypted. (Can we really blame the US Air Force for not encrypting its killer-drone video feeds?! After all, there are still plenty of home users without a password on their home WiFi, so it is only natural that the US Air Force would be no different, right? 😉)

And what about the actual flight and weapons control signal? How "secure" is it? Can we really be certain that some terrorist cannot "hack" into the control systems and throw our stones back onto our own heads? (Hm, it isn't stones we are talking about; it is the Hellfire missiles that the drones are armed with…)
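For context, the textbook defense against injected commands on a link like this is to authenticate every message with a keyed MAC, so that an attacker who can see or reach the channel still cannot forge a command without the shared key. The sketch below is a generic illustration of that idea using Python's standard `hmac` module; the key, the command format, and the waypoint are all made up, and this says nothing about the actual Predator/Reaper protocol.

```python
# Hedged sketch: authenticating control-link messages with HMAC-SHA256 so
# that forged commands are rejected. A generic illustration only; the key
# and command format are hypothetical, not any real drone protocol.
import hashlib
import hmac

SECRET_KEY = b"shared-ground-station-key"   # hypothetical pre-shared key

def sign_command(command: bytes) -> bytes:
    """Compute the authentication tag the ground station attaches to a command."""
    return hmac.new(SECRET_KEY, command, hashlib.sha256).digest()

def verify_command(command: bytes, tag: bytes) -> bool:
    """The receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(SECRET_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"SET_WAYPOINT 34.52N 69.17E"
tag = sign_command(cmd)
print(verify_command(cmd, tag))     # True: authentic command accepted
print(verify_command(b"FIRE", tag)) # False: forged command rejected
```

Authentication alone does not stop eavesdropping, which is exactly the video-feed problem above; that requires encryption on top of it. It also does not stop an attacker from replaying an old, validly signed command, which is why real protocols add sequence numbers or timestamps to each message.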

As we can see, there are numerous legal, political, ethical and practical issues surrounding the deployment, usage, command and control, safety and development of kill-bots. It may take some time before the dust around the above cases settles and the accuracy of the reports is confirmed or denied beyond any doubt. Yet the point of Jason Mick's original report stands whatever the particulars of each case. Said Mick:

“Surely in the meantime these developments will trigger plenty of heated debate about whether it is wise to deploy increasingly sophisticated robots onto future battlefields, especially autonomous ones.  The key question, despite all the testing and development effort possible, is it truly possible to entirely rule out the chance of the robot turning on its human controllers?”

End of Part 4 (see Part 1; Part 2; Part 3 and Part 5)

Filed Under: Op Ed Tagged With: Artificial Intelligence, foster-miller, future technology, maars, robot, TALON, Turing test, Unmanned aerial vehicle
