Laura Major and Julie Shah on What to Expect When You’re Expecting Robots

November 10, 2020 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/926628661-singularity1on1-laura-major-julie-shah.mp3


Hans Moravec famously claimed that robots will be our (mind) children. If true, then it is natural to wonder: What to Expect When You're Expecting Robots? This is the question that Laura Major and Julie Shah, two expert robot engineers, address in their new book. Given the subject of robots and AI, as well as the fact that both Julie and Laura have experience in the aerospace, military, robotics, and self-driving car industries, I thought they would make great guests on my podcast. I hope you enjoy our conversation as much as I did.

During this 90-minute interview with Laura Major and Julie Shah, we cover a variety of interesting topics such as: the biggest issues within AI and robotics; why humans and robots should be teammates, not competitors; whether we ought to focus more on the human as the weak link in the system; what happens when technology works as designed and still exceeds our expectations; the problems of defining driverless (or self-driving) car, AI, and robot; why, ultimately, technology is not enough; and whether the aerospace industry is a good role model or not.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.


Who is Julie Shah?

Julie Shah is a roboticist at MIT and an associate dean of social and ethical responsibilities of computing. She directs the Interactive Robotics Group in the Schwarzman College of Computing at MIT. She was a Radcliffe Fellow, has received a National Science Foundation CAREER Award, and has been named one of MIT Technology Review's "Innovators Under 35." She lives in Cambridge, Massachusetts.


Who is Laura Major?

Laura Major is CTO of Motional (formerly the Hyundai-Aptiv autonomous driving joint venture), where she leads the development of autonomous vehicles. Previously, she led the development of autonomous aerial vehicles at CyPhy Works and a division at Draper Laboratory. Major has been recognized as a national Society of Women Engineers Emerging Leader. She lives in Cambridge, Massachusetts.

Filed Under: Podcasts Tagged With: robot

Why We Need an Ethical Enlightenment in AI Relations

October 22, 2015 by Daniel Faggella

While many may be intrigued by the idea, how many of us actually care about robots, in the relational sense of the word? Dr. David Gunkel believes we need to take a closer and more objective view of our moral decision making, which in the past has been more capricious than empirical. He believes that our moral decisions have rested less on hard rationality and more on culture, tradition, and individual choice.

If (or when) robots end up in our homes, taking care of people and helping to manage daily chores, we will inevitably be in a position where certain decisions can be made to include or exclude those robots from certain rights and privileges. In this new frontier of ethical thought, it is useful to think of past examples of rights and privileges being extended to entities other than people.

In the last few years, the U.S. Supreme Court has strengthened its recognition of corporations as individuals. The Indian government recognized dolphins as “non-human persons” in 2013, effectively putting a ban on cetacean captivity. These examples of past evolutions in society’s moral compass are crucial as we engineer various robotic devices that will become a part of the ‘normal’ societal makeup in the coming decades.

Part of what we need to do is Nietzschean in nature. That is, we should interrogate the values behind our values and try to get people to ask the tough, sometimes abstract questions, such as: "That's been our history, but can we do better?" David believes that we should not fool ourselves into thinking that we are doing something that we are, in actuality, not doing – at least not with integrity.

When it comes to our AI-driven devices, Gunkel wants us to look seriously at the way we situate these objects in our world. What do we do in our relationship with these devices? Granted, we don’t have much of a record at this point in time. “Some people have difficulties getting rid of their Smartphone, they become kind of attached to it in an odd way”, remarks Gunkel. There is certainly no rulebook that tells us how to treat our smart devices.

He acknowledges that some might wave off the idea of any sort of AI ethics until an AI has been created that has an intelligence, or even an awareness, close to that of a human. "I don't think it's a question of level of awareness of a machine – it's not about the machine – what will matter is how we (humans) relate to AI," says David. "Sentience may be a red herring, it may be the way that we excuse thinking about this problem, to say that it's not our problem now, we'll just kick it down the road."

Without any set rules, will more advanced social robots be slaves? Will we treat them like companions? PARO, an interactive robot in the form of a seal, is now being used in hospitals and other care facilities around the world. There is already evidence that many elderly patients treat them like pets; what happens when one of these seals has to be replaced or taken away? Again, there is no record of activity to set a precedent.

We might draw some relation here to children and their relationship to their stuffed animals. Any parent knows that you can’t just replace a stuffed animal with one that looks like the old stuffed animal – the child will cry and ask for the old version, no matter how attractive or expensive the new toy. “This is a real biological way in which we connect with not only people but also objects”, says Gunkel. It may be more challenging to anticipate how people will react to future robotic entities than we realize.

Kant presented a tangential argument that we can apply to this train of thought, explains David. The famous philosopher did not love animals, but he argued against kicking a dog because doing so diminishes the moral sphere in which we live, within our own conscience and the greater moral community. With this concept in mind, what basic foundational policy might we set in place? That is, what are some ground rules to help direct us in our relation to AI as we move forward?

These will certainly evolve, like all man-made laws and ethical conceptions, but Gunkel suggests some key questions that we should ask now to come up with these ground rules for the nearer-term future:

  1. What is it we are designing?

We need to be very careful about what and how we design AI-driven machines. Engineering is too often treated solely as a results-driven exercise, without enough time spent thinking about the ethical outcomes. We are currently facing the very real and dangerous predicament of whether to continue down the road of designing autonomous weapons.

  2. After we've created such entities, what do we do with them? How do we situate them in our world?
  3. What happens in terms of law and policy?

Court decisions have been made that set up early precedents for how we treat entities that are not human. It seems plausible to make the same argument for autonomous AI. For example, a corporation isn't sentient, but it is made up of sentient people and is considered to have rights akin to a person's.

Whether or not you agree with his notion, setting a precedent for our receptivity to the legal and moral aspects of future robotic entities, considering why we are creating such entities, and thinking through how they should be treated in return is a necessary venture for citizens and politicians, one that may help avoid future conflicts and hedge against catastrophe.


About the Author:

Dan Faggella is a graduate of UPenn's Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.

Filed Under: Op Ed Tagged With: Artificial Intelligence, robot

Hiroshi Ishiguro: Technology is a way to understand what is human!

February 26, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/206588319-singularity1on1-hiroshi-ishiguro.mp3


I first met Dr. Hiroshi Ishiguro at last year's GF2045 conference in New York. Dr. Ishiguro is known around the world for his android, geminoid, and telenoid robots, and I have been trying to get him on my podcast ever since we met. Last week we finally found an empty slot in his busy schedule, and I was able to ask him a few questions.

During our 50-minute conversation with Dr. Ishiguro, we cover a variety of interesting topics such as: how and why he got interested in building androids and geminoids; whether it is possible to build disembodied artificial intelligence; what it means to be human; the cultural East-West divide on the perception of robots as good or evil; the uncanny valley and the Turing Test; the importance of implementing emotions such as pleasure and pain; the differences (or lack thereof) between hardware and software; and telenoid robots…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.


Hiroshi Ishiguro Laboratories Mission Statement:

The end of the information age will coincide with the beginning of the robot age. However, we will not soon see a world in which humans and androids walk the streets together, like in movies or cartoons; instead, information technology and robotics will gradually fuse so that people will likely only notice when robot technology is already in use in various locations.

Our role will be to lead this integration of information and robotics technologies by constantly proposing new scientific and technological concepts. Toward this, knowledge of art and philosophy will be invaluable. Technology has made art “reproducible”; likewise, artistic sense has contributed to the formation of new technologies, and artistic endeavors themselves are supported by philosophical contemplation and analysis.

Hereafter, human societies will continue to change due to “informationization” and robotization; in this ever-changing setting, artistic activities and philosophical speculation will allow us to comprehend the essential natures of humans and society so that we can produce truly novel science and technological innovations in a research space which lies beyond current notions of “fields” and boundaries of existing knowledge.

Who is Hiroshi Ishiguro?

Ishiguro was born in Shiga in 1963. Growing up, in high school and university, Hiroshi was devoted to painting. At Dr. Hanao Mori's laboratory at Yamanashi University, he was inspired to learn about robots and computers. Today, as a scientist attracting global attention, Hiroshi focuses his research on humanoid robots such as androids, geminoids, and telenoids.

After graduating from Yamanashi University, Ishiguro started his Ph.D. at Osaka University in 1988. He studied the methodology of research under Dr. Saburo Tsuji and has followed the principle "Seek the fundamental problem" to this day. Dr. Ishiguro has worked at Yamanashi University, Osaka University, Kyoto University, the University of California, and Wakayama University, where he worked on distributed sensor systems and interactive robotics.

Currently, Hiroshi is a Professor in the Department of Systems Innovation in the Graduate School of Engineering Science at Osaka University (2009-). While moving between universities, he has continued his research at ATR (the Advanced Telecommunications Research Institute), where he is now Visiting Group Leader (2002-) of the Intelligent Robotics and Communication Laboratories. He also participated in the founding of Vstone Co., Ltd., an academic-industrial venture company aimed at technology transfer. His principle is that robotics is, at its core, philosophy. Dr. Ishiguro is the author of Robot to ha nanika? (What is a Robot?) and Android Science.

Filed Under: Podcasts Tagged With: android, robot

This Article Was Written By A Robot

November 27, 2010 by wpengine

There is no single definition of robot that satisfies everyone, and most people have their own. For example, Joseph Engelberger, a pioneer in industrial robotics, once remarked: "I can't define a robot, but I know one when I see one."

According to the Encyclopaedia Britannica a robot is “any automatically operated machine that replaces human effort, though it may not resemble human beings in appearance or perform functions in a humanlike manner.” The Merriam-Webster dictionary describes a robot as a “machine that looks like a human being and performs various complex acts (as walking or talking) of a human being,” or a “device that automatically performs complicated often repetitive tasks,” or a “mechanism guided by automatic controls.”

ASIMO

A lot of people find it disturbing that humans are becoming more like robots while, at the same time, robots are becoming more like humans. Many philosophize about what humans will become after we modify ourselves through genetic engineering or by implanting AI components throughout the body to improve our physical and mental abilities. There are concerns that such modifications will pervert us in some way and should perhaps be avoided. This is causing a lot of anxiety, and some warn that humans will stop being actually human.

I, for one, fail to see what the fuss is all about. There is a simple answer as to why the prospect of “artificial” human modification should not be a significant cause for concern.

Humans already are robots. One of my favorite quotes from Aubrey de Grey is that "the human body is a very complex machine." Yes, we are complex, self-replicating, and self-repairing, but we are machines nevertheless. Look at yourself, look at your hands: they are a small part of an extremely complex apparatus that is able to accomplish all kinds of sophisticated actions. Vertebrate life forms are the most complex apparatus ever developed, and no definition of a robot says that it has to be man-made. So what if the current life forms were created by the trial-and-error process called evolution over almost 4 billion years?!…

By those definitions, it is a given that a person looks like a human, can replace other humans' efforts, is able to perform various complex and often repetitive acts (such as walking and talking), and, finally, is guided by automatic controls (our nervous system).

The human being is definitely not a perfect contraption, for any mechanism can always be improved. However, the natural process of evolution that updated humans until the start of the industrial revolution is no longer an option. Civilization needs to find a new way to improve its design. And just as humanity is transcending evolution, the technology to modify the human machine will become available.

Original Ford Model T

The technology to maintain the human machine indefinitely in roughly its built condition will be fully available with the advent of regenerative medicine, as being developed by Aubrey de Grey and the SENS Foundation. It may take 20 or 30 years (or more), but the technological singularity (also estimated to be roughly another 20 or 30 years away) will provide us with another way to improve the hardware we run on and build the next generation of humachines to be better than they would be by (evolutionary) chance.

Think of your body as an old car: you can keep it running in perfect condition indefinitely, for as long as you do the proper maintenance (i.e. regenerative medicine), just like people who own an antique and perfectly working Ford Model T. Or you might want to put in a more powerful engine, an automatic gearbox, and an air conditioning unit, so you can drive faster and more comfortably. You can even turn it into a hot-rod muscle car for street racing or to impress the girls…

Ford Model T Hot Rod

Why would anyone worry about the option of modifying a robot to be a better robot?

Humans are always updating the programming of our biological CPU (through education) from the moment we are born. You are updating your programming even now, by reading this article. Further "artificial" mental and physical modifications that will become an option after the singularity will just be another hardware adjustment, not very different from the one above. Some people want to keep their cars as if they had just come off the assembly line. Others may let them wear out and go to the scrap yard. But in my opinion, most will want to install parts that allow for better durability, performance, speed, and comfort.

…I can now say that I know a robot when I see one. And that includes when I am looking in the mirror.

About the Author: Kieran Griffith is a voluntary consultant to the SENS Foundation, which works on developing medical techniques that extend lifespan indefinitely. He has degrees in psychology, the humanities, and space science, and is planning a future career in the field of commercial spaceflight.

Related articles
  • Funny Or Serious: Are We Giving Robots Too Much Power? (singularityblog.singularitysymposium.com)
  • Can Terraforming Venus Be The Solution To Population Growth? (singularityblog.singularitysymposium.com)
  • Top 3 Robot Music Videos (singularityblog.singularitysymposium.com)

Filed Under: Op Ed, What if? Tagged With: Kieran Griffith, robot, singularity

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 4)

December 18, 2009 by Socrates

Part 4: Military Turing Test — Can Robots Commit War Crimes?

Now that we have identified the trend of moving military robots to the forefront of military action, from their current largely secondary and supportive role to that of a primary, direct participant or (as Foster-Miller proudly calls its MAARS bots) "war fighter," we also have to recognize the profound implications that such a process will have, not only for the future of warfare but potentially for the future of mankind. In order to do so we will have to briefly consider what for now are broad philosophical questions but, as robot technology advances and becomes more prevalent, will eventually become highly political, legal, and ethical issues:

Can robots be intelligent?

Can robots have a conscience?

Can robots commit war crimes?


In 1950 Alan Turing introduced what he believed was a practical test for computer intelligence, now commonly known as the Turing Test. The Turing Test is an interactive test involving three participants: a computer, a human interrogator, and a human participant. It is a blind test in which the interrogator asks questions via keyboard and receives answers via a display screen, on the basis of which he or she has to determine which answers come from the computer and which from the human subject. According to Turing, if a statistically sufficient number of different people play the roles of interrogator and human subject, and if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then the computer is considered an intelligent, thinking entity – an AI.
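The protocol lends itself to a short illustration. Below is a minimal, hypothetical Python sketch of the imitation game as a statistical test; the function names and the 30% threshold (a figure Turing himself floated for five-minute conversations) are illustrative assumptions, not part of Turing's own formalism.

```python
import random

def run_imitation_game(machine, human, judge, n_sessions=100, threshold=0.30):
    """Sketch of the Turing Test as a statistical criterion.

    machine(q) and human(q) return textual answers to a question;
    judge(answer_a, answer_b) returns 0 or 1 -- the channel the
    interrogator believes is the machine. The machine 'passes' if it
    is misidentified in at least `threshold` of the blind sessions.
    """
    fooled = 0
    for _ in range(n_sessions):
        players = [machine, human]
        random.shuffle(players)  # blind assignment: judge cannot see who is who
        answers = [p("Describe your childhood.") for p in players]
        picked = players[judge(answers[0], answers[1])]
        if picked is not machine:  # interrogator failed to spot the machine
            fooled += 1
    return fooled / n_sessions >= threshold
```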

If we agree that AI is very likely to have substantial military applications, then there is clearly a need to develop an even more sophisticated and very practical military Turing Test, one which ought to ensure that autonomous armed robots abide by the rules of engagement and the laws of war. On the other hand, while the question of whether a robot or a computer is (or will be) a self-conscious, thinking entity is an important one, it is not necessarily a requirement for considering the issue of war crimes. Given that those MAARS-type bots have literally killer applications, the potential for a unit of them going "nuts on a shooting spree" or getting "hacked" ought to be considered carefully, and hopefully well in advance of any potentialities.

Robots capable of shooting on their own are also a hotly debated legal issue. According to Gordon Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Virginia: "The lawyers tell me there are no prohibitions against robots making life-or-death decisions." When asked about the potential for war crimes, Johnson replied: "I have been asked what happens if the robot destroys a school bus rather than a tank parked nearby. We will not entrust a robot with that decision until we are confident they can make it." (Thus the decision really is not "if" robots will be entrusted with such decisions but "when." Needless to say, historically, government confidence in its projects is hardly a guarantee that things will not go wrong.)

On the other hand, in complete opposition to Johnson's claims, according to barrister and engineer Chris Elliott it is currently illegal for any state to deploy a fully autonomous system. Elliott claims that "weapons intrinsically incapable of distinguishing between civilian and military targets are illegal," and that only when war robots can successfully pass a "military Turing test" could they be legally used. At that point, any autonomous system ought to be no worse than a human at taking military decisions about legitimate targets in any potential engagement. Thus, in contrast to the original Turing Test, this test would use decisions about legitimate targets, and Elliott believes that "Unless we reach that point, we are unable to [legally] deploy autonomous systems. Legality is a major barrier."
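Elliott's "no worse than a human" criterion is easy to state as a concrete check. The following sketch is purely illustrative (the scenario data, names, and scoring are my assumptions, not anything Elliott or any military has specified): it compares a system's target-discrimination decisions against a human baseline on the same labeled engagement scenarios.

```python
def discrimination_score(decisions, ground_truth):
    """Fraction of engagement scenarios where targets were correctly
    classified as legitimate (military) or protected (civilian)."""
    correct = sum(d == t for d, t in zip(decisions, ground_truth))
    return correct / len(ground_truth)

def passes_military_turing_test(robot_decisions, human_decisions, ground_truth):
    """A system would be fieldable only if it is no worse than the human
    baseline at distinguishing legitimate targets (Elliott's criterion)."""
    return (discrimination_score(robot_decisions, ground_truth)
            >= discrimination_score(human_decisions, ground_truth))

# Toy usage: 1 = legitimate military target, 0 = protected/civilian.
truth = [1, 0, 0, 1, 0]
robot = [1, 0, 0, 1, 1]   # one misclassification
human = [1, 0, 1, 1, 1]   # two misclassifications
print(passes_military_turing_test(robot, human, truth))  # True
```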

The gravity of both the practical and the legal issues surrounding the kill-bots is not overlooked by the people directly engaged in their production, adoption, and deployment in the field. In 2008, during his keynote address at the fifth annual RoboBusiness Conference in Pittsburgh, Kevin Fahey, the U.S. Army's program executive officer for ground combat systems, said: "Armed robots haven't yet been deployed because of the extensive testing involved […] When weapons are added to the equation, the robots must be fail-safe because if there's an accident, they'll be immediately relegated to the drawing board and may not see deployment again for another 10 years. […] You've got to do it right." (Note again that any such potential delays put into question "when" and not "if" such armed robots will be used on a large scale.)

Armed SWORDS robot modifications

While the risks of using armed robots may seem apparent to anyone who has seen or read science-fiction classics such as the Terminator and Matrix series, we should not underestimate the variety of political, military, economic, and even ethical reasons supporting their usage. Some robotics researchers believe that robots could make the perfect warrior.

For example, according to Ronald Arkin, a robotics researcher at Georgia Tech, robots could make even more ethical soldiers than humans. Dr. Arkin is working with the Department of Defense to program ethics, including the Geneva Convention rules, into the next generation of battle robots. He says that robots will act more ethically than humans because they have no desire for self-preservation, no emotions, and no fear of disobeying their commanders' orders when those orders are illegitimate or in conflict with the laws of war. In addition, Dr. Arkin supports the somewhat ironic claim that robots will act more humanely than people, because stress and battle fatigue do not affect their judgment the way they affect a soldier's. It is for those reasons that Dr. Arkin is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics and the laws of war. In other words, he is indeed working on the creation of an artificial conscience, one with the potential to pass a military Turing test. Further advantages supportive of Dr. Arkin's view are expressed in Gordon Johnson's observation that

“A robot can shoot second. […] Unlike its human counterparts, the armed robot does not require food, clothing, training, motivation or a pension. […] They don’t get hungry […] (t)hey’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.”

In addition to the economic and military reasons behind the adoption of robot soldiers there is also the issue often referred to as the “politics of body bags.”

Using robots will make waging war less liable to domestic politics and hence will make it easier, domestically, for both political and military leaders to gather support for any conflict fought on foreign soil. As a prescient article in the Economist noted, "Nobody mourns a robot." On the other hand, the concern raised by that possibility is that it may make war-fighting more likely, given that leaders face less pressure from mounting casualties. At any rate, the issues surrounding the usage of kill-bots have already passed beyond being merely theoretical or legal ones. Their practical importance comes to light with some of the rare but worryingly conflicting reports about the deployment of the SWORDS robots in Iraq and Afghanistan.

The first concern-raising report came from the blog column of one of the writers for Popular Mechanics. In it, Erik Sofge noted that in 2007 three armed ground bots were deployed to Iraq. All three units were the SWORDS model produced by Foster-Miller Inc. Interestingly enough, all three units were almost immediately pulled out. According to Sofge, when Kevin Fahey (the Army's program executive officer for ground forces) was asked about the SWORDS' pull-out, he tried to give a vague answer while stressing that the robots never opened any unauthorized fire and no humans were hurt. Pressed to elaborate, Fahey said it was his understanding that "the gun [of one of the robots] started moving when it was not intended to move." That is to say, the weapon of the SWORDS robot swung around randomly or in the wrong direction. At that point in time, Sofge noted, no specific reason for the withdrawal had been given either by the military or by Foster-Miller Inc.

A similar though even stronger report came from Jason Mick, who was also present at the RoboBusiness Conference in Pittsburgh in 2008. In his blog on the DailyTech website, Mick claimed that "First generation war-bots deployed in Iraq recalled after a wave of disobedience against their human operators."

It was just a few days later that Foster-Miller published the following statement:

“Contrary to what you may have read on other web sites, three SWORDS robots are still deployed in Iraq and have been there for more than a year of uninterrupted service. There have been no instances of uncommanded or unexpected movements by SWORDS robots during this period, whether in-theater or elsewhere.
[…] TALON Robot Operations has never “refused to comment” when asked about SWORDS. For the safety of our war fighters and due to the dictates of operational security, sometimes our only comment is, “We are unable to comment on operational details.”

Several days after that, Jason Mick published a correction, though it seems at least some of the details are still unclear, and there is a persistent disharmony between statements coming from Foster-Miller and the DoD. For example, Foster-Miller spokeswoman Cynthia Black claimed that "the whole thing is an urban legend," yet Kevin Fahey was reported as saying the robots had recently done something "very bad."

Two other similarly troubling cases, pertaining to the Predator and Reaper UAVs, were recently reported in the media.

The first, "Air Force Shoots Down Runaway Drone over Afghanistan," reported that:

“A drone pilot’s nightmare came true when operators lost control of an armed MQ-9 Reaper flying a combat mission over Afghanistan on Sunday. That led a manned U.S. aircraft to shoot down the unresponsive drone before it flew beyond the edge of Afghanistan airspace.”

The report goes on to point out that this is not the only accident of its kind and that there is certainly the potential for some of those drones to go "rogue." Judging by the reported accidents, it is clear that this is not only a potential but an actual problem.

While on the topic of unmanned drones going "rogue," a more recent news report disclosed that Iraqi insurgents "hacked into video feeds from US drones."

Thus going rogue is not only an accident-related issue but also a network-security one. Allegedly, the insurgents used $26 Russian software to tap into the video streamed live by the drones. What is even more amazing is that the video feed was not even encrypted. (Can we really blame the US Air Force for not encrypting its killer-drone video feeds?! I mean, there are still a whole lot of home laptop users without an access password on their home WiFi?! 😉 So it is only natural that the US Air Force would be no different, right?)
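For perspective on how low the technical bar was: symmetrically encrypting a stream takes a handful of lines with any modern library. Here is a minimal sketch, assuming Python and the third-party cryptography package; it says nothing about the drones' actual hardware, bandwidth, or key-distribution constraints, which are the genuinely hard parts.

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()  # shared secret, provisioned to drone and ground station
cipher = Fernet(key)

def send_frame(frame: bytes) -> bytes:
    """What the drone would transmit: ciphertext instead of raw video."""
    return cipher.encrypt(frame)

def receive_frame(token: bytes) -> bytes:
    """Ground station decrypts; an eavesdropper without the key sees noise."""
    return cipher.decrypt(token)

packet = send_frame(b"...raw video frame bytes...")
assert receive_frame(packet) == b"...raw video frame bytes..."
```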

And what about the actual flight and weapons control signals? How "secure" are they? Can we really be certain that some terrorist cannot "hack" into the control systems and throw our own stones onto our own heads? (Hm, it isn't stones we are talking about; it is Hellfire missiles that the drones are armed with…)

As we can see, there are numerous legal, political, ethical, and practical issues surrounding the deployment, usage, command and control, safety, and development of kill-bots. Obviously it may take some time before the dust around the above cases settles and the accuracy of the reports is confirmed or denied beyond any doubt. Yet Jason Mick's point in his original report still stands, whatever the specific particularities of each case. Said Mick:

“Surely in the meantime these developments will trigger plenty of heated debate about whether it is wise to deploy increasingly sophisticated robots onto future battlefields, especially autonomous ones.  The key question, despite all the testing and development effort possible, is it truly possible to entirely rule out the chance of the robot turning on its human controllers?”

End of Part 4 (see Part 1; Part 2; Part 3 and Part 5)

Filed Under: Op Ed Tagged With: Artificial Intelligence, foster-miller, future technology, maars, robot, TALON, Turing test, Unmanned aerial vehicle

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 2)

December 17, 2009 by Socrates

Part 2: The Past — Robot Etymology, Brief History and Military Classification

Contrary to what popular intuition may dictate, the idea of thinking machines or artificial beings has existed for millennia. Some of the first examples can be found in the ancient Greek myths and legends, such as the bronze giant Talos of Crete and the golden mechanical servants of Hephaestus.

Artificial mechanical beings, or, as we now commonly call them, robots, have appeared in numerous forms and functions in modern popular culture: as a servant (R2-D2 in Star Wars), as a fellow comrade (Data in Star Trek), and as both exterminator and savior (the Terminator series). Of course, it is not hard to notice that all of the above cases are entirely fictional, for they are either ancient Greek mythology or modern Hollywood science fiction. Yet what is science fiction one day may well turn out to be reality the next.

Today, production lines for virtually any large-scale commodity are dominated by robots that do the majority of operations within the production process and play a vital part in our globalized capitalist mode of production. It is no surprise, then, that robots have been migrating from the production lines into every other aspect of our lives. According to Dr. Rodney Brooks, CTO and co-founder of iRobot Corporation, in 2002 there were almost no robots in people's homes. By 2007, in just five years, his company had produced and sold over 2.5 million home clean-bots. From the artificial baby-seal robot Paro, through the iRobot Roomba/Scooba vacuum-bots and the home-made vigilante Bum-Bot in Atlanta, to the deadly Predator and Reaper drones, there seems to be no human activity that will not soon be impacted by robots to one degree or another. In fact, the South Korean government aims to put a robot in every house there by 2015 or 2020.

Roomba (CC) Larry D. Moore

So before we move on to a brief timeline of robotic development, let us look at the etymology of the term. The word robot was introduced in the 1920s by the Czech writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots). The play is set in an island factory for "artificial people" that Čapek called robots, and those robots were manufactured so well that they could be mistaken for real human beings. Čapek's robots could think autonomously for themselves, yet, at least for a while, they seemed happy serving their human masters.

Now that we have a basic understanding of the term robot, let us look at a brief timeline of some of the major events in the history of robotics, and especially of military robotics. As mentioned earlier, the idea of robots can be traced back some 3,000 years to the ancient Greek myths and legends. Similar ideas appear around that time in ancient Egyptian, Judaic, and Chinese writings.

For example, in ancient China, in the Lie Zi text, there is a description of an encounter between King Mu of Zhou (1023–957 BC) and the "artificer" (i.e. what we would today call a mechanical engineer) Yan Shi. Yan Shi created a human-size mechanical figure which was allegedly able to walk, dance, sing, and even flirt with the court ladies. Later examples can be found in ancient Greece: as long ago as the 4th century BC, the Greek mathematician Archytas of Tarentum postulated a mechanical bird he called "the Pigeon," which was propelled by steam. Ctesibius and Hero of Alexandria are two other ancient Greek inventors who allegedly created several automatons, at least one of which was supposedly able to speak.

In the Middle Ages it was the Muslim world where one could find the most sophisticated and impressive automatons: in his Book of Stones the alchemist Jabir ibn Hayyan published recipes for creating artificial snakes, scorpions, and even humans. Another Muslim inventor was Al-Jazari (1136–1206), who designed and constructed a number of automatic machines, most notably what was arguably the first programmable humanoid robot, in 1206: a boat with four automated musicians that played music to entertain guests at royal drinking parties.

In the West, one of the first recorded designs of a humanoid robot was made by Leonardo da Vinci (1452–1519) around 1495. Da Vinci left detailed drawings of a mechanical knight, but it is not known whether he attempted to actually build his robot. Later, in 1738, Jacques de Vaucanson created a mechanical duck that was able to eat and digest grain, flap its wings, and even excrete. In the East, in 19th-century Japan, the brilliant craftsman Hisashige Tanaka created an array of extremely complex mechanical toys, some of which were capable of serving tea, firing arrows, or even painting. It has to be noted, though, that although automatons were the closest things to robots, and while they may have looked humanoid and their movements were complex, they were not capable of adapting to their environment, readjusting their movements, self-control, or decision making. Arguably, progress on those fronts began in the United States in 1898, when Nikola Tesla publicly demonstrated a radio-controlled boat, probably the first remotely operated vehicle (ROV). Tesla hoped to develop his ROV into a wireless torpedo to be used as a weapon by the US Navy but, despite its impressiveness, his ROV was not adopted.

British soldiers with captured German Goliath remote-controlled demolition vehicles (Battle of Normandy, 1944)

The first unmanned ground vehicle (UGV) was the German Goliath, used in WWII. While in essence it was little more than a tracked mine that looked like a small tank without the turret, it was mobile, remotely operated, and packed quite a punch, so the Wehrmacht used it to clear mines and bunkers. On the eastern front, the Russian teletanks were among the first armed UGVs, for they carried machine guns, flamethrowers, smoke canisters, and explosive charges.

It proved easier for engineers to build unmanned vehicles that move through the air than unmanned vehicles that move on the ground. As far back as the late 1930s, the U.S. military used unmanned aerial vehicles (UAVs) as target drones. Thanks to the success and reliability of those drones, the military began looking for other ways to use the planes, and reconnaissance was an obvious alternative. By the 1960s, and especially in the 1970s, American UAVs collected intelligence on targets in Vietnam, China, and North Korea. Thus, at least until the late 1980s, UGVs were far behind UAVs in terms of development, but by the early 1990s they began catching up. Unmanned ground vehicles such as the Robotic Ranger (an armed moving platform) and the ROBAT (a modified M-60 tank meant for mine clearing) were funded by the US government and tested by Foster-Miller Inc. Those ground robots, together with the UAVs and the more recent UUVs (unmanned underwater vehicles), laid the foundation for the future expansion of robots within the US military. Each of these three types of robot technology is designed for a specific realm of the battlefield, and each will take an increasingly important role in US military planning, development, and deployment in the twenty-first century.

End of Part 2 (see Part 1; Part 3; Part 4; Part 5)

Filed Under: Op Ed Tagged With: Artificial Intelligence, future technology, robot

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 3)

December 17, 2009 by Socrates

Part 3: The Present — Close Infantry Support and Force Multiplier

Not surprisingly, once robots began migrating from the production lines to the military, death became not an accidental but a deliberate, and heavily invested-in, outcome.

MQ-1 Predator on display at Edwards AFB, 2006 (image via Wikipedia)

In 2002 the Air Force officially changed the Predator's designation from RQ-1 (R for reconnaissance) to MQ-1 (M for multi-role). No longer just for intelligence gathering, Predators were then officially capable of carrying Hellfire missiles. Even before the official change in its military designation, it was well known that the CIA already possessed several Predators capable of carrying weapons and conducting bombing raids. So, whatever the official beginning, the Predator was probably the first modern, actively armed robotic or unmanned war machine. The first officially reported person to have been deliberately killed by a robot was Mohammed Atef: in November 2001, missiles fired from a CIA Predator killed Atef, who was al-Qaeda's chief of military operations and one of Osama bin Laden's most important associates.

Currently there are two main suppliers of UGVs for the US military: iRobot Corporation and Foster-Miller Inc.

Even before deployment to Afghanistan and Iraq, the military was testing limited models and numbers of UGVs produced by these companies, with a relatively narrow focus of application consisting mainly of bomb disposal and reconnaissance missions, which are particularly dangerous to human soldiers. The active deployment of troops in a hostile theater of operations only accelerated the large-scale introduction of army-bots. The PackBot, which was among the first bomb-sniffing robots, is produced by iRobot. Starting out with just a dozen or so units in 2003, by 2008 iRobot had delivered more than 1,500. In its turn, Foster-Miller Inc. started with a modest 162 TALON multi-purpose robots deployed to Iraq and Afghanistan in 2004 and by 2008 had grown to over 5,500 units (the exact number is debated). Together, then, the two companies account for a total of over 7,000 UGVs deployed on the battlefield in Iraq and Afghanistan by the US Army and Marine Corps.

Despite the large number of UGVs and, by now, their mundane usage for bomb sniffing and disposal, their most classified, controversial, and important application is their actual or potential usage as "close infantry support and force multiplier." What this means is that some of the units are armed with typical infantry weapons and, in certain situations, are being (or are at least intended to be) used as soldier replacements. Given their ability to operate, shoot, and kill "the enemy" (i.e. humans), and in order to avoid tautology, I will occasionally call them kill-bots. This will also avoid confusion with the vastly more prevalent unarmed bomb-sniffing army-bots (be it TALON or PackBot).

The combat version of the Foster-Miller TALON (image via Wikipedia)

The Foster-Miller company is owned by the QinetiQ Group, which is in its turn a joint venture between the UK's Ministry of Defence and the US-based holding company the Carlyle Group. While the current deployment, missions, and usage of the armed UGVs may be cloaked in secrecy, their production and specifications are not. The first robot fighter introduced by Foster-Miller has been code-named SWORDS, after the acronym for Special Weapons Observation Reconnaissance Detection System. It is based on the TALON family of robots widely used to disarm bombs and therefore has the same all-terrain, all-weather tracked chassis, capable of operating in sand, mud, snow, and heavy brush. In addition, the SWORDS kill-bot can be fitted with "direct engagement" standard M16, M240, M249, or Barrett .50-cal infantry weapons. It has alternate mounts for a 40 mm grenade launcher or anti-tank rocket systems. It also has four cameras with night vision and zoom lenses and can travel autonomously through impediments such as barbed wire. Fully loaded, the kill-bot weighs 196 lb and can reach a speed of 4 mph (6.4 km/h), while its batteries can run from two and a half to four hours depending on load, terrain, and mission.

The kill-bot costs around $200,000 per unit (versus $120,000 for the general-purpose TALON) and is remotely operated by a soldier via a laptop and a video-game-style joystick.

Details about the missions and usage of the SWORDS are scarce, yet the initial tests must have satisfied the military, because Foster-Miller has already produced its upgraded replacement. The second-generation kill-bot is called MAARS, which stands for Modular Advanced Armed Robotic System. In contrast to its SWORDS predecessor, the MAARS unit is specifically designed for combat.

The MAARS "war fighter" (as the company promo material calls it) "controls the escalation of force" on the battlefield or in any other applicable situation. It has a much heavier and more durable chassis, with easier access to its battery and electronics. Other improved features include a larger payload bay, higher torque creating faster ground speeds, and improved braking. The complete system weighs about 350 lb but can still reach speeds of up to 7 mph (11.3 km/h, almost double the speed of the SWORDS).
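For convenience, here are the published figures from the last few paragraphs side by side, as a simple illustrative sketch; the field names are mine, the numbers are Foster-Miller's as cited above.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ArmedUGV:
    name: str
    weight_lb: float              # fully loaded weight
    top_speed_mph: float
    unit_cost_usd: Optional[int]  # None where the article gives no figure

SWORDS = ArmedUGV("SWORDS", weight_lb=196, top_speed_mph=4, unit_cost_usd=200_000)
MAARS = ArmedUGV("MAARS", weight_lb=350, top_speed_mph=7, unit_cost_usd=None)

print(MAARS.top_speed_mph / SWORDS.top_speed_mph)  # 1.75 -> "almost double"
```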

According to Foster-Miller, the missions associated with the MAARS can span the entire spectrum, from non-lethal to fully lethal engagement. In non-lethal mode the main weapon system is pointed down, and the robot engages by broadcasting voice messages and a high-pitched siren, or by using its green eye-safe laser, "the Dazzler." In "less-lethal" mode the MAARS can have its weapon pointed up and engages by employing tear gas, smoke, and ultimately warning shots. Finally, the robot can go to full "lethal" mode and use its M240B medium machine gun (which comes loaded with 400 rounds of 7.62 mm ammunition) and a pack of 40 mm high-explosive grenades. In such a situation, according to the BBC, both kill-bots are more accurate shots than the average soldier, because their weapons system is mounted on a stable platform and takes aim electronically. Thus, quoting a US officer who helped test the robot,

“It [the Robot] eliminates the majority of shooting errors you [soldiers] have.”

It is important to note that, at least at this point, the robot does not have the capability to fire autonomously; it fires only on the specific instructions of its remote operator. The reason to keep people controlling the robots, rather than making them totally autonomous, is that a human operator has to analyze the situation and make the decision before shots are fired. This is to prevent any accidental shootings attributed to a robot. However, while both current UAVs and UGVs are usually controlled remotely by distant human operators, ultimately the Pentagon would like to give these robots increasing amounts of autonomy, including the ability to decide when to use lethal force. Says Gordon Johnson of the Joint Forces Command at the Pentagon:

“It’s more than just a dream now, today we have an infantry soldier” […] “We give him a set of instructions: if you find the enemy, this is what you do. We give the infantry soldier enough information to recognize the enemy when he’s fired upon. He is autonomous, but he has to operate under certain controls. It’s supervised autonomy. By 2015, we think we can do many infantry missions. […] The American military will have these kinds of robots. It’s not a question of if, it’s a question of when.”
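The "supervised autonomy" Johnson describes maps naturally onto a software gate: the platform may detect and request, but only an explicit human authorization can release a weapon. Here is a minimal illustrative sketch of that design; the structure and names are mine, not any real fire-control system.

```python
from dataclasses import dataclass

@dataclass
class FireRequest:
    target_id: str
    confidence: float  # perception system's certainty about the target

class SupervisedWeapon:
    """Weapon release requires an explicit, logged human authorization."""

    def __init__(self):
        self._authorized: set[str] = set()

    def operator_authorize(self, target_id: str, operator: str) -> None:
        print(f"{operator} authorized engagement of {target_id}")
        self._authorized.add(target_id)

    def fire(self, request: FireRequest) -> bool:
        if request.target_id not in self._authorized:
            return False  # the robot may request, never decide
        self._authorized.remove(request.target_id)  # one authorization, one shot
        return True

weapon = SupervisedWeapon()
req = FireRequest("T-42", confidence=0.97)
assert not weapon.fire(req)            # blocked: no human in the loop yet
weapon.operator_authorize("T-42", "Sgt. Example")
assert weapon.fire(req)                # released only after authorization
```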

A retired three-star admiral and industry insider, Joseph W. Dyer, who heads iRobot's government and industrial division, not only agrees with the feasibility of such prospects but goes as far as to commit to a timeline. Commenting on the androids which the movie I, Robot projected to be in production by around 2030, Dyer said:

“We think that timeline is about right.”

***

End of Part 3 (see Part 1; Part 2; Part 4; Part 5)

Related articles by Zemanta
  • Iraq rebels ‘hack into US drones’ (news.bbc.co.uk)
  • How Are Robots Being Used In War? (blurtit.com)
  • Insurgents Intercept Drone Video in King-Sized Security Breach (wired.com)
  • Hacking the Predator drone: Cheaper than dinner and a movie (boingboing.net)
  • US to expand eyes in the sky over Afghanistan (seattletimes.nwsource.com)

Filed Under: Op Ed Tagged With: maars, robot, TALON, Unmanned aerial vehicle

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 1)

December 16, 2009 by Socrates

Warfare, while seemingly the opposite of large-scale industrial production, insofar as it is usually perceived to be large-scale destruction, exhibits most if not all of the main characteristics of the capitalist mode of production. Features such as specialization, personal discipline within an ethos of team spirit, and standardization of procedures, processes, and products are characteristic of both war and modern production.

Foster-Miller Inc's TALON/SWORDS robot

Whether it is more proper to say that warfare has been industrialized or that capitalist production has been militarized is an interesting and important question; yet regardless of the answer, it is evident that robots can be successfully applied both to the production process of capitalism and to the destruction process of war.

Just as its cousin the manufacturing robot has the capacity to produce more products per unit of time than a worker, a war-bot could potentially bring more, and/or "higher-quality," destruction than a soldier.

At the same time, similarly to the robot, the war-bot could accomplish the above at a lower (destruction) cost, with higher precision, without the protection of trade unions, without the necessity of health or pension benefits, and (allegedly) without any potential for even minimal disobedience. In addition, from a domestic-politics point of view, more war-bots are perceived as putting fewer of our soldiers in harm's way and hence provide an added political impetus for their fast adoption. Thus, at least in the developed capitalist world, there are today powerful military, political, and economic incentives behind the creation of large and multifaceted robotic armed forces aimed at gradually replacing most if not all of today's soldiers.

My thesis in this series of blog posts is that, despite the high media coverage of the conflicts in Iraq and Afghanistan, what we are (not) witnessing is the rise of armed military robots capable of killing humans.

Therefore I will argue that, within the history of the human species, the present conflicts in Iraq and Afghanistan may eventually come to be known as the dawn of the kill-bots: the period during which increasingly self-sufficient machines became capable of making, and started to make, increasingly autonomous decisions about killing human beings. Thus the conflicts in Iraq and Afghanistan may turn out to be a lot more than merely a chapter in the War on Terrorism or a conflict between incompatible ideologies. What those conflicts could turn out to be is but the foreword to an altogether new type of war: the conflict between man and machine, mankind and robotkind, with democracy being the first casualty.

End of Part 1 (see Part 2; Part 3; Part 4; Part 5)

Related articles by Zemanta
  • The Reuters News Kill-Bot Report (singularityblog.singularitysymposium.com)

Filed Under: Op Ed Tagged With: Artificial Intelligence, foster-miller, future technology, maars, robot, TALON
