
Regenerative Medicine on TEDMED

January 25, 2010 by Socrates

Dr. Anthony Atala from the Wake Forest Institute for Regenerative Medicine gives an amazing talk demonstrating the promise of regenerative medicine in general and of artificially grown organs in particular.

Here is the timeline description of the video as posted by Singularity Hub.

1:46 – Atala alerts the audience to the scope of the problems with organ transplants and the limited supply of donor organs.
2:28 – An interesting time-lapse video of a salamander regrowing a fore-limb.
3:55 – Biomaterials can act as a bridge to encourage tissue growth over damaged areas or gaps. The bridge length, however, is limited to about 1 cm.
5:15 – Using cells that are cultured outside the body, Atala can take an artificial scaffold and create a new organ.
6:20 – Atala uses a bio-reactor to exercise and condition muscle tissue before it is placed in a patient. It’s really amazing to watch the tissue be stimulated outside the body.
6:58 – The same techniques can be used to create engineered blood vessels.
8:30 – The Wake Forest team can take cell samples to create an engineered bladder and place it into a patient in just six to eight weeks!
9:50 – Is a bladder not impressive enough for you? Check out this heart valve that Atala’s team grew and then exercised in a reactor.
10:12 – Ok…This is just incredible. Watch as Atala shows you an artificially grown ear! And then a few seconds later, the bones of a finger!
11:03 – Here we get to a demonstration that has generated a lot of interest. Using a common inkjet printer, the Wake Forest Institute can print muscle cells into tissue. The two-chambered heart shown here is still very experimental and not for use in patients.
11:50 – Another possible means of regrowing tissue is to take donor organs and strip them of everything but collagen. This scaffold is then used as a base to grow a new organ from a patient’s own cells. We’ve seen this with stem cells and hearts, and also with a trachea.
13:20 – 90% of patients waiting for organs in the US need a kidney. The Wake Forest team has used wafers of cells to create miniature kidneys that are still in the test phase.
13:58 – Atala discusses strategies in regenerative medicine.
15:37 – The progression of developing these new medical techniques will not be easy.
16:55 – Atala ends with an example of how the (relatively) long history of regenerative medicine has already helped the lives of patients.


Filed Under: Video Tagged With: future technology, transhumanism

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 5)

December 20, 2009 by Socrates

Part 5: The Future of (Military) AI — Singularity

While certainly dangerous for humans, especially those specifically targeted by the kill-bots, arming machines is not on its own a process that can threaten the reign of homo sapiens in general. What can, though, is the fact that it is occurring within the larger confluent revolutions in Genetics, Nanotechnology and Robotics (GNR). Combined with exponential growth trends such as Moore’s law, we arguably get the right conditions for what is referred to as the Technological Singularity.
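
To make the “exponential” part concrete, here is a back-of-the-envelope sketch (my own illustration, not from the original post) of what steady doubling implies; the two-year doubling period is an assumption roughly in the spirit of Moore’s law:

```python
# Rough illustration: if capability doubles every `doubling_period` years,
# it grows by a factor of 2 ** (years / doubling_period).
def growth_factor(years, doubling_period=2.0):
    """Multiplicative growth after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

for years in (10, 20, 40):
    # 10 years -> ~32x, 20 years -> ~1,024x, 40 years -> ~1,048,576x
    print(f"{years} years -> ~{growth_factor(years):,.0f}x the computing power")
```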

Alan Turing

In 1945 Alan Turing famously predicted that computers would one day play better chess than people. Fifty years later, a computer called Deep Blue defeated the reigning world champion Garry Kasparov. Today, whether it is a mouse with a Bluetooth brain implant whose movements are directed via laptop, a monkey moving a joystick with its thoughts, humans talking to each other through their thoughts via computers, or robots with rat-brain cells for a CPU, we have already accomplished technological feats which mere years ago were considered complete science fiction.

Isn’t it plausible, then, to consider that one day, not too many decades from now, machines may not only reach human levels of intelligence but even surpass it?

(Facing the pessimists, Arthur C. Clarke famously once said that “If a … scientist says that something is possible he is almost certainly right, but if he says that it is impossible he is very probably wrong.”)

Isn’t that the potential, if not actual, direction toward which the multiple confluent and accelerating technological developments are leading us?

It is this moment – the birth of AI (machine sapiens) – that is often referred to as the Technological Singularity.

Ray Kurzweil

So, let us look at the concept of the Singularity. For some it is an overblown myth or, at best, science fiction. For others, it is the next step in evolution and the greatest scientific watershed. According to Ray Kurzweil’s definition, “It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself.”

According to Kurzweil’s argument, the Singularity is nothing short of the next step in evolution (a position often referred to as transhumanism). For many millions of years biology has indeed been (our) destiny. But if we consider our species to be a cosmological phenomenon, with its unique feature being its intelligence and not its structural makeup, then our biological past is highly unlikely to depict the nature of our future. So Kurzweil and other transhumanists see biology as nothing more than our past and technology as our future. To illuminate the radical implications of such a claim it is worth quoting two whole paragraphs from Ray Kurzweil:

“The Singularity will represent the culmination of the merger of our biological thinking and existence with our technology, resulting in a world that is still human but transcends our biological roots. […] if you wonder what will remain unequivocally human in such a world, it’s simply this quality: ours is the species that inherently seeks to extend its physical and mental reach beyond current limitations.”

“Some observers refer to this merger as creating a new “species.” But the whole idea of a species is a biological concept, and what we are doing is transcending biology. The transformation underlying the Singularity is not just another in a long line of steps in biological evolution. We are upending biological evolution altogether.”

So how does the Singularity relate to the process of arming AI?

Well, most singularitarians believe that the Technological Singularity is a probable and even highly likely event, but most of them certainly do not believe that it is inevitable. Thus there are several potential developments that could either delay or altogether prevent the event itself, or any of its potential benefits for homo sapiens. Global war is, of course, at the top of the list, and it could lead in either of those directions. In the first instance, a sufficiently large-scale non-conventional war could destroy much or all of humanity’s capacity for further technological progress. In that case the Singularity will be at least delayed or, should homo sapiens go extinct, become altogether impossible. In the second instance, if at or around the point of the Singularity there is a conflict between homo sapiens and AI (machine sapiens), then, given our complete dependence on the machines, there may be no merging between the two races (humans and machines) and humanity may forever remain trapped in biology. In turn, this may mean either our extinction or our becoming nothing more than an inferior, i.e. subservient, race next to an ever-growing machine intelligence.

It is for reasons like these that some scientists believe Ray Kurzweil is dangerously naive about the Singularity, and especially about the benevolence of AI with respect to the human race, and argue that the post-Singularity artilects (artificial intellects) will bring us not immortality but, at the least, war, if not complete oblivion. In a way this is a debate about the potential for either techno-salvation – as foreseen by Ray Kurzweil – or techno-holocaust – as predicted by his critics. Whatever the case, the more and the better the machines of the future are trained and armed, the more possible it becomes that one day they will have the capability, if not (yet) the intent, to destroy the whole of the human race.

The potential for conflict is arguably likely to increase as the Singularity approaches, and it need not necessarily be a war between man and machine; it can also be a war among humans. Looking at the current global geopolitical realities, one may argue that a global non-conventional war is unlikely, if not completely impossible. Yet for the next several decades the potential for such a war may indeed grow with the pace of technology.

First of all, it is very likely that there will be a large and accelerating proliferation of advanced weapons and military, technological and scientific capabilities all throughout the twenty-first century. Thus many more state and non-state actors will be capable of waging or, at least, starting war.

Secondly, as the Singularity approaches the breakpoint and becomes a visible possibility, there are likely to be fundamental rifts within humanity as to whether we ought to continue or stop such developments. Thus many people may push for a global neo-luddite rebellion against the machines and all those who support the Singularity. This may lead to a realignment of the whole global geopolitical reality, with both overt and covert centers of resistance. For example, one potentiality may be an alliance between radical Muslim, Christian and Judaic fundamentalists. (It may currently seem impossible, but it is people such as the former chief counter-terrorism adviser Richard A. Clarke who raise such possibilities.)

It was in the rather low-tech mid-19th century that Samuel Butler wrote his Darwin among the Machines, arguing that machines would eventually replace man as the next step in evolution. Butler concluded that:

“Our opinion is that war to the death should be instantly proclaimed against them. Every machine of every sort should be destroyed by the well-wisher of his species. Let there be no exceptions made, no quarter shown; let us at once go back to the primeval condition of the race. If it be urged that this is impossible under the present condition of human affairs, this at once proves that the mischief is already done, that our servitude has commenced in good earnest, that we have raised a race of beings whom it is beyond our power to destroy, and that we are not only enslaved but are absolutely acquiescent in our bondage.”

Unabomber FBI Sketch

Another well-known modern neo-luddite is Ted Kaczynski, aka the Unabomber. Kaczynski not only called for resistance to the rise of the machines via his manifesto (see Industrial Society and Its Future) but even started a terrorist bombing campaign to support and popularize his cause. While Samuel Butler’s argument was largely unknown or ignored by the majority of his contemporaries, and the Unabomber was called a terrorist psycho, history may take a second look at them both. It is not impossible that, as the Singularity becomes more visible – if not to the whole of humanity, then at least to the neo-luddites – Butler may come to be seen as a visionary and Kaczynski as a hero who stood up against the rise of the machines. Thus, if humanity gets divided into transhumanists and neo-luddites, or if the machines rebel against humanity, conflict may be impossible to avoid.

It may be ironic that Karel Čapek, who first used the term robot, ended his play R.U.R. with the demise of humanity and robots taking over the world. The good news, however, is that this possibility is brought about by our own ingenuity and at our own pace. Hence the technology we create doesn’t have to be nihilistic. Like the Terminator, it may be our exterminator or our savior, our end or a new beginning…

This blog does not try to address the issue of arming AI exhaustively, or to provide solutions or policy recommendations. What it attempts to do is put forward an argument about the issues, the context and the stakes within which the above process takes place. Thus it has been successful if, after reading it, one is at least willing to consider the possibility that the crude and lightly armed robots currently tested in the conflicts in Iraq and Afghanistan are not simply among the latest tools in the large US military inventory, for what they are today is not what they may turn out to be tomorrow.

Today we are witnessing the dawn of the kill-bots. How high, and under what conditions, the robot star will rise tomorrow is up to us to consider…

The End (see Part 1; Part 2; Part 3; Part 4)


Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence, cyborg, Future, future technology, posthuman, Ray Kurzweil, Raymond Kurzweil, singularity, Technological Singularity, transhumanism

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 4)

December 18, 2009 by Socrates

Part 4: Military Turing Test — Can robots commit war crimes?

Now that we have identified the trend of moving military robots from their current largely secondary and supportive role to the forefront of military action, where they become primary, direct participants or, as Foster-Miller proudly calls its MAARS bots, “war fighters,” we also have to recognize the profound implications such a process will have, not only for the future of warfare but potentially for the future of mankind. In order to do so we will have to briefly consider what for now are assumed to be broad philosophical questions but which, as robot technology advances and becomes more prevalent, will eventually become highly political, legal and ethical issues:

Can robots be intelligent?

Can robots have a conscience?

Can robots commit war crimes?


In 1950 Alan Turing introduced what he believed was a practical test for computer intelligence, now commonly known as the Turing Test. The Turing Test is an interactive test involving three participants: a computer, a human interrogator and a human participant. It is a blind test in which the interrogator asks questions via keyboard and receives answers via a display screen, on the basis of which he or she has to determine which answers come from the computer and which from the human subject. According to Turing, if a statistically sufficient number of different people play the roles of interrogator and human subject, and if a sufficient proportion of the interrogators are unable to distinguish the computer from the human being, then the computer is considered an intelligent, thinking entity – an AI.
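
To make the statistical logic of the test concrete, here is a toy harness (my own sketch, not Turing’s formal protocol): the machine “passes” when judges can do no better than chance at picking it out of a blind pairing:

```python
import random

def turing_test_pass_rate(judge, num_trials=1000):
    """Toy harness for the blind test described above. In each trial the
    judge faces two anonymous respondents, one human and one machine,
    presented in random order, and must point at the suspected machine."""
    correct = 0
    for _ in range(num_trials):
        respondents = ["human", "machine"]
        random.shuffle(respondents)   # blind the interrogator to the order
        guess = judge()               # judge returns a position: 0 or 1
        if respondents[guess] == "machine":
            correct += 1
    return correct / num_trials       # ~0.5 means indistinguishable

# A judge reduced to pure guessing identifies the machine only about half
# the time -- which is exactly the "pass" condition Turing proposed:
print(turing_test_pass_rate(lambda: random.randint(0, 1)))
```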

If we agree that AI is very likely to have substantial military applications, then there is clearly a need to develop an even more sophisticated and very practical military Turing Test to ensure that autonomous armed robots abide by the rules of engagement and the laws of war. On the other hand, while it is an important question whether a robot or a computer is (or will ever be) a self-conscious, thinking entity, it is not necessarily a requirement for considering the issue of war crimes. Given that these MAARS-type bots have literally killer applications, the potential for a unit of them going “nuts on a shooting spree” or getting “hacked” ought to be considered carefully, and hopefully well in advance of any such potentialities.

Robots capable of shooting on their own are also a hotly debated legal issue. According to Gordon Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Virginia, “The lawyers tell me there are no prohibitions against robots making life-or-death decisions.” When asked about the potential for war crimes, Johnson replied: “I have been asked what happens if the robot destroys a school bus rather than a tank parked nearby. We will not entrust a robot with that decision until we are confident they can make it.” (Thus the decision really is not “if” robots will be entrusted with such decisions but “when.” Needless to say, historically, government confidence in a project is hardly a guarantee that things will not go wrong.)

On the other hand, in complete opposition to Johnson’s claims, according to barrister and engineer Chris Elliot it is currently illegal for any state to deploy a fully autonomous system. Elliot claims that “Weapons intrinsically incapable of distinguishing between civilian and military targets are illegal,” and only when war robots can successfully pass a “military Turing Test” could they be legally used. At that point any autonomous system ought to be no worse than a human at taking military decisions about legitimate targets in any potential engagement. Thus, in contrast to the original Turing Test, this test would use decisions about legitimate targets, and Elliot believes that “Unless we reach that point, we are unable to [legally] deploy autonomous systems. Legality is a major barrier.”
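
Elliot’s criterion suggests a concrete, if simplified, benchmark: score the machine against a human baseline on labeled engagement scenarios. The sketch below is my own illustration of that logic; the function and field names are assumptions, not any real legal standard:

```python
def passes_military_turing_test(robot_decide, human_error_rate, scenarios):
    """Hypothetical benchmark in the spirit of Elliot's criterion: legal
    deployment would require the system to be *no worse than a human* at
    judging legitimate targets.

    scenarios: list of (situation, is_legitimate_target) pairs."""
    errors = sum(1 for situation, legitimate in scenarios
                 if robot_decide(situation) != legitimate)
    return errors / len(scenarios) <= human_error_rate

# Illustrative (made-up) scenarios: armor present vs. civilians present.
scenarios = [({"armor": True,  "civilians": False}, True),
             ({"armor": False, "civilians": True},  False)]
decide = lambda s: s["armor"] and not s["civilians"]
print(passes_military_turing_test(decide, human_error_rate=0.05,
                                  scenarios=scenarios))   # True
```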

The gravity of both the practical and legal issues surrounding the kill-bots is not overlooked by the people directly engaged in their production, adoption and deployment in the field. In 2008, during his keynote address at the fifth annual RoboBusiness Conference in Pittsburgh, Kevin Fahey, the US Army’s program executive officer for ground combat systems, said: “Armed robots haven’t yet been deployed because of the extensive testing involved […] When weapons are added to the equation, the robots must be fail-safe because if there’s an accident, they’ll be immediately relegated to the drawing board and may not see deployment again for another 10 years. […] You’ve got to do it right.” (Note again that any such potential delays put into question “when” and not “if” such armed robots will be used on a large scale.)

Armed SWORDS robot modifications

While for anyone who has seen and read science fiction classics such as the Terminator and Matrix series the risks of using armed robots may be apparent, we should not underestimate the variety of political, military, economic and even ethical reasons supporting their usage. Some robotics researchers believe that robots could make the perfect warrior.

For example, according to Ronald Arkin, a robotics researcher at Georgia Tech, robots could make even more ethical soldiers than humans. Dr. Arkin is working with the Department of Defense to program ethics, including the Geneva Convention rules, into the next generation of battle robots. He says that robots will act more ethically than humans because they have no desire for self-preservation, no emotions, and no fear of disobeying their commanders’ orders in case those orders are illegitimate or in conflict with the laws of war. In addition, Dr. Arkin supports the somewhat ironic claim that robots will act more humanely than people, because stress and battle fatigue do not affect their judgment the way they affect a soldier’s. It is for those reasons that Dr. Arkin is developing a set of rules of engagement for battlefield robots to ensure that their use of lethal force follows the rules of ethics and the laws of war. In other words, he is indeed working on the creation of an artificial conscience with the potential to pass a future military Turing Test. Further advantages in support of Dr. Arkin’s view are expressed in Gordon Johnson’s observation that:

“A robot can shoot second. […] Unlike its human counterparts, the armed robot does not require food, clothing, training, motivation or a pension. […] They don’t get hungry […] (t)hey’re not afraid. They don’t forget their orders. They don’t care if the guy next to them has just been shot. Will they do a better job than humans? Yes.”
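
At its core, such an “artificial conscience” amounts to a set of hard constraints that must all be satisfied before lethal force is permitted. Here is a deliberately toy sketch of that idea; the fields, threshold and rule set below are my illustrative assumptions, not Arkin’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Engagement:
    target_is_combatant: bool    # discrimination principle (Geneva rules)
    collateral_risk: float       # estimated 0.0-1.0 risk to civilians
    force_is_proportional: bool  # proportionality to military necessity
    order_is_lawful: bool        # manifestly unlawful orders are refused

def may_fire(e: Engagement, max_collateral_risk: float = 0.1) -> bool:
    """Every constraint must hold; any single failure vetoes lethal force."""
    return (e.target_is_combatant
            and e.collateral_risk <= max_collateral_risk
            and e.force_is_proportional
            and e.order_is_lawful)

# Johnson's school-bus example fails the discrimination check outright:
print(may_fire(Engagement(target_is_combatant=False, collateral_risk=0.9,
                          force_is_proportional=True,
                          order_is_lawful=True)))   # False
```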

In addition to the economic and military reasons behind the adoption of robot soldiers there is also the issue often referred to as the “politics of body bags.”

Using robots will make waging war less liable to domestic politics and hence will make it easier for both political and military leaders to gather domestic support for any conflict fought on foreign soil. As a prescient article in the Economist noted, “Nobody mourns a robot.” On the other hand, the concern raised by that possibility is that it may make war-fighting more likely, given that there is less pressure on leaders from mounting casualties. At any rate, the issues surrounding the usage of kill-bots have already moved beyond the merely theoretical or legal. Their practical importance comes to light with some of the rare but worryingly conflicting reports about the deployment of the SWORDS robots in Iraq and Afghanistan.

The first concern-raising report came from the blog column of one of the writers for Popular Mechanics. In it, Erik Sofge noted that in 2007 three armed ground bots were deployed to Iraq. All three units were the SWORDS model produced by Foster-Miller Inc. Interestingly enough, all three units were almost immediately pulled out. According to Sofge, when Kevin Fahey (the Army’s Program Executive Officer for Ground Forces) was asked about the SWORDS’ pull-out, he tried to give a vague answer while stressing that the robots never opened any unauthorized fire and no humans were hurt. Pressed to elaborate, Fahey said it was his understanding that “the gun [of one of the robots] started moving when it was not intended to move.” That is to say, the weapon of the SWORDS robot swung around randomly or in the wrong direction. At the time, Sofge pointed out that no specific reason for the withdrawal was given either by the military or by Foster-Miller Inc.

A similar though even stronger report came from Jason Mick, who was also present at the RoboBusiness Conference in Pittsburgh in 2008. In his blog on the DailyTech website, Mick claimed that “First generation war-bots deployed in Iraq recalled after a wave of disobedience against their human operators.”

It was just a few days later that Foster-Miller published the following statement:

“Contrary to what you may have read on other web sites, three SWORDS robots are still deployed in Iraq and have been there for more than a year of uninterrupted service. There have been no instances of uncommanded or unexpected movements by SWORDS robots during this period, whether in-theater or elsewhere.
[…] TALON Robot Operations has never “refused to comment” when asked about SWORDS. For the safety of our war fighters and due to the dictates of operational security, sometimes our only comment is, “We are unable to comment on operational details.”

Several days after that, Jason Mick also published a correction, though it seems at least some of the details are still unclear and there is a persistent disharmony between statements coming from Foster-Miller and the DoD. For example, Foster-Miller spokeswoman Cynthia Black claimed that “The whole thing is an urban legend,” yet Kevin Fahey was reported as saying the robots recently did something “very bad.”

Two other similarly troubling cases pertaining to the Predator and Reaper UAVs were recently reported in the media.

The first one, “Air Force Shoots Down Runaway Drone over Afghanistan,” reported that:

“A drone pilot’s nightmare came true when operators lost control of an armed MQ-9 Reaper flying a combat mission over Afghanistan on Sunday. That led a manned U.S. aircraft to shoot down the unresponsive drone before it flew beyond the edge of Afghanistan airspace.”

The report goes on to point out that this is not the only accident of its kind and that there is certainly the potential for some of these drones to go “rogue.” Judging by the reported accidents, it is clear that this is not only a potential but an actual problem.

While on the topic of unmanned drones going “rogue,” a recent news report disclosed that Iraqi insurgents “hacked into video feeds from US drones.”

Thus going rogue is not only an accident-related issue but also a network security one. Allegedly, the insurgents used $26 Russian software to tap into the video streamed live by the drones. What is even more amazing is that the video feed was not even encrypted. (Can we really blame the US Air Force for not encrypting their killer-drone video feeds?! I mean, there is still a whole lot of home laptop users without an access password on their home WiFi connection?! 😉 So it is only natural that the US Air Force would be no different, right?)

And what about the actual flight and weapons control signal? How “secure” is it? Can we really be certain that some terrorist cannot “hack” into the control systems and throw our stones onto our own heads? (Hm, it isn’t stones we are talking about – it is Hellfire missiles that the drones are armed with…)
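
For illustration only, here is a minimal, standard-library sketch of the kind of safeguard whose absence these reports describe: time-stamping and authenticating each control command so that spoofed, tampered or replayed packets are simply ignored. The packet format and names are my assumptions; a real weapons datalink would rely on vetted, hardened protocols rather than this toy:

```python
import hashlib
import hmac
import os
import time

SECRET_KEY = os.urandom(32)   # shared in advance by operator and aircraft

def sign_command(command: bytes) -> bytes:
    """Attach a timestamp (to resist simple replay) and an HMAC-SHA256 tag.
    Commands are assumed not to contain the b'|' separator."""
    timestamp = str(time.time()).encode()
    tag = hmac.new(SECRET_KEY, timestamp + b"|" + command,
                   hashlib.sha256).digest()
    return timestamp + b"|" + command + b"|" + tag

def verify_command(packet: bytes, max_age_s: float = 2.0):
    """Return the command if the tag checks out and it is fresh, else None."""
    timestamp, command, tag = packet.split(b"|", 2)
    expected = hmac.new(SECRET_KEY, timestamp + b"|" + command,
                        hashlib.sha256).digest()
    if hmac.compare_digest(tag, expected) and \
            time.time() - float(timestamp) <= max_age_s:
        return command
    return None   # unauthenticated, tampered or stale -> ignored

packet = sign_command(b"RETURN_TO_BASE")
print(verify_command(packet))                              # b'RETURN_TO_BASE'
print(verify_command(packet.replace(b"BASE", b"TARGET")))  # None (tampered)
```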

As we can see, there are numerous legal, political, ethical and practical issues surrounding the deployment, usage, command and control, safety and development of kill-bots. Obviously it may take some time before the dust around the above cases settles and the accuracy of the reports is confirmed or denied beyond any doubt. Yet Jason Mick’s point in his original report still stands, whatever the specific particularities of each case. Said Mick:

“Surely in the meantime these developments will trigger plenty of heated debate about whether it is wise to deploy increasingly sophisticated robots onto future battlefields, especially autonomous ones.  The key question, despite all the testing and development effort possible, is it truly possible to entirely rule out the chance of the robot turning on its human controllers?”

End of Part 4 (see Part 1; Part 2; Part 3 and Part 5)

Filed Under: Op Ed Tagged With: Artificial Intelligence, foster-miller, future technology, maars, robot, TALON, Turing test, Unmanned aerial vehicle

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 2)

December 17, 2009 by Socrates

Part 2: The Past — Robot Etymology, Brief History and Military Classification

Contrary to what popular intuition may dictate, the idea of thinking machines or artificial beings has existed for millennia. Some of the first examples can be found in the ancient Greek myths and legends, such as the bronze giant Talos of Crete and the golden mechanical servants of Hephaestus.

Artificial mechanical beings or, as we now commonly call them, robots have appeared in numerous forms and functions in modern popular culture: as a servant – R2D2 in Star Wars, as a fellow comrade – Data in Star Trek, and as both an exterminator and a savior – in the Terminator series. Of course, it is not hard to notice that all of the above cases are entirely fictional, for they are either ancient Greek mythology or modern Hollywood science fiction. Yet what is science fiction one day may well turn out to be reality the next.

Roomba (CC) Larry D. Moore

Today, production lines for virtually any large-scale commodity are dominated by robots that perform the majority of operations within the production process and play a vital part in our globalized capitalist mode of production. It is no surprise, then, that robots have been migrating from the production lines into every other aspect of our lives. According to Dr. Rodney Brooks, CTO and co-founder of iRobot Corporation, in 2002 there were almost no robots in people’s homes; by 2007, in just five years, his company had produced and sold over 2.5 million home clean-bots. From the artificial baby-seal robot Paro, through the iRobot Roomba/Scooba vacuum-bots and the home-made vigilante Bum-Bot in Atlanta, to the deadly Predator and Reaper drones, there seems to be no human activity that will not soon be impacted by robots to one degree or another. In fact, the South Korean government aims to put a robot in every house there by 2015 or 2020.

So before we move on to a brief timeline of robotic development, let us look at the etymology of the term. The word robot was introduced in the 1920s by the Czech writer Karel Čapek in his play R.U.R. (Rossum’s Universal Robots). The play is set in an island factory for “artificial people” that Čapek called robots, and those robots were manufactured so well that they could be mistaken for real human beings. Čapek’s robots could think autonomously for themselves, yet, at least for a while, they seemed happy serving their human masters.

Now that we have a basic understanding of the term robot, let us look at a brief timeline of some of the major events in the history of robotics, and especially of military robotics. As mentioned earlier, the idea of robots can be traced back some 3,000 years to the ancient Greek myths and legends. Arguably, around the same time similar ideas appeared in ancient Egyptian, Judaic and Chinese writings.

For example, in ancient China, the Lie Zi text describes an encounter between King Mu of Zhou (1023-957 BC) and the “artificer” (what we would today call a mechanical engineer) Yan Shi. Yan Shi created a human-sized mechanical figure which was allegedly able to walk, dance, sing and even flirt with the court ladies. Later examples can be found in ancient Greece as long ago as the 4th century BC, when the Greek mathematician Archytas of Tarentum postulated a mechanical bird he called “the Pigeon,” which was propelled by steam. Ctesibius and Hero of Alexandria are two other ancient Greek inventors who allegedly created several automatons, at least one of which was supposedly able to speak.

In the Middle Ages it was the Muslim world where one could find the most sophisticated and impressive automatons. In his Book of Stones the alchemist Jabir ibn Hayyan published recipes for creating artificial snakes, scorpions and even humans. Another Muslim inventor was Al-Jazari (1136-1206), who designed and constructed a number of automatic machines, most notably what is arguably the first programmable humanoid robot, in 1206 – a boat with four automated musicians that played music to entertain guests at royal drinking parties.

In the West, one of the first recorded designs of a humanoid robot was made by Leonardo da Vinci (1452-1519) around 1495. Da Vinci left detailed drawings of a mechanical knight, but it is not known whether he ever attempted to actually build his robot. Later, in 1738, Jacques de Vaucanson created a mechanical duck that was able to eat and digest grain, flap its wings, and even excrete. In the East, in 19th-century Japan, the brilliant craftsman Hisashige Tanaka created an array of extremely complex mechanical toys, some of which were capable of serving tea, firing arrows, or even painting. It has to be noted, though, that even though such automatons were the closest things to robots, and while they may have looked humanoid and their movements were complex, they were not capable of adapting to their environment, readjusting their movement, self-control or decision making. Arguably, progress on those fronts began in the United States in 1898, when Nikola Tesla publicly demonstrated a radio-controlled boat – probably the first remotely operated vehicle (ROV). Tesla hoped to develop his ROV into a wireless torpedo to be used as a weapon by the US Navy but, despite its impressiveness, it was not adopted.

British soldiers with captured German Goliath remote-controlled demolition vehicles (Battle of Normandy, 1944)

The first unmanned ground vehicle (UGV) was the German Goliath used in WWII. While in essence it was little more than a tracked mine that looked like a small tank without the turret, it was mobile, remotely operated and packed quite a punch, so the Wehrmacht used it to clear mines and bunkers. On the Eastern Front, the Russian teletanks were among the first armed UGVs, for they carried machine guns, flamethrowers, smoke canisters and explosive charges.

It proved easier for engineers to build unmanned vehicles that go through the air than unmanned vehicles that move on the ground. As far back as the late 1930s, the U.S. Navy and Air Force used unmanned aerial vehicles (UAVs) as target drones. Thanks to the success and reliability of those drones, the military began looking for other ways to use the planes, and reconnaissance was an obvious alternative. By the 1960s, and especially in the 1970s, American UAVs collected intelligence on targets in Vietnam, China and North Korea. Thus at least until the late 1980s UGVs were far behind UAVs in terms of development, but by the early 1990s they began catching up. Unmanned ground vehicles such as the Robotic Ranger (an armed mobile platform) and the ROBAT (a modified M-60 tank meant for mine clearing) were funded by the US government and tested by Foster-Miller Inc. Those ground robots, together with the UAVs and the more recent UUVs (unmanned underwater vehicles), laid the foundation for the future expansion of robots within the US military. Each of these three types of robot technology is designed for a specific realm of the battlefield and will take an increasingly important role in US military planning, development and deployment in the twenty-first century.

End of Part 2 (see Part 1; Part 3; Part 4; Part 5)

Filed Under: Op Ed Tagged With: Artificial Intelligence, future technology, robot

Dawn of the Kill-Bots: the Conflicts in Iraq and Afghanistan and the Arming of AI (part 1)

December 16, 2009 by Socrates

Warfare, while seemingly the opposite of large-scale industrial production insofar as it is usually perceived to be large-scale destruction, exhibits most if not all of the main characteristics of the capitalist mode of production. Features such as specialization, personal discipline within an ethos of team spirit, and standardization of procedures, processes and products are characteristic of both war and modern production.

Foster-Miller Inc’s TALON/SWORDS robot

Whether it is more proper to say that warfare has been industrialized or that capitalist production has been militarized is an interesting and important question. Yet regardless of the answer, it is evident that robots can be successfully applied both to the production process of capitalism and to the destruction process of war.

Just as its cousin the manufacturing robot can produce more products per unit of time than a worker, a war-bot could potentially deliver more and/or “higher quality” destruction than a soldier.

At the same time, similarly to the manufacturing robot, the war-bot could accomplish the above at a lower (destruction) cost, with higher precision, without the protection of trade unions, without the necessity of health or pension benefits, and (allegedly) without any potential for even minimal disobedience. In addition, from a domestic politics point of view, more war-bots are perceived as putting fewer (of our) soldiers in harm’s way and hence provide an added political impetus behind their fast adoption. Thus, at least in the developed capitalist world, today there are powerful military, political and economic incentives behind the creation of large and multifaceted robotic armed forces aimed at gradually replacing most if not all of today’s soldiers.

My thesis in this series of blog posts is that, despite the high media coverage of the conflicts in Iraq and Afghanistan, what we are (not) witnessing is the rise of armed military robots capable of killing humans.

Therefore I will argue that, within the history of the human species, the present conflicts in Iraq and Afghanistan may eventually come to be known as the dawn of the kill-bots – the period during which increasingly self-sufficient machines became capable of, and started making, increasingly autonomous decisions about killing human beings. Thus the conflicts in Iraq and Afghanistan may turn out to be much more than merely a chapter in the War on Terrorism or a conflict between incompatible ideologies. What those conflicts could turn out to be is but the foreword to an altogether new type of war – the conflict between man and machine, mankind and robotkind, with democracy being the first casualty.

End of Part 1 (see Part 2; Part 3; Part 4; Part 5)


Filed Under: Op Ed Tagged With: Artificial Intelligence, foster-miller, future technology, maars, robot, TALON
