
Hugo de Garis on Singularity 1 on 1: Are We Building Gods or Terminators?

Hugo de Garis is the past director of the Artificial Brain Lab (ABL) at Xiamen University in China. Best known for his doomsday book The Artilect War, Dr. de Garis has always been on my wish-list of future guests on Singularity 1 on 1. Finally, a few weeks ago I managed to catch him for a 90-minute interview via Skype.

During our discussion with Dr. de Garis we cover a wide variety of topics such as: how and why he got interested in artificial intelligence; Moore’s Law and the laws of physics; the hardware and software requirements for artificial intelligence; why cutting-edge experts often miss the writing on the wall; emergent intelligence and other approaches to AI; Dr. Henry Markram‘s Blue Brain Project; the stakes in building AI and his concepts of artilects, Cosmists and Terrans; cosmology, the Fermi Paradox and the Drake equation; the advance of robotics and the political, ethical, legal and existential implications thereof; species dominance as the major issue of the 21st century; the technological singularity and our chances of surviving it in the context of a fast or slow take-off.
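
For reference, the Drake equation mentioned above estimates the number $N$ of detectable civilizations in our galaxy as a product of seven factors:

$$N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L$$

where $R_{*}$ is the average rate of star formation, $f_p$ the fraction of stars with planets, $n_e$ the number of potentially habitable planets per such star, $f_l$, $f_i$ and $f_c$ the fractions of those that go on to develop life, intelligence and detectable communication, and $L$ the average length of time such civilizations remain detectable.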

(As always you can listen to or download the audio file above or scroll down and watch the video interview in full.)

 

Who is Hugo de Garis?

Prof. Hugo de Garis is 64 and has lived in 7 countries (Australia, England, Holland, Belgium, Japan, the US and China). He received a PhD in Artificial Life and Artificial Intelligence from Brussels University in 1991. He was formerly director of the Artificial Brain Lab (ABL) at Xiamen University in China, where he was building China’s first artificial brain by evolving large numbers of neural net modules on supercomputers. He guest edited, with Ben Goertzel, the planet’s first special issue of an academic journal on artificial brains, and is currently writing a book, Artificial Brains: An Evolved Neural Net Module Approach, for World Scientific.

He is probably best known for his concept of the Artilect War, in which he predicts that a sizable proportion of humanity will not accept being cyborged and will not permit the risk of human extinction at the hands of advanced cyborgs and artilects. Such people he labels Terrans, who he claims will go to war against the Cosmists (i.e. people in favor of building artilects) and the Cyborgists (who want to become artilect gods themselves). This artilect war will take place in the second half of the 21st century, fought with 21st-century and probably nanotech-based weapons, and may kill billions of people – Gigadeath.

Hugo de Garis is the author of two books:

The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines

and

Multis and Monos: What the Multicultured Can Teach the Monocultured Towards the Creation of a Global State

Since his retirement in 2010, Hugo has been “ARCing” (After-Retirement Careering), taking up a new (actually old) career of intensive study of PhD-level pure math and math physics. He makes home videos of his lectures on these topics and puts them on YouTube so that the world can be given a comprehensive education in graduate-level pure math, math physics and computer theory. He is doing at the high end what Salman Khan of Khan Academy is doing at the low end, i.e. teaching people for free.

Dr. de Garis is also very interested in Globism – the ideology in favor of the creation of a global state. He sees the annual doubling of internet speeds having a huge impact on the growth of a global language, global cultural homogenization, and the formation of economic and political blocs, pushing for the creation of a fully democratic global state (world government) he calls “Globa”.

Prof. de Garis is the technical consultant for a major Hollywood movie on the theme of species dominance coming out in 2013, along with Spielberg’s upcoming Robopocalypse. Thus he believes that by 2014 the issue of species dominance should be mainstream.

His website is http://profhugodegaris.wordpress.com


  • http://www.facebook.com/martin.andersen3 Martin Andersen

    Hugo de Garis’ talk about a war between Terrans and Cosmists is still wrong.

    1. As pointed out by Kurzweil, we don’t have two groups equal in strength. We have the Cosmists, who are vastly more capable due to their technology, so the Terrans will have no chance in a war.

    2. The scenario where humanity can decide whether or not to build an AGI is wrong. Today we have AI projects all over the world. Just recently Google talked about the neural network they have built, which can learn by itself from images taken from the net. We have the IBM Watson machine, Apple’s Siri, Numenta, the Blue Brain Project, etc.

    There’s no global control over these projects, so one day a company will announce that it has created a human-level AI in the lab, and then it will be too late to stop it.

  • http://www.facebook.com/martin.andersen3 Martin Andersen

    I think your example of the Vietnam War is incorrect. The USA was “beaten” in a way, but it could easily have won by, for example, nuking the country. The Cosmists, if severely threatened by the Terrans, would use the force necessary to wipe them out.

  • https://www.singularityweblog.com/ Socrates

    I think that resistance to AI depends on how long it takes to develop AI and how much public awareness builds in the meantime. De Garis argues that movies such as Robopocalypse and many others will raise awareness. Then we might have the first AIs passing Turing tests and perhaps arguing for their own AI rights and citizenship. It is events like those that might galvanize the Terrans. So, I’d say that if we have a fast take-off you might be right. But if we have a slow take-off then conflict is possible…

  • https://www.singularityweblog.com/ Socrates

    There were a few people pushing for nuking the Viet Cong. This, however, was an admission of sorts that the US had failed to accomplish its goals, as well as a sign of desperation. I don’t see how the US would have won anything by turning Vietnam into a desert… On the contrary, I would argue that it would have lost even more had it committed such foolishly selfish and disproportionate acts of nuclear violence.

  • http://www.facebook.com/martin.andersen3 Martin Andersen

    If the US had destroyed North Vietnam, it would have protected South Vietnam, which was part of the reason it entered the war. That is some kind of victory. Of course it didn’t do that, due to the brutality involved, but looking at it from a strictly military point of view, the US was not beaten.

    Maybe one could use Israel as an example. Israel was almost beaten in the surprise attack of 1973. I’m sure they were ready to use nuclear bombs if necessary. The same goes for the Cosmists: if extinction were imminent, they would use all means possible.

  • http://www.facebook.com/martin.andersen3 Martin Andersen

    I forgot to say thanks for another great interview :-) I’m glad I have the time at the moment to watch your broadcasts. I hope you are feeling better after your accident.

    Ok, so let’s say we have a slow take-off. How would you stop some company or small group of people from designing a human-level AI? Remember that at that time, let’s say 2030, a computer that can simulate a human brain would be very cheap to buy, and data about neuron connections and how the human brain works will be all over the internet. You would have to globally ban technology to have a fighting chance of stopping AGI. That would mean a global dictatorship, but that dictatorship would need technology to control and monitor the world population, so the risk would still be there – and what miserable lives for the billions of people trapped in it. I would rather take my chances with AGI.

  • CM Stewart

    Another great interview, thank you Nikola and Prof. de Garis!

    I look at the state of “humanity” and the way humans treat each other, and I am 100% willing to gamble and support the potential Cosmists. I don’t see how becoming a cyborg would be a rejection of positive humanity; rather, I’d bet it’s a necessary and inevitable step away from negative humanity.

    From profhugodegaris.wordpress.com: “I’m now thinking more and more seriously about SIPI (Search for Infra Particle Intelligence), i.e. trying to think about ways to communicate with whole civilizations that live in elementary particles. At first SIPI seemed like a joke, but the more involved I became with devising Xtechs, the more unavoidable the notion of SIPI became. I think SIPI is the answer to Fermi’s famous paradox (if hyperintelligent life is commonplace in the universe, “where are they?”), namely that they are everywhere around us, but with human levels of technology, they are unobservable, being way too tiny for detection.”

  • Pingback: Top 10 Reasons We Should Fear The Singularity

  • Gabor Solti

    Great interview, thank you.

    Many futurist predictions will probably turn out to be wrong because they don’t consider the development of certain fields and the interactions of those developments with other fields.

    I think that Hugo de Garis’ artilect war scenario is wrong, because he completely ignores the future progress of the user interface.

  • Gabor Solti

    I just finished listening to the interview, and Nikola raised some good questions. Amazing interview!

    I don’t think that Hugo de Garis will ever be able to provoke this discussion in this divisive form. New technologies are gradually accepted into society: no one fears smartphones the way people feared new technologies in the past, even though people with smartphones are more transhuman than ever before, and smartphones are more accepted than any previous new technology.

    My prophecy is that human values will not be challenged but, on the contrary, clarified and made to triumph by the next generation of people, who will very gradually connect themselves to the superintelligence of the machines and become the self-improving intelligence themselves. And everyone will see that. Traditional humans will not go extinct; quite the opposite: they will be able to live out their pure humanness more fully with the help of superhuman wisdom and spirituality.

    I just had to get that out :-) because the pessimism I’ve been listening to sounds very wrong and absurd to me.

  • Lenard Bartha

    Great interview; I really enjoy Prof. de Garis’s talks and can’t wait to read his next book when it comes out.

    Something very interesting about IBM: even though they are the ones who created Watson, IBM also has some amazing projects on emulating the brain and building neuro-chips, so I believe they do see “the writing on the wall.”

    As far as I know, Michio Kaku is not against the concept of the Singularity or robotic sentient beings; he is skeptical but he does keep an open mind. I watched an episode from his Science of the Future series where he discusses these ideas very openly.

  • Carl Books

    Awesome 1 on 1. Really enjoyed it. I can totally understand why movies are being made about this issue; I sense an apocalyptic thriller on the way.
    I get this impression of cold, clinical absolutes with Hugo – a sense that there will only be these outcomes and that’s it – but I just don’t see it being that way. I see people living out their lives not really paying that much attention to all of the amazing stuff that’s coming. I mean, just look at mobile phones: a device in your pocket with which you can talk to anyone in the world, and people don’t bat an eyelid, yet that in itself is huge. I feel that all of the tech that’s coming, all those wonderful things, will go the same way. Most people will just adopt them, then carry on with eyelids unbatted. I suspect it will be a lot more clouded and a lot more diverse than the clinical absolutes Hugo depicts.

  • http://www.LimitlessMindset.com/ Jonathan Roseland

    Fascinating episode! Hugo certainly does make a compelling case that we will, within our lifetimes, find ourselves in this terrible artilect war. I thought the host did a good job of presenting strong arguments against it though…

  • http://www.facebook.com/people/Eric-Davis/1801461533 Eric Davis

    The Terrans will also have the capacity to use nuclear bombs if extinction is imminent. Hence, I believe Einstein was right: World War IV will be fought with sticks and stones. I believe that if superhuman AI is created, nuclear armageddon will follow. Then, when our descendants, if there are any, rebuild, they will make wiser choices.

  • http://www.facebook.com/people/Eric-Davis/1801461533 Eric Davis

    Mark 1 humans have nukes and already know how to build more. As technology advances, the construction of nuclear weapons will get cheaper. Already a nuke costs about as much as an upper-middle-class house. The technology will proliferate and become cheaper with time, in part because of advances in computer technology. The very technology that will make superhuman AI possible will give every 18-year-old a desktop computer capable of designing nukes. I don’t think nation-states will survive the next 100 years regardless.

  • Paul Sathis

    WOW. That one is a keeper, Socrates. I saw him in Kurzweil’s movie, but that was a really great expanded look at his ideas.

  • why06

    This whole war idea is crazy. It would be a civil war, not a war between nations. You can’t just go wiping out the planet over certain beliefs. This will not be a military war, I believe, but one of information and rights.

    And I might also add that these artilects would be the guardians of nations’ cyber networks. Digital warfare will become so powerful that there will be no need to nuke anybody. A powerful artilect could hack a nation’s entire power grid and bring it to its knees.

    De Garis assumes a global state, but it’s much more likely that individual states will continue to exist. In that case an artilect arms race between nations is much more likely, with people trying to figure out what’s wrong and right along the way – sort of like with drones today.

    And so I say the key conflict will be between government and people over the militarization of these artilects. There is a war coming, but it is a war against a system of government that is quickly becoming outdated in a rapidly advancing society.

    Do you see Congress being able to figure out what to do about rapidly advancing robotic, digital and biochemical terrorism, while encountering an oil crisis, a global population disaster, and the decentralization of power due to the internet, 3D printing, virtual schools, etc., all the while dealing with social issues such as robot rights and mass technological unemployment? I don’t see it. We can already see the huge dinosaur economy sputtering.

    There is no way!
    These governments are slow and bulky, and these epic challenges are arising faster and faster. In the years de Garis describes, these issues will be coming so quickly, and will have such drastic outcomes for all of humanity, that there’s no way Congress will come to decisions in time, and there’s no way we could trust the fate of the world to one man like a president.

    A new government needs to make use of the internet and provide as much democracy as possible to all people, while still reaching decisions quickly. The digital polling systems they flaunt on the news all the time can do this. 66% of people say we need to end the wars? Boom, it’s done. Instantly. This is how fast problems need to be solved.

    Another problem is that he equates cyborgs with artilects. I don’t think it matters if 99.9% of your thinking capacity is artificial and 0.1% is human: if that 99.9% thinks and acts like “you,” a human, I equate it to having a bigger brain, not a functionally different existence or a fundamentally different thought pattern. Cyborgs are just advanced humans… that’s it.

    The real problem will be the military AIs: machines that were designed not to be like us but just to do menial tasks, which then start wanting a fuller existence. THAT will be a real civil war, because they are NOT human. You might say that artilects designed to be like humans should have full rights, and though it might be difficult, I think people would eventually agree with that. But robots designed to never feel, or love, or question, spontaneously creating high-level consciousness? That will be scary, because they would be their own species, and giving them rights means granting something alien complete equality with human beings. So the real war will be over this: is it okay for humans to have an equal that is different from us in every way? Would humans or cyborgs ever want to equate themselves with an emotionless, dead being that does not have a desire to live or any of our desires at all? It would have its own goals. Humanity’s goal is generally a heaven – a place of eternal happiness. But its goal may not involve happiness or sadness, only the completion of whatever its desire for continuation might be.

    If any sort of artilect war were to happen, I believe that is how it would start.

  • Voo de Mar

    Very interesting interview, Socrates, like all the others you have conducted! It’s very good that you’re kind towards your interlocutors, but you’re far from being an ass-kisser! Some of your questions are tricky for them.
    Professor de Garis is a neologist as well :) I’ve just read his papers on “masculism” and “culture bashing” :)

  • Pingback: The Very Best of Singularity Weblog in 2012

  • Pingback: Ramez Naam on Singularity 1 on 1: We Are The Ones Who Create The Future

  • Pingback: PostHuman: Cole Drumb’s Sci Fi Short is Neuromancer on Steroids

  • Pingback: Roman Yampolskiy on Singularity 1 on 1: Every Technology Has Both Negative and Positive Effects!

  • Bunkey

    I appreciate this article is 2 years old now, but this is my first time exploring the site.

    I feel I must question why the chance of human or ‘Terran’ survival in a war against cyborgs is being compared with such events as Vietnam or the conflict in the Middle East.

    Surely the difference in intelligence between the factions, given a hard take-off, would render the conflict more akin to current-tech allied forces engaging in all-out war against, say, the world’s population of rats?

    I understand this is not a new point, and that it is covered simply by the statement “No contest” – but why debate the issue?

    Terrans would not even comprehend the weaponry, tactics and execution thereof used by cyborg or ASI forces – much like a cat wouldn’t think twice if you pointed a gun in its face…

  • Pingback: Why The Future Will Be Funnier Than You Think

  • Pingback: Ramez Naam on Singularity 1 on 1: The Future Isn’t Set In Stone!

  • johnnyive

    Kurzweil’s site has a mind-blowing book up there: The Bequeathal: Godsent. It is a convergence of singularity, transhumanism, hi-tech and spirituality, but its concepts are so revelatory that I cannot compare it to anything I’ve read before. Those ideas could alter Hugo’s.

  • xvl260

    I was watching the documentary Singularity or Bust; it has some interesting clips of what de Garis was doing in 2009. I wonder why he retired one year after that. Maybe the AI research was not going well in Xiamen? I would love to hear why he stopped the research.
