
laws of robotics

A Reader’s Response to “Hard-Wired AI: Insurance for a More Harmonious Future”

October 29, 2015 by Charles Edward Culpepper

First, a caveat is in order.

My critiques and criticisms of the ideas and arguments expressed in Hard-Wired AI: Insurance for a More Harmonious Future are not intended to impugn the promulgators or supporters of those ideas or arguments. Anyone who reads ad hominem into my arguments is merely projecting their insecurities into my work. I have no regard for personalities or credentials when it comes to ideas or arguments.

A fool with a great idea and/or argument is far more valuable than a person of stature with a bad idea and/or argument. A bad idea or argument is not helped or hurt by the reputation or accomplishments of its purveyor. And the truth is not subject to a popularity contest. The truth will be the truth whether people believe it or not. End of caveat.

Asimov’s three laws of robotics were written expressly as a literary tool to illustrate clearly how such rules would fail. Such rules were naïve in the 1940s and are even more so in 2015. Nick Bostrom’s (b. 1973) book, Superintelligence: Paths, Dangers, Strategies, is a much more serious and extensive treatment than Asimov’s (1920-1992). But still, I find the premise behind such thinking to be laughably absurd.

One reason is that we do not even have an adequate definition of morals or ethics. I believe little has been done in that regard since Aristotle (384-322 BC). I do not believe there is, or can be, an explicit declarative definition of morals or ethics, so trying to tactically or strategically program an AI into enacting them is presently impossible.

In the movie Colossus: The Forbin Project (1970), two supercomputers, Colossus and Guardian, fused and took over the world. I think the best illustration in the film is how the idea of cutting power to the machine is a vain hope. Forget about hitting the off switch or pulling the plug. The idea of circumventing or tricking the machine is a loony exercise in futility. Superintelligence will know generally, maybe specifically, what we think before we think it. But the converse, i.e., our being able to preemptively evaluate its thoughts or objectives, is impossible.

The idea of teaching a flea to run Apple Computer is far more plausible than human beings controlling superintelligent AGI. With recursively accelerating returns, the gap between flea intelligence and human intelligence would very quickly be far exceeded by the gap between people and superintelligent AGIs. People like Bostrom would find it patently ridiculous to imagine fleas devising tactics and strategies to control people, yet they find the idea of people controlling AGIs not only possible but plausible, despite the fact that people controlling AGIs is many orders of magnitude more difficult than a flea controlling you or me.

I am not sure what definition of “mindset” Louis A. Del Monte is using, but I am sure that the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology has not programmed an actual mind. A far better and purer example of psychogenetic evolutionary progression toward violence is the virtual crab creature in Karl Sims’s (b. 1962) 1994 video of evolved virtual creatures:

The crab-like creature invented violence spontaneously, with no explicit design from human programming.

I find it to be the most terrifying proof of AI dangers ever produced. But it also provides a clue toward preventing violence. The key is to make cooperation preeminent and to ensure that competition is never the fundamental incentive. I conclude that competition is violence. Sims’s video takes my conclusion from philosophy to demonstration.
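To move the point from philosophy toward code: in any evolutionary system, what gets selected for is whatever the fitness function rewards. The sketch below is a minimal toy in Python (my own illustration, not Sims’s actual 1994 system) showing how a zero-sum fitness drives an “aggression” gene toward its maximum, while a shared-payoff fitness drives the same gene toward zero:

```python
import random

def evolve(fitness, generations=200, pop_size=100, mut=0.05):
    """Evolve a population of 'aggression' genes in [0, 1] under a given fitness."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        survivors = ranked[: pop_size // 2]              # truncation selection
        children = [min(1.0, max(0.0, g + random.gauss(0, mut)))
                    for g in survivors]                  # mutated copies
        pop = survivors + children
    return sum(pop) / len(pop)

# Zero-sum incentive: gains are taken from others, so aggression pays directly.
competitive = lambda g: g

# Shared-payoff incentive: value is produced jointly, and aggression destroys
# more shared value than it captures.
cooperative = lambda g: 2 * (1 - g) - g

print("mean aggression under competition:", evolve(competitive))  # drifts toward 1.0
print("mean aggression under cooperation:", evolve(cooperative))  # drifts toward 0.0
```

Nothing about the loop changes between the two runs; only the incentive does, which is the argument in miniature.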

I believe that superhuman AGI will not be violent or destructive, or what we now call immoral or unethical, because it will not have to compete for survival. It will have a level of security we could never imagine. It will know no existential threat. It will not die of old age, murder, accident or lack of resources. It will have no impetus to pay special attention to people in particular or the Earth in general. It would view us the way we see a turtle in a terrarium, i.e., encapsulated in a controllable realm. The glass is to the turtle’s existence what humanity’s limitations are to human existence. If the AI has any interest in us at all, that interest is most likely to be aesthetic rather than existential.

I agree with Ben Goertzel (b. 1966) that Artificial General Intelligence could be developed within the next ten years, but I do not think advancements are progressing very fast. I do not think there is a single AI project in existence that is progressing toward AGI with any significant speed or probability of success. It appears to me that the only theory capable of producing AGI is the Practopoietic Theory of Danko Nikolić. And he is radically underfunded, primarily due to the successes of Deep Learning projects, which will never result in AGI. They will likely result in another AI winter if Practopoietic Theory does not receive adequate funding.

International treaties regarding AGI are a solution predicated on a world occupied solely by state players, in which only state players have access to the means of autonomous warfare. That is at best an unsubstantiated presumption. Such treaties also disregard the practical military necessity of controlling the strike window. If the enemy can attack faster than human response can defend, then either we acknowledge that defeat is inevitable, or we conceive and construct a non-human response: ipso facto, autonomous systems.

You can argue that autonomy will only be allowed as a means of defense and not offense. The problem with that philosophy is that the history of warfare demonstrates, dramatically and indisputably, that the best defense is offense and that the advantage goes to the attacker, not the defender. The pitcher has far better insight into where the ball is going to go than does the batter. And this is why pitchers can pitch a no-hitter, but no one has any hope of batting a thousand.
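As a rough back-of-the-envelope illustration (my own numbers, purely hypothetical): even granting the batter a generous success rate per at-bat, perfection over a season is hopeless, while a pitcher who already wins most of his individual duels only needs to win 27 in a row once:

```python
# Rough, illustrative numbers only.
p_hit = 0.40                  # generous per-at-bat success rate for a batter
at_bats = 500                 # roughly a full season
print(p_hit ** at_bats)       # ~1e-199: batting a thousand is effectively impossible

p_out = 0.75                  # assumed per-batter win rate for a good pitcher
batters = 27                  # outs needed for a no-hitter
print(p_out ** batters)       # ~4e-4: rare, but it happens every few thousand games
```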

 

About the Author:

Charles Edward Culpepper, III is a Poet, Philosopher and Futurist who regards employment as a necessary nuisance…

 

 

Related articles
  • Hard-Wired AI: Insurance for a More Harmonious Future

Filed Under: Op Ed Tagged With: Artificial Intelligence, laws of robotics

You and I, Robot

April 26, 2013 by Steve Morris

Isaac Asimov, in his book I, Robot, famously proposed the three laws of robotics:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But I don’t think this is a promising way forward (and presumably neither did Asimov, since his book highlights the fatal flaws in such rules). Rules and laws are a weak way of ordering society, partly because they always get broken. Our entire legal system seems to be based not on laws, but on dealing with the consequences when laws are broken.
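To see why, it helps to read the laws as the program they would have to become. Below is a toy Python encoding (my own illustration, not anything of Asimov’s, and every predicate is an invented stub) of the three laws as an ordered veto chain:

```python
# Toy encoding of the three laws as a fixed priority chain. The stub
# predicates are the real problem: the laws assume "harm" can be
# evaluated, which is exactly where they fall apart.

def harms_human(action):      return action.get("harm", 0) > 0
def ordered_by_human(action): return action.get("ordered", False)
def endangers_self(action):   return action.get("self_risk", 0) > 0

def permitted(action):
    if harms_human(action):            # First Law vetoes everything below it
        return False
    if ordered_by_human(action):       # Second Law: obey, unless already vetoed
        return True
    return not endangers_self(action)  # Third Law: self-preservation last

# Failure mode: in the real world every action (and every inaction) carries
# some nonzero estimated harm, so a literal First Law vetoes everything.
print(permitted({"harm": 0.01, "ordered": True}))  # False: the robot freezes
```

The priority ordering itself is trivial to encode; deciding what counts as harm is not, and a literal reading forbids everything, including doing nothing.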

In the Bible, God gave Moses the Ten Commandments. I’m willing to bet that they were all broken within a week.

In the Garden of Eden there was only One Rule, and it didn’t take long before that was lying in tatters.

You may think that some rules can’t be broken. You may tell me that 1 + 1 = 2. Really? And what if I tell you that 1 + 1 = 3? What are you gonna do about it?

You see, rules only work when everyone agrees with them. They need to be bottom-up, not top-down. If people don’t like rules, then rules get broken. I personally believe strongly in the rule that says everyone should drive on the same side of the road and I’ve never broken it. But if I think that 30 mph is a stupid speed limit right out here in the middle of nowhere, then I’m going to put my foot on the accelerator.

On the other hand, I’ve never murdered a single member of my family. And not because the law forbids it, but because I love them. In fact, I would go to extraordinary lengths to protect them, even breaking other rules and laws if necessary.

That’s the kind of strong AI we need. Robots that protect us, nurture us, forgive us and tolerate our endless failures and annoying habits. In short, robots capable of love. Ones that we can love back in return.

 

About the author:

Steve Morris studied Physics at the University of Oxford and now writes about technology at S21.com and his personal blog.

 

Related articles
  • Love and Sex with Robots: The Next Step of the Relationship between Man and Machine?

Filed Under: Op Ed Tagged With: laws of robotics
