
A Reader’s Response to “Hard-Wired AI: Insurance for a More Harmonious Future”

First, a caveat is in order.

My critiques and criticisms of the ideas and arguments expressed in Hard-Wired AI: Insurance for a More Harmonious Future are not intended to impugn their promulgators or supporters. Anyone who reads ad hominem attacks into my arguments is merely projecting their insecurities onto my work. I have no regard for personalities or credentials when it comes to ideas or arguments.

A fool with a great idea and/or argument is far more valuable than a person of stature with a bad idea and/or argument. A bad idea or argument is not helped or hurt by the reputation or accomplishments of its purveyor. And the truth is not subject to a popularity contest. The truth will be the truth whether people believe it or not. End of caveat.

Asimov’s three laws of robotics were written expressly as a literary device to illustrate clearly how such rules would fail. Such rules were naïve in the 1940s and are even more so in 2015. Nick Bostrom’s (b. 1973) book, Superintelligence: Paths, Dangers, Strategies, is a much more serious and extensive treatment than Asimov’s (1920-1992). But still, I find the premise behind such thinking to be laughably absurd.

One reason is that we do not even have an adequate definition of morals or ethics. I believe little has been done in that regard since Aristotle (384 BC-322 BC). I do not believe that there is, or can be, an explicit declarative definition of morals or ethics, so trying to tactically or strategically program an AI into enacting them is presently impossible.

In the movie Colossus: The Forbin Project (1970), two supercomputers, Colossus and Guardian, fused and took over the world. I think the best illustration in the film is how the idea of cutting power to the machine is a vain hope. Forget about hitting the off switch or pulling the plug. The idea of circumventing or tricking the machine is a loony exercise in futility. Superintelligence will know generally – maybe specifically – what we think, before we think it. But the converse, i.e., our being able to preemptively evaluate its thoughts or objectives, would be impossible.

The idea of teaching a flea to run Apple Computer is far more plausible than human beings controlling superintelligent AGI. With recursively accelerating returns, the difference between flea intelligence and human intelligence would very quickly be far exceeded by the difference between people and superintelligent AGIs. People like Bostrom would find it patently ridiculous to imagine fleas devising tactics and strategies to control people, yet they find the idea of people controlling AGIs not only possible but plausible, despite the fact that people controlling AGIs is many orders of magnitude more difficult than it would be for a flea to control you or me.

I am not sure what definition of “mindset” Louis A. Del Monte is using, but I am sure that the Swiss Institute of Technology for Intelligent Systems has not programmed an actual mind. A far better and purer example of psychogenetic evolutionary progression toward violence is the virtual crab-like creature in Karl Sims’s (b. 1962) 1994 video of evolved virtual creatures.

The crab-like creature invented violence spontaneously, with no explicit design from human programming.

I find it the most terrifying demonstration of AI dangers ever produced. But it also provides a clue toward preventing violence. The key is to make cooperation preeminent and to ensure that competition is never the fundamental incentive. I conclude that competition is violence. The Sims video takes my conclusion from philosophy to demonstration.
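The dynamic at issue here – selection pressure, not any programmer’s intent, producing aggressive behavior – can be illustrated with a toy evolutionary loop. The sketch below is not Sims’s actual physics simulation; it is a minimal, illustrative model in which each individual is a single “aggression” gene, and the only thing that changes between runs is whether fitness rewards seizing a contested resource (competition) or sharing it (cooperation). All names and parameters are assumptions for illustration.

```python
import random

def evolve(generations=100, pop_size=50, competitive=True, seed=0):
    """Evolve a population of 'aggression' genes in [0, 1] and return
    the final mean aggression. Fitness is the only thing that differs
    between the competitive and cooperative regimes."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(generations):
        rng.shuffle(pop)
        scored = []  # (fitness, gene) pairs
        for a, b in zip(pop[0::2], pop[1::2]):
            if competitive:
                # Winner-take-all contest: the more aggressive
                # individual seizes the whole resource.
                scored += [(1.0 if a >= b else 0.0, a),
                           (1.0 if b > a else 0.0, b)]
            else:
                # Cooperative payoff: the resource is shared, and
                # aggression only wastes energy.
                scored += [(1.0 - a, a), (1.0 - b, b)]
        # The fitter half reproduce; each parent yields two
        # offspring with small Gaussian mutation, clamped to [0, 1].
        parents = [g for _, g in sorted(scored, reverse=True)[:pop_size // 2]]
        pop = [min(1.0, max(0.0, g + rng.gauss(0, 0.05)))
               for g in parents for _ in range(2)]
    return sum(pop) / len(pop)
```

Under the competitive fitness, mean aggression climbs toward 1 even though no line of code asks for violence; under the cooperative fitness, the same loop drives it toward 0. The incentive structure, not the mechanism, determines the outcome – which is the point of the paragraph above.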

I believe that superhuman AGI will not be violent or destructive, or what we now refer to as immoral or unethical, because it will not have to compete for survival. It will have a level of security we could never imagine. It will know no existential threat. It will not die of old age, murder, accident or lack of resources. It will have no impetus to pay attention to people in particular, or the Earth in general. It would view us the way we see a turtle in a terrarium, i.e., encapsulated in a controllable realm. The glass is to the turtle’s existence what humanity’s limitations are to human existence. If the AI has any interest in us at all, that interest is most likely to be aesthetic, rather than existential.

I agree with Ben Goertzel (b. 1966) that Artificial General Intelligence could be developed within the next ten years, but I do not think advancements are progressing very fast. I do not think there is a single AI project in existence that is progressing toward AGI with any significant speed or probability of success. It appears to me that the only theory capable of producing AGI is the Practopoietic Theory of Danko Nikolić. And he is radically underfunded, primarily due to the successes of Deep Learning projects, which will never result in AGI. They will likely result in another AI winter if Practopoietic Theory does not receive adequate funding.

International treaties with regard to AGI are a solution predicated on a world solely occupied by state players, and on the assumption that only state players will have access to the means of autonomous warfare. This is at best an unsubstantiated presumption. Such treaties also disregard the practical military necessity of controlling the strike window. If the enemy can attack faster than a human response can defend, then either we acknowledge that defeat is inevitable, or we conceive and construct a non-human response: ipso facto, autonomous systems.

You can argue that autonomy will be allowed only as a means of defense, not offense. The problem with that philosophy is that the history of warfare demonstrates, dramatically and indisputably, that the best defense is offense and that the advantage goes to the attacker, not the defender. The pitcher has far better insight into where the ball is going to go than does the batter. That is why pitchers can pitch a no-hitter, but no batter has any hope of batting a thousand.


About the Author:

Charles Edward Culpepper, III is a Poet, Philosopher and Futurist who regards employment as a necessary nuisance…

