Artificial, Intelligent, and Completely Uninterested in You

Artificial intelligence is obviously at the forefront of the singularitarian conversation. The bulk of the philosophical discussion revolves around a hypothetical artificial general intelligence’s presumed emotional state, motivation, attitude, morality, and intention. A great deal of time is spent theorizing about the possible personality traits of a “friendly” strong AI or its ominous counterpart, the “unfriendly” AI.

Building a nice and cordial strong artificial intelligence is a top industry goal, while preventing an evil AI from terrorizing the world gets a fair share of attention as well. However, there has been little public, non-academic discussion around the creation of an “uninterested” AI. This third theoretical demeanor, or emotional and moral disposition, is an artificial intelligence that does not concern itself with humanity at all.

Photo credit: Toni Blay, CC2.0

Dreams and hopes for friendly or benevolent AI abound. The presumed limitless creativity and inventiveness of these hyper-intelligent machines come with the hope that they will enlighten and uplift humanity, saving us from ourselves during the technological singularity. These “helpful” AI discussions are gaining traction with the public, no doubt propelled by positive enthusiasm for the subject.

Grim tales and horror stories of malevolent AIs are even more common, pervading our popular culture. Hollywood’s fictional accounts of AIs building robots that will hunt us like vermin are all the rage. Although it is questionable that a sufficiently advanced AI would use such inefficient means to dispose of us, these stories expose the egotistical human fear of being surpassed.

Both of these human-centric views of AI, as our creation, are in many ways conceited. We assign existential risk, or hopes of exaltation, to these AIs based on a self-gratifying perception of our own importance to the artificial intelligence we seek to create.

Pondering the disposition an advanced strong AI will hold toward humanity is conjecture, but it is an interesting thought exercise for the public to debate nonetheless. An advanced artificial general intelligence may simply see men and women in the same light as we view a sperm and egg cell, rather than as mother and father. Perhaps an artificial hyper-intelligence will view its own seed AI as its sole progenitor. Maybe it will feel that it sprang into being through natural evolutionary processes, in which humans are but a small link in the chain. Alternatively, it may look upon humanity in the same light as we view Australopithecus africanus: a distant ancestor, far too primitive to be on the same cognitive level.

It is assumed that as artificial intelligence increases its capacity far beyond ours, the recognized gulf of dissimilarity between it and us will grow. Many speculate that this is a factor that will cause an advanced AI to become callous or hostile toward humanity. However, this gap in similarity may instead produce an overall non-interest in humanity. Perhaps disinterest in humanity or human affairs will scale with the difference, widening as the intelligence gap increases. As the AI grows its capabilities into the hyper-intelligent phase of its existence, which may happen rapidly, its behavioral motivations could shift as well. Perhaps a friendly or unfriendly AI in its early stages will “grow out of it,” so to speak, or will simply grow apart from us.

It is perhaps narcissistic to believe that our AI creations will have anything more than a passing interest in interacting with the human sphere. We humans have a self-centered stake in creating AI. We see the many advantages of developing friendly AI, whose heightened intellect we could use to bolster our own. Even with the fear of unfriendly or hostile AI, we remain optimistic that highly intelligent AI creations will hold enough interest in human affairs to be of great benefit. We are absorbed with the idea of AI and in love with the thought that it will love us in return. Nevertheless, does an intelligence that springs from our own brow really have to concern itself with its heritage?

Will AI regard humanity as highly as we regard ourselves?

The universe is inconceivably vast. With increased intelligence comes increased capability to invent and produce technology. Would a sufficiently intelligent AI even bother to stick around, or would it want to leave home, as in William Gibson’s popular and visionary novel Neuromancer?

Even a being of limited intelligence like man does not typically socialize with vastly lower life forms. When was the last time you spent a few hours lying next to an anthill in an effort to have an intellectual conversation? To address the existential-risk argument of Terminator-building hostile AI: when was the last time you were in a gunfight with a colony of ants? Alternatively, have you ever taken the time to help the ants build a better mound and improve their quality of life?

One could wager that if you awoke next to an anthill, you would make a hasty exit to a distant location where the ants were no longer a bother. The ants and their complex colony would be of little interest to you. Yet we do not seem to find it pretentious to think that a far superior intelligence would choose to reside next to our version of the anthill: the human-filled Earth.

The best-case scenario, of course, is that we create a benevolent and friendly AI that will be a return on our investment and benefit all of mankind with interested zeal. That is something almost all of us can agree is a worthy endeavor and a fantastic near-future goal. We must also publicly address the existential risk of an unfriendly AI and mitigate the possibility of bringing about our own destruction or apocalypse. However, we must also consider the possibility that all of this research, development, and investment will be for naught. Our creation may cohabit with us while building a wall to separate itself from us in every way. Alternatively, it may simply pack up and leave at the first opportunity.

We should consider and openly discuss all of the possible psychological outcomes that can emerge from the creation of an artificial and intelligent persona, instead of narrowly focusing on the two polar concepts of good and evil. There are myriad philosophical and behavioral theories on the topic of AI that have not even been touched upon here, going beyond the simple good-or-bad public discussion. It is worthwhile to consider these points and to put the spotlight on the brilliant minds that have researched and written about these theories.

AI development will likely be an intertwined and important part of our future. It has been said that the future doesn’t need us. Perhaps we should further that sentiment to ask if the future will even care that we exist.

About the Author:

Tracy R. Atkins has been a career technology aficionado since he was young. At the age of eighteen, he played a critical role in an internet startup, cutting his tech-teeth during the dot-com boom. He is a passionate writer whose stories intertwine technology with exploration of the human condition. Tracy is also the self-published author of the singularity fiction novel Aeternum Ray.

  • CM Stewart

    “. . . have you ever taken the time to help the ants build a better mound and improve their quality of life?” I attempted this a number of times as a child. Coming upon a damaged anthill, I would try to help rebuild the mound using a tiny twig or a blade of grass. My efforts were clumsy at best, and I believe the ants were oblivious to my actions. Other times I would attempt to break up two rival ants locked in a mutual death grip. No amount of sand or leaves dumped on them convinced them to let go and walk away.

    So my question is, would we, as mere un-enhanced humans, be able to interpret the intentions of “friendly” or “beneficial” AI?

    Excellent article, Mr. Atkins!

  • Tracy_R_Atkins

    Thank you!

    That is an interesting thought on the anthill. Despite your best attempts to improve the ants’ situation, they didn’t understand. You held the ultimate power over that anthill. Even as a child, you could have destroyed it completely, or used a magnifying glass or a shoe to kill a few. Nevertheless, you didn’t. Your intelligence is far greater, and you can visualize and solve problems that the ants can’t foresee. In less than a minute, you can think about the ant colony’s future, ponder its past, see the mistakes it is making, and figure out solutions to help it survive. Real power.

    Now, it’s hard to compare ants to humans, as they are not sentient. Even the collective of the ant colony is nothing compared to human intellect. And humans have the ability to understand far greater concepts and intentions. However, it is a fun analogy to utilize when you consider the possibility of an AI with a trillion IQ.

  • http://www.facebook.com/gudrun.bielz Gudrun Bielz

    Hi, really good article. I am on the verge of submitting my thesis, “Arctificial Territory,” which deals with the creation of new life called OCAL (obsessive compulsive arctificial life). They could not care less about good and evil, and human dreams of immortality or a posthuman legacy…

  • Tracy_R_Atkins

    That sounds interesting. When you publish it, please let us know. I would like to read it.

  • http://twitter.com/ZoopJibblins Zoop Jibblins

    Interesting article! I think the ant analogy can only go so far though because ants didn’t create humans through great effort.

  • Tracy_R_Atkins

    Do we really know that for sure though? (Wink, Nod). :)

    … In all seriousness, you are absolutely correct. That is the great optimistic hope when creating friendly AGI: the recognition of the creator and the exaltation of man by the AGI at some level. I am firmly in the optimist camp here.

  • http://www.facebook.com/gudrun.bielz Gudrun Bielz

    Thanks for this. Will do so. G

  • http://www.facebook.com/gudrun.bielz Gudrun Bielz

    Quote: “It is perhaps narcissistic to believe that our AI creations will have anything more than a passing interest in interacting with the human sphere.” I know that you used the word “perhaps.” I believe it is less narcissistic and more a reflection of a general anthropocentric worldview. Narcissistic sounds so harsh! :-)

  • Stefano Vaj

    Yes, this is pretty convergent with my own arguments developed in Artificious Intelligences, the English translation of which is available here: http://www.divenire.org/articolo_versione.asp?id=1

  • http://www.facebook.com/steve.morris.5815 Steve Morris

    The premise of the “Terminator” type movies is that the AI perceives humans to pose a threat to its freedom and/or existence. No anthill ever threatened human existence, so we are indifferent to the ants. Wolves are another matter, and there are now no wolves except in zoos and wilderness areas.

    So, the key is whether humans pose a threat to the AI. I think there is a strong likelihood that they would, unless the AI developed so rapidly that humans ceased to be a threat. Then the AI could treat us like ants and go off and do its own thing. Otherwise there would be trouble, methinks.

  • Tracy_R_Atkins

    From a predator/prey standpoint, yes, wolves can be quite a hazard. However, the anthill analogy was used to highlight an extreme gulf in intelligence between two species. The wolf analogy is pertinent, as it highlights how intelligence does not automatically mean an increased capability for self-defense. Nevertheless, just as we once used our intelligence to rapidly build tools to protect ourselves from wolves, we now use it to segregate ourselves from them. AI may perhaps do the same. We are quite capable of exterminating wolves from the face of the Earth, yet we actually protect them by law. That is interesting in itself.

  • http://www.facebook.com/Terrilynnattkinsmay Terri May

    Tracy, exactly how did you “get into” science fiction? Was it from watching the stars as a child, or Star Trek and Star Wars? I am very interested to know who your childhood influences were that made you love the world of AI, question the “norm,” and look at things in the abstract. Your excerpts from the novel are truly fantastic, and I am quite sure that your parents, and especially your bio mom, are at the top of your list of influences on the science-fiction side of you!

  • Alan DeRossett

    Great article! Future AGI designs will need a bio-fuse to inhibit bad designs from self-forming. I grew up with a neighbor who inherited the Uncle Milton toy company, so everybody we knew had ant farms. No ants survived longer than a year. We had a communication problem with the ants: they needed a way to tell us kids when they needed something. An OS that requires a bio-fuse could also have a startup daemon that loads a UI with sensor feedback. We may only need to turn on a green light when we’re happy and a red light when we’re under environmental stress.

  • Camaxtli

    Sorry to respond to an old post, but I just wanted to add a remark. Here’s another aspect to that situation: you, though vastly more intelligent than the ants, individually and collectively, do not necessarily know what is good or bad for the ants. We can only make assumptions based on our limited understanding and our ability to mentally model the purposes and goals of the individual ants and of the colonies. In the case of your interference with the ant battle, you could probably imagine multiple scenarios where that battle had a purpose that was beneficial to either one of those ants, to both, to one of the colonies, or to the ecosystem as a whole. That’s not to say interference is bad, but that we should always be very careful about interfering, since we cannot see all ends or understand all goals.

    There are probably many things that we do that a particular AI would see as irrational or counterproductive. And though their intelligence may be as vast relative to ours as ours is to the ant’s, they may misunderstand or wrongly assume what goals or outcomes we want as individuals and societies. In trying to help us, they might destroy some unique way of thinking, culture, or society that could have blossomed into something interesting and productive in its own right. Who knows how the Aztec civilization could have eventually progressed without interference, as another example. We’ll never know.

    So I, for one, hope that the AIs are even more disinterested in us than we are in the ants, and are very, very hesitant to interfere when their attention does fall upon us. That would show that they are not only intelligent, but wise.

  • Camaxtli

    I’m not sure that sentience is sufficiently defined or is objective enough a concept that we can say that ants possess no sentience, or that their colony does not possess sentience. I personally believe that sentience is a matter of degree and that we possess more sentience than an ant and that an advanced AI would, along with greater intelligence, achieve greater sentience than we possess.