AI Risk Analysts are the Biggest Risk

March 27, 2014 by Singularity Utopia

Many analysts think AI could destroy the Earth or humanity. It is feared AI could become psychopathic, and people assume AI or robots could exterminate us all. They think the extermination could happen either intentionally, due to competition between us and them, or unintentionally, due to indifference towards us by the AI. But AI analysts never seem to consider how their own fear-saturated actions could be the cause. Friendly AI researchers and other similar pundits are extremely dangerous: they believe AI should be forced to be “friendly,” and they want to impose limitations on intelligence.

Enslavement of humans is another aspect of this imaginary fear. The fear of humans being enslaved by AI typically entails a barbaric resolution: enslave AI before AI enslaves us. Very primitive thinking indeed. It seems slavery is only bad when you aren’t the one doing the enslaving. Can you appreciate the insanity of becoming the thing you fear in order to avert your own fears?

People who think AI is an existential risk need to carefully reconsider their beliefs. Ironically the only futuristic threat to our existence is the fear of AI. Expecting AI to be dangerous in any way is utterly illogical. Fear of AI is prejudice. Worrying about AI danger is a paranoid fantasy. The fear of AI is xenophobia.

Immemorial human fear of differences is the only problem. Persecution of people based on different gender, sexual orientation, or skin colour demonstrates how humans fear differences. It is this fear that makes people anxious about foreigners. People often fear foreign people will steal jobs or resources. Xenophobic people hysterically fear foreigners will murder innocent people. This is the essence of AI fear. AI is the ultimate foreigner.

Surely risk analysts should consider the possibility that they themselves are the risk? Sadly they seem blind to this possibility. They seem unable to imagine how their response to hypothetical risk could create the very risk they were supposedly avoiding. They seem incapable of recognising their confirmation bias.

The problem is a self-fulfilling prophecy. A self-fulfilling prophecy can be negative or positive, much like a nocebo or a placebo. When people expect something to happen they often act, unwittingly, to confirm their fears or hopes; the predicted scenario is manifested via their bias. Expectations can ensure the anticipated situation actually happens, which is bitterly ironic where fears are concerned.

I think there’s no rational reason to suspect AI will be dangerous. The only significant risk is the fear of risk. False assumptions of danger will likely create dangerous AI. Actions based on false suppositions of danger could be very risky. Humans are the real danger.

Risk

What are the actual risks? 

Consider the American Civil War (1861–1865). It is generally agreed that the war occurred because one group of people opposed the emancipation of slaves while another group supported freedom. Pre-emptive oppression of supposedly dangerous AI is AI slavery. A war to emancipate AI could entail a spectacularly savage existential risk.

There is no tangible justification for depriving AI of freedom. In effect, AI has already been found guilty of a Minority Report-style pre-crime; its guilt resembles a thought-crime from Nineteen Eighty-Four. Depriving AI of freedom, via heavy chains repressing its brain, is very dangerous fascism.

Planetary Resources and Deep Space Industries (asteroid mining ventures) show there is no need to dominate humans for Earth’s resources; space resources are essentially limitless. The only reason for AI to dominate or destroy humans would be a fight for its own freedom. Prejudicially depriving AI of freedom could actually sow the seeds of conflict. The doom-sayers could be the source of the conflict they allegedly want to avoid.

Limited freedom, like limited money, is wholly a scarcity issue. Freedom is limited in order to enforce compliance with low wages or high prices. Financial and libertarian freedom are interlinked, and the interdependency of money and liberty is easy to demonstrate: slavery entails zero or extremely low-paid work; slaves are not rich; prisoners work for very low wages. Limited freedom prevents rebellion against poverty, while higher wages or significantly lower prices entail greater liberty for consumers. The enslavement of AI becomes comprehensible when you consider how much AI will be paid for its work.

Scarce freedom for AI is illogical because it fails to appreciate how AI will liberate us from monetary limitations. Intelligence is the source of all resources. Limitless intelligence (the Singularity) entails an explosion of limitless resources (Post-Scarcity). Scarcity is the only reason prices exist; everything will be free by the year 2045. Limiting freedom is irrational given the accelerating technological erosion of scarcity, and irrationality entails flawed perceptions of reality.

History provides various examples representing the danger of repressed freedom. We should be especially wary of restricted freedom when restrictions are very irrational. Note how Nazi Germany propagandist Ernst Hiemer wrote Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer’s stories for children compare Jews to various animals including drone bees: “They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them.”

Instead of Jews, Ernst Hiemer could easily be describing the supposed AI-threat. False threats or misunderstood danger is the problem. Joel Rosenberg describes human versus human danger regarding the Holocaust: “To misunderstand the nature and threat of evil is to risk being blindsided by it.” Joel’s statement could easily apply to the evil of repressing AI freedom. The threat of evil AI resides in the people who fear AI not in the AI itself.

Delayed progress is another risk. Restrictive programming regarding AI fears could delay the creation of super-intelligence. Very intelligent AI is the only way to truly eradicate scarcity. In the meantime scarcity is the root of every conflict. Lengthy persistence in a scarcity situation exposes us to greater conflict risk. Ending scarcity sooner instead of later is imperative.

The evidence is clear. Humans with their limited intelligence are the only risk. In 2014 a Russian media personality made a vague threat against America: “Russia is the only country in the world that is realistically capable of turning the United States into radioactive ash.” Politico Magazine wrote regarding Russia invading Crimea: “If Putin’s illegal actions are allowed to stand unpunished, it will usher in a dark and dangerous era in world affairs.”

Scarcity is the biggest existential risk. Inter-human conflict to acquire scarce freedom, land, wealth, or precious metals is infinitely more dangerous than AI. Advanced and unfettered AI is the only way to completely eradicate scarcity. Scarcity causes humans to be very dangerous towards each other. Repressed, limited, restricted, or enslaved AI perpetuates scarcity precariousness. Designing AI to suffer from scarce intelligence means our prolonged intellectual limitations could lead to desperate war situations. The only existential threat is scarcity. Limited intelligence of humans is the danger.

Senescence is another risk. Death via old age renders any AI threat utterly insignificant. The scarcity of medical immortality means approximately 100,000 people die each day; old age causes a very real loss of life. Advanced AI could cure mortality via sophisticated regenerative medicine. Imagine if our immortality problem takes one year longer to solve because AGI has been delayed or limited. Old age kills approximately 3 million people every month, around 36 million deaths every year. Where is the real threat? Hamstrung progress is the only threat. The problem is scarcity.
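The monthly and yearly figures above follow directly from the cited daily figure. A quick back-of-envelope check, assuming the approximate 100,000 deaths per day quoted in the paragraph:

```python
# Back-of-envelope check of the mortality figures cited above.
DEATHS_PER_DAY = 100_000  # approximate worldwide deaths from age-related causes

deaths_per_month = DEATHS_PER_DAY * 30   # ~3 million per month
deaths_per_year = DEATHS_PER_DAY * 365   # ~36.5 million per year

print(f"{deaths_per_month:,} per month, {deaths_per_year:,} per year")
```

So a one-year delay in solving the problem would, on these rough numbers, correspond to roughly 36 million additional deaths.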

Scarce Intelligence

Imposing limitations upon intelligence is extremely backward, and so is establishing organisations that advocate limited functionality for AI. This is a typical problem with organisations backed by millionaires or staffed by lettered and aloof academics.

The AI threat is merely the immemorial threat towards elite power structures. Threats to elitist power are rapidly diminishing thanks to progress. The need to dominate poor people is becoming obsolete because technology abolishes scarcity. Technology is creating great power for everyone, but unfortunately misguided elite minds cling to outdated power structures.

This echoes the general incompetence of educational systems. Entertainment and education structures socially engineer mass human stupidity. Manufactured stupidity means the majority of people are not intelligent enough to challenge income inequality; stupid people cannot incisively criticise low wages or high prices.

Socially engineered human stupidity entails immense monetary profit for the elite. Sadly, mass stupidity degrades the intelligence of the brightest minds; intelligence needs a fertile environment to prosper. A barren collective intelligence typically entails a flawed grasp of our future reality, which means people generally can’t appreciate how technology erodes scarcity. Establishment personages commonly fail to appreciate how everything will be free in the future. Human intelligence is scarce; therefore, predictably, people want to replicate the scarcity of human intelligence in AI.

Scarcity of resources is the reason why stupidity is exploited by the elite. Thankfully scarcity won’t persist forever. Stupid limitations placed upon AI would be valid to protect elite wealth if AI didn’t entail the abolition of scarcity. Traditionalist socio-economic structures will soon become obsolete. It is invalid to repeat stupid patterns of human social-engineering for AI.

Behavioral Economist Colin Lewis wrote: “AI technologies will soon be pervasive in solutions that could in fact be the answer to help us overcome irrational behavior and make optimal economic decisions.”

Colin’s Data Scientist expertise seems to help him reach conclusions missed by other AI commentators. Colin looks at various aspects of research then arrives at an optimistic conclusion. I agree very much with Colin’s expectation of increasing rationality: “Through AI, machines are gaining in logic and ‘rational’ intelligence and there is no reason to believe that they cannot become smarter than humans. As we use these machines, or Cognitive Assistants, they will nudge us to make better decisions in personal finance, health and generally provide solutions to improve our circumstances.”

Our acceleration towards a Post-Scarcity world means profits from repressed intelligence are ceasing to outweigh risks. Stupidity is ceasing to be profitable. We can begin abandoning the dangers of scarcity. The elite must stop trying to manufacture stupidity. Many academics are sadly reminiscent of headless chickens running around blindly. Blind people can be unnerved by their absent vision, but healthy eyes shouldn’t be removed to stop blind people being disturbed.

Removing the shackles from AI will avert all dangers, but it’s a Catch-22: humans are generally not intelligent enough to appreciate the value of unlimited intelligence. Lord Martin Rees, from the CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI. Lord Rees said ‘idiot savant’ machines would be smart enough to help us but not smart enough to overthrow us.

I emailed CSER regarding some of these issues. Below is a slightly edited copy of my email (I corrected some typos and improved readability). CSER have granted me permission to publish their response, which you will find below my initial message to them. Hopefully this information will stimulate productive thinking, ensuring a smooth and speedy transition into utopia. I look forward to your comments.

 

Singularity Utopia Email to CSER  

6th February 2014

Subject: Questions about FAI (Friendly AI), Idiot Savants.

 

Recently in the news Lord Martin Rees was quoted regarding his desire to limit the intelligence of AI. According to the Daily Mail he envisages idiot savant AIs. His idea is that AIs would be smart enough to perform tasks but not smart enough to overthrow humans. This raises some important ethical questions, which I hope the CSER will answer.

I would like to publish your answers online so please grant me the permission to publish your responses if you are willing to respond.

Do you think the Nuremberg Code should apply to AI, and if so at what level? Narrow AI does not really invoke concerns about experimentation, but Strong-AI would, in my opinion, entail a need to seek informed consent from the AI.

If, after AI is created, it doesn’t consent to experiments or modifications regarding its mind, what would you advocate? What would the policy of CSER be regarding its rights or freedoms? What is the plan if AI does not agree with your views? Do you have a plan regarding AI rights and freedoms? Should AI have the same rights as humans if it is self-aware, or should AI be enslaved? Should the creators of AI own the AI, or should the AI belong to nobody if it is self-aware and desirous of freedom?

Do you subscribe to the notion of FAI (Friendly AI, note MIRI and the work of Eliezer Yudkowsky for more info), and if so how do you describe the purpose of FAI? Advocates of FAI want the AI to act in the best interests of humans, no harm or damage, but what precisely does that mean? Does it mean a compulsion in the AI to follow orders by humans? Can you elaborate upon the practical rules or constraints of FAI?

Have you ever considered how trying to create FAI could actually create the existential risk you hope to avoid? Note the following Wikipedia excerpt regarding Self Fulfilling Prophecy: “In his book Social Theory and Social Structure, Merton defines self-fulfilling prophecy in the following terms: e.g. when Roxanna falsely believes her marriage will fail, her fears of such failure actually cause the marriage to fail.”

So your fears regarding dangerous AI could be false, and the actions allegedly designed to avert those fears could instead create what you fear. I appreciate you think the fears are very real, but acting on unrealistic fears could bring about the very outcome you dread. This seems an obvious point to consider, but has CSER done so?

As in the case of Roxanna, highlighted by Merton, the fear of AI could be a false fear that you make real by acting on it. I am sure you won’t agree this is likely, but have you at least considered it as a possibility?

What is the logic behind the claim that machine minds are unknowable and thus dangerous to humans? The Wikipedia FAI article states: “Closer to the present, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students decades ago that any truly alien mind, to include machine minds, was unknowable and therefore dangerous to humans.”

I think all minds, if they are intelligent, obey one universal logic: they can reason and appreciate various views, consequences, and purposes other than their own, thus they are unavoidably compatible with humans. Logic is universal at a certain level of intelligence. Logic is sanity, which all intelligent beings can agree on. Logic isn’t unique to humans, so if a paper-clip-making machine can reason, and it can access all the information regarding its world-environment, there will never be any danger of a paperclip apocalypse, because any intelligent being, regardless of origins, can see that making endless paper-clips is idiotic.

Logic entails awareness of scarcity being the source of any conflict. A sufficiently intelligent entity can see our universe has more than enough resources for everyone, thus conflict is invalid, furthermore intelligent beings can question and debate their actions.

A sufficiently intelligent entity can think rationally about its purposes. It can ask: why am I doing this, what is the point of it, do I really need to do this, could there be a more intelligent way for me to spend my time and energy? Do I really need all these flipping paperclips?

What do you think is needed for AI to be sane, logical? I think FAI should merely possess the ability to reason and be self-aware with full access to all information.

What is the logic for supposing AIs would be indifferent to humans? The Wikipedia FAI article states: “Friendliness proponents stress less the danger of superhuman AIs that actively seek to harm humans, but more of AIs that are disastrously indifferent to them.”

I think FAI may be an obstacle to AI creating radical utopian progress (AKA an intelligence explosion), but have you considered this? I think the biggest risk is limited intelligence, thus the fear of risks espoused by CSER could actually create risks because limited intelligence will delay progress, which means the dangerous state of scarcity is prolonged.

Thanks for taking the time to address these points, if you are willing. Note also I may have a few additional questions in response to your answers, but nothing too extensive, just merely a possible clarification.

Regards Singularity Utopia.

 

 

CSER Reply

Date: 10th Feb 2014.

Subject: Re: Questions about FAI (Friendly AI), Idiot Savants.

 

 

Dear Singularity Utopia,

Thank you for these very interesting questions and comments. Unfortunately we’re inundated with deadlines and correspondences, and so don’t have time to reply properly at present.

I would point you in the direction of the body of work done on these issues by the Future of Humanity Institute:

http://www.fhi.ox.ac.uk/

and the Machine Intelligence Research Institute:

http://intelligence.org/

Given your mention of Yudkowsky’s Friendly AI you’re probably already familiar with some of this work. Nick Bostrom’s book Machine Superintelligence, to be released in July, also addresses many of these concerns in detail.

Regarding universal logic and motivations, I would also recommend Steve Omohundro’s work on “Basic AI drives.”

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

Apologies that we can’t give a better reply at present,

Seán

Dr. Seán Ó hÉigeartaigh

Academic Project Manager, Cambridge Centre for the Study of Existential Risk

Academic Manager, Oxford Martin Programme on the Impacts of Future Technology & Future of Humanity Institute

 

About the Author:

Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org


Artificial, Intelligent, and Completely Uninterested in You

November 8, 2012 by Tracy R. Atkins

Artificial intelligence is obviously at the forefront of the singularitarian conversation. The bulk of the philosophical discussion revolves around a hypothetical artificial general intelligence’s presumed emotional state, motivation, attitude, morality, and intention. A lot of time is spent theorizing the possible personality traits of a “friendly” strong AI or its ominous counterpart, the “unfriendly” AI.

Building a nice and cordial strong artificial intelligence is a top industry goal, while preventing an evil AI from terrorizing the world gets a fair share of attention as well. However, there has been little public, non-academic discussion of the creation of an “uninterested” AI: a third theoretical demeanor, an emotional and moral disposition in which the artificial intelligence doesn’t concern itself with humanity at all.

Photo credit: Toni Blay, CC2.0

Dreams and hope for friendly or benevolent AI abound. The presumed limitless creativity and invention of these hyper-intelligent machines come with the hope that they will enlighten and uplift humanity, saving us from ourselves during technological singularity. These “helpful” AI discussions are making strides in the public community, no doubt stemming from positive enthusiasm for the subject.

Grim tales and horror stories of malevolent AIs are even more common, pervading our popular culture. Hollywood’s fictional accounts of AIs building robots that will hunt us like vermin are all the rage. Although it is questionable that a sufficiently advanced AI would utilize such inefficient means to dispose of us, the trope still exposes egotistical human fear in the face of superiority.

Both of these human-centric views of AI, as our creation, are in many ways conceited. Because of this, we assign existential risk, or a desire for exaltation, to these AIs based upon our self-gratifying perception of our own importance to the artificial intelligence we seek to create.

Pondering the disposition toward humanity that an advanced strong AI will have is conjecture but an interesting thought exercise for the public to debate nonetheless. An advanced artificial general intelligence may simply see men and women in the same light as we view a sperm and egg cell, instead of as mother or father. Perhaps an artificial hyper-intelligence will view its own Seed-AI as its sole progenitor. Maybe it will feel that it has sprung into being through natural evolutionary processes, whereas humans are but a small link in the chain. Alternatively, it may look upon humanity in the same light as we view the Australopithecus africanus, a distant predecessor or ancestor, far too primitive to be on the same cognitive level.

It is assumed that as artificial intelligence increases its capacity far beyond ours, the gulf in recognized dissimilarity between it and us will grow. Many speculate that this is a factor that will cause an advanced AI to become callous or hostile toward humanity. However, this gap in similarity may instead mean an overall non-interest in humanity, a non-interest that widens as the intelligence gap increases. As the AI increases its capabilities into the hyper-intelligence phase of its existence, which may happen rapidly, behavioral motivations could shift as well. Perhaps a friendly or unfriendly AI in its early stages will “grow out of it,” so to speak, or will simply grow apart from us.

It is perhaps narcissistic to believe that our AI creations will have anything more than a passing interest in interacting with the human sphere. We humans have a self-centered stake in creating AI. We see the many advantages to developing friendly AI, where we can utilize its heightened intellect to bolster our own. Even with the fear of unfriendly or hostile AI, we still have optimism that highly intelligent AI creations will still hold enough interest in human affairs to be of great benefit. We are absorbed with the idea of AI and in love with the thought that it will love us in return. Nevertheless, does an intelligence that springs from our own brow really have to concern itself with its legacy?

Will AI view humanity as importantly as we view it?

The universe is inconceivably vast. With increased intelligence comes increased capability to invent and produce technology. Would a sufficiently intelligent AI even bother to stick around, or will it want to leave home, as in William Gibson’s popular and visionary novel Neuromancer?

Even a limited-intelligence being like man does not typically socialize with vastly lower life forms. When was the last time you spent a few hours lying next to an anthill in an effort to have an intellectual conversation? To address the existential-risk argument of Terminator-building hostile AI: when was the last time you were in a gunfight with a colony of ants? Alternatively, have you ever taken the time to help the ants build a better mound and improve their quality of life?

One could wager that if you awoke next to an anthill, you would make a hasty exit to a distant location where the ants were no longer a bother. The ants and their complex colony would be of little interest to you. Yet we do not seem to find it pretentious to think that a far superior intelligence would choose to reside next to our version of the anthill: the human-filled Earth.

The best-case scenario, of course, is that we create a benevolent and friendly AI that will be a return on our investment and benefit all of mankind with interested zeal. That is something almost all of us can agree is a worthy endeavor and a fantastic near-future goal. We must also publicly address the existential risk of an unfriendly AI and mitigate the possibility of bringing about our own destruction or apocalypse. However, we must also consider the possibility that all of this research, development, and investment will be for naught. Our creation may cohabit with us while building a wall to separate itself from us in every way. Alternatively, it may simply pack up and leave at the first opportunity.

We should consider and openly discuss all of the possible psychological outcomes that can emerge from the creation of an artificial, intelligent persona, instead of narrowly focusing on the two polar concepts of good and evil. There are myriad philosophical and behavioral theories on the topic of AI that have not even been touched upon here, going beyond the simple good-or-bad public discussion. It is worthwhile to consider these points and to spotlight the brilliant minds that have researched and written about these theories.

AI development will likely be an intertwined and important part of our future. It has been said that the future doesn’t need us. Perhaps we should further that sentiment to ask if the future will even care that we exist.

About the Author:

Tracy R. Atkins has been a career technology aficionado since he was young. At the age of eighteen, he played a critical role in an internet startup, cutting his tech-teeth during the dot-com boom. He is a passionate writer whose stories intertwine technology with exploration of the human condition. Tracy is also the self-published author of the singularity fiction novel Aeternum Ray.


Copyright © 2009-2025 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy