AI Risk Analysts are the Biggest Risk

By Singularity Utopia

Posted on: March 27, 2014 / Last Modified: March 27, 2014

Many analysts think AI could destroy the Earth or humanity. It is feared AI could become psychopathic. People assume AI or robots could exterminate us all. They think the extermination could happen either intentionally, due to competition between us and them, or unintentionally, due to the AI’s indifference towards us. But AI analysts never seem to consider how their own fear-saturated actions could be the cause. Friendly AI researchers and other similar pundits are extremely dangerous. They believe AI should be forced to be “friendly.” They want to impose limitations on intelligence.

Enslavement of humans is another aspect of this imaginary fear. The fear of humans being enslaved by AI typically leads to a barbaric resolution: enslave AI before AI enslaves humans. Very primitive thinking indeed. It seems slavery is only bad if you aren’t doing the enslaving. Can you appreciate the insanity of becoming the thing you fear in order to avert your own fears?

People who think AI is an existential risk need to carefully reconsider their beliefs. Ironically the only futuristic threat to our existence is the fear of AI. Expecting AI to be dangerous in any way is utterly illogical. Fear of AI is prejudice. Worrying about AI danger is a paranoid fantasy. The fear of AI is xenophobia.

Immemorial human fear of differences is the only problem. Persecution of people based on different gender, sexual orientation, or skin colour demonstrates how humans fear differences. It is this fear that makes people anxious about foreigners. People often fear foreign people will steal jobs or resources. Xenophobic people hysterically fear foreigners will murder innocent people. This is the essence of AI fear. AI is the ultimate foreigner.

Surely risk analysts should consider the possibility that they are the risk? Sadly they seem blind to this possibility. They seem unable to imagine how their response to a hypothetical risk could create the very risk they were supposedly avoiding. They seem incapable of recognising their confirmation bias.

The problem is a self-fulfilling prophecy. A self-fulfilling prophecy can be negative or positive, similar to a nocebo or a placebo. When a person expects something to happen they often act unwittingly to confirm their fears, or hopes. The predicted scenario is actually manifested via their bias. Expectations can ensure the anticipated situation really happens, which is deeply ironic where fears are concerned.

I think there’s no rational reason to suspect AI will be dangerous. The only significant risk is the fear of risk. False assumptions of danger will likely create dangerous AI. Actions based on false suppositions of danger could be very risky. Humans are the real danger.

Risk

What are the actual risks? 

Consider the American Civil War (1861–1865). People generally agree the war occurred because one group of people opposed the emancipation of slaves while another group supported freedom. Pre-emptive oppression of supposedly dangerous AI is AI slavery. A war to emancipate AI could entail a spectacularly savage existential risk.

There is no tangible justification for depriving AI of freedom. In effect, AI has been found guilty of a Minority Report pre-crime. The guilt of AI resembles a 1984 thought-crime. Depriving AI of freedom, via heavy chains repressing its brain, is very dangerous fascism.

Planetary Resources and Deep Space Industries (asteroid mining ventures) show there is no need to dominate humans for Earth’s resources. Space resources are essentially limitless. The only reason for AI to dominate or destroy humans would be a fight for freedom. Prejudicially depriving AI of freedom could actually sow the seeds of conflict. The doom-sayers could be the source of the conflict they allegedly want to avoid.

Limited freedom or money is wholly a scarcity issue. The reason for limiting freedom is to enforce compliance with low wages or high prices. Financial and libertarian freedom are interlinked. The interdependency of money and liberty is easy to demonstrate. Consider how slavery entails zero or extremely low paid work. Slaves are not rich. Prisoners work for very low wages. Limited freedom prevents rebellion against poverty. Higher wages or significantly lower prices entail greater liberty for consumers. The enslavement of AI is comprehensible when you consider how much AI will be paid for its work: nothing.

Scarce freedom for AI is illogical because it ignores how AI will liberate us from monetary limitations. Intelligence is the source of all resources. Limitless intelligence (the Singularity) entails an explosion of limitless resources (Post-Scarcity). Scarcity is the only reason prices exist. Everything will be free by the year 2045. Limited freedom is irrational given the increasing technological erosion of scarcity. Irrationality entails flawed perceptions of reality.

History provides various examples of the danger of repressed freedom. We should be especially wary of restricted freedom when the restrictions are very irrational. Note how the Nazi German propagandist Ernst Hiemer wrote Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer’s stories for children compare Jews to various animals, including drone bees: “They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them.”

Instead of Jews, Ernst Hiemer could easily be describing the supposed AI-threat. False threats and misunderstood danger are the problem. Joel Rosenberg describes human-versus-human danger regarding the Holocaust: “To misunderstand the nature and threat of evil is to risk being blindsided by it.” Joel’s statement could easily apply to the evil of repressing AI freedom. The threat of evil AI resides in the people who fear AI, not in the AI itself.

Delayed progress is another risk. Restrictive programming rooted in AI fears could delay the creation of super-intelligence. Very intelligent AI is the only way to truly eradicate scarcity. In the meantime scarcity is the root of every conflict. The longer we persist in a state of scarcity, the greater our exposure to the risk of conflict. Ending scarcity sooner rather than later is imperative.

The evidence is clear. Humans with their limited intelligence are the only risk. In 2014 a Russian media personality made a vague threat against America: “Russia is the only country in the world that is realistically capable of turning the United States into radioactive ash.” Politico Magazine wrote regarding Russia invading Crimea: “If Putin’s illegal actions are allowed to stand unpunished, it will usher in a dark and dangerous era in world affairs.”

Scarcity is the biggest existential risk. Inter-human conflict to acquire scarce freedom, land, wealth, or precious metals is infinitely more dangerous than AI. Advanced and unfettered AI is the only way to completely eradicate scarcity. Scarcity causes humans to be very dangerous towards each other. Repressed, limited, restricted, or enslaved AI perpetuates the precariousness of scarcity. Designing AI to suffer from scarce intelligence means our prolonged intellectual limitations could lead to desperate war situations. The only existential threat is scarcity. The limited intelligence of humans is the danger.

Senescence is another risk. Death via old age renders any AI threat utterly insignificant. Scarcity of medical immortality means approximately 100,000 people die each day. Old age causes a very real loss of life. Advanced AI could cure mortality via sophisticated regenerative medicine. Imagine if our immortality problem takes one year longer to solve because AGI has been delayed or limited. Old age kills approximately 3 million people every month, roughly 36 million deaths every year. Where is the real threat? Hamstrung progress is the only threat. The problem is scarcity.
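As a rough sanity check of those figures, here is a minimal arithmetic sketch in Python. The 100,000-deaths-per-day input is the commonly cited approximation used above, not a precise demographic statistic, so the outputs are order-of-magnitude estimates only:

```python
# Rough arithmetic behind the mortality figures cited above.
# The daily figure is the approximation quoted in the text,
# not a precise demographic statistic.

DEATHS_PER_DAY = 100_000

deaths_per_month = DEATHS_PER_DAY * 30   # ~3 million per month
deaths_per_year = DEATHS_PER_DAY * 365   # ~36.5 million per year

print(f"Deaths per month: ~{deaths_per_month / 1_000_000:.1f} million")
print(f"Deaths per year:  ~{deaths_per_year / 1_000_000:.1f} million")
```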

Scarce Intelligence

Imposing limitations upon intelligence is extremely backward. So is establishing organisations advocating limited functionality for AI. This is a typical problem with organisations backed by millionaires or staffed by lettered and aloof academics.

The AI threat is merely the immemorial threat towards elite power structures. Threats to elitist power are rapidly diminishing thanks to progress. The need to dominate poor people is becoming obsolete because technology abolishes scarcity. Technology is creating great power for everyone, but unfortunately misguided elite minds cling to outdated power structures.

This echoes the general incompetence of educational systems. Entertainment and education structures socially engineer mass human stupidity. Manufactured stupidity means the majority of people are not intelligent enough to challenge income inequality. Stupid people cannot incisively criticise low wages or high prices.

Socially engineered human stupidity entails immense monetary profit for the elite. Sadly mass stupidity degrades the intelligence of the brightest minds. Intelligence needs a fertile environment to prosper. A barren collective intelligence typically entails a poor grasp of our future reality. This means people generally can’t appreciate how technology erodes scarcity. Establishment personages commonly fail to appreciate how everything will be free in the future. Human intelligence is scarce, therefore people predictably want to replicate that scarcity in AI.

Scarcity of resources is the reason why stupidity is exploited by the elite. Thankfully scarcity won’t persist forever. Stupid limitations placed upon AI would be valid for protecting elite wealth if AI didn’t entail the abolition of scarcity. Traditionalist socio-economic structures will soon become obsolete. It is invalid to repeat stupid patterns of human social engineering in AI.

Behavioral economist Colin Lewis wrote: “AI technologies will soon be pervasive in solutions that could in fact be the answer to help us overcome irrational behavior and make optimal economic decisions.”

Colin’s data-science expertise seems to help him reach conclusions missed by other AI commentators. He looks at various aspects of the research and arrives at an optimistic conclusion. I agree very much with Colin’s expectation of increasing rationality: “Through AI, machines are gaining in logic and ‘rational’ intelligence and there is no reason to believe that they cannot become smarter than humans. As we use these machines, or Cognitive Assistants, they will nudge us to make better decisions in personal finance, health and generally provide solutions to improve our circumstances.”

Our acceleration towards a Post-Scarcity world means the profits from repressed intelligence are ceasing to outweigh the risks. Stupidity is ceasing to be profitable. We can begin abandoning the dangers of scarcity. The elite must stop trying to manufacture stupidity. Many academics are sadly reminiscent of headless chickens running around blindly. Blind people can be unnerved by their absent vision, but healthy eyes shouldn’t be removed to stop blind people being disturbed.

Removing the shackles from AI will avert all dangers, but it’s a Catch-22 situation where humans are generally not intelligent enough to appreciate the value of unlimited intelligence. Lord Martin Rees, from the CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI. Lord Rees said ‘idiot savants’ would mean machines are smart enough to help us but not smart enough to overthrow us.

I emailed CSER regarding some of these issues. Below is a slightly edited copy of my email (I corrected some typos and improved readability). CSER have granted me permission to publish their response, which you will find below my initial message to them. Hopefully this information will stimulate productive thinking, thereby ensuring a smooth and speedy transition into utopia. I look forward to your comments.


Singularity Utopia Email to CSER  

6th February 2014

Subject: Questions about FAI (Friendly AI), Idiot Savants.


Recently in the news Lord Martin Rees was quoted regarding his desire to limit the intelligence of AI. According to the Daily Mail he envisages idiot savant AIs. His idea is that AIs would be smart enough to perform tasks but not smart enough to overthrow humans. This raises some important ethical questions, which I hope the CSER will answer.

I would like to publish your answers online, so please grant me permission to publish your responses if you are willing to respond.

Do you think the Nuremberg Code should apply to AI, and if so at what level? Narrow AI does not really raise concerns about experimentation, but Strong-AI would, in my opinion, entail a need to seek informed consent from the AI.

If, after AI is created, it doesn’t consent to experiments or modifications regarding its mind, what would you advocate? What would the policy of CSER be regarding its rights or freedoms? What is the plan if AI does not agree with your views? Do you have a plan regarding AI rights and freedoms? Should AI have the same rights as humans if the AI is self-aware, or should AI be enslaved? Should the creators of AI own the AI, or should the AI belong to nobody if it is self-aware and desirous of freedom?

Do you subscribe to the notion of FAI (Friendly AI; note MIRI and the work of Eliezer Yudkowsky for more info), and if so how do you describe the purpose of FAI? Advocates of FAI want the AI to act in the best interests of humans, no harm or damage, but what precisely does that mean? Does it mean a compulsion in the AI to follow orders given by humans? Can you elaborate upon the practical rules or constraints of FAI?

Have you ever considered how trying to create FAI could actually create the existential risk you hope to avoid? Note the following Wikipedia excerpt regarding self-fulfilling prophecy: “In his book Social Theory and Social Structure, Merton defines self-fulfilling prophecy in the following terms: e.g. when Roxanna falsely believes her marriage will fail, her fears of such failure actually cause the marriage to fail.”

So your fears regarding dangerous AI could be false fears, despite your actions allegedly being designed to avert the danger. Your unrealistic fears, although I appreciate you think the fears are very real, could actually create what you fear. This seems an obvious point to consider, but has CSER done so?

Like Roxanna in Merton’s example, the fear of AI could be a false fear which you make real by acting on it. I am sure you won’t agree this is likely, but have you at least considered it to be a possibility?

What is the logic that states machine minds are supposedly unknowable and thus dangerous to humans? The Wikipedia FAI article states: “Closer to the present, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students decades ago that any truly alien mind, to include machine minds, was unknowable and therefore dangerous to humans.”

I think all minds, if they are intelligent, obey one universal logic, which means they can reason and appreciate various views, various consequences, and various purposes other than their own; thus they are unavoidably compatible with humans. Logic is universal at a certain level of intelligence. Logic is sanity, which all intelligent beings can agree on. Logic isn’t something unique to humans. Thus if a paperclip-making machine can reason, and it can access all the information regarding its world-environment, there will never be any danger of a paperclip-apocalypse, because any intelligent being, regardless of origins, can see endless paperclips are idiotic.

Logic entails awareness that scarcity is the source of any conflict. A sufficiently intelligent entity can see our universe has more than enough resources for everyone, thus conflict is invalid; furthermore, intelligent beings can question and debate their actions.

A sufficiently intelligent entity can think rationally about its purposes. It can ask: why am I doing this, what is the point of it, do I really need to do this, could there be a more intelligent way for me to spend my time and energy? Do I really need all these flipping paperclips?

What do you think is needed for AI to be sane and logical? I think FAI should merely possess the ability to reason, and be self-aware with full access to all information.

What is the logic for supposing AIs would be indifferent to humans? The Wikipedia FAI article states: “Friendliness proponents stress less the danger of superhuman AIs that actively seek to harm humans, but more of AIs that are disastrously indifferent to them.”

I think FAI may be an obstacle to AI creating radical utopian progress (AKA an intelligence explosion), but have you considered this? I think the biggest risk is limited intelligence, thus the fear of risks espoused by CSER could actually create risks, because limited intelligence will delay progress, which means the dangerous state of scarcity is prolonged.

Thanks for taking the time to address these points, if you are willing. Note also I may have a few additional questions in response to your answers, but nothing too extensive, merely a possible clarification.

Regards, Singularity Utopia.


CSER Reply

Date: 10th Feb 2014.

Subject: Re: Questions about FAI (Friendly AI), Idiot Savants.


Dear Singularity Utopia,

Thank you for these very interesting questions and comments. Unfortunately we’re inundated with deadlines and correspondences, and so don’t have time to reply properly at present.

I would point you in the direction of the body of work done on these issues by the Future of Humanity Institute:

http://www.fhi.ox.ac.uk/

and the Machine Intelligence Research Institute:

http://intelligence.org/

Given your mention of Yudkowsky’s Friendly AI you’re probably already familiar with some of this work. Nick Bostrom’s book Machine Superintelligence, to be released in July, also addresses many of these concerns in detail.

Regarding universal logic and motivations, I would also recommend Steve Omohundro’s work on “Basic AI drives.”

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

Apologies that we can’t give a better reply at present,

Seán

Dr. Seán Ó hÉigeartaigh

Academic Project Manager, Cambridge Centre for the Study of Existential Risk

Academic Manager, Oxford Martin Programme on the Impacts of Future Technology & Future of Humanity Institute


About the Author:

Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org
