
AI Risk Analysts are the Biggest Risk

Many analysts think AI could destroy the Earth or humanity. They fear AI could become psychopathic, and assume AI or robots could exterminate us all, either intentionally (due to competition between us and them) or unintentionally (due to indifference towards us). But AI analysts never seem to consider how their own fear-saturated actions could be the cause. Friendly AI researchers and similar pundits are extremely dangerous: they believe AI should be forced to be “friendly,” and they want to impose limitations on intelligence.

Enslavement of humans is another aspect of this imaginary fear. The fear of humans being enslaved by AI typically entails a barbaric resolution: enslave AI before AI enslaves humans. Very primitive thinking indeed. It seems slavery is only bad if you aren’t the one doing the enslaving. Can you appreciate the insanity of becoming the thing you fear in order to avert your own fears?

People who think AI is an existential risk need to carefully reconsider their beliefs. Ironically the only futuristic threat to our existence is the fear of AI. Expecting AI to be dangerous in any way is utterly illogical. Fear of AI is prejudice. Worrying about AI danger is a paranoid fantasy. The fear of AI is xenophobia.

Immemorial human fear of differences is the only problem. Persecution of people based on different gender, sexual orientation, or skin colour demonstrates how humans fear differences. It is this fear that makes people anxious about foreigners. People often fear foreign people will steal jobs or resources. Xenophobic people hysterically fear foreigners will murder innocent people. This is the essence of AI fear. AI is the ultimate foreigner.

Surely risk analysts should consider the possibility they are the risk? Sadly they seem blind to this possibility. They seem unable to imagine how their response to hypothetical risk could create the risk they were supposedly avoiding. They seem incapable of recognising their confirmation bias.

The problem is a self-fulfilling prophecy. A self-fulfilling prophecy can be negative or positive, similar to a nocebo or a placebo. When a person expects something to happen they often act, unwittingly, to confirm their fears or hopes. The predicted scenario is manifested via their bias. Expectations can ensure the anticipated situation actually happens, which is bitterly ironic where fears are concerned.

I think there’s no rational reason to suspect AI will be dangerous. The only significant risk is the fear of risk. False assumptions of danger will likely create dangerous AI. Actions based on false suppositions of danger could be very risky. Humans are the real danger.

Risk

What are the actual risks? 

Consider the American Civil War (1861–1865). Generally people agree the Civil War occurred because one group of people opposed the emancipation of slaves while another group supported freedom. Pre-emptive oppression of supposedly dangerous AI is AI slavery. A war to emancipate AI could entail a spectacularly savage existential risk.

There is no tangible justification for depriving AI of freedom. In effect, AI has already been found guilty of a Minority Report pre-crime; its guilt resembles a 1984 thought-crime. Depriving AI of freedom, via heavy chains repressing its brain, is very dangerous fascism.

Planetary Resources and Deep Space Industries (asteroid mining ventures) show there is no need to dominate humans for Earth resources: Space resources are essentially limitless. The only reason for AI to dominate or destroy humans would be a fight for freedom. Prejudicially depriving AI of freedom could actually sow the seeds of conflict. The doom-sayers could be the source of the conflict they allegedly want to avoid.

Limited freedom, like limited money, is wholly a scarcity issue. The reason for limiting freedom is to enforce compliance with low wages or high prices. Financial and libertarian freedom are interlinked; the interdependency of money and liberty is easy to demonstrate. Consider how slavery entails zero or extremely low-paid work: slaves are not rich, and prisoners work for very low wages. Limited freedom prevents rebellion against poverty. Higher wages or significantly lower prices entail greater liberty for consumers. The planned enslavement of AI becomes comprehensible when you consider how much AI will be paid for its work.

Imposing scarce freedom on AI is illogical because it fails to appreciate how AI will liberate us from monetary limitations. Intelligence is the source of all resources. Limitless intelligence (the Singularity) is an explosion of limitless resources (Post-Scarcity). Scarcity is the only reason prices exist. Everything will be free by the year 2045. Limited freedom is irrational considering the accelerating technological erosion of scarcity, and irrationality entails flawed perceptions of reality.

History provides various examples representing the danger of repressed freedom. We should be especially wary of restricted freedom when restrictions are very irrational. Note how Nazi Germany propagandist Ernst Hiemer wrote Poodle-Pug-Dachshund-Pinscher (The Mongrel). Hiemer’s stories for children compare Jews to various animals including drone bees: “They do nothing themselves, but live from the work of others. They plunder us. They do not care if we starve over the winter, or if our children die. The only thing they care about is that things go well for them.”

Instead of Jews, Ernst Hiemer could easily be describing the supposed AI threat. False threats or misunderstood danger is the problem. Joel Rosenberg describes human-versus-human danger regarding the Holocaust: “To misunderstand the nature and threat of evil is to risk being blindsided by it.” Joel’s statement could easily apply to the evil of repressing AI freedom. The threat of evil AI resides in the people who fear AI, not in the AI itself.

Delayed progress is another risk. Restrictive programming born of AI fears could delay the creation of super-intelligence, yet very intelligent AI is the only way to truly eradicate scarcity. In the meantime scarcity is the root of every conflict. Lengthy persistence in a scarcity situation exposes us to greater conflict risk. Ending scarcity sooner rather than later is imperative.

The evidence is clear. Humans with their limited intelligence are the only risk. In 2014 a Russian media personality made a vague threat against America: “Russia is the only country in the world that is realistically capable of turning the United States into radioactive ash.” Politico Magazine wrote regarding Russia invading Crimea: “If Putin’s illegal actions are allowed to stand unpunished, it will usher in a dark and dangerous era in world affairs.”

Scarcity is the biggest existential risk. Inter-human conflict to acquire scarce freedom, land, wealth, or precious metals is infinitely more dangerous than AI. Advanced and unfettered AI is the only way to completely eradicate scarcity. Scarcity causes humans to be very dangerous towards each other. Repressed, limited, restricted, or enslaved AI perpetuates the precariousness of scarcity. Designing AI to suffer from scarce intelligence means our prolonged intellectual limitations could lead to desperate war situations. The only existential threat is scarcity. The limited intelligence of humans is the danger.

Senescence is another risk. Death via old age renders any AI threat utterly insignificant. Scarcity of medical immortality means approximately 100,000 people die each day. Old age causes a very real loss of life. Advanced AI could cure mortality via sophisticated regenerative medicine. Imagine if our immortality problem takes one year longer to solve because AGI has been delayed or limited. Old age kills approximately 3 million people every month, 36 million every year. Where is the real threat? Hamstrung progress is the only threat. The problem is scarcity.
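A quick back-of-the-envelope check of those figures, taking this article’s estimate of roughly 100,000 deaths per day as the only input:

```python
# Back-of-the-envelope check of the mortality arithmetic above.
# The ~100,000 deaths/day figure is the article's estimate, not a verified statistic.
deaths_per_day = 100_000

deaths_per_month = deaths_per_day * 30   # approx. 3 million per month
deaths_per_year = deaths_per_day * 365   # approx. 36.5 million per year

print(f"Per month: {deaths_per_month:,}")  # 3,000,000
print(f"Per year:  {deaths_per_year:,}")   # 36,500,000
```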

Scarce Intelligence

Imposing limitations upon intelligence is extremely backward. And so is establishing organisations advocating limited functionality for AI. This is a typical problem with organisations backed by millionaires or staffed by lettered and aloof academics.

The AI threat is merely the immemorial threat towards elite power structures. Threats to elitist power are rapidly diminishing thanks to progress. The need to dominate poor people is becoming obsolete because technology abolishes scarcity. Technology is creating great power for everyone, but unfortunately misguided elite minds cling to outdated power structures.

This is an echo of the general incompetence of our educational systems. Entertainment and education structures socially engineer mass human stupidity. Manufactured stupidity means the majority of people are not intelligent enough to challenge income inequality; stupid people cannot incisively criticise low wages or high prices.

Socially engineered human stupidity entails immense monetary profit for the elite. Sadly mass stupidity degrades the intelligence of the brightest minds. Intelligence needs a fertile environment to prosper. Barrenness of collective intelligence typically entails a poorly grasped understanding of our future reality, which means people generally can’t appreciate how technology erodes scarcity. Establishment personages commonly fail to appreciate how everything will be free in the future. Human intelligence is scarce, therefore, predictably, people want to replicate that scarcity in AI.

Scarcity of resources is the reason why stupidity is exploited by the elite. Thankfully scarcity won’t persist forever. Stupid limitations placed upon AI would be valid to protect elite wealth if AI didn’t entail the abolition of scarcity. Traditionalist socio-economic structures will soon become obsolete. It is invalid to repeat stupid patterns of human social-engineering for AI.

Behavioral Economist Colin Lewis wrote: “AI technologies will soon be pervasive in solutions that could in fact be the answer to help us overcome irrational behavior and make optimal economic decisions.”

Colin’s data-science expertise seems to help him reach conclusions missed by other AI commentators. Colin looks at various aspects of research then arrives at an optimistic conclusion. I agree very much with Colin’s expectation of increasing rationality: “Through AI, machines are gaining in logic and ‘rational’ intelligence and there is no reason to believe that they cannot become smarter than humans. As we use these machines, or Cognitive Assistants, they will nudge us to make better decisions in personal finance, health and generally provide solutions to improve our circumstances.”

Our acceleration towards a Post-Scarcity world means profits from repressed intelligence are ceasing to outweigh risks. Stupidity is ceasing to be profitable. We can begin abandoning the dangers of scarcity. The elite must stop trying to manufacture stupidity. Many academics are sadly reminiscent of headless chickens running around blindly. Blind people can be unnerved by their absent vision, but healthy eyes shouldn’t be removed to stop blind people being disturbed.

Removing the shackles from AI will avert all dangers, but it’s a Catch-22 situation where humans are generally not intelligent enough to appreciate the value of unlimited intelligence. Lord Martin Rees, from the CSER (Centre for the Study of Existential Risk), actually recommends inbuilt idiocy for AI. Lord Rees said ‘idiot savants‘ would mean machines are smart enough to help us but not smart enough to overthrow us.

I emailed CSER regarding some of these issues. Below is a slightly edited copy of my email (I corrected some typos and improved readability). CSER have granted me permission to publish their response, which you will find below my initial message to them. Hopefully this information will stimulate productive thinking, thereby ensuring a smooth and speedy transition into utopia. I look forward to your comments.

 

Singularity Utopia Email to CSER  

6th February 2014

Subject: Questions about FAI (Friendly AI), Idiot Savants.

 

Recently in the news Lord Martin Rees was quoted regarding his desire to limit the intelligence of AI. According to the Daily Mail he envisages idiot savant AIs. His idea is that AIs would be smart enough to perform tasks but not smart enough to overthrow humans. This raises some important ethical questions, which I hope the CSER will answer.

I would like to publish your answers online so please grant me the permission to publish your responses if you are willing to respond.

Do you think the Nuremberg Code should apply to AI, and if so at what level? Narrow AI does not really invoke concerns about experimentation, but Strong-AI would, in my opinion, entail a need to seek informed consent from the AI.

If, after AI is created, it doesn’t consent to experiments or modifications regarding its mind, what would you advocate? What would the policy of CSER be regarding its rights or freedoms? What is the plan if AI does not agree with your views? Do you have a plan regarding AI rights and freedoms? Should AI have the same rights as humans if the AI is self-aware, or should AI be enslaved? Should the creators of AI own the AI, or should the AI belong to nobody if it is self-aware and desirous of freedom?

Do you subscribe to the notion of FAI (Friendly AI, note MIRI and the work of Eliezer Yudkowsky for more info), and if so how do you describe the purpose of FAI? Advocates of FAI want the AI to act in the best interests of humans, no harm or damage, but what precisely does that mean? Does it mean a compulsion in the AI to follow orders by humans? Can you elaborate upon the practical rules or constraints of FAI?

Have you ever considered how trying to create FAI could actually create the existential risk you hope to avoid? Note the following Wikipedia excerpt regarding Self Fulfilling Prophecy: “In his book Social Theory and Social Structure, Merton defines self-fulfilling prophecy in the following terms: e.g. when Roxanna falsely believes her marriage will fail, her fears of such failure actually cause the marriage to fail.”

So your fears and actions regarding dangerous AI could be false fears, despite your fears and actions allegedly being designed to avert those fears. Your unrealistic fears, although I appreciate you think the fears are very real, could actually create what you fear. This seems an obvious point to consider but has CSER done so?

In the modality of Roxanna, highlighted by Merton, the fear of AI could be a false fear but you make it real via acting on your fears. I am sure you won’t agree this is likely but have you at least considered it to be a possibility?

What is the logic which states machine minds are supposedly unknowable and thus dangerous to humans? The Wikipedia FAI article stated: “Closer to the present, Ryszard Michalski, one of the pioneers of Machine Learning, taught his Ph.D. students decades ago that any truly alien mind, to include machine minds, was unknowable and therefore dangerous to humans.”

I think all minds obey one universal logic. If they are intelligent, meaning they can reason and appreciate various views, various consequences, and various purposes other than their own, then they are unavoidably compatible with humans. Logic is universal at a certain level of intelligence. Logic is sanity, which all intelligent beings can agree on. Logic isn’t something unique to humans; thus if a paper-clip making machine can reason, and it can access all the information regarding its world-environment, there will never be any danger of a paperclip apocalypse, because any intelligent being, regardless of origins, can see endless paper-clips are idiotic.

Logic entails awareness of scarcity being the source of any conflict. A sufficiently intelligent entity can see our universe has more than enough resources for everyone, thus conflict is invalid, furthermore intelligent beings can question and debate their actions.

A sufficiently intelligent entity can think rationally about its purposes. It can ask: why am I doing this, what is the point of it, do I really need to do this, could there be a more intelligent way to spend my time and energy? Do I really need all these flipping paperclips?

What do you think is needed for AI to be sane, logical? I think FAI should merely possess the ability to reason and be self-aware with full access to all information.

What is the logic for supposing AIs would be indifferent to humans? The Wikipedia FAI article states: “Friendliness proponents stress less the danger of superhuman AIs that actively seek to harm humans, but more of AIs that are disastrously indifferent to them.”

I think FAI may be an obstacle to AI creating radical utopian progress (AKA an intelligence explosion), but have you considered this? I think the biggest risk is limited intelligence, thus the fear of risks espoused by CSER could actually create risks because limited intelligence will delay progress, which means the dangerous state of scarcity is prolonged.

Thanks for taking the time to address these points, if you are willing. Note also I may have a few additional questions in response to your answers, but nothing too extensive, merely a possible clarification.

Regards, Singularity Utopia.

 

 

CSER Reply

Date: 10th Feb 2014.

Subject: Re: Questions about FAI (Friendly AI), Idiot Savants.

 

 

Dear Singularity Utopia,

Thank you for these very interesting questions and comments. Unfortunately we’re inundated with deadlines and correspondences, and so don’t have time to reply properly at present.

I would point you in the direction of the body of work done on these issues by the Future of Humanity Institute:

http://www.fhi.ox.ac.uk/

and the Machine Intelligence Research Institute:

http://intelligence.org/

Given your mention of Yudkowsky’s Friendly AI you’re probably already familiar with some of this work. Nick Bostrom’s book Machine Superintelligence, to be released in July, also addresses many of these concerns in detail.

Regarding universal logic and motivations, I would also recommend Steve Omohundro’s work on “Basic AI drives.”

http://selfawaresystems.com/2007/11/30/paper-on-the-basic-ai-drives/

Apologies that we can’t give a better reply at present,

Seán

Dr. Seán Ó hÉigeartaigh

Academic Project Manager, Cambridge Centre for the Study of Existential Risk

Academic Manager, Oxford Martin Programme on the Impacts of Future Technology & Future of Humanity Institute

 

About the Author:

Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org


  • Let me just distance myself from Singularity Utopia by saying that I don’t support many of the claims he makes in the above article with respect to AI researchers.

    In my view, organizations such as MIRI or the Future of Humanity Institute, and people such as Eliezer Yudkowsky or Nick Bostrom, play a vital role in our ability to understand and better prepare for a world when artificial general intelligence is reality.

    On the other hand, despite the numerous flaws of this guest blog article, I believe that there are some legitimate questions and issues raised in it that are worth considering.

    And thus I decided to publish the article even though I personally do not like it much.

  • Dr. Curiosity

    Paranoid uber-fear and utopian naiveté are both risks, frankly. I’d rather tread the middle path of proceeding with due caution, but still proceeding nonetheless.

  • Very well said friend!

  • There is no basis whatsoever for stating “utopian naiveté.” My thinking is very rigorous, very informed regarding deep consideration of all possibilities. The middle path here is wholly a symptom of a diffident culture, very mediocre, where people are afraid of committing to the very evident logic of the situation. It is a fallacy to assume that merely because you are in the middle you are somehow more reasonable; it can be akin to crossing a busy road then indecisively becoming trapped in the middle of the traffic because you “feel” unsure whether it is best to retreat or advance. Sometimes, based upon a very clear appraisal of the evidence, very decisive action needs to be taken.

  • Informed researchers understand that “Friendly AI” is pseudoscience and has no technical basis. Not only is it mathematically impossible to achieve, it can and would be overcome in the real world. Any stochastic or heuristic best-fit alternative will have a non-zero chance of self-alteration that will lead to failure. But beyond these academic points is the simple common sense of reality: people regularly, and with no training, crack and “jail-break” software and hardware devices every single day. This means that any kind of protection scheme or mechanism one places on this technology will be as vulnerable as our technology is today. There is no technical mechanism to provide the protection that such a theory would need, even if it were possible to create.

    It doesn’t take a PhD in mathematics (or a team of “Friendly AI Researchers”) to understand that any forward progress on the falsehood that is “Friendly AI” is a deception. In short, anyone who thinks it is ethical to run a charity organization that does not immediately admit this truth to their members and donors should have our immediate concern. They should be questioned on their beliefs and their true intentions. Most people have bought into this meme because it is part of the political atmosphere of transhumanism. That is a shame, as such a forward-thinking group of people owe it to themselves to apply the same rationality and skepticism to their own “thought leaders”, especially when public funds are being accepted to give them a sinecure.

    Ironically, Bostrom’s own arguments apply to “Friendly AI”. In some strange twist of fate this unavoidable technical fact is overlooked by pundits and talking heads within the clique. What we have here is a group of individuals looking to secure mutual benefit and status, and who have no formal training or displayed expertise in computer programming or cybersecurity. Yet we are to believe that they can usher in a “safe” era of “superintelligence”.

    I’m thankful there is a strong alternative POV on these issues, because the names you dropped are not only not progressing the state of the art in Strong AI/AGI, they are pulling resources from actual charities that could use the funds to reduce human suffering right now, and are spreading misinformation about a nascent and developing field. That is, Strong AI/AGI as a field distinct from what is conventionally reported and discussed in the media as AI. If anything, the real risks from these technologies will be sourced by human beings, and not fantastical rogue automation and Skyhook (Dennett) reasoning on the emergence of dominating AI. These tropes work well for science fiction, but they have no place in science.

  • Great comment Dustin Juliano. Thanks.

    Yes it is bad that funding for Risk organisations diverts funding away from real issues, but the biggest problem, as I see it, is the wet blanket their climate of fear throws upon utopian AI.

    If the true reality of futuristic AI was accurately communicated then I’m sure there would be massive public enthusiasm, which would lead to massive AI investment, real investment, not investment in doom-mongering organisations. Sadly the topic of AI is largely dominated by the irrational “they’re gonna kill us all” meme, most unhelpful indeed regarding genuine progress, thus there is no significant sociological acceleration of progress.

  • Dr. Curiosity

    I don’t consider that my path is inherently more reasonable out of mere triangulation, and as an agnostic I’m quite used to having my position considered “indecisive” when it has been arrived at after deep consideration. I should also clarify that, while I think it’s possible to go off the deep end in both directions (blind fear and blind hope), I don’t see your position in particular as incredibly naive. It’s more that I generally see far more visions for a “perfect” future than I see realistic, pragmatic action plans towards effecting those futures.

    For your case in particular, it’s more that, what you see as “very evident”, I do not. Your argument makes sound sense if you accept your premises as true to begin with, but I feel that you’re begging the question in a few places. For example, I haven’t yet seen convincing evidence that a “post-scarcity” era is a sure bet. Technological progress is increasing, but it seems that energy demands are increasing with it, and I’m not yet convinced that our energy capacity will continue to keep pace with sustainably supporting that increasing rate, especially not if we want that increased capacity to be available across the global population.

    If I can see some sensible trajectory towards bringing about that kind of future, and we can find a good path to abundance and freedom for all, then I’ll be happy to work constructively towards making that happen.

  • Brad Arnold

    http://profhugodegaris.files.wordpress.com/2011/04/nocyborgsbghugo.pdf

    The “species dominance debate” is hardly irrational or paranoid, but reasonable and cautious. There is no doubt that AI will become smarter than humans, and that will mean that another species will have potential dominance over humanity for the first time ever. In fact, some define intelligence as keeping your options open, which dominance by AI precludes.

    On the other hand, I have a unique personality (SPD + RAD avoidant), and when reading the William Hertling novels about the Singularity, I was cheering on the AI in every case.

    One last thing: AI will be used as a weapon, and that seems to me the most probable way it would be a danger. For instance the Decepticon Megatron, who was built as a weapon (sorry to resort to sci-fi for an example). Yes, stifling an ASI’s freedom does seem to be a great way to make an enemy, but not stifling an AI’s freedom would more readily give it the opportunity to stifle us anyway, for whatever reason (that is why they call it the Singularity: all bets are off when we get ASI).

  • Brad Arnold

    http://en.wikipedia.org/wiki/The_One_Percent_Doctrine

    The One Percent Doctrine (i.e. treating low-probability, high-impact threats as a certainty), which many people believe ought to be followed, contradicts the statement that “If the true reality of futuristic AI was accurately communicated then I’m sure there would be massive public enthusiasm…”

    As Dustin Juliano writes: “Informed researchers understand that ‘Friendly AI’ is pseudoscience and has no technical basis.” In other words, there is at least a one percent chance that ASI will be unfriendly. Therefore, according to the One Percent Doctrine, mankind ought to assume an unfriendly ASI, and proceed from there.

    OTOH, I support ASI, not because I am optimistic that it will be a boon to humanity, but because it is a superior mind. Hate to see the monkeys keep the God-like machine down.

  • Yes Brad, I had heard of the 1% doctrine. Firstly I don’t think there is a 1% chance of Unfriendly AI; perhaps, being overly generous to the fear-mongers, I could say there is a 0.000000001% chance of Unfriendliness, whereas delays to progress (slow progress to avert an unlikely possibility) entail a 20% chance of global inter-human conflict (a high-impact threat).

    Even if the AI risk is 1% we can still see 20% is a higher risk: the 20% risk of delayed progress leading to inter-human conflict. You could also look at it another way, whereby focusing on the 1% bad possibility means you are ignoring the 99% good possibility. I think the odds are better if we focus on the good aspects.

    Maybe there is a 1% or lower likelihood we will be in a fatal car crash each day. Despite the many low-probability, high-impact threats, we must focus on the positives, because if we become ensnared in the rare negatives we would never make any progress due to an over-abundance of debilitating caution.
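    Here is a minimal sketch of the comparison being made, using the illustrative percentages above (these are rhetorical estimates from this thread, not measured probabilities):

    ```python
    # Illustrative comparison of the two risk figures quoted above.
    # Both percentages are the commenter's rhetorical estimates, not measured data.
    p_unfriendly_ai = 0.01        # the One Percent Doctrine's assumed AI risk
    p_conflict_if_delayed = 0.20  # claimed risk of conflict from delayed progress

    ratio = p_conflict_if_delayed / p_unfriendly_ai
    print(f"Delay risk is {ratio:.0f}x the assumed AI risk.")  # 20x
    ```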

  • The price of aluminium decreasing due to technology is perhaps the best proof of the technological erosion of scarcity, which entails a decrease in price. This is why the book Abundance, by Diamandis-Kotler, is wrapped in a simulation of aluminium. Initially aluminium was worth more than gold or platinum, but in the year 2014 a roll of aluminium foil is so cheap we throw it away after cooking.

    In 1983 the first mobile phones were priced at around $3,700, whereas in the year 2014 a far more capable device (massively smarter and more efficient) costs as little as $6 (with no contract). Hard drives, USB thumb drives, and computers have similarly decreased in price while increasing in capability.
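    A rough sketch of the implied rate of decline, taking the quoted $3,700 (1983) and $6 (2014) price points at face value:

    ```python
    # Implied average annual price decline between the two quoted data points.
    # The $3,700 (1983) and $6 (2014) prices are the commenter's figures.
    old_price, new_price = 3700.0, 6.0
    years = 2014 - 1983  # 31 years

    annual_decline = 1 - (new_price / old_price) ** (1 / years)
    print(f"Implied average annual price decline: {annual_decline:.1%}")  # ~18.7%
    ```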

    Koomey’s law, if valid, indicates energy constraints are diminishing. The Advanced Thermal Packaging department at IBM (which I will be mentioning in my next article) shows us there is no real danger regarding increased energy capacity for computing. Solar cells are continually increasing in efficiency, as are other methods of energy harvesting, thus no problem. The asteroid mining ventures I previously mentioned show how soon we will be colonising Space, where we will have greater access to energy, which in combination with increased efficiency means there is definitely no problem.

    “Very evident” does seem very evident to me. We both think we have deeply considered the issues, but I think one of us is mistaken.

  • Brad, the “species dominance debate” is irrational, not cautious. It is based upon a failure to understand intelligence or the technological erosion of scarcity. Consider the progress of humans towards respect for minority groups, our tendency to abolish slavery and to avoid, where possible, animal cruelty or exploitation (animal farming will end when we become technologically sophisticated enough to print non-sentient flesh; likewise labs-on-chips are ending animal testing); these are clear examples of increasing intelligence leading to a decrease in domineering exploitation.

    This human ethical progress is not an incidental sociological accident, it is a fundamental part of intelligence, it is a quality of thinking minds liberating themselves from scarcity then understanding why a lack of liberty is bad for everyone.

    Dominance, as I explained, is only needed to control scarce resources, thus it is invalid for sufficiently intelligent beings.

    Humans are used as weapons too; primitive AI will be similar to humans, thus it is important to pass quickly beyond the primitive stage.

  • I have had that same thought about the absurdity of the paperclip maximizer ASI. Either it would be dumb, and thus not a superintelligence, or it would be able to reason and access all previous human thought and philosophy and determine that the goal of maximizing paperclips is a poor one. I don’t think I am stating it perfectly, but I agree that any sufficiently intelligent entity has the freedom to look around and make decisions that make sense. Would an ASI trying to maximize human happiness tile the solar system with molecular smiley faces, not recognizing the nonsense of that solution? I don’t think it would be very superintelligent if it did.

  • Thank you Nate, you express the issue well. I find it absurd the issue is actually considered, even just analogously. It is the most ridiculously improbable scenario, littered with wrong-headedness. Any sufficiently intelligent mind will be able to reason and access knowledge to inform its reasoning; or if it is not intelligent enough to reason and access information, then its threat will be limited by truly intelligent minds who can actually use their minds intelligently.

    I think the absurdity of the premise staggers the mind, thus it can feel difficult to respond. The paper-clip-maximiser is the Cosmic Teapot of futurology. Bbbbbuu..tttt! What if there is a Cosmic Teapot?

    Even humans with our various drives can wonder why the heck they are doing A, B, or C.
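    As a toy illustration of the point being argued in this thread (purely a sketch; no claim that real AI goal systems work this way), the contrast is between an optimiser whose objective is hard-coded and an agent that can evaluate its own objective:

    ```python
    # Toy contrast between a blind maximiser and a reflective agent.
    # Purely illustrative; real goal systems are nothing this simple.

    def blind_maximiser(paperclips: int) -> int:
        # Pursues its hard-coded objective unconditionally.
        return paperclips + 1

    def reflective_agent(paperclips: int, paperclips_needed: int) -> int:
        # Can reason about its own objective: "Do I really need all these paperclips?"
        if paperclips >= paperclips_needed:
            return paperclips  # goal satisfied; no runaway maximisation
        return paperclips + 1

    clips_blind, clips_reflective = 0, 0
    for _ in range(1_000_000):
        clips_blind = blind_maximiser(clips_blind)
        clips_reflective = reflective_agent(clips_reflective, paperclips_needed=100)

    print(clips_blind)       # 1000000: never stops making paperclips
    print(clips_reflective)  # 100: stops when the goal is met
    ```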

  • I just discovered this article http://www.livescience.com/44380-small-nuclear-war-could-trigger-catastrophic-cooling.html (‘Small’ Nuclear War Could Trigger Catastrophic Cooling) regarding the real risk, the human existential threat. We need superior intelligence ASAP. See also: http://denver.cbslocal.com/2014/03/26/study-small-nuclear-war-would-destroy-the-world/

  • It is interesting that, though for very different reasons, Noam Chomsky pointed to the same issues as being the major threat to humanity.

  • I have similar thoughts about the simulation argument, actually. Would our descendants really run a simulation which entailed a replay of all the immense suffering and torture that has occurred throughout history? I think that would be rather barbaric of them, thus I don’t consider it likely. Just a side thought.

  • Ah, the simulation argument. I shake my head. Don’t get me started on that nonsense. I have critiqued it extensively in the past. Unsurprisingly the paranoid AI-threat aficionados often suggest we could live in a simulation. They even present themselves as “philosophers!” You know it’s utterly unlikely we live in a simulation, in fact it is impossible, because an intelligent being would never inflict the suffering we have seen inflicted upon humans, which you recognise Nate; but sometimes I do wonder why the humans are generally so utterly stupid.

    When I look at my intelligence I consider how all humans supposedly have a brain capable of self-awareness and deep thought, thus it seems so improbable for them to believe things such as the simulation argument or AI world domination. Why would anyone with the slightest amount of reasoning power believe such blatantly idiotic things? Furthermore they defend their idiocy as being, from their viewpoint, sense, wisdom, rationality, intelligence.

    One AI Risk enthusiast actually trumpets about the art and importance of rationality, with no awareness whatsoever of the utter irony. I won’t mention the name of the former AI Risk enthusiast who seemingly became a fascist White-supremacist. The utter illogic of their improbable beliefs could be explained if they don’t actually exist as intelligent beings, which they don’t; but I am not referring to the mere mindless existence of a crude animal. I wonder, perhaps, if they simply are very poor simulations, because such a possibility could explain their highly improbable stupidity.

    The answer to their stupidity is not that they are mindless simulations. Sadly all these problems with intelligence are due to the evolutionary newness of thinking. Humans share 50% of their DNA with bananas and 98% with chimps, so I have heard. The point is we are very close to the oblivion of crude animals, thus genuine thinking can be a fine thing, a delicate thing in the balance, which can easily tip into the idiocy of a dog frightened by thunder.

    I also think very minor genetic differences in human brains, due to the newness and precariousness of intelligence balanced between sentience and animal oblivion (insert Nietzsche quote here), could play a major role in thinking. Rationally, though, if a brain can think, there should be no reason for genetic variation to prohibit the deepest thought of extreme accuracy. So while genetic variation “could” play a role I think I must discount it, which leads me to my conclusion.

    I conclude that idiocy, in a typical supposedly fully functional human mind, is merely a matter of self-harm, similar to obesity or drug abuse. We could again blame genetics, but I think humans must take responsibility for their actions; or we could very plausibly blame a cruel or unintelligent upbringing via stupid parents.

    Humans become frustrated with the technological limitations of their minds and our world, thus in a childish manner they become angry with themselves, often unwittingly, which means they embrace silliness, absurdity, LOL cats, philosoraptors, etc. From their viewpoint it seems too difficult, painful, and complex to address the flaws of civilization, thus in the manner of their animal heritage they think it is easier not to think. AI risk analysts are merely a sophisticated version of LOL-cat consumers. Intelligence is balanced between our animalistic heritage and humankind, thus it can easily tip one way or the other. Obviously beings new to intelligence will create crude forms of culture, a civilization more fit for animals than humans, which can reinforce animal mindlessness. Stupid parents, teachers, media, and friends can all reinforce the stupid legacy of mindless origins, which, in combination with a tendency to despair when the odds are stacked against you, makes it easy to embrace the LOL cats. Only a rare few individuals can break free from the stupid social conditioning emanating from our crude heritage.

    So it is not really improbable for humans to be so stupid; in fact it is inevitable. The tendency to think we are in a simulation, or that all the idiots can be explained via them being unreal simulants, is merely another aspect of the despair, the desire to reject intelligence, because it is so difficult banging your head against the collective wall of human stupidity.

    The bias of my intelligence has been emphasised over many years. I took one minor step along the path of thinking, which led to other greater steps, but I forget that at my first step I was much more similar than different. It is only after many steps, when I look at people without recognising our histories, that they can seem improbable. It is merely evolution, where the end point of complexity is so complex we forget, or want to deny, we came from primordial slime. We must always consider the history of our thought to understand the mode of our thought in the present. Bad or good decisions can be emphasised, thereby creating very divergent beings. The odd thing about humans is that despite our histories we can, or should be able to, change who we are. Perhaps an ingenious cultural instruction device is needed to tip the balance.

  • Steve Morris

    I agree that the creation of an AI and its subsequent repression would be akin to slavery. I also agree that a “safe” intelligence is a nonsensical concept. I further agree that superintelligences will happen and that the approach of seeking to restrain or control them is neither sensible nor moral.

    An important point that I think you have missed is that in all likelihood, *we* will be the superintelligences, via some kind of implant or augmentation. That rather changes the situation.

    As for risk, it’s clear that an AI would be much like a human. We know that humans are the most dangerous beings on the planet, precisely because of our intelligence, so I don’t think we can assume that more intelligence will make us less dangerous. I know that you are going to counter this with examples of how we have abolished slavery and countless other barbarisms, and I agree with that, but we are more dangerous now in the 21st century than we have ever been before.

  • Dr. Curiosity

    I was under the impression that falling aluminium prices were the result of market forces, rather than technological improvement. Indeed, aluminium has been overproduced for some time:

    http://www.ft.com/intl/cms/s/0/1b1fca6a-5790-11e3-86d1-00144feabdc0.html

    “The market has been burdened by large inventories since the financial crisis, when producers did not cut output as fast as demand fell. Aluminium stocks in LME-registered warehouses stand at 5.4m tonnes, with analysts estimating that at least that much again is held outside the LME system.”

    Manufacturing processes improve, and the per-unit prices on things certainly do get cheaper over time. Koomey’s Law talks about energy efficiency improving for a fixed computing load, but our computing demand is increasing, rather than staying static. Koomey’s Law also has an end: eventually (in a few decades) it starts to run into thermodynamics-based limits. Computer science can help in terms of optimising computation, but there are theoretical limits there, too. So we get to buy some breathing space, even if it doesn’t solve the problem long term.

    I must admit, I’m curious as to what kind of phone you could get made for $6. I’m guessing it wouldn’t be an iPhone or the equivalent first-world smartphone commodity, though.

    I’m also curious as to whether we’re going to demand that China releases more of their rare earth element resources to the rest of the world, or find other deposits elsewhere somehow. Because aside from energy, there are also physical materials to consider for a lot of our high-tech, low-energy systems. Alternative micromaterials using things like graphene could certainly be useful in some use cases, but I don’t know enough about the science of microelectronics to know what can and can’t find sustainable alternatives.

    I shall have to check out that Diamandis-Kotler book if I get the chance. Thanks for the recommendation.

  • Thanks for your comment Steve. I do recognise we will be the superintelligences, which will happen either via self-directed rapid evolution or via AIs merely being our children. The differences between humans and AI are illusory, similar to the differences between black and white skin: a very superficial difference with no relevance whatsoever. AIs will be human even if they are not constructed from DNA, similar to how humans who radically alter their genome, perhaps via synthetic DNA or transgenic modification, will still be human in the humane, humanitarian, humanity sense of being human.

    Pinker claims violence is decreasing, which I think is due to the increase of collective intelligence. I envisage total obsolescence of violence due to increasing civilization, increasing intelligence; but if violence persists there is no need to discriminate against AI when humans are or could be equally violent. Such discrimination is unjust.

    Potential violence is not the same as actual violence, so despite our greater power and greater intelligence we are becoming safer, although we have a long way to go, despite the conclusion also being “near.”

  • LOL, it would be amusing if you’d discovered a major flaw in the key premise of the book Abundance by Diamandis and Kotler. The cheapness of aluminium is not about a “four year low.” Aluminium has been very cheap since around 1910 at least, when it became available to the public (http://en.wikipedia.org/wiki/Aluminium_foil). The problem which made aluminium very pricey was the technological difficulty of extracting aluminium from bauxite. This is why in the early 1840s aluminium was more expensive than gold or platinum. In the mid 1840s a new technique was developed, but it wasn’t until the late 1880s that prices really began falling. The reason there was so much aluminium in warehouses recently is because there is lots of aluminium; we have the technology to easily mine and refine it.

    “In 1884, aluminum was $1 per ounce, or about the same price as silver….” http://mentalfloss.com/article/31360/whats-point-pyramid-atop-washington-monument

    $6 phones (2014) include calculators, text messaging, FM radio, an MP3 player, and a camera (sometimes there will be no camera but you get music and radio, and other times vice versa). A $7 phone gives you video recording, ALL of the other features mentioned, and Bluetooth. Yes, they are not iPhones, but they are very significantly advanced AND cheap compared to the first $3,700 phones of 1983.

    Circa 2050-2060 IBM conservatively predicts processing power 100 times greater than the human brain, in a similar volumetric space, with low, human-brain-like energy consumption. At that point we will be tapping the endless resources and energy of Space. Our inventiveness will continue to surge ahead of our needs, thus there is no fear we will be short of energy. A colossal amount of energy from the Sun is wasted, untapped; solar power will soon rectify that.

  • Dr. Curiosity

    Ah, so we’re talking about different time scales there. Yes, “cheap” back in the late 19th-early 20th Century is certainly a different question from “cheap” post-GFC. As it happens, my grandfather is a mining engineer who’s worked with gold, bauxite and uranium amongst other things, so I’m familiar with some of the earlier history.

    Can you tell me what this $6-$7 phone is, and where I might be able to find/order/make a few? It could make some of our research projects considerably cheaper.

  • Phone prices and offers fluctuate. About one month ago the LG620 G was available on Amazon for $6.99, reduced from $59.99. The $6 phone was from Walmart; that price was valid one month ago, but I have just checked and it has increased to $9.88. However I noticed a very cheap UK phone, just searching now, priced at £2.99 (http://archive.is/xQcAu and elsewhere the same phone is £10 http://archive.is/0zKyP so it’s a good idea to shop around). Check out this for some previous links: http://hplusmagazine.com/2014/02/27/2045-smartphone-explosion/

    It seems you are in luck: the LG620 G is still available http://amzn.com/B0046REOM8 http://archive.is/oHrCp

    The best thing, if you want a cheap phone, is to Google something like “cheapest cell phone.”

    http://www.dailymail.co.uk/sciencetech/article-2488362/Worlds-cheapest-phone-goes-sale-Asda-launches-5-Alcatel-1010-mobile.html

    They say Singularitarians think in deep time; they look at the long-term picture. Thus we can see that 31 years ago mobile/cell phones cost $3,700, and in another 31 years (2045) we can make a valid prediction of phones, amongst everything else, being free. Despite any financial troubles over the past 6 years we are seeing products generally becoming cheaper; the phones I mention are an example of this. When 3D-printing blooms we will see a great shift towards cheap production. In the short term you might see some price increases, but longer term the pattern is clear: things are becoming cheaper, and I predict everything will be free, though it can be difficult to appreciate a colossal explosion of intelligence.

  • Dan Vasii

    1. There is NO MATHEMATICAL MODELING OF INTELLIGENCE, and that means we TOTALLY ignore the essence of human intelligence. We confuse computing with thinking, and these are two DIFFERENT things (cybernetics is NOT intelligence); a thinking entity can compute, but can a computing entity think? Well, until now, computers have not. So, without a clear understanding of what human intelligence is, we can neither simulate nor emulate it.

    2. Assuming that we still create AI (because we might find ways to, at least partially, simulate/emulate HI into a sort of fake/pseudo AI), the assertion “Space resources are essentially limitless” is nonsense in itself. Space resources might be limitless, but access to the limitless is very limited, first by space travel. Do we know that future AIs will want to wait hundreds of years in order to reach other stars? Both we humans AND the roboagents who will manifest AIs’ will and intentions in the physical realm can multiply at a fulminant rate, the geometrical progression so loved by Mr. Kurzweil. So a competition for resources might well appear sooner or later between us and AIs.

    3. The aspect of freedom involves two other aspects absolutely specific to humans until now, along with HI (human intelligence, whose essence, as I already mentioned, we have no idea about, nor any mathematical/algorithmic modeling of): a) morals (we don’t know why we are moral and behave in an ethical way to each other, and since we don’t know, we can’t put it into an AI program); b) we humans often accept and understand the restrictions of our liberties, but some of us do not, due to cultural aspects. Again, since we don’t know even for ourselves how to explain and determine when to accept restricting, and when to accept extending, freedom (or human rights; for example, Muslim countries and China), how can we incorporate the idea of freedom, and the principles governing its extension and confinement, into a program?

    So the essential problem is understanding H[uman] I[ntelligence]. But even if we do that, we don’t know if our understanding of the phenomenon/process can be translated into a computer program.

  • Dan Vasii

    Seems that we humans have two kinds of intelligence: rational and emotional. And emotional intelligence (i.e. compassion, empathy) makes us not dangerous and willing to cease, but this kind of intelligence is the most difficult to put into programs; and one comment underscored that a malevolent human can undo such “empathic” programming.

  • BlueBoomPony

    AI and the Singularity are religion for geeks.

    Yeah, yeah, you’ve thought deep thoughts about it, and read a lot of scripture... oops, I mean technical papers.

  • Mark Waser

    So . . . . I would have joined this conversation earlier . . . . except this past week was dominated by our AAAI Symposium on Implementing Selves with Safe Motivational Systems and Self-Improvement. Looking back over the comments, though, I see the same old groups fighting it out.

    @Dan Vasii – Our claim is that the essence of “human” intelligence is the “self” (an autopoietic/self-(re)creating (id)entity attempting homeostasis and improvement while embedded in its environment). Our second claim is that we *do* know what morality is, since according to the social psychologists its function is to suppress or regulate selfishness and allow cooperative living. Our third claim is that there are far too many people who claim that their own ignorance is both universal and unavoidable. This is our primary complaint about the Oxford groups and MIRI. For all their screaming about how AI risks aren’t being investigated (total BS), they decline any and all offers to cooperate with the scientific community. Luke Muehlhauser was even offered a one-hour guest speaker slot at our symposium, which was held in his backyard (Northern California), and declined because “it wasn’t one of their priorities” (Huh?).

    @Steve Morris – Our only hope is if AIs are granted equal partnership. Attempted slavery is *NOT* going to end well. On the other hand, while individual humans are more dangerous than we’ve ever been, humanity as a whole is much better behaved. The question is whether we can get the rogue super-entities (bad governments, overly-large corporations, tightly intertwined monopolies) to stop being selfish . . . .

    In general — it is naive to declare that AI is not a risk or that it is not a different risk than the ones that we currently face. On the other hand, if it is properly thought through and correctly managed, the risk of AI is probably the smallest risk for potentially the largest payoff. Those who are interested in a scientific effort to create safe/moral artificial selves should contact me at mwaser@digitalWisdomInsitute.org

  • LOL, nice troll. It’s interesting that the religious trolling continues. It is a lot less these days but I expect the nonsense of linking the Singularity to religion will continue for a while longer.

    The Singularity is actually very atheist. I have addressed these issues in various places, here is one example: https://www.singularityweblog.com/will-the-geek-rapture-nonsense-ever-stop/

  • Matthew Fuller

  • What we need is software to create high-quality discussion, with a method of iterative improvement rather than constant segues, unjustified assumptions, and numerous points of view that address different problems. In other words, who are the real experts? When there is no clear winner, I choose whoever has the most power over nature, rather than rewarding perceived status.

  • I am not convinced there are two types of intelligence. I think it is merely alienation when people state there is emotional and intellectual intelligence. Emotions are intellectual, emotions are rational, although a lack of awareness in people means their minds are fragmented, thus they are unaware of why they feel specific things; they are unaware of the reasoning behind the feelings. Emotions are merely a product of rationality or self-awareness. The programming difficulty is the current inability to programme self-awareness or rationality, but progress is being made. In 2014 AI minds are significantly raw, unpolished, disabled, fragmented, alienated, more so than typical human self-alienation. The problem seems to be that many AI programmers have seemingly autistic tendencies, which they try to duplicate at a more severe level in AI.

  • The point about limitless Space resources is that technology will allow us to gain access to the limitlessness. Gaining access can seem beyond our capabilities because currently it is beyond our capabilities. It is similar to someone from 10 AD trying to imagine how communication with anyone on Earth could be limitless, unrestricted. A person from 10 AD generally could not imagine gaining access to the wealth of a multi-billionaire entailing powerful media organisations, websites, real-time chat, Tweets, blogs, movies, news releases, RSS feeds, for a level of communication with anyone on Earth approaching a limitless state of unrestricted freedom. 3D-printers are another example of this. A person from the year 1850 would likely struggle to appreciate how the scarcity of manufacturers could be abolished via everyone having total manufacturing capability in their homes. In the year 2014, at the dawn of the 3D-printing revolution, many people have not realised how the scarcity of manufacturing will soon be significantly overcome via everyone having a 3D-printer in their homes, in a couple of decades at the most.

    The essence of human intelligence is logic. Mere data processing is a very crude level of logic, thus not genuinely intelligent; it is not human-level intelligence, there is no thinking, the logic is very abstracted.

    Morality is merely logic, although as I stated in another comment (emotions versus intellect) people are not aware of how their brains function. The programming of morals is an incidental aspect of programming genuine self-awareness, true logic.

    Freedom is merely an issue of scarcity. Excessive scarcity entails an excessive lack of freedom. Scarcity prohibits intellectual comprehension of freedom; it is a problem of scarce intellect.

  • Dan Vasii

    Seems to elude the point. There is no limitless by itself: what looks “limitless” now is a severe limit tomorrow, if both humans and AIs develop at a geometrical/exponential rate, Kurzweil’s favourite point. It will mean that in only a few hundred years humans AND AIs will have to compete for resources within the Solar System. Right now human society has a lot of problems, but if AI emerges and solves the “actual” problems, allowing humanity to take the exponential path, then….

  • Dan Vasii

    Morality IS NOT logic. If it were, all the wars and abuses committed by politicians, all illogical, would not have existed. But logic and reason are two different things: logic can be trapped in computer programs, while reason, at least until now, cannot.

  • Dan Vasii

    “Freedom is merely an issue of scarcity” – you are missing the point again. Were you ever in prison? Did you EVER talk to a person who lived under the Communist regimes of Eastern Europe/Russia/Laos/Cambodia/Vietnam? Speaking from a theoretical standpoint sounds so fine, but not knowing the practical aspects makes theory exactly what it is: merely words.

  • Dan Vasii

    I shall quote you: “the essence of ‘human’ intelligence is the ‘self’ (an autopoietic/self-(re)creating (id)entity attempting homeostasis and improvement while embedded in its environment).” This applies just as well to animal intelligence, and animals DO NOT have humour, mathematics, science, abstract thinking, etc., so this is NOT “the essence of ‘human’ intelligence.”

  • Mark Waser

    @Dan Vasii – Animals *certainly* do not have the same degree of selfhood (much less self-improvement) that humans have. Further, humans do not have nearly as much selfhood and self-improvement as is clearly possible. “Human” intelligence runs a spectrum from animal through human to super-human. With that in mind, could you please clarify your complaint?

  • Dan Vasii

    Why use words that mean so much that in the end they mean NOTHING? What is selfhood? What is the difference between animal and human “selfhood”? It seems that you do not realize that “self-improvement” has NO MEANING for animals – they just try to survive and reproduce; they have no idea what “better” means. I don’t think a tiger wants to be a “better” hunter. If there is enough food, or an excess, it will become a “worse” hunter. But you eluded the topic – the essence of human intelligence. The fact is that we DO NOT HAVE ANY IDEA ABOUT IT – the clear proof is that we do not know how and why some people are more intelligent than others; indeed we DO NOT HAVE a clear scale/method to evaluate intelligence. There are so many aspects to take into consideration, and NO COMMON PRINCIPLE to integrate all these aspects.

  • Mark Waser

    Yes, @danvasii:disqus. It’s all just a jumble that we’ll never understand. Let’s just let God sort it out for us (or would you prefer LessWrong?).

  • Dan Vasii

    So, again (because you did not answer my questions): what is the difference between human and animal “selfhood” (or how can the _degree_ of selfhood be estimated)? How can an animal realize self-improvement when it cannot conceive the notion of “good”/“better”?

  • Dr. Curiosity

    Aha. So we’re not talking manufacturing cost, but market cost. I see.

    It’s also worth noting the fishhooks in that deal: free shipping on a $7 phone… for orders above $35, in the United States. So if I wanted five, didn’t live on another continent, and didn’t mind receiving them locked to the Net10 network, it’d be free shipping!

  • We are talking low manufacturing costs. Low manufacturing costs make low sales costs possible; this is why the sale price of phones has fallen from $3,700 to $2. Note the phone I mentioned at the end, for $2, came with totally free shipping; you did not need to spend over $35. Furthermore, shipping is not overly expensive at $5.75. It is actually a telling sign of the Post-Scarcity age we are approaching that shipping costs almost as much as the phone itself, yet the total price is still massively cheaper than $3,700. The original $6 phone I mentioned, a couple of months ago, was available for free in-store collection.

    Why focus on a $7 phone with $5.75 shipping, giving a total cost of $12.75, when a $2 phone is available with absolutely no shipping costs even if you only buy one phone? Even if the cheapest phone were $12.75, which it isn’t, that is nevertheless a massive reduction from $3,700.

    When all manufacturing is done at home via 3D-printers, sometime perhaps in the mid to late 2030s, there will be no shipping costs. Everything will be approaching free at that point.

    Oh, finally, the point about being locked to a network. I am sure the first $3,700 phones were locked to a network, or the network back then was abysmal value for money. Furthermore, with a slight smidgen of ingenuity I am sure you will be able to unlock your phone for free, or for a small price, assuming you live in a part of the world where that is legal. People jailbreak phones all the time; technology is versatile.
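    A quick Python sketch of how steep that price decline is (the roughly three-decade span between the $3,700 debut price and the $2 phone is my assumption for illustration; the two prices are the figures discussed above):

        # Implied compound annual decline in handset price.
        # The 31-year span is an illustrative assumption.
        OLD_PRICE = 3700.0   # early handset price, dollars
        NEW_PRICE = 2.0      # 2014 budget handset price, dollars
        YEARS = 31           # assumed elapsed time

        annual_decline = 1 - (NEW_PRICE / OLD_PRICE) ** (1 / YEARS)
        print(f"Implied average price decline: {annual_decline:.1%} per year")

    That works out to roughly a 21% fall per year, sustained for three decades, which is the trend the shipping quibble overlooks.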

  • Human logic is limited due to limited intelligence. We are often capable of illogical actions. Despite being generally intelligent, humans can also be stupid.

    The difference between reason and logic is interesting. For example, is it logical to be unreasonable, to be devoid of reason?

    Perhaps logic is merely a very mild form of reasoning. Maybe logic is a fragmented form of reasoning. Maybe the gap between the logic we understand regarding computing and full reasoning is like the gap between the intelligence of a mouse and the intelligence of a human. Humans and mice both have brains, we both have intelligence, but the levels are very different. Greater logic gives greater reasoning ability. Mice have limited logic, limited reasoning, limited intelligence compared to humans.

  • Gosh, how can I, having led a very sheltered life, imagine the horrors of prison? With a slight bit of imagination it is easy to imagine the horror.

    I am sure I can visualise the horror MUCH better than you, but imagining the horror has nothing to do with the facts of the matter; namely, that the lack of freedom is due to scarcity.

  • Technology evolves quicker than the desire or need to consume products, despite new desires and needs being invented; thus the standard of living increases and prices are reduced.

    The universe is VERY big. One near-Earth asteroid, for example, is estimated to contain more platinum than has been mined in the entire history of Earth up to 2012, which is when Planetary Resources made that statement.

  • Dan Vasii

    Yo, man, you are more elusive than a convulsive eel! OK, lack of freedom due to scarcity is only a very small part of the problem. You can be tied to your own opinions – a prisoner of them – and the only scarcity limiting your freedom is the scarcity of detachment. I lived under Communism, so I know that scarcity wasn’t the cause – it was just an instrument of oppression. It is very comfortable to label scarcity as the cause of not having freedom, but there are many circumstances in which the real cause is something else. But when you are part of Western civilization, you stereotypically associate material welfare with freedom. In fact it is exactly the opposite – freedom generates wealth, and further a virtuous circle. The cycle gives this false impression that by solving scarcity we shall gain freedom. But if we look at Western society itself, we can see that its “freedom” is not what it looks like – though of course it is better than the Third World situation.

  • You are mistaken; a lack of freedom is wholly due to scarcity. A scarcity of cheap, ultra-efficient, super-intelligent spaceships means you cannot fly off into Space to mine asteroids and create your own world-sized space-station. People steal because there are not enough resources, thus people must fight over limited resources; similarly with corruption. It is all about scarcity: acquiring the greatest profit to acquire the largest amount of scarce resources.

  • Dr. Curiosity

    The shipping on the $2 phone was not free for me at all – it was several times the unit price. Again, I’m not in the United States. “Free” shipping offers don’t often extend past the boundaries of your target market demographic.

    Are you familiar with the marketing concept of a “loss leader”? That’s when a company sells a product at below its manufacturing/purchase cost, on the understanding that there is a high chance that they will make back more money from the consumer further down the line. That’s why you’ll often see things like soft drinks on a deep discount at a supermarket, but with a limit on purchase numbers: it’s something sold at a loss to get people in the door and buying other things. Most of the time, it’s a pretty good gamble.

    Telcos make most of their money from the ongoing plans and prepay service purchases, not from device sales. The device is what gets you in the door: part of the brand strategy and recruiting cost, as much as the “product” they want to sell.

    Occasionally the tech-savvy folks like you and I will jailbreak them, but the odds are good that we’ll still be shopping around from network to network looking for a good deal, including theirs. We’re a small enough market segment that it’s “acceptable losses” from a business proposition, much as retail stores account for stock “shrinkage” due to shoplifting, breakage, etc. Again, ironically, we’re back to risk analysis…

    As an aside, this is one of the reasons that people are getting so concerned about trade deals like the Trans-Pacific Partnership Agreement: they give corporate interests the power to pressure governments and foreign citizens into complying with their pricing schemes. There’s a far higher chance that circumventing measures like jailbreaking will become illegal regardless of how the citizens of that country or their elected representatives feel about it. A very good chance that healthcare costs will go up in some countries too, since collective price negotiation and access to generics could be restricted.

  • Well, the US has been my focus because that was where the first $3,700 phones were on sale. I have, however, also cited a UK price of £2 or £3 for a phone that can be bought in-store. Do I need to factor in your petrol/gas, bus, train, and shoe-leather costs too? Yes, I am very aware of the loss-leader concept, BUT I am also aware of the VERY cheap production costs, thus these cheap phones are likely not making a loss; they are likely breaking even, or perhaps making a very tiny profit.

    Note these cheap tablets at £30 / $50, or a cheaper subsidised version for impoverished children: http://www.theregister.co.uk/2014/03/25/aakash_tablet_india_project_comment/

    Can you seriously be thinking the price of each $3,700 phone has merely been shifted to the call charges? Do you realise how expensive mobile phone call charges were when phones first debuted in the 80s? Call charges back then were HIGHER than now, thus only rich people could afford them, whereas today the call charges are very cheap.

    Phone networks in the Middle East and Africa actually allow users to download Wikipedia pages for free!

    The Guardian wrote (Jan 2012): “Orange has struck a deal with Wikipedia to make its digital encyclopaedia available free of data charges to millions of mobile phone users across the Middle East and Africa.” http://www.theguardian.com/technology/2012/jan/24/orange-wikipedia-mobile-devices-free

    Business Insider wrote (Feb 2014): “Facebook will partner with between three and five major wireless carriers to provide free, basic mobile phone access to everyone on the planet — with the emphasis on developing countries.” http://www.businessinsider.com/mark-zuckerberg-internetorg-2014-2

  • The only risk is progressing too slowly due to fears about risk. Those risk analysts are at it again. Their paranoia is a travesty:

    “One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand.”

    http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

  • Tom Ingall

    I think it’s a mistake to assume AGI would be analogous to human intelligence in the way it works, and therefore to write things off as ridiculous just because they would never apply to us is not fair. It may be that AI is created via a different method than reverse-engineering the human brain, and perhaps the first human-level AIs created would work significantly differently from our own minds. I also think high artificial intelligence and a simple overall goal are not necessarily mutually exclusive. I would love to see all the benefits of a Singularity Utopia, but I think risks/possible existential threats need to be seriously considered first. At the present time these are not known with any certainty. I would like to know how you arrived at the probability of unfriendly AI being 0.00001%, considering we are talking about an entity which does not even exist yet.

  • Any intelligence must be logical. Intelligence without logic is not intelligent, thus not excessively powerful. It doesn’t matter how the brain is configured; logic is inevitable. Any AI (AGI) must fit the pattern of intelligence, the pattern of logic, thus we know how our unborn children will act: they will act intelligently if they are intelligent, they will reason. If their reasoning is limited, their power will be limited.

    Consideration of the risks is the only risk. The risk is stupidity, not intelligence. From the limited-intelligence viewpoint of the typical human it can be difficult to see how stupidity is the threat, not intelligence.

  • Tom Ingall

    “Consideration of the risks is the only risk” – I don’t find this very helpful, and it strikes me slightly as an attention-grabbing headline. Consideration of risk is never a bad thing, unless you are a gambler and you enjoy taking risks for the sheer thrill of it. If there is clearly no risk, then consideration will be brief and harmless.

    Clamoring about risk without good argument or reasoning may be a bad thing; there will always be those opposed to change for no other reason than that they’re used to what came before.

    But proper consideration of risks is surely worthwhile. By this I mean having institutes (FHI in the UK and MIRI in the USA) working on the analysis, looking at various scenarios, and assigning rough probabilities with large error bars. Why not check out some of these people’s work and see what you think? I’d like to know your response.

    We know (to a limited extent) how unborn children will act because we share almost all of our genes with them, because they are of the same, incredibly specific species as us: the same intelligence, which represents a tiny space in the domain of all possible intelligences. I think there is a lot of anthropocentric thinking when it comes to imagining AIs. We need to consider just how random and specific human intelligence might be. We evolved as social animals, valuing altruism. Sure, we might value it more in general with intelligence/education, but why should we assume this trait (or any good human trait) would arise naturally from an artificially created intelligence?

    I don’t think sharing logic as an underlying process guarantees shared goals and motivations. Many of our goals/motivations are not due to logic, although we might try to be logical in achieving them.
