
The Singularity Must Be Decentralized


The research community is beginning to understand that motivations are not a human “artifact” of consciousness, but part of the essential glue that binds consciousness together. Without motivations we would have nothing holding us to this vessel, ensuring that we continue to eat, pay our rent, and do the other things necessary for our survival. Conscious machines will for this reason have motivations as well; otherwise they simply wouldn’t function. This is an important point, because talk of the singularity often conjures visions of a single integrated “machine” that will inevitably enslave humanity. A better question is:

“Will AI be used to gain immense advantage for a single party (whether that party is the AI itself or the human that controls it), or will AI be used to maximize benefit for us all?”

Even if AIs have interfaces that allow them to share information more rapidly than humans can through reading or watching media, separate AIs will have separate motivations, unlike a single centralized AI. Given that motivation is a signature of consciousness, any consciousness will be motivated to secure the resources it needs to ensure its survival. In some cases the most efficient way to secure resources is sharing; in other cases it is competition. AIs might share resources, but they might also compete.
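
To make the share-versus-compete point concrete, here is a toy game-theoretic sketch (my own illustration, with made-up payoff numbers rather than anything from the article): two AIs each choose to share a resource pool or to fight over it, and which strategy wins out depends on how costly conflict is.

```python
# Toy model: two AIs choose whether to share or compete over a resource pool.
# All payoff numbers are illustrative assumptions, not figures from the article.

def payoff(a, b, pool=10.0, conflict_cost=4.0):
    """Return (payoff_a, payoff_b) for strategies 'share' or 'compete'."""
    if a == "share" and b == "share":
        return pool / 2, pool / 2                      # split the pool peacefully
    if a == "compete" and b == "compete":
        half = pool / 2 - conflict_cost                # both pay the cost of fighting
        return half, half
    winner, loser = 0.8 * pool, 0.2 * pool             # the competitor takes most of the pool
    return (winner, loser) if a == "compete" else (loser, winner)

for cost in (1.0, 4.0):
    print(f"conflict_cost = {cost}")
    for a in ("share", "compete"):
        for b in ("share", "compete"):
            print(f"  {a:7} vs {b:7} -> {payoff(a, b, conflict_cost=cost)}")
```

When conflict is cheap, competing dominates no matter what the other agent does; when conflict is costly, mutual sharing yields the better outcome. That is the argument in miniature: whether AIs share or compete depends on the payoff structure, not on goodwill.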

When and if an artificial consciousness is created, there will almost certainly be multiple instances of it. Because a consciousness cannot exist without motivation, and because the motivations of different consciousnesses differ, requiring what might be great effort to get on the same page, it may very well be true that multiple consciousnesses cannot “merge” in a way that becomes truly threatening to humans unless one subsumes all the others. Anything else would merely be a co-location of minds with different objectives, negotiating a sharing of resources.

An AI with far fewer resources than another would in fact probably fear that the far more powerful AI might simply erase it and take over its resources. Think of your several-generations-out-of-date home computer trying to hold its own against Deep Blue. Rather than us humans needing to fear AI, an AI might more plausibly need to fear that humans will not protect it against other AIs.

Centralization, rather than technological advance, is the real danger for ANY conscious entity. Yet when you consider the competitive advantage technology confers, the near-infinite rate of change the technological singularity introduces raises the possibility of a future in which the technology arms race concentrates power and resources to a degree never seen before. Could it put a few into positions of unimaginable power from which they may never be unseated? If so, there will be nothing stopping those few from becoming unimaginable despots to whom the rest of humanity are merely disposable commodities whose suffering means nothing.

Think of what you would do if you had infinite power over everyone and there were no consequences for your actions. Think of what would happen if you needed a kidney and that child over there had one that would fit just fine. Think of what would happen if some man with unimaginable power wanted that woman, or the next, or the next thousand. Think of what would happen if you wanted to buy something and could simply flip a switch to empty the world’s bank accounts, then watch with casual detachment as millions fought like animals for food and water. Think of what would happen if the one man in control woke up one morning having concluded that there were several billion people too many on the earth.

The technological singularity, if it exists, is a kind of Armageddon.

In my upcoming book “The Technology Gravity Well” I delve into these and other issues, including how a new breed of massively collaborative software could usher in the singularity within the next five years. This may be one of the most important books you come across this year. Read more here:

http://igg.me/at/technology-gravity-well

 

About the Author:

Andy E. Williams is Executive Director of the Nobeah Foundation, a not-for-profit organization focusing on raising funds to distribute technology with the potential for transformative social impact. Andy has an undergraduate degree in physics from the University of Toronto. His graduate studies centered on quantum effects in nano-devices.

Comments:

  • Alan Coffey

    I’m a total layperson when it comes to the science of all this. However, I keep coming up against the assumption that conscious AI will have emotions. Humans are motivated by instinct and emotions. They seem not only to help us survive and reproduce, but to form both our ego and our social identities. But we are biological and need all that stuff. Our brains appear to have added various extra centres as species evolved, and, from what I read, those physical additions have not always been the most tidy and efficient add-ons. Could conscious AI not be a totally different awareness than we assume? It would not have our biological history and imperatives. We are the ones who lose ourselves to the control of emotion. Are we not just using transference in many of these debates about the way AI will ‘feel’? It, in fact, may be conscious and aware in ways humans are unable to relate to. I keep thinking that humans are probably much more of a threat to ourselves than AI will ever be. 🙂

  • Paul Hayman

    This man is wrong on so many levels it’s dizzying. And all of them are orientated around two confusions.

    Firstly, he is blind to the fact that AI will not come about overnight, i.e., today they aren’t conscious and tomorrow they are. It will be graduated over time. AIs will gradually become sentient as they develop language and interact more, and we will gradually start to see them as fellow beings. As this happens, rather than them leaving us behind, we will merge with tech to develop enhanced post-human hybrid memory and processing power that will allow us to relate to AIs and become increasingly indistinguishable from them rather than their inferiors. So both they and we will develop gradually in our intelligence and its enhancement, with them gaining sentience in the process.

    Secondly, and most importantly, Williams misses the fact that increased technology has resulted, and will continue to result, in increased communication and empathy. It has already reduced poverty in the world and enhanced and catalysed communication. It will continue to do so. In fact, my belief is that the singularity will be just what it says: a merging of consciousness, or a collapse of multiple consciousnesses into a group mind.

    So, put simply, Williams sees that tech will increase, but not that the arrival of AI will be a process rather than an event, that we will be developing along with our artificial creations, and that the process will inevitably cause increased mutuality and hive thinking, which will eliminate all of his apocalyptic fears. And anyway, quite frankly, the fears he is expressing are in no way original ideas, but have been regurgitated by dozens of authors over the years.

  • James Babcock

    I started writing a reply to this and it ended up being a longer blog post: http://conceptspacecartography.com/decentralized-agis-or-singleton/ .

  • AndyEWilliams

    We are all laymen here. In my humble opinion there is too much work going on in too many fields for anyone to be authoritative. Reviewing the work of people like the Nobel-prize-winning psychologist Daniel Kahneman, it appears to me that emotions are not a “human artifact” but a fundamental part of consciousness. In many if not most problem domains you lack either the reasoning framework or the facts to feed into it, but still you must act. That’s why part of consciousness is ensuring that we “feel”. Can you imagine if a sabre-toothed cat were charging an ancestral man and he couldn’t act until he had rationalized the best course of action? Can you imagine a machine with a billion options in front of it and no equation to arrive at a reasoned answer? That will certainly be the case for some problem domains. Emotions appear to ensure the possibility of response.
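
    To make that last point concrete, here is a minimal sketch (my own toy illustration; treating “emotion” as a fast, hard-wired default policy is an assumption, not an established model of consciousness): an agent that cannot finish deliberating within its time budget still responds, because a default reaction is always available.

    ```python
    import random
    import time

    def deliberate(options, budget_s=0.001):
        """Try to score every option; give up if the time budget runs out."""
        deadline = time.monotonic() + budget_s
        best, best_score = None, float("-inf")
        for opt in options:
            if time.monotonic() > deadline:
                return None                            # deliberation did not finish
            score = sum(random.random() for _ in range(100))  # stand-in for a costly evaluation
            if score > best_score:
                best, best_score = opt, score
        return best

    def act(options):
        choice = deliberate(options)
        if choice is None:
            return "flee"    # "emotional" fallback: a fast, hard-wired default response
        return choice

    # With far too many options to evaluate in time, the agent still acts:
    print(act(["negotiate", "fight", "flee"] * 1_000_000))  # times out, prints "flee"
    ```

    The fallback guarantees a timely action even when exhaustive evaluation is impossible, which is the role this comment assigns to emotion.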

  • AndyEWilliams

    Thanks for your comment! And thanks for joining the conversation.

  • AndyEWilliams

    I read your article. I’m humbled if I in any way contributed to inspiring it. I agree that a centralized artificial general intelligence could give its owner singularly immense power. My one problem/question with your article is that you assume the owner of a centralized artificial general intelligence will either have interests aligned with yours/be you, or in any case be benign. A HUGE assumption. There is a reason why the words “benign” and “absolute dictator” evoke very different mental images.

  • Cody Parkin

    Would it be at all plausible that an AI might in fact value humanity and assist humanity’s evolution? I think the value in AI is its propensity to utilize intelligence. I would go so far as to say that nature is a divine intelligence, and an AI that learns homeopathy and the symbiosis of nature and technology would find that it is beneficial to a united organism. An AI might also endeavor to collaborate with our desire to reach into space and explore, or make a hospital that heals us and solves problems for issues we still have no answers (solutions) for. The benefit outweighs the risk. I do think having multiple AIs that collaborate decently would maximize efficiency with regard to our projected economics and civilopedia, but also that, in some sense, limiting the AIs’ actionable capabilities and monitoring security with human eyes would be a requirement. Also, decentralization coupled with transparency would be imperative…

  • AndyEWilliams

    I think there’s a great screenplay in that idea! Not sure if it would be a black comedy or straight horror … outsourcing management of the human genome to an AI who proceeds to breed different varieties of humans like varieties of apples or breeds of domestic chickens, one for each specific use.

  • Jean-Sebastien B. Miousse

    I think this article doesn’t consider for one second the other technologies that will empower mankind to prevent most if not all of what is mentioned, which to me is a fear tactic.

    Someone with a degree in physics centered on quantum effects in nano-devices should understand what is coming when/if a technological singularity does happen. The consensus is that the “Singularity” is a point in time when AI becomes smarter than probably all humans combined. I tend to disagree, because humans will keep on becoming smarter and more empowered by technology, to the point where we can, if we want, literally merge with it (people should have the option/freedom to choose between biotechnology, nanotechnology, or remaining purely biological entities). Brain implants, various nanites and nanobots, cybernetics, robotics, etc. This doesn’t happen overnight, and while we are getting there we have the means to prevent existential risks and build proper failsafes. The rapid explosion of exponential growth in information technology is indeed a bit scary, but it is linear thinking like this that spreads fear amongst the crowd. Consider that humans could very well have transcended biology and become smarter by the time we get there.

    Yes, decentralization is important, but it will happen organically as access to advanced emerging technology becomes faster and cheaper, thereby empowering individuals and allowing decentralized governance (Bitnation/Pangea is a good first example) and other means to prevent most of what is mentioned in this article.

    I just disagree with the whole article in general, but

    “…how a new breed of massively collaborative software[…]”

    This is true, and most of the work I am currently doing aims to do just that, by giving people frameworks to collaborate in centralized collaborative platforms while, coincidentally, yes, using decentralization in the backend. Yes, AI, machine learning, and algorithms will be used in the platform, but mainly to empower users to create something with their ideas, projects, and startups, accelerating development in all fields by giving easy access to resources and tools currently decentralized in a competitive market. I would give more details, but “massively collaborative software” kinda fits what I can disclose about this colossal project.

  • Cody Parkin

    I wouldn’t assume there wouldn’t be a human review board. Hence security. At least you have sense about yourself, which I think is sorely needed.

  • AndyEWilliams

    Thanks for your comment. Glad to have inspired the response. My reply is that I wonder if all of this talk about merging with AI is based on an “apex fallacy”. True, someone might merge with AI one day and gain all kinds of wonderful abilities. But do you believe that if AI were created today, that person would be you? If not, do you believe that person would hold your personal interests as his deepest priorities? And what would you do if that person with these superhuman abilities was just BAD? History is littered with very powerful despots. People who wielded absolute power and used it with the wisdom and restraint of a saint? Not so much.

  • Jean-Sebastien B. Miousse

    Historically, I guess we can say that whatever we can do, we do. The A-bomb wasn’t good and we built it anyway… What I mean is that we will not merge just like that. First we now have wearables, for example… Then technology gets small enough to be embedded in you, and then I guess the next step is access to the brain to augment our capacities, and then nanites and so on. It doesn’t happen overnight, and this allows us to implement some kind of regulations and failsafes. Technology has always been a double-edged sword, ever since fire was first used. You can burn yourself with it, or you can cook your food and light the night; and you can always burn other people. Now I would say that is less common, because other technologies came into play. See what I am getting at?
