My new book Surviving AI: The promise and peril of artificial intelligence argues that in the next few decades we are facing the possibility of not one, but two singularities. The one which is familiar to most people reading this blog is the technological singularity. Its most common definition is the moment when the first artificial general intelligence (AGI) becomes a superintelligence and introduces change to this planet on a scale and at a speed which unaugmented humans cannot comprehend. The term was borrowed from maths and physics, and the central idea is that there is an event horizon beyond which the future becomes unknowable.
A lot of people have been somewhat wary of the term. It became associated with a naïve belief that technology, and specifically a superintelligent AI, would magically solve all our problems, and that everyone would live happily ever after. Because of these quasi-religious overtones, the singularity has frequently been satirised as “rapture for nerds”.
When Nick Bostrom’s seminal book Superintelligence was published last year, a lot of things changed. Influential people like Stephen Hawking, Elon Musk and Bill Gates read pre-publication copies, and they spoke out about the existential threat which AGI represents. That introduced the idea of the singularity to a much wider audience. It also made it harder for people to retain a Panglossian optimism about the impact of AGI.
For time-starved journalists, “good news is no news” and “if it bleeds it leads”, so the comments of Hawking et al were widely misrepresented as doomsaying, and almost every article about AI carried a picture of the Terminator. There is now a backlash against the backlash, with AI researchers and others lining up to warn us not to throw the baby of AI out with the bathwater of unfriendly superintelligence. The pendulum is still swinging, but the debate is becoming more nuanced.
So for me at least, the term singularity no longer seems quite so ticklish. If you take seriously the idea that a superintelligence may soon be with us (in decades rather than years, probably, but quite likely this century), it is hard to avoid the conclusion that life will become dramatically different – for better or for worse. If the term can shed its quasi-religious connotations, it does a good job of expressing the gravity (pun intended) of what may be coming.
In which case, I think it can reasonably be applied to another event which is likely to take place well before the technological singularity. This is the economic singularity. We are hearing a lot at the moment about AI automating jobs out of existence. There is widespread disagreement about whether this is happening already, whether it will happen in the future, and whether it is a good or a bad thing. For what it’s worth, my own view is that it’s not happening yet (or at least, not much), that it will happen in the coming three decades, and that it can be a very good thing indeed if we are prepared for it, and if we manage the transition successfully.
A lot of people believe that a Universal Basic Income (UBI) will solve the problem of technological unemployment. UBI is not an easy fix: it is going to be hard to gain acceptance for it in the USA, where resistance to anything that smacks of socialism is visceral – almost religious. Martin Ford’s otherwise excellent book Rise of the Robots almost fizzles out at the end: he seems daunted by the scale of the opposition that UBI will face in his home country.
But to my mind, UBI is not the real battle. In Europe we are used to the idea of a safety net of welfare programmes which prevent the economically unsuccessful from falling into absolute penury. Unlike some of my American friends, I believe the people of that great country will quickly accept the need for UBI if and when it becomes undeniable that the majority of them are going to be unemployable.
The real problem, it seems to me, is that we will need more than UBI. We will need an entirely new form of economy. I see great danger in a world in which most people rub along on state handouts while a minority – perhaps a small minority – not only own most of the wealth (that is pretty much true already) but are the only ones actively engaged in any kind of economic activity. Given the advances in all kinds of technology that we can expect in the coming decades, this minority would be under immense temptation to separate themselves off from the rest of us – not just economically, but cognitively and physically too. Yuval Harari, author of the brilliant Sapiens: A Brief History of Humankind, says that humanity may divide into two classes of people: rather brutally, he calls them the gods and the useless. [See the end of his TED talk http://snglrty.co/1XlhZ1r]
Many people will disagree with the following statement, but I believe that capitalism and liberal democracy have served us incredibly well in the last couple of centuries. I am not convinced they will continue to do so in a post-automation world, but it is very hard to see what they should be replaced with, and how that could be achieved without turmoil. This sounds like a singularity to me – an economic singularity.
Perhaps Nikola should rename his blog and podcast “Singularities 1 on 1”?!
About the author:
Calum Chace is a writer of fiction and non-fiction, primarily on the subject of artificial intelligence. His latest book, Surviving AI, is a review of the past, present and future of our most powerful technology. Ben Goertzel described it as “a great starting-point into the literature” and added, “It’s rare to see a book about the potential End of the World that is fun to read without descending into sensationalism or crass oversimplification.” Hugo de Garis said “you cannot afford NOT to read Chace’s book.”
Earlier this year, Calum published Pandora’s Brain, a techno-thriller about the arrival of the first conscious machine. He is also a regular speaker on artificial intelligence and related technologies and runs a blog on the subject at www.pandoras-brain.com.