Will technological unemployment impoverish us?

January 4, 2016 by Steve Morris

The popular view

It’s common in contemporary sci-fi movies to see the future portrayed as a place of vast inequalities, where a tiny elite enjoys advanced technology and a life of leisure, while the masses slave away in poverty, in a polluted world stripped of resources. But sci-fi is very often not a vision of the future, but a mirror of our present-day concerns. So it is the case with concerns over inequality and fears of technological unemployment.

The worlds portrayed in these dystopian movies are filled with contradictions. Often the world’s resources have been depleted, and yet advanced technology exists, capable of generating cheap, unlimited energy. Often, this technology makes human work obsolete, yet the masses are depicted scrabbling in the dirt for a living. And the elite have access to unparalleled health, longevity and synthetic enhancements, while the masses live short brutal lives, without even twentieth-century standards of healthcare and comforts.


Of course, this is Hollywood, so conflict and injustice are the cornerstones of the plot. Inconsistencies are not the director’s prime concern. But these logical contradictions persist in people’s thinking. They prevent us seeing clearly where our world is headed.

Technology creates wealth

If we want to find real-world examples of a tiny elite living a pampered life, while the huddled masses toil in poverty, we can find them in the past, not the future. The defining feature of pre-industrial societies was their lack of technology. This led to inefficiency, requiring most of the population to engage in hard manual work for long hours, and with little to show for it, except poverty and subsistence.

Our present world is characterised by its extensive use of technology. Technology allows humans to achieve more with less work. It improves efficiency, reducing costs and creating wealth.

But whenever new technology reduces work, it necessarily puts workers out of jobs. This is the point of technology, after all – getting machines to do work, so we don’t have to. In the short term, for those who have lost their job, this is clearly a problem. In the longer term, and if we consider the whole of society – not just those who lost their jobs – the effect is universally beneficial, but those wider, long-term benefits are harder to see than the immediate suffering of the newly unemployed. If we focus all our attention on the problems, we may form the view that technology creates wealth for the few and poverty for the masses. In reality, the reverse is true – technology creates wealth for the masses, and its economic downsides are short-term.

The link between technological development and unemployment has been noted for centuries. Adam Smith, in his classic economics treatise of 1776, Wealth of Nations, wrote that, “a workman unacquainted with the use of machinery employed in pin-making could scarce make one pin a day, and certainly could not make twenty, but with the use of this machinery he can make 4,800 pins a day.” By implication, the invention of pin-making machinery rendered 4,799 pin-makers unemployed for every pin worker who kept his job. A queue of 4,799 unemployed men desperately looking for work is highly visible. What is less visible is the benefit to the rest of society of reducing the cost of pin-making by a factor of 4,800. The cost of living is reduced, and the money saved is spent on other goods and services, creating new jobs in other industries. It should be obvious that the net benefit to society as a whole is vastly positive, and that what is happening is a shift of employment from one industry to another, not permanent unemployment.
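Smith's figures make the arithmetic easy to check. A quick sketch, purely illustrative, using the numbers from the quotation above:

```python
# Back-of-the-envelope check of Adam Smith's pin-making figures as quoted
# in the essay: roughly 1 pin per day by hand versus 4,800 per day with
# machinery.
hand_output = 1          # pins per worker per day, without machinery
machine_output = 4_800   # pins per worker per day, with machinery

productivity_gain = machine_output / hand_output        # 4800.0
displaced_per_job_kept = machine_output - hand_output   # 4799

print(productivity_gain, displaced_per_job_kept)
```

The 4,800-fold fall in the cost of pins is the invisible half of the story; the queue of 4,799 displaced workers is the visible half.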

Technological Unemployment

The history of agriculture in the United States is an excellent example of this process at work. The US Census reported that in 1870, agricultural workers comprised half of all workers and that agricultural technology was characterized by manual labor and horse-drawn machinery. By 1950 less than a fifth of all workers were agricultural workers, and tractors and electrical machinery had largely replaced the horse. Now, in the twenty-first century, hired farmworkers make up less than 1 percent of all US wage and salary workers. It is almost too obvious to point out that this has not led to vast unemployment, nor has it impoverished ordinary workers – quite the opposite.

In 1900, the average American family spent more than 40% of its total income on food. Now it is less than 15%. Food is cheaper, the money saved on food is spent on other purchases, and ordinary Americans are vastly wealthier as a result of technological progress in agriculture.

Is it different this time?

One of the most influential writers on technological unemployment in recent years, Martin Ford, in his books The Lights in the Tunnel and Rise of the Robots, seems to accept that technological advancement creates wealth, but argues that this time, things are different. Ford’s primary argument is that for the first time in human history, technological advancement threatens to make a large fraction of the human workforce permanently obsolete.

Looking at how the agricultural revolution rendered half of all US jobs obsolete, and how the industrial revolution had effects of similar magnitude, Martin Ford’s argument that “This time is different” seems difficult to justify. The principles of economics do not change. If Ford had been writing in 1800, as steam-powered machines began to do the work of hundreds of men, or in 1900, as the introduction of tractors and other technology destroyed 30% of all American jobs in 50 years, he might well have arrived at the same conclusion. Indeed, Eleanor Roosevelt clearly felt the same when she wrote in 1945, “We have reached a point today where labor-saving devices are good only when they do not throw the worker out of his job.”

When it comes to economics, “This time is different” is a common refrain from down the centuries, but the evidence doesn’t support human obsolescence now, any more than it has done in the past.

Ford cites as evidence the stagnation of wages in the United States since the 1970s. Yet wages aren’t necessarily a useful statistic if we want to examine real wealth. Wages are affected by inflation, immigration, and other factors. Instead, if we look again at the proportion of income spent on food, this has continued to fall steadily throughout the twentieth century, and did not stop in 1970. The proportion of US personal income spent on food was 13.9% in 1970, falling to 13.2% in 1980, 11.5% in 1990, 9.8% in 2000, and 9.6% in 2010. Another indicator of American wealth is the proportion of families owning cars. In 1969, 80% of American households owned at least one car; 32% owned two or more. By 2009, car ownership had increased to 92% of households, with 58% owning two or more cars. This is a story of steadily growing affluence, not declining wealth, and is exactly what we would expect from the rapid developments in technology over the past 40 years.

The pace of change

What is undoubtedly true is that the pace of technological progress is faster than ever before and is continuing to accelerate. Thus, both the long-term benefits and short-term problems are increasing. For most people, in their everyday lives, it is the short-term problems that are most clearly visible, especially if they or someone they know loses their job.

Fortunately, we are well placed in the modern world to help with the transitions needed when jobs are made obsolete by technology. In Victorian London, the working classes were left to sink or swim as machines took their jobs. In the 1930s, society was able to do little to help those who lost their livelihoods in the Great Depression. Now, due almost entirely to the increased wealth that technology has given us, society has the means to offer transitional help to those who need it.

Inequality

Perhaps our real concern isn’t unemployment, but inequality. The fear is not that robots will do our work for us (which would be a good thing), but that unemployment will cut off our means of earning a living. In short, we are conditioned to see ourselves as workers, under the power of a wealthy elite. A popular narrative tells us that inequality is growing, and that capitalism, globalization and technology are the driving forces for this.

I beg to differ.

Technological efficiency creates wealth and raises living standards for the poorest in society as well as the rich – perhaps for the poorest most of all. It always has done and it always will. After all, in ancient times it was not Kings and Queens who had to carry water from the well, or plough the fields, but ordinary people. Mechanization benefits those who do the work.

Kings and Queens have always enjoyed leisure time and luxuries – now ordinary people do too. Our ancestors toiled in the field, and starved whenever droughts or famine hit. Now you and I are well fed (I presume), and have leisure time to read and write articles such as this.

An ordinary American born in 1900 worked an average of 60 hours a week and had a life expectancy of 47 years, whereas a modern American works 40 hours a week and can expect to live to 78.

The fact is that wealth becomes more available to more people as time goes on. Augustus Caesar (63BC – 14AD) ruled over an empire that accounted for 30% of the world’s economy, but he didn’t have access to even basic healthcare. Genghis Khan (1162 – 1227) ruled a kingdom that spread from China to Europe, but he didn’t use a cell phone, fly in a plane, or watch TV – activities that all of us take for granted. My point? That real wealth increases inexorably as time passes, for the poor more than for the rich, who have always enjoyed the privileges of luxury and leisure. This increase in the wealth of ordinary people is driven by technology.

If we’re really interested in the effects of technology on inequality, we should look beyond developed nations, and compare ourselves with those in the poorest countries. The big picture is not one of increasing, but of decreasing inequality.

In fact, the advancement of technology during the twentieth and twenty-first centuries has led to startling (and generally under-reported) changes in wealth, health and equality.

  • In the past two decades, the number of people living in extreme poverty (at or below $1.25 a day) has halved from 2 billion to 1 billion.
  • Life expectancy is increasing in every single country in the world. Even in the poorest nations, like South Sudan and Angola, life expectancy has risen over the past three decades from 40 years to over 50, despite the HIV epidemic.
  • One of the most encouraging statistics is that infant mortality is falling rapidly all around the world, and the fall is most rapid in the very poorest countries such as Afghanistan, Somalia and Sierra Leone.

The usual narrative of growing inequality that we’re constantly fed fails to capture what’s actually happening in the world. And as always, technology is the driving force for good.

Thanks to technology, poverty is diminishing at an unprecedented rate around the world, life expectancy is increasing, diseases are being eradicated, and more people than ever before have access to clean water. These are all exponential trends.

Parliamentary Under Secretary of State, Stephen O'Brien MP, on a recent visit to Ethiopia

Accelerating technology, falling costs

A striking feature of new technology is that it benefits the poorest in society, not just the richest. In fact, it’s the most powerful force known for reducing wealth inequality. One of the myths perpetuated in Hollywood movies is that life-enhancing technology will be prohibitively expensive and available only to the elite. All the evidence points to the opposite being true.

Moore’s law, which says that computer processing power doubles every two years, has a flipside – a given amount of processing power halves in cost every two years. And Ray Kurzweil’s familiar graphs (after Hans Moravec) of the exponential increase in technological capability can equally be used to show its exponential fall in cost.
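A minimal sketch of that flipside, assuming the stylized two-year doubling period (a rule of thumb, not a law of nature):

```python
# Illustrative only: if capability per dollar doubles every two years, then
# the cost of a fixed amount of processing power halves on the same schedule.
def cost_after(years, initial_cost=1.0, doubling_period=2.0):
    """Cost of a fixed amount of processing power after `years` years."""
    return initial_cost * 0.5 ** (years / doubling_period)

# A task costing $1,000 of compute today would, on this trend, cost:
for years in (2, 10, 20):
    print(years, round(cost_after(years, 1000.0), 2))
# 2 500.0
# 10 31.25
# 20 0.98
```

On this trend, compute that costs $1,000 today would cost about $31 in a decade and under a dollar in two – which is why cutting-edge technology keeps reaching the mass market.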

Already, one of the most remarkable trends of the twenty-first century is how cheap and universal the latest technology is becoming. As of January 2014, 90% of American adults owned a cell phone. And as of October 2014, 64% of American adults owned a smartphone.

Globally, the picture is perhaps even more dramatic. As of May 2014, there were nearly 7 billion mobile subscriptions worldwide, equivalent to 95.5 percent of the world population.

Mobile phone subscribers worldwide, 1997–2014 (source: ITU)

Technology drives social change

Technology isn’t merely an engine for creating material wealth. It’s a powerful catalyst for social change. It’s no coincidence that the enormous growth in wealth in recent centuries has brought profound changes in social attitudes and human rights.

  • The printing revolution was a necessary precursor to the rise of education, secularism and democracy in our modern world.
  • The invention of machines for sowing seeds and harvesting crops made the abolition of slavery in the British Empire in 1833 a political possibility.
  • Birth control and washing machines liberated women in the 1960s just as much as progressive social attitudes.

Again and again, technology has helped to break down social barriers, giving rights to minorities and the oppressed. It is the friend of the poor and the disadvantaged, not the enemy.

Capital and inequality

A lot has been written about capitalism, and how it must be overthrown, or superseded. If we fail to do this, it is said, in the future a tiny elite will control all the world’s wealth, while technological unemployment will leave the rest of us living in poverty. But that is not a vision of the future, it is Karl Marx’s vision of the industrial revolution in Britain in the nineteenth century.

Marx was wrong about the industrial revolution. Rural peasants flocked to the cities not to be exploited, but because the economic opportunities created by new technology made their lives better than before. While agricultural jobs were destroyed by technology, new industrial and commercial jobs were created. The factories raised income levels and life expectancy for the poorest in society. The murder rate plummeted. Famine became a thing of the past. The cost of essentials such as food and clothing fell dramatically. And social discrimination began to diminish, with women participating in the workforce in increasing numbers, and the fledgling feminist movement beginning amongst the middle classes, becoming a potent force for change as the nineteenth century progressed. The driving force for all these changes was technology.

Annie Kenney and Christabel Pankhurst

Marx was wrong then, and predictions of future poverty are wrong too. It is the same error again. Marx saw the world divided into rich and poor. In his world view, the elite controlled the wealth and the means of production. Marx focussed his attention on how technology gave the elite ever greater power. But what Marx failed to see was how technology gave everyone more power over their lives, lowering costs for everyone, most especially the poor and disadvantaged. Now, in the twenty-first century, the means of production are just as likely to be held by a work-from-home tech entrepreneur or a mom-and-pop business as by a mega corporation. In the future, the means of production might actually be free.

Perhaps it already is, in some cases. My 12-year-old son is making and publishing his own YouTube videos. My 16-year-old son is learning how to code smartphone apps. The technology that enables this is empowering, and it is completely free.

Everywhere, barriers are falling. Want to write and publish a book? Traditional barriers to publishing have gone, and online tools enable anyone to publish their own e-book or paperback for zero cost. More music is being created and shared now than ever before. Crowd funding is unleashing a tsunami of creativity, and barriers to entry are being swept aside in the flood.

Marx was wrong – the means of production are becoming available to everyone, even the poorest and most disadvantaged. The Luddites were wrong – technology liberated the masses rather than enslaving them. And today’s heralds of doom are wrong too – increased efficiency will create wealth and opportunity, and lift us all out of poverty.

In the Middle Ages, the wealthy elite owned land, and ownership of land was the means by which they exerted control over the population. In the nineteenth century and the first half of the twentieth century, control of mines, steelworks, railroads and oil gave the elite their power. Now, in the first part of the twenty-first century, knowledge is power, and corporations like Google, Apple, Microsoft and Amazon are key players in this knowledge-based economy. When knowledge is power and ideas are wealth, they can more easily be spread and shared. As long as we are careful not to create barriers to sharing, such as patents and IP protection, the source of wealth in the future can be made accessible to all.

What if robots do everything?

In the short- to medium-term, rapid advances in technology will make us more efficient, eliminate more jobs, create new ones to replace them, lower costs, improve standards of living, and reduce global inequality. Computers and robots will do more and more work, until ultimately, little or no human work will be necessary. Eventually a day may come when computers and robots really can do everything for us and jobs will be destroyed and not replaced. What then?

 


Extrapolating current trends too far is fraught with pitfalls. Remember poor Malthus, who in the late eighteenth century extrapolated population and food production trends to predict global starvation? Let’s not make the same mistake again.

Errors come about by extrapolating certain trends (the tendency for jobs to be destroyed by new technology) and ignoring others (the tendency for new technology to reduce the overall cost of living). A dramatic increase in technological unemployment would necessarily go hand in hand with an equally dramatic reduction in the need for people to earn money.

Let’s consider an example of how this works in practice. The production of books by scribes was once a very slow and time-consuming process. Few books were produced, and the ownership of books was restricted to a wealthy elite. The invention of the movable type printing press by Gutenberg put those scribes out of work, but greatly increased the availability of books. And the unemployed scribes soon found new work in the ever-expanding printing industry, which continued to innovate in the centuries that followed. Waves of new printing technologies destroyed jobs, and created new ones, all the while bringing down the cost of books and increasing their availability.

Now, in the twenty-first century, books that were once available only to the privileged few are available to literally everyone. Much of what we now read (this essay, for example) is actually free. Imagine how that would play out if all goods and services were rendered essentially free.

This is the world we must try to imagine if we are to glimpse the future.

A common objection to this is to counter that the world’s resources are finite, and that as time goes by, their costs will rise because of scarcity. But again, this ignores the effects of technology. In particular, it ignores the fact that technology creates resources. Coal only became a resource once the technology to harness it was invented. Oil became a resource later, once the technology to drill, pipe and refine it became available. Solar power, computing power and healthcare are examples of resources that are currently growing exponentially. So new technology creates new resources, rendering old ones obsolete. That’s why peak firewood and peak horse are in the distant past, and peak coal has perhaps already been reached.

It’s also the reason why new industries are increasingly software-dependent, with zero marginal cost to produce and sell their wares. As writers like Ray Kurzweil have noted, zero marginal cost industries are poised to explode into the real world during the twenty-first century, reducing the cost of living in unprecedented ways, and probably upsetting governments whose economic and monetary systems are founded on a faith in relentless inflation.

Those who argue that automation will make 99% of the population unemployed, and concentrate all of the wealth in the 1% who own the robots, are making simple errors. They ignore the fact that total automation will make the cost of goods and services tend towards zero, eliminating the need for money altogether. In any case, the scenario is a logical contradiction. If working people lose their productive power by being replaced by machines, they will have zero purchasing power, and the greedy capitalists will have no source of income.

Projections of future poverty due to technological unemployment are fallacies. Instead, imagine a future where robots and computers do all the work. They grow, mine, harvest and collect all the raw materials necessary for this. The robots also repair themselves, and manufacture more robots when needed. They even design new and improved versions of themselves, so that ever more work can be done. In this future, nobody needs to work. Nobody is paid to work. Nobody has any money, because none is needed. People can spend their time doing whatever they want.

What will people do when they no longer have to work? Technology doesn’t render people useless – it enhances their creative potential, and expands their options enormously. That’s actually what technology is for.

Technological unemployment? Yes! Problem? No! Fantasy? Not necessarily.

Working towards a better future

Technology is making our lives materially better, and this trend is accelerating. Whatever happens next, it is almost certainly unstoppable. But we could still make the wrong policy decisions that will cause the future to be worse than it might otherwise be. The right policies will enable technology to deliver its benefits to all, without creating barriers to its adoption or ownership. They will also support those in need during the inevitable transition and turmoil as the world changes.

Transitional welfare payments will be needed to support those who cannot work, or whose skills have been rendered obsolete. Some economists have argued for the introduction of universal welfare. But logically, universal welfare would only become necessary if unemployment is universal. And in that case, who would the government tax to pay for the welfare? So perhaps our current system is the right way forward.

Old forms of wealth, such as agriculture and mining, were resource-based, and tended to promote inequality. New wealth is knowledge-based, and can easily be shared and taught. To ensure that technological advances benefit everyone equally, we will want to ensure that they result in the cost of living falling to zero, or near-zero. A knowledge-based economy can reduce the costs of goods and services only as long as an efficient and competitive market operates.

So we will need to eradicate protectionism and barriers to trade. We will want to minimize intellectual property rights, so that knowledge becomes common and available to all. We will want to encourage industry standards and open source systems that promote competition and drive down costs, rather than proprietary software and operating systems that create artificial scarcity and inflate prices. We will want to disrupt monopolies wherever they occur, whether in the private or the public sector. And we will want governments to refrain from imposing intrusive regulation that protects vested interests and discourages competition – such as in the banking, telecom and energy sectors, and most recently illustrated by attempts to regulate the Uber taxi phenomenon.

Many economists have long argued that we need all those things already.

Thanks to technology, the present is better than the past, and the future looks set to be better still. Unemployment isn’t our real fear. Poverty is. And the solution to poverty is technology. If universal technological unemployment does eventually happen, it will not be something to fear. It will mark the end of poverty, and bring to a close this chapter in human history.

 

About the author:

Steve Morris studied Physics at the University of Oxford and spent ten years working for the UK Atomic Energy Authority. He now runs tech review site S21.

 

 

Related articles
  • Marshall Brain on Singularity 1on1: We’re approaching humanity’s make or break period
  • Martin Ford on Singularity 1on1: Technological Unemployment is an Issue We Need To Discuss
  • Will Work For Free: A Doc About Technological Unemployment
  • Economic Possibilities for our Grandchildren by John M. Keynes

Filed Under: Op Ed Tagged With: technological unemployment

Is science a heresy?

December 5, 2014 by Steve Morris

It’s true that science and religion haven’t always rubbed along well together, and in fact Galileo Galilei was tried for heresy, but in this article I’m considering a broader issue, not just a religious one. I’m asking whether science is now an established part of mainstream cultural thinking or if it’s a subversive, radical activity that seeks to continually undermine the status quo.

Let’s do some dictionary work to start. Here’s what my copy of the Concise Oxford Dictionary has to say:

Science: Systematic and formulated knowledge based mainly on observation, experiment and induction, or deductions from self-evident truths.

Heresy: Opinion contrary to the accepted doctrine on any subject.

So, at first glance, science doesn’t appear to be a heresy. It’s practically the opposite. Adjectives like ‘radical’ and ‘subversive’ don’t appear anywhere in its definition. It’s systematic and self-evident and consistent with the world we observe around us. Except that’s not my understanding of science at all.

While I agree that science consists of systematic and formulated knowledge, I don’t agree that it is based on induction or deduced from self-evident truths. I’m not even convinced that it’s derived from observation.

For instance, the sun is observed to rise in the east and set in the west. It should therefore be self-evident that the sun circles the Earth, as observed. Except it doesn’t.

Similarly, time marches forward at a steady, unvarying rate, as is completely obvious to everyone and confirmed by observation. Except it doesn’t.

Even simple physical laws like Newton’s first law of motion (a restatement of Galileo’s Law of Inertia) are completely at odds with everyday observations and experiments, not to mention the time-honoured theories of Aristotle. Until this outrageous proposal, it seemed obvious that moving objects tend to come to a state of rest unless a force acts on them. Equally, it seemed obvious that heavenly bodies like stars and planets were governed by different celestial laws, as befitted their eternal, unchanging nature. It took the genius of Galileo and Newton to realise that it’s the invisible force of friction that’s responsible for bringing objects to rest, that objects will move with constant velocity if no external force is present, and that the same set of laws that describe how apples fall from trees also apply to stars and comets.

This nicely illustrates how science really works. Galileo and Newton observed the world, certainly, and they derived inspiration from it. But they did not infer laws directly from their everyday observations. Their laws apparently contradicted those observations. Instead they imagined new, universal laws that were capable of describing a range of phenomena, including both everyday objects and celestial bodies. They invented (not discovered) a set of laws that could describe the motion of all things without exception.

What an arrogant thing to do! To overturn not just the ideas of Aristotle and the Church, but to deny ordinary everyday experience! And yet that is how all good science works.

The greatest scientists play the role of heretics, imagining shockingly unorthodox visions of reality, in gross contradiction to observation and derived not from self-evident truths but created by their own egotistical imaginations. The science invented in this manner can then be tested by experiment, but only if you know what to look for. If you’re not expecting time to slow down as your speed increases (as Einstein predicted in his Special Theory of Relativity), you can perform a million experiments and never observe the effect.

Theories and speculation dictate what we go looking for in the first place. Scientists don’t simply observe everything around them indiscriminately. Indeed, most experiments are designed carefully to screen out all the effects that aren’t relevant to a measurement.

The theory comes first, out of a scientist’s mind. Then, if confirmed by experiment, it becomes orthodox. Only afterwards does it appear to be self-evident and based on observation.

So, yes, I do think that science is a heresy. And I think that the role of scientists is to shun the cultural mainstream and to live on the intellectual fringe, pushing at boundaries and undermining commonly-held beliefs and assumptions. Fortunately, these days heretics are no longer burned at the stake.

One of the problems with the relationship between science and mainstream culture is that most people don’t understand this. Without a scientific background, how could they? And without an understanding of technology, how can people come to terms with the kinds of technologies that are discussed on this website?

Technology is undermining the status quo, just like science. It’s transforming society at a pace that’s growing exponentially. Again, most people underestimate the speed of change, and can’t perceive progress beyond the linear. Technology does not just challenge our understanding of what it means to be human. It changes what it means to be human. It’s probably the greatest heresy ever conceived.

 


Filed Under: Op Ed, What if? Tagged With: heresy, science

It’s Future Day: Happy Tenth Millennium!

March 1, 2014 by Steve Morris

The Ancient Greeks had a word for the largest number that they could conceive of. It was the myriad and it meant literally ten thousand. Coincidentally, that’s roughly the age of human civilization. So today, on Future Day, I’m celebrating civilization’s first myriad.

Ten thousand years ago was the beginning of the Neolithic period – and, by some reckonings, of the Anthropocene, the era in which humans would begin to exert a permanent and profound influence over their environment.

Prior to this, the fate of the planet was controlled by natural cycles of warming and cooling, by Ice Ages and by sudden catastrophic events like volcanoes, earthquakes and meteorites. Now, for the first time, the building of cities, the clearing of forests for agriculture and the rising human population would start to become significant factors influencing the biosphere.

We are genetically almost unchanged since that time, but our culture has been changing and growing at lightning speed.

Ten thousand years ago marked the invention of agriculture in Mesopotamia – probably the key invention that transformed our world. The world’s oldest surviving buildings also date from around this period. Copper smelting began around 7,500 years ago, closely followed by the invention of the wheel and the first examples of proto-writing. 6,000 years ago the horse was domesticated, and soon after the first city appeared at Eridu in Mesopotamia. 5,000 years ago the Bronze Age began, to be followed by writing and the rise of Classical civilization. From there it was a short 2,000 year period of rapid scientific, technological and economic progress to the present day.

Of course, that whirlwind tour of history and pre-history hides a lot. It’s only in the past 200 years that we moved from a predominantly agricultural rural-based society to an urban one. It’s only in the past half century that ordinary people, and women in particular, have become empowered to live their lives in relative freedom and prosperity. And it’s only in the past couple of decades that the developing world has started to catch up with the West.

Looking forward, what can we expect in the next ten millennia?

I’m not going to try to make specific, qualitative predictions. But what about a quantitative prediction of the total value of human culture? How can we put a figure on growth? Technological inventions? Scientific discoveries? Broader cultural and societal improvements? These are all hard to quantify. We could instead use economic growth as a catch-all measure of human progress, incorporating technological, cultural and other factors, as well as population growth.

In the past one thousand years, GDP has grown approximately 350-fold, which equates to an average sustained GDP growth of 0.6%. If growth continues at this rate, the economic increase of the past 10,000 years will take place again in the next 120 years.

This is a conservative estimate – in the past century, world economic growth has averaged closer to 3%. At this rate, the global economy will double in just 25 years!
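These figures are easy to sanity-check. Here is a minimal sketch (the function names are mine, not from any library) that recovers the implied growth rate and the doubling times; the article's figures are rounded versions of these:

```python
import math

def annual_growth_rate(total_factor, years):
    """Average compound annual growth rate implied by a total growth factor."""
    return total_factor ** (1 / years) - 1

def doubling_time(rate):
    """Years for the economy to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + rate)

rate = annual_growth_rate(350, 1000)
print(f"{rate:.2%}")               # ~0.59% a year, the article's 0.6%
print(round(doubling_time(rate)))  # ~118 years, the article's "next 120 years"
print(round(doubling_time(0.03)))  # ~23 years, the article's "just 25 years"
```

Doubling is the relevant yardstick here: at a steady growth rate, each doubling adds as much wealth again as everything accumulated before it.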

The wealth that took our ancestors 10,000 years to build will be created again in a single generation. Imagine fast-forwarding all of the knowledge acquired from the building of the first city, to metal-working, to the industrial revolution and the computer and information revolutions and packing that growth into our own lifespan. That is what those of us living in the 21st century are experiencing right now.

Where will we be in another 10,000 years?

I’m not even going to attempt an answer. I hope it will be clear from this discussion that we are developing so rapidly now that another 10,000 years could see changes that will transform our culture, our world and our very selves unrecognisably. But even that will be just a short blip in the 200,000 year history of Homo sapiens sapiens, the subspecies of Homo sapiens that includes all modern humans.

In a sense we are still near the beginning of our great adventure in civilization. Like a child growing towards adulthood, we’ve learned a lot about ourselves and the tiny corner of the universe we live in. We’ve explored a little of the big wide world, but there’s a lot out there that we can’t even guess at. We’ve made a lot of mistakes and have stumbled and hurt ourselves, but we’re steadily learning how to do things better. In time, we’ll mature and won’t make so many mistakes. When we look back, in another myriad of years, we’ll be amazed at just how far we’ve come.

In the meantime, let’s remind ourselves how much we’ve grown already, and wonder at all the things we’ve achieved in our civilization’s short life.

 

About the Author:

Steve Morris writes for consumer electronics site S21.com and blogs about science, technology and culture at Blog Blogger Bloggest.

 

Filed Under: Op Ed Tagged With: Future Day

Why The Future Will Be Funnier Than You Think

January 8, 2014 by Steve Morris

What do you think the future will be like? It might be a shiny utopia where human suffering no longer exists and we are free to live meaningful, creative lives limited only by the power of our imaginations:

Why did the post-human superintelligence cross the road?
I can’t say. You wouldn’t comprehend the answer.

Or things might take a horrible turn for the worse:

Knock, knock!
Who’s there?
Armageddon!
Armageddon who?

Armageddon out of here!

No one knows. But I bet that the future will be funnier than a lot of people think. In fact, I believe that humour is accelerating exponentially. There probably wasn’t a lot of humour around during the Black Death, for instance:

The Black Death, you say? You want to avoid that like the … well, just try not to get it.

Why is humour growing? For one thing, people have more leisure time now to worry about their fears and neuroses. And as our technical capability and scientific knowledge grow, so does the number of things we know we don’t know. That’s right – ignorance is growing exponentially, in parallel with knowledge. And where ignorance leads, humour is quick to follow:

“We don’t allow faster-than-light neutrinos in here,” says the bartender.
A neutrino walks into a bar.

One of the assumptions often made in discussions about the Technological Singularity is that if a super intelligent AI is ever built, it will immediately start work designing an even more intelligent version of itself, resulting in an exponential increase in capability. But what if the super AI doesn’t feel like making itself obsolete as its first and final act? What if it would rather do something else? Like hosting Fox News, or writing a history of the world in rhyming couplets, or just cruising Vegas, counting cards and picking up hot chicks?

In short, what if the future is not how scientists, geeks and nerds imagine, but more like real life? What if it’s more Douglas Adams than Arthur C Clarke?


Research shows that a good sense of humour is highly correlated with intelligence. For example, if I say the word “fart” do you snigger loudly? That means you are really smart. Humour requires knowledge, understanding and the ability to subvert expectations. It is often predicated on contradictions and double meanings. Many of the script writers on shows like The Simpsons have PhDs in mathematics. That’s because math jokes are really hilarious. After all, any subject that contains statements that are true but unprovable will appeal to lovers of the absurd.

A Roman walks into a bar, holds up two fingers, and says, “Five beers, please.”

In a previous article on this site, I proved mathematically that the Technological Singularity is inevitable. Surprisingly, nobody pointed out any serious errors in my proof. That’s worrying. Maybe people on this site aren’t as smart as I thought, in which case you won’t get the following joke:

An infinite number of mathematicians walk into a bar.
The first mathematician says, “Half a pint of beer, please.”
The second asks for a quarter of a pint.
The third asks for an eighth of a pint.
The fourth asks for a sixteenth, and so on.
The barman says, “That’ll take forever. I’ll pour you one pint and that’s your limit!”

It probably helps if you know something about the limit of an infinite series.
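For anyone who wants to watch the barman's limit in action, here is a two-line sketch (the helper name is mine) showing the partial sums of the mathematicians' orders converging on one pint:

```python
def total_poured(n):
    """Total beer ordered by the first n mathematicians: 1/2 + 1/4 + ... + 1/2**n pints."""
    return sum(0.5 ** k for k in range(1, n + 1))

print(total_poured(4))    # 0.9375
print(total_poured(50))   # ~1.0: the infinite series sums to exactly one pint
```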

One popular view of the Singularity is that it is the point at which the future becomes unknowable. Hello? That’s like now, surely. Anyway, the thing about singularities is that they always involve infinities. Infinities get weird very quickly. For example, the Singularity may be near, but if it’s cloaked in an event horizon it might take infinitely long to reach it.

Let’s return now to our super intelligent AI. The futurist Hugo de Garis likes to refer to this kind of entity as an Artilect. But that’s such a terrible name. Instead, let’s call it Justin. De Garis predicts that the invention of a super AI will result inevitably in an Artilect War in which billions of people will die. Would billions really be willing to die for Justin? Possibly. But what if Justin just wants to sing songs and make people happy? War would be avoided. At least until someone builds a rival super AI called Miley. Especially if Miley has better moves. Then things could turn nasty quite quickly.

Well, you know what they say: “Women are from Omicron Persei 7, men are from Omicron Persei 9.”

Another concern is that robots will simply take over and kill everyone. Just for the hell of it. Or for some other reason that I haven’t thought of. But awesomely superior robotic intelligence doesn’t necessarily mean that you can always get your own way. Sometimes things take an unexpected turn.


Live and let live is a wiser policy in the long run. And as long as every Terminator-style robot is equipped with a menu system that includes, “F*ck you, a**hole!” as an option, I think things will probably turn out OK.


One more point before I leave you. They say that genius is close to madness. But also, it takes a true genius to be genuinely stupid. And as we all know, stupidity is an endless source of comedy:

A biologist, a chemist, and a statistician are out hunting.
The biologist shoots at a deer and misses five feet to the left.
The chemist takes a shot and misses five feet to the right.
The statistician yells, “We got it!”

So remember. The future’s bright. The future’s going to be hilarious.

 

About the author:

Steve Morris writes for tech review site S21.com and blogs about seemingly random topics in his spare time, but is always willing to consider an alternative career, such as rock star or sex god. Please contact him with any suitable offers of employment via Twitter or Google Plus.

Filed Under: Funny, Op Ed, What if? Tagged With: Funny Future, Futurism

Mind Uploading and Identity

September 10, 2013 by Steve Morris

Like many of you here, I saw the recent interview on Singularityweblog with Dr Natasha Vita-More about her idea of the Whole Body Prosthetic. The article raises many interesting questions, not least what it would mean to have an uploaded mind and a physical body, whether natural or prosthetic. In this article I want to explore some of these ideas, in particular what identity means for an uploaded mind.

Identity and continuity

The idea of uploading a mind to a supercomputer is commonplace in singularity discussions. Assuming that a “mind map” could be extracted from a biological brain and transferred to a computational substrate without loss of information, what would this mean for the individual? If the self is just a pattern, and uploading preserves the pattern, then is the uploaded mind still you, or is it merely a copy? After all, if we can create one copy, we can create many copies. Is each copy the same person, or are they just replicas? What is the self in such a scenario?

Identity is a tricky issue to untangle even without the complication of uploading. Are you the same person now that you were when you were 18? Clearly not. The pattern of your mind has changed as a result of your experiences. And yet you are still you. Your identity has changed, but it persists. So is uploading any different?

The key to preserving identity is continuity. Your 18 year old self changed continuously until you reached your current self. Although the current you is different to the old you, they are the same you. You just changed over time.

If that continuity is broken by uploading, then you are no longer you. The uploaded you is just a copy, even though it may be functionally identical to the old you. “You” are still in the body. You could make a copy of your 18 year old mind and freeze it in time. Now that uploaded self is an exact copy of you, whereas the real you has become quite different with the passing of the years. So which is really you?

If this sounds too abstract to worry about, try the following thought experiment.

The black box uploader

Picture this. One day in the not-too-distant future you accompany your best friend to have his mind uploaded to a supercomputer. He’s really excited as he steps into a big black box. “See you on the other side!” he calls. Into the box he goes and the technician presses a big red button. Seconds later, his face appears on the screen in front of you. “Awesome, dude!” he says, “I’ve been uploaded.”

You’re about to leave, when you notice the technician sweeping some ashes out of the black box and into the waste. “What’s that?” you ask.

“Oh,” he says. “That’s just the waste left over from the uploading.”

These ashes bother you. You’re worried about this uploading process. You reach for your smartphone and call your friend. Sure enough, his face appears on your phone. He’s all smiles. He appears to be on some kind of virtual beach, drinking a virtual cocktail. He seems happy. You ask him some questions that only he could possibly know the answer to, and he answers them correctly. It’s definitely him. But what about the heap of ashes left over in the black box? Was that him too?

You ask him about the uploading process and how it felt. He says it felt good. “Was there any pain?” you ask. He says not. And yet, somehow your friend got turned into waste. He’s now in the garbage pile. You saw that with your own eyes. So who is this guy on the screen who claims to be your friend?

Sleep tight

All this thinking has made you tired. You switch out the light and try to sleep. But a nagging thought won’t go away. What happens when you go to sleep, you wonder? Continuity of consciousness is broken. When you wake up, you’ll feel like a new person. But will you in fact be a new person? The old one – did he die in the dark hours? Is the lifespan of the average human less than 24 hours? Have you already died a thousand deaths before? Are we a species of replicants?

Today is the day for your own uploading. Somehow, your concerns about the process have evaporated with the arrival of the new day. You head off to the lab and soon you’ve joined your friend in cyberspace. The process didn’t hurt at all. Soon the technician is sweeping away another pile of ashes.

Identity & multiplicity

What I’ve described is the usual concept of uploading. I’m sure you can see the problem. Would you step into the black box? I’m pretty sure I wouldn’t.

But there are other more interesting scenarios we can explore.

In her interview with Socrates, Dr. Vita-More suggests the possibility of a future mind being capable of inhabiting multiple bodies (or substrates) simultaneously. She suggests a central mind and sub-minds.

But let’s take this idea further. Why not have many minds of equal status? After all, the human brain resembles a collection of semi-independent systems working to create a whole.

The man with two brains

Instead of uploading a biological mind into a computer, why not start by enhancing the biological brain with additional capacity in some kind of artificial substrate? We don’t necessarily need to carry this hardware around with us. Some kind of wireless interface would do the trick.

So now you have your original biological brain plus an artificial one that you could use for storing data or running extra thought processes. It could be massively more powerful than the biological component and distributed in the cloud. No loss of identity is involved here. Provided that the two brains can communicate, like the two hemispheres of the biological brain, all is well. Once you get used to the experience, you might even find that the artificial brain starts to feel like the real you.

But why stop at two brains? From here, expansion into multiple biological and computational substrates is a trivial and logical next step. Multiple brains, multiple bodies, but a single distributed mind. And a single identity too.

Of course, life in the real world can be dangerous and unpredictable. Bodies can get lost or damaged. They may not all be able to communicate with each other all the time. But a well-designed network should be able to handle this. If one part of the mind goes offline, the rest of the system would have to manage without it for a time, and then synchronize again when it comes back online. That’s a little like what happens now when we go to sleep.

Having several bodies would be a good insurance policy against disaster. It would also enormously expand our capabilities and experiences. Some bodies could be male, some female, and others distinctly non-human. They could carry out different tasks at the same time, or work together as a team. And all the really hard thinking could be done on a cloud-based computational substrate.

Of course, this is not a human mind I’m describing, but a network of semi-autonomous super-intelligences. But if handled correctly, it could still be you.


About the author:


Steve Morris studied Physics at the University of Oxford and is now managing editor of tech review website, S21. He blogs about science, technology and life in general at Blog Blogger Bloggest.

 

Related articles
  • The Final Moments of Karl Brant: Short Sci Fi Film about Mind Uploading
  • Natasha Vita-More on Whole Body Prosthetic

Filed Under: Op Ed, What if? Tagged With: identity, mind uploading

You and I, Robot

April 26, 2013 by Steve Morris

Isaac Asimov, in his short-story collection I, Robot, famously proposed the Three Laws of Robotics:


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

But I don’t think this is a promising way forward (and presumably neither did Asimov, since his stories highlight fatal flaws in the rules). Rules and laws are a weak way of ordering society, partly because they always get broken. Our entire legal system seems to be based not on laws, but on dealing with the consequences when laws are broken.

In the Bible, God gave Moses Ten Commandments. I’m willing to bet that they were all broken within a week.

In the Garden of Eden there was only One Rule, and it didn’t take long before that was lying in tatters.

You may think that some rules can’t be broken. You may tell me that 1 + 1 = 2. Really? And what if I tell you that 1 + 1 = 3? What are you gonna do about it?

You see, rules only work when everyone agrees with them. They need to be bottom-up, not top-down. If people don’t like rules, then rules get broken. I personally believe strongly in the rule that says everyone should drive on the same side of the road and I’ve never broken it. But if I think that 30 mph is a stupid speed limit right out here in the middle of nowhere, then I’m going to put my foot on the accelerator.

On the other hand, I’ve never murdered a single member of my family. And not because the law forbids it, but because I love them. In fact, I would go to extraordinary lengths to protect them, even breaking other rules and laws if necessary.

That’s the kind of strong AI we need. Robots that protect us, nurture us, forgive us and tolerate our endless failures and annoying habits. In short, robots capable of love. Ones that we can love back in return.

 

About the author:

Steve Morris studied Physics at the University of Oxford and now writes about technology at S21.com and his personal blog.

 

Related articles
  • Love and Sex with Robots: The Next Step of the Relationship between Man and Machine?

Filed Under: Op Ed Tagged With: laws of robotics, Three Laws of Robotics

A Mathematical Proof of the Singularity

April 1, 2013 by Steve Morris

The author, inventor and futurist Ray Kurzweil has written about the technological singularity, a time when he predicts things will change so rapidly that he likens it to a mathematical singularity. In particular, he postulates the invention of an artificial intelligence capable of re-designing itself, which will inevitably lead to ever-faster progress.

To back up his theory, Kurzweil likes to show exponential curves representing the ever-faster development of computer processor performance, the price of transistors, DNA sequencing costs and such like.

But just how mathematically rigorous is this theory? Is it right to speak of a singularity occurring some time in the middle of the 21st century?

Students of cosmology know that genuine singularities are places where scary stuff happens. Especially naked singularities, which I cannot illustrate here for the sake of public decency. It is not a term to be treated lightly. In this article, I am going to prove that if Kurzweil’s prediction of strong AI is true, then a genuine singularity will occur, and in a surprising way, quite unlike popular thinking on the matter.

Let us suppose that there are an infinite number of inventions that people could ever invent. And by people I mean not just humans, but also alien civilisations, robots and any other class of inventive being. Let us number these inventions starting with 1 (a method for starting a fire, perhaps), then 2 (a method for avoiding burning your fingers), and so on.

Now let us assume, as Kurzweil’s exponential graphs indicate, that the time interval between inventions becomes ever smaller as more inventions are made. This is logical, because the more inventions that are available to an inventor, the easier it is to invent something new. Also, the smarter an AI becomes, the easier it is for it to invent an even-smarter version of itself. This is the key assumption in Kurzweil’s theory.

Now, here comes the tricky mathematical part. If the time between inventions shrinks quickly enough as the number of inventions increases – geometrically, say, rather than merely like 1/n – then the total time taken to invent all possible inventions is finite. I will call this time T.

The proof of this is analogous to the resolution of Zeno’s paradox of Achilles and the tortoise. Zeno, the Greek philosopher who believed that change is an illusion, outlined the following thought experiment. The great warrior Achilles is in a race against the tortoise, and the tortoise is given a head start. If both runners start at the same time, then by the time Achilles reaches the starting point of the tortoise, the ponderous tortoise will have moved forward some smaller distance, but will still be in front. By the time Achilles reaches the tortoise’s new position, the tortoise will have moved forward again. It will require an infinite number of steps for Achilles to catch up with the tortoise.

Of course, the paradox is easily resolved by realising that the time taken to complete each one of these infinite steps grows progressively shorter. Summing the resulting geometric series shows that Achilles will overtake the tortoise after a finite time T.

The situation is precisely analogous to the question of invention. Although the number of possible inventions is infinite, if the time taken between inventions becomes progressively shorter, then after a time T, everything that can be invented will have been invented. This includes all possible books, all conceivable works of art, an infinite number of cat memes and even the flying car. And the time T is not infinitely far in the future, but is finite and in principle calculable.
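The finiteness of T can be made concrete with a sketch. The toy numbers below are mine, and the geometric shrinkage is the assumption doing the real work: if the gaps between inventions shrank only like 1/n, the total time would diverge and T would never arrive.

```python
def time_for_first_n_inventions(first_gap, ratio, n):
    """Total elapsed time for n inventions when each gap is `ratio` times the previous one."""
    return sum(first_gap * ratio ** k for k in range(n))

# Toy numbers: 1,000 years to the first invention, each later gap 10% shorter.
# The geometric-series limit is first_gap / (1 - ratio) = 10,000 years, so even
# infinitely many inventions fit inside a finite T.
print(time_for_first_n_inventions(1000, 0.9, 500))   # already within a whisker of 10,000
```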

Such a time has been prophesied by various cultures and religions throughout history. It has been called Ragnarok, The Twilight of the Gods and the End of Days, but in this article I will call it “T time”.

So, after T time there will be literally nothing to do, as everything interesting will already have been done. This is the true singularity, and it is not a time when things are changing ever more rapidly, but when they have changed so much that no further change is possible. In a neatly ironic way, it is a time when Zeno’s belief that change is impossible will become true.

Some people may think of this as a utopia, but it is really just time for taking a quiet nap after T.

 

About the author: Steve Morris studied Physics at the University of Oxford but discovered that writing about other people’s ideas is easier than having original ones yourself. He now writes about awesome technology at S21 and shares random thoughts at Blog Blogger Bloggest.

Filed Under: Funny, Op Ed Tagged With: mathematical singularity, Technological Singularity, Zeno

Are we on the brink of catastrophe?

February 12, 2013 by Steve Morris

Choose your favourite catastrophe – climate change, asteroid strike, killer pandemic, nuclear war, malevolent artificial intelligence, a black hole created in the Large Hadron Collider, etc, etc. There are so many potential threats to our existence, one of them is bound to happen, right?

Well, no. The thing is, humans have always lived on the brink of catastrophe. Ice ages, famines, supervolcanoes, the Black Death, world wars, HIV/AIDS. Any one of these could have destroyed civilisation or even humanity. Why didn’t they? It can’t have been just good luck. There must be a reason.

In 1798, Thomas Malthus predicted that the world’s population would inevitably overtake its capacity to feed itself, and that mass starvation and the collapse of civilisation would follow. His prediction was based on the observation that population tends to grow exponentially, whereas agricultural capacity grows linearly at best, as it depends on the amount of land under production. And yet Malthus was wrong. Food production increased enormously in the following century, thanks to improvements in agriculture, more than matching population growth.
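Malthus's arithmetic is easy to reproduce. In the toy model below (all numbers hypothetical, chosen only for illustration), an exponentially compounding population always overtakes a food supply that grows by a fixed increment; his mistake was treating that linear trend as immovable rather than improvable.

```python
def years_until_shortfall(pop, growth, food, food_step):
    """First year an exponentially growing population exceeds a linearly growing food supply."""
    year = 0
    while pop <= food:
        pop *= 1 + growth      # population compounds each year
        food += food_step      # food supply grows by a fixed increment
        year += 1
    return year

# Population 1.0 growing 3% a year vs. food for 2.0, rising by 0.05 a year.
print(years_until_shortfall(1.0, 0.03, 2.0, 0.05))   # the crossover arrives within a lifetime
```

Whatever the starting numbers, the loop always terminates: exponential growth eventually beats any straight line. The escape in the real world was that agricultural capacity refused to stay linear.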

Contrast this with the work of Leonardo Da Vinci, who designed flying machines that could not be built. He also conceived the car, the parachute, diving suits, robots and a machine gun, amongst other inventions that could never be realised in his own lifetime. And yet Da Vinci’s ideas later came to fruition and he is recognised as one of the greatest geniuses who ever lived.

Malthus predicted the inevitable and was proven wrong. Da Vinci designed the impossible and was proven right.

What’s the difference? Da Vinci had vision. He recognised problems, but realised they could be solved. Malthus had the foresight to see problems too. But he regarded them as immovable barriers.

So here’s the lesson. Problems are everywhere, but they can be solved. Climate change can be reversed, asteroids deflected, pandemics vaccinated against, nuclear wars prevented.

All problems can be solved, but only if you believe that you can solve them. As Henry Ford put it, “Whether you think that you can or you can’t, you’re usually right.”

 

About the author: Steve Morris studied Physics at the University of Oxford and spent ten years working for the UK Atomic Energy Authority. He now writes about science & technology at tech review site S21 and on his blog.

Filed Under: Op Ed Tagged With: civilization collapse, optimism, pessimism

Is Science The New Latin?!

January 29, 2013 by Steve Morris

So the Pope (@Pontifex) is tweeting in Latin. And apparently without any hint of self-mocking irony. It hardly feels like a progressive move.

One of the big problems of Christianity in the Middle Ages was that most copies of the Bible were written in Latin. Although Biblical texts underpinned the prevailing belief system, only a tiny elite of educated people was able to read those texts. If you can’t access the source of your knowledge of the world, you can’t question it and you become enslaved by your beliefs instead of liberated by them.

This situation was transformed when the Bible was first translated into English (and other vernacular languages) and then printed and distributed throughout Europe. This revolution enabled ordinary folk to study and understand the original texts themselves. It was the gateway to the Enlightenment.

We have the same kind of problem now. In the modern era, science has replaced religion as the pivotal belief system of the age. It’s critical to our lives, and it informs nearly every debate, and yet still only a small elite truly has access to the source material underpinning modern science.

Scientific discoveries aren’t written in Latin, but they may as well be. They are written in highly academic jargon and are found mostly in specialist publications out of reach of the public. Most scientists aren’t natural communicators. The few who are, like Richard Dawkins or Carl Sagan, can become like High Priests, interpreting science for an ignorant populace. The idea that only a small number of authoritative sources can be trusted for knowledge was exactly what the Enlightenment sought to overcome.

There is a real danger here of scientific idolatry. And idolatry can lead to witch-hunts, superstition and the suppression of free thought. In this environment, creationism and denialism thrive.

As with Christianity before it, science needs to be brought out into the open where it can be understood directly by the general public. That’s why everyone who understands science has a duty to help communicate it to others. To educate, inform and empower them. To explain scientific thinking and scientific limitations.

One day perhaps everyone will have the knowledge to understand the scientific explanation of the world for themselves, and not simply have it interpreted for them by others. Then we will have entered a new Age of Enlightenment.

 

About the Author:


Steve Morris is dangerously enthusiastic about science and is currently teaching his 9-year-old son nuclear physics. He writes about science & technology at S21 and on his blog.

Filed Under: Op Ed Tagged With: Enlightenment, Latin, science

Resources Are Not Something We Consume Like Sweets

January 3, 2013 by Steve Morris

I keep reading that we are using up the world’s resources at an unprecedented rate. We are selfishly consuming and there will be nothing left for future generations. But in fact the opposite is true.

What is a resource? It’s a raw material we can turn into something more useful. We can turn wood into paper. We can turn land into food. We can turn coal into electricity. Resources are fixed and finite, surely? Wrong!

It has famously been said that the Stone Age didn’t come to an end because people ran out of stone. Instead early humans learned how to make better tools out of metal. Hunter-gatherers didn’t stop hunting and gathering because they ran out of berries, or hunted all the rabbits. They developed farming and settled down. People didn’t stop using wood fires for heating and cooking because they chopped down all the trees, and we didn’t phase out steam engines because we ran out of coal.

At each stage, a new resource became available. Something that was previously unknown, unavailable or unusable suddenly became a valuable commodity. In other words, key developments in technology created new resources. The quantity of available resources has continued to expand throughout human history.

Resources are still expanding today. It’s true that there’s pressure on land, and that oil is becoming more expensive. But resources like computing power, medicines and knowledge are becoming more and more abundant.

The total forested area in Europe and North America is increasing year by year because we no longer need to burn the trees.

One of the most important things to recognise is that each technological breakthrough depended on an existing resource. Water power was needed for the mining revolution that gave us coal. Coal-powered steam engines were used to extract oil. Electricity from burning oil was essential for the development of nuclear power.

The lesson is simple: we have to use today’s resources to create new and more abundant resources for the future. Resources are not something we consume like sweets, but can be turned into something greater. We can create resources as well as consume them.

If you agree with me, you’ll understand why the worst thing we could do for our children and grandchildren would be to slow or halt technological advancement. We need to multiply the available resources so that we can share out more for everyone.

 

About the Author:

Steve Morris studied Physics at the University of Oxford and used to do research in nuclear physics. These days he runs an internet company and writes about consumer technology at S21.com.

Filed Under: Op Ed Tagged With: Abundance, scarcity, Technology

Utopia?! Get real!

December 6, 2012 by Steve Morris

For centuries or even millennia, people have dreamed of Utopia. It’s understandable of course. Who wouldn’t want a better life for themselves and others?

The technological singularity seems to hold out a tantalizing possibility of a Utopia here on Earth sometime in the twenty-first century. But hold on a minute. Get real!

It’s all too obvious that humans simply aren’t equipped to build a Utopia or even to live in one. We just can’t handle perfection. We’ve evolved to live in an imperfect world – one that’s not really suited to our needs. When you’re made out of dirt, perfection is always going to remain out of reach.

Humans do manage to achieve greatness from time to time. But then we mess it up.

We discover how to split the atom, then spend the next 60 years pointing nuclear missiles at each other and shouting “You’re an idiot!” – “No, you’re an idiot!” We launch the Hubble Space Telescope into orbit but have to patch it up with duct tape to get it working properly. We build a global computer network that enables instant communications and then devote half of it to advertising Viagra and the other half to showing Gangnam Style videos.

If humans were invited to a meeting of intergalactic intelligences, then we’d turn up late in a used spaceship, borrow someone else’s pencil to take notes, then blow a raspberry at a crucial moment in the discussion. Everything about human society is cobbled together at the last minute and held together with sticking plasters.

Even if we had a Utopia, it wouldn't be to everyone's taste. Someone would still be scrawling rude words on the walls. After all, one person's Utopia is another's dystopian nightmare.

In my opinion, it’s time for us to grow up as a species and admit that Utopia is never going to happen. Even if the Singularity arrives as predicted it’s going to create just as many problems as it solves. Every opportunity brings a new problem and every silver lining has a cloud, if you look hard enough. And with increased leisure time and longer, healthier lives, there’s going to be plenty of time for looking.

We could have abundance, immortality and super-intelligence and still wish for things to be different. The truth is that being human (or transhuman) means spending the rest of eternity blundering in the dark, cobbling together and muddling through. Personally, I wouldn’t swap it for anything.

About the author:

Steve Morris is looking forward to the coming Singularity, but can’t help wondering if lawsuits, patent disputes and health & safety legislation are also accelerating exponentially to a point at which they will negate any technological advancement. When not worrying about this, he reviews tech products at S21 and rambles aimlessly at Blog Blogger Bloggest.

Related articles
  • Michio Kaku: Can Nanotechnology Create Utopia?
  • Tracy R. Atkins: Don’t Wait For The Singularity, Change The World For The Better Today
  • Scarcity Causes All Wars and Violence

Filed Under: Op Ed Tagged With: singularity utopia, Utopia

The machines are rising! Wtf? Lol!

November 11, 2012 by Steve Morris

Are you a twitterer? Do you tweet? If so, you have surely experienced the phenomenon of tweet overload. If you follow just a couple of dozen active tweeters, you could spend half of your waking life keeping up with them. Follow any more and you’ll have no chance of staying ahead. Only a machine intelligence could follow more than about 100 people on Twitter.

So I started thinking. What if a machine existed that could follow everyone on Twitter? That would be millions of people writing billions of tweets every day. The quantity of information known to that machine would be unparalleled. It would know world news; popular culture and high art; rumour, speculation and gossip. It would see into our very souls.

Such a machine already exists. It’s called Twitter.

So, what if Twitter could be regarded as a machine intelligence? A kind of collective intelligence that knows everything we know? I realise I’m pushing things here a little. I know that Twitter isn’t a sentient AI. Yet. But it’s best to think carefully about this kind of possibility before you wake up one day and find a Terminator-style robot knocking on your front door.

A lot of intelligence gets poured into Twitter. Human intelligence. People like Alain de Botton (@alaindebotton), Stephen Fry (@stephenfry), Pete Cashmore (@mashable). These people know stuff. What would happen if Twitter learned what they say and could somehow act on it? Maybe it could start tweeting itself. Sending out instructions to its followers. Taking over the world in fewer than 140 characters.

Scary, huh? Maybe we should shut it down now before things take a turn for the worse. Seize the pickaxes! Storm the server rooms! Stop the tweets!

You know, I think it’s no coincidence that Arnold Schwarzenegger became Governor of the State of California. That’s the home of Twitter. Uncanny. Do you think Arnie saw it coming and decided to move to where the real action is? Was this politics thing merely a cover? Is he secretly locked in a titanic struggle against the machines? No more movies. This time it’s for real.

Then again, for every Alain de Botton there’s a Justin Bieber. Every time Stephen Fry offers us a pearl of wisdom, Paris Hilton tells us what she’s going to wear today. So maybe the average intelligence level of Twitter is pretty low after all. Even if the machines could somehow come to life, they’d be split between wanting to lay waste to humanity and being desperate to download the latest Lady GaGa video.

I think we’ll be safe for a little while longer.

About the author:

Steve Morris is a reluctant tweeter and a nervous sci-fi fan with an over-active imagination. He started out as a physicist, but took a wrong turn somehow and now writes about consumer technology at http://www.s21.com/.

Filed Under: Funny Tagged With: Twitter


Copyright © 2009-2021 Singularity Weblog. All Rights Reserved | Terms | Disclosure | Privacy Policy