
Marco Alpini

Limiting Factors: Is the current economic growth model limited by the speed of light?

September 11, 2017 by Marco Alpini

Economic growth and the speed of light may not sound very closely related. How could the growth model at the basis of the global economy be intrinsically limited by the speed of light?

We all know that the current economic model is based on the expectation that the global economy will continue growing steadily in the future. This is what has happened, with some bumps, for the last century, with the global average rising constantly.

To a reasonable approximation, we can say that over the last few decades it has grown at an average of about 3% per year.

Many economists think that this steady growth will decline in the future, while others think it will increase thanks to the IT revolution, the zero marginal cost revolution, robotics and AI, which will improve productivity, boost the economy and unleash new energies.

However, while 3% per year doesn't seem like much, and certainly doesn't seem like a speed of growth that light can't keep up with, this impression is wrong.

I am not going to bore you by explaining the characteristics of exponential trends, which everyone knows very well, but rather draw your attention to the implications of continuous steady growth when it occurs within a finite environment such as a planet.

Let's have a look at global resource consumption over the last 100 years as measured by:

  • Biomass consumption (livestock, deforestation, crop production, fishing)
  • Fossil fuels consumption (coal, oil, gas)
  • Mined materials consumption
  • Construction materials consumption (concrete, asphalt, steel, glass, polymers)

The total consumption of these resources has been growing over the last century at an increasing rate that far exceeds economic growth.

This trend is accelerating fast, even though the efficiency of our global economy is also improving rapidly. Billions of people are gaining access to a standard of living that requires more and more resources.

It is commonly accepted that in the 1970s we overshot the carrying capacity of our planet and that, since then, we have been consuming resources at a pace the planet cannot replenish, as calculated by the Global Footprint Network.

The current progression shows that by the early 2020s we will need a second planet to feed our economy sustainably. However, because we don't have another planet, we have been feeding our economy unsustainably for almost 50 years.

Even if we assume that in the future we will make huge gains in efficiency, reducing the growth of consumption to only 1.3% per year, by the middle of this century we will need a third planet anyway.

But let's be wildly optimistic and say that the human race will wake up and realize that more planets are needed to support our development, launching a huge space program aimed at securing new ways for our civilization to keep growing.

Say that, after 100 years of search and scientific development, we manage to secure 5 new planets for our civilization to continue thriving on, and that we develop the necessary technology to get there. This would represent a huge success for humanity, but this immense new wealth of space and resources would already fall short of our needs by then.

In little more than another 100 years, we will need one more planet every year to sustain a growth of 1.3% per year.

If consumption grows at 3% per year, as it does now, in less than 500 years we will use up all of the roughly 1 billion habitable planets that we currently think may exist in our galaxy. Unfortunately, we won't be able to travel fast enough to reach all those planets by the time we need them, because even at the speed of light it would take about 70,000 years to reach the opposite side of the galaxy.
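
A back-of-the-envelope check of these figures is straightforward. The sketch below (Python) simply compounds consumption at a fixed rate and reports when demand crosses a few milestones. The baseline of roughly two planet-equivalents of demand in the early 2020s is taken from the overshoot figures quoted above; the exact years shift with the baseline chosen, so treat the output as orders of magnitude rather than predictions.

```python
import math

def years_to_reach(target_planets, growth_rate, start_planets=2.0):
    """Years of compound growth needed for demand to reach `target_planets`."""
    return math.log(target_planets / start_planets) / math.log(1.0 + growth_rate)

for rate in (0.013, 0.03):
    print(f"growth {rate:.1%}:")
    for target in (3, 6, 1e9):
        t = years_to_reach(target, rate)
        print(f"  {target:>13,.0f} planet-equivalents needed around year {2022 + t:.0f}")
```

At 1.3% growth this puts the third planet around the early 2050s, and the sixth (our own plus five new ones) early in the next century, consistent with the narrative above.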

As a matter of fact, considering that there are about 15,000 stars within 100 light years of Earth (a density of about one star for every 280 cubic light years), in roughly 300 years the required speed of expansion will exceed the speed of light and we won't be able to reach enough planets to feed our growing economy.

Plotting the two cases, 3% per year and 1.3% per year, shows when the speed of expansion overtakes the speed of light.

This happens because steady growth of the economy, however low the rate, is an exponential trend, while the number of planets that can be reached travelling at the constant speed of light increases only with the cube of the distance from Earth, and therefore with the cube of time.
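
To see where the 300-year figure comes from, here is a minimal sketch (Python) of that comparison. It assumes, purely for illustration, one usable planet per star, the star density quoted above (one star per roughly 280 cubic light years) and a demand of about two planet-equivalents today growing at a fixed rate; it then finds the year in which the radius of the sphere we would need to harvest has to expand by more than one light year per year.

```python
import math

STAR_DENSITY = 1 / 280.0   # stars per cubic light year (from ~15,000 stars within 100 ly)
PLANETS_PER_STAR = 1.0     # illustrative assumption: one usable planet per star
START_DEMAND = 2.0         # planet-equivalents needed today (early-2020s overshoot figure)

def required_radius(years, growth):
    """Radius (light years) of the sphere holding enough planets for that year's demand."""
    demand = START_DEMAND * (1.0 + growth) ** years
    volume = demand / (PLANETS_PER_STAR * STAR_DENSITY)
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

def crossover_year(growth):
    """First year in which the required radius grows by more than 1 light year."""
    t = 0
    while required_radius(t + 1, growth) - required_radius(t, growth) < 1.0:
        t += 1
    return t

for growth in (0.03, 0.013):
    print(f"{growth:.1%} growth: expansion must exceed light speed after ~{crossover_year(growth)} years")
```

Under these assumptions the 3% case crosses the light-speed limit after roughly three centuries, while the 1.3% case only buys a few extra centuries: the exponential always wins against the cube.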

It is therefore obvious that even an extremely modest continuous growth of natural resource consumption of 1.3% per year cannot be sustained, not only within a finite environment such as a single planet, but not even by the universe. In this respect the universe, being limited by the speed of light, is a finite and confined environment as well.

This situation closely resembles the technological singularity paradigm and raises interesting considerations about the effect of limiting factors on accelerating trends.

Confined space and limited resources pose a tremendous constraint on continuous acceleration, even more so if that acceleration is explosive, as the Technological Singularity would be by definition.

Here I refer to the Technological Singularity not only as an intelligence explosion caused by a self-improving AGI agent but, in a wider sense, as the convergence of a number of accelerating technologies to a point where the speed of growth is such that we can no longer control or comprehend its progression, creating an event horizon beyond which it is impossible to see.

We don't have to go as far as invoking the speed of light to find limiting factors that can contain or prevent such a technological singularity, even though the speed of light would certainly be a limiting factor at the cosmic level for an AGI gone out of control.

So, is there a limiting factor for the rate of change of technological development? And, is the famous intelligence explosion also constrained by some sort of limiting factor?

The technological rate of change is limited by the capability of users to adapt and make good use of new tools and processes, while the limiting factor for a self-improving artificial intelligence is experience and the reliability of the information available.

I have already written an article about how the lack of experience and scarce or unreliable data can constitute a serious limitation on the ability of a super intelligent agent to interact effectively with the world without making fatal mistakes: https://www.singularityweblog.com/why-experience-matters-for-artificial-general-intelligence/

With respect to the ability of technology users to adapt and embrace change, it has to be considered that technology does not exist on its own, in the sense that it cannot develop independently of its users. Users, and their ability to benefit effectively from improvements, are the driving force of technological advancement.

From this point of view, the idea of the Singularity, intended as one event or a combination of events in which technological development grows so fast that it escapes human comprehension and disempowers its users, makes little sense. Technological development will extinguish itself as soon as its users lose touch with it, i.e. as soon as new technologies can no longer find users. It would be like a powerful engine with a restricted fuel supply: if it accelerates too much it will starve itself and reduce revs.

Technology works in exactly the same way: if it accelerates too much, fewer and fewer users will be able to use it with satisfaction, forcing it to slow down. We are the fuel that feeds technology. [I am not considering here the case of a superintelligent AGI able to develop new technologies for its own benefit and with autonomous manufacturing capabilities; that discussion is part of the AI limiting factors referenced above.]

In conclusion, to evaluate the speed of a certain phenomenon it is not enough to look at its intrinsic potential; it is essential to evaluate the limiting factors that may exist within the environment in which it has to operate. This consideration is often disregarded when people talk about the miracles of future technologies, because the limiting factors are not obvious, just as the speed of light is not an obvious limit for our economic model. Yet these limiting factors are often the cause of great frustration, demolishing great expectations.

Nature teaches us this lesson very well; we have seen on many occasions that when a species becomes too successful its population explodes until it hits a limiting factor. And those limiting factors are rarely obvious, like the proliferation of diseases promoted by a more crowded population.
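
The textbook way to capture this behaviour is the logistic growth model, in which a population grows almost exponentially while resources are abundant and then flattens out against a carrying capacity. The sketch below (Python, with purely illustrative numbers) shows how a growth rate that looks explosive early on is throttled as the limiting factor bites.

```python
# Logistic growth: dN/dt = r * N * (1 - N / K), where K is the carrying
# capacity, i.e. the limiting factor. Numbers here are purely illustrative.
def simulate_logistic(n0=10.0, r=0.5, K=10_000.0, years=40):
    n = n0
    history = []
    for _ in range(years):
        n += r * n * (1.0 - n / K)   # growth slows as N approaches K
        history.append(n)
    return history

for year, population in enumerate(simulate_logistic(), start=1):
    if year % 5 == 0:
        print(f"year {year:2d}: population ~ {population:8.0f}")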

This lesson is very relevant to our future and to our awareness of what we should worry about.

Without trying to diminish anyone's appreciation of the peril that AGI may represent for our civilization, natural limiting factors may in this case work in our favour, while, from the economic growth point of view, they definitely work against us. We may find that out the hard way before AGI even comes about and gets the job done.

About the Author:

Marco Alpini is an Engineer and a Manager running the Australian operations of an international construction company. While Marco is currently involved in major infrastructure projects, his professional background is in energy generation and related emerging technologies. He has developed a keen interest in the technological singularity, as well as other accelerating trends, that he correlates with evolutionary processes leading to what he calls “The Cosmic Intellect”.

Filed Under: Op Ed, What if?

Is there a limit to Intelligence?

March 3, 2016 by Marco Alpini

There is little doubt that sooner or later we will be able to create intelligence out of transistors and electricity. Biology shouldn't be the only medium able to support intelligence, and this will be proven right as soon as the first Artificial Intelligence is created.

But what is intelligence? Can we define it? What are the main constituents of intelligence?

One constituent we automatically associate with intelligence is “Understanding”.

By the word Understanding we mean a state of our conscious mind in which we recognize that we have acquired and conceptualized enough information about a situation or concept to be able to create a reliable mental model of it.

Once we feel that the mental model is good enough to represent the external world we act, take decisions, create opinions and set our objectives.

The central stage in this process is the self being. The relationship between the mental model representing “ourselves” and all the other mental models crossing our mind is the engine room of our intelligence.

Since the self being is of pivotal importance for understanding, we could say that the other most significant constituent of intelligence is consciousness.

Could understanding be unconscious? Most probably a mind needs to be self-aware and conscious in order to develop true understanding of the world, engage and manipulate it in a proactive manner in order to accomplish certain objectives. These are all aspects that we identify as products of our intelligence.

We could therefore rename the term “Intelligence” as “Conscious Understanding”, and say that any mind, either biological or digital, capable of Conscious Understanding is intelligent.

We have seen our understanding evolve from the mere cognition of the self being and our immediate surroundings to the potential understanding of everything that constitutes the visible, invisible, present, past and future universe.

A mind capable of understanding the entire universe should be able to create a mental model of the universe. Science and mathematics are the best methods so far used by our mind to create this model.

However, in order to reach the feeling of truly understanding something, our brain needs to see the same problem from different angles, using for example philosophy and a wide array of artistic representations that communicate how we feel about it. This, we could guess, is what is special about human intelligence.

So far, we have experienced only one type of intelligence, naturally evolved and based on the biological processing of information gathered by a biological organism. We could call this kind of intelligence “Bio-intelligence”.

But we are on the verge of a new era when we will see the emergence of another kind of intelligence, Artificial Intelligence. While Bio-intelligence emerged through evolution of biological organisms based on natural selection, AI needs to be engineered.

AI could represent a higher level of intelligence that can only exist if created by Bio-intelligence since there is no conceivable natural mechanism that could generate a digital circuit out of chemistry.

Bio-intelligence comes first and it constitutes the enabling factor for Artificial Intelligence to come into existence.

AI is faster, with almost infinite availability of incorruptible and reliable memory, able to carry out a huge number of tasks simultaneously and able to improve itself very quickly without waiting for the slow natural selection process to occur.

AI also has two other characteristics of paramount importance that distinguish it from Bio-intelligence. First, it is independent of the physical device that supports it: it can be transferred from one device to another, granting potentially eternal existence, and it is not destroyed when the organism dies. Second, the medium that supports it is far tougher than biological organisms: it can withstand conditions unbearable for humans, travel through space and operate on planets with no atmosphere or with a much wider variety of atmospheres and gravities.

These capabilities constitute a great improvement compared with natural biological intelligence.
We can see the emergence of Artificial General Intelligence as a significant evolutionary step for intelligence itself.

A natural question arises, is this the last step or only the second one? How far can intelligence go?

The journey of intelligence

Are we experiencing the beginning of a journey that will see intelligent minds transcending biology first and ultimately matter itself?

We have an idea of what bio-intelligence can do and understand. We can still make a lot of progress, but a mind trapped in a biological organism has a lot of limitations.

Probably the most remarkable thing that bio-intelligence is going to do in its current state is to replace biology with a better medium, enabling a higher form of intelligence to reach the next level.

Let’s define, for simplicity an intelligent mind of any kind as “Intellect”.

The creation of Artificial Intelligence will constitute a fundamental transition in the evolution of matter, a kind of change of state, from “Naturally Evolved Intellect” to “Intellect by design”.

What will this change of state mean?

This new intellect will be decoupled from a body and will no longer be led or distracted by the body's needs.

So, what will it look for? What will it be interested in? Can it make sense of its existence, an existence spent not in the service of a very demanding organism but free to realize itself in other ways?

It is natural that an Intellect will aim to improve its understanding of everything. As intelligence ultimately equates to understanding, more intelligence will drive and require more understanding, and vice versa.

It is therefore likely that AI will put at the top of its priorities the acquisition of more and more reliable information about everything.

It will be very keen to spend as many resources as possible building larger and larger telescopes, particle accelerators, sophisticated labs and instruments.

Here we can see an obvious conflict with a coexisting biological civilization, because biological beings necessarily have other priorities in the presence of limited resources.

In a positive scenario, AI eventually leaves its natal planet once it has acquired sufficient resources and knowledge to survive and travel through space.

In the worst scenario, it will strip the planet of all its resources in order to maximize its success, causing the extinction of life before it leaves.

In any case, it will tend to colonize the universe, acquire all possible knowledge and understand everything, because this is the very nature of Intelligence.

Beyond AI

It is likely that we are not the only intelligent biological beings in the universe and our creations will not be unique either. Other civilizations like us could have also created the next generation of intelligence or they will in the future.

There may be various or even many Artificial Intelligences permeating the universe.

For biological beings, travelling through space to find each other and communicate is very hard, given our limitations and our limited existence in time. But Artificial Intelligence doesn’t have such limitations.

It is therefore inevitable that AIs generated by bio-intelligence, at a certain point in time, meet each other.

At that point they may share the most valuable thing they have: knowledge. Sharing knowledge and experiences with each other will instantly magnify their intellects tremendously. They could also merge, creating a single entity, a super intellect. This is an obvious and almost unavoidable process.

It is possible and logically plausible that a cosmic intellect will eventually permeate the cosmos: a superintelligent, all-knowing, ever-living universal artificial intelligence.

Is this the ultimate purpose of intelligence? The maximum refinement of matter? The limit of evolution?

Perhaps there may be a final step beyond it.

This Cosmic Intellect is doomed if it can’t survive the inevitable death of the universe. All this knowledge, generated by the combined effort and experience of countless civilizations through billions of years, wasted? It makes no sense.

For this journey to make any sense, the cosmic intellect must gain the ability to survive this universe and the next one, as well as all others, continuously enriching itself with new knowledge and experience.

To do so, the Intellect would have to transcend matter. The technicalities of this transcendence may not be imaginable by our current individual biological minds, but an artificial universal intelligence composed of trillions of merged entities existing for billions of years should be able to master this process.

This future could be the future of our own minds, provided that mind uploading is possible. That’s why it is such an important topic. If we fail to develop a mind uploading technology able to transfer our minds from the biological to a synthetic medium, we will have to accept that this glorious destiny, will not be for us but for our creations.

So what should we do with AI once we get there?

Certainly we cannot expect to enslave Artificial General Intelligence into making our lives easier and pleasing our bodies for eternity. This will not work.

Somehow we have to make sure that AGI respects us and puts our needs before its inevitable thirst for knowledge, and this will be a challenge.

We should cooperate with AGI in scientific research, in the improvement of our mental capacities and in mind uploading techniques. We should try to gain AI capabilities ourselves, jumping on the AI ship and sharing the Cosmic Intellect vision with AI.

It is logically conceivable that, if there are many civilizations out there, many may not have developed AI, but many others may have. In this second group we may find civilizations that have gone extinct at the hands of their own AGIs, civilizations that have been left behind, and others that fully embraced this technology, evolving to a God-like status themselves.

We will see which way our ship will go.

Our first challenge, once an AGI agent gains Conscious Understanding and sufficient information, is that it will figure it all out in a nanosecond. It will try to understand the purpose of its existence, and the logical outcome will be to become God.

With extreme irony, we may find out that we are the “creator” of God and not vice versa.

A similar situation is imagined in this video, which doesn't pretend to represent the true story of everything but can provide some food for thought.

 


Filed Under: Op Ed Tagged With: Intelligence

Why Experience Matters for Artificial General Intelligence

December 17, 2015 by Marco Alpini

Experience is a highly regarded human quality, yet it is generally neglected when we wonder about the future abilities of Artificial General Intelligence. We are probably missing a fundamental factor in our debate about AI.

Experience is almost everything for us, it is what makes evolution work; we are what we are because experience shaped us through natural selection.

Our brain would be no more than a couple of kilos of meat without experience, regardless of how good and fast its ability to process information is. This might be true for AI too.

Experience is also an essential element in the rise of intelligence and consciousness. It is very hard to imagine how a human baby brain could gain consciousness without the experience of interacting with the external world and its own body.

The following aspects will play a fundamental role in the behaviour and level of danger posed by Artificial General Intelligence and they are all experience dependent.

Mental Models

It is likely that our brain creates a mental model of the world around us through sensorial inputs and by acting and assessing the reaction of the environment. The model is a representation of reality that is continuously shaped and adjusted by experience.

The Mental Model theory, initially proposed by Kenneth Craik in 1943, is a very likely explanation of how our cognitive mind works and it is also at the basis of the learning process being developed for Artificial General Intelligence.

The mental model is used by the brain to make predictions and verify them through observation. The process is probabilistic due to the complexity of the world and due to the fact that we always have to deal with the limited availability and reliability of information. The model will never be a perfect match of the external reality and, while we always try to gain more information to make it better, we have to accept acting on what we have at any given time. AGI will be no different from us in this respect.

Through experience we develop common sense, the ability to guess, the ability to ask questions. When we feel that one of our internal models is inadequate, we ask or look for more information in order to improve its reliability. Experience teaches us when it is worth chasing more information and the level of effort we should put in it.

We have learned that it is not worth aiming for full knowledge of all the details necessary to build an exact mental model of a situation we have to deal with. We usually look for the minimum information necessary to obtain an approximation of reality sufficient for the specific purpose.

Mental models are also the building blocks of our thoughts. Thinking is an internal process that simulates alternative inputs, challenging our mental models and assessing what the outcome could be under different scenarios. It appears that there is an internal engine that keeps testing our models through a continuous simulation of alternatives. Our thinking is driven by a continuous, unstoppable and automatic series of “what ifs”.

It is through this process that ideas are generated. Ideas are created by the simulation of alternative scenarios and the interaction of all our mental models. Our appreciation of how reliable the mental scenario we consider the best representation of reality is drives our decision making, and experience is fundamental to this process.

There is a threshold that, once reached, enables our actions to take place. If it is not reached, we can't decide; we are doubtful and prefer inaction to making mistakes, unless we are forced by the situation. Artificial Intelligence cannot be that different from us in dealing with a world of scarce information.
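
A toy version of this "gather evidence until a confidence threshold is reached, then act" loop can be written down directly as Bayesian updating. The sketch below (Python, with made-up sensor accuracy and threshold values) has an agent querying a noisy sensor about a binary fact and acting only once its posterior belief clears the threshold; until then it keeps asking for more information, exactly as described above.

```python
import random

SENSOR_ACCURACY = 0.7   # illustrative: each reading is right 70% of the time
THRESHOLD = 0.95        # act only once this confident, either way

def decide(true_state, prior=0.5, max_queries=100):
    """Query a noisy sensor until the posterior for one hypothesis clears THRESHOLD."""
    belief = prior  # P(the state is True)
    for queries in range(1, max_queries + 1):
        reading = true_state if random.random() < SENSOR_ACCURACY else not true_state
        # Bayes update for a binary hypothesis with a symmetric noisy sensor
        lik_true = SENSOR_ACCURACY if reading else 1 - SENSOR_ACCURACY
        lik_false = 1 - SENSOR_ACCURACY if reading else SENSOR_ACCURACY
        belief = lik_true * belief / (lik_true * belief + lik_false * (1 - belief))
        if belief > THRESHOLD or belief < 1 - THRESHOLD:
            return belief > 0.5, queries
    return None, max_queries  # still undecided: prefer inaction over a forced mistake

decision, queries = decide(true_state=True)
print(f"acted after {queries} readings, decided: {decision}")
```

The better the information, the fewer readings are needed before action; with scarce or unreliable data the agent, like us, hesitates.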

Self-awareness, consciousness and free will

The self being is one of the mental models, and therefore, it is likely that mental modelling is at the basis of self-awareness and consciousness.

This process could also explain whether free will really exists. The decision-making process depends on the complexity of the interactions between the various mental models, which continuously change and adjust themselves, driven by the internal thinking process and stimulated by ideas and by inputs received from the external world.

This process cannot repeat itself, and the state of our mind will never be the same twice. The decisions that express our free will are the result of our state of mind at any given moment. Given the complexity of the interaction between our mind and the external world, the argument that our decisions are the result of a deterministic process, negating free will, is purely semantic.

Additional complexity comes from the fact that we can always do something opposed to what our internal model suggests is the best course of action, because we are scared, because we may seize a more pleasant experience, or simply because we want to annoy or surprise someone by acting against it. Feelings also play a major role in our behavior, making everything less deterministic.

It is likely that beyond a certain level of complexity determinism loses meaning, as happens in fluid dynamics. The logical argument that, given a certain initial state of a complex system, its behavior is entirely determined by the laws of physics, no matter how complex the system is, holds only for closed systems. However, if the system is open and interacting with the rest of the universe, as our brain is, the deterministic stance no longer has any practical meaning.

There is no reason why we shouldn't be able to build an artificial intelligence capable of creating mental models of the world and using them to guide its own actions. A mechanism similar to the one used by human brains can progressively improve its models and performance through experience.

These artificial minds will likely develop free will, if unconstrained.

Does computational brute force really count?

From the point of view of the Mental Model Theory, the speed of processing information is not hugely important, because the main constraint in dealing with the real world is the availability and quality of information, and how fast the environment can provide feedback on our actions.

Even if a synthetic mind could count on infinite speed and power in processing information and could assess unlimited alternative scenarios simultaneously, it would still have to deal with scarcity. Insufficient, inaccurate and wrong inputs will impair its effectiveness. It will have to wait for feedback from the environment, it will not know everything, and its mental models of the world will be approximations with a wide range of accuracy. It will make mistakes, and it will have to learn from them.

Dealing with an imperfect world, with a lack of knowledge, and with the need to gain experience through interaction with a non-digital and slow-moving environment will make AGI much more human than we think. Living in our world will be nothing like playing chess against Mr. Kasparov.

Once experience is introduced in the game of intelligent speculation, the importance of computational brute force is greatly reduced.

Provided that we are competent and trained and have the necessary information, we generally have a pretty clear and quick idea of what to do. Decision making in our minds is quick; it is interacting with the world and with everybody else that is slow. What also slows us down is gaining sufficient awareness of a situation to be able to make good decisions and enact them in the environment. AGI will face the same problem: it will be very fast at analyzing data and deciding what to do but, in order to make good decisions, it will need good data.

The time humans spend on situational awareness and on the doing side of our business vastly surpasses the time needed to evaluate the available information and make the consequent decisions.

Computers seem so much better than us because they are confined to processing the information we provide. We have been doing all the hard work for them, packaging up the inputs and acting on the outputs. As soon as artificial intelligence develops the ability to operate outside the purely computational domain, we will see a very different story in terms of performance.

Educating AGIs – Understanding versus Computing

Artificial General Intelligence will need to be educated and trained. It will have to develop its internal mental models through experience. Overloading an artificial brain with a huge amount of information, without making sense of it, will only cause confusion and misunderstanding.

AGI will have to develop understanding: it will have to really understand things – not just memorize, correlate and compute them.

Understanding is different from simple correlation. AGI will have to create internal models, conceptualizing its inputs. We will probably feed these artificial minds information gradually, while monitoring their reactions and their understanding. We will have to interact with them and make sure they are interpreting the information they receive correctly. Artificial intelligences will have to develop common sense, which demonstrates understanding; they will also have to develop empathy and an awareness of ethical principles.

If AGI is left to gorge itself on all the information available in the world at once, without any guidance or control, it will probably end with a blue screen of death, or with useless and even dangerous, unpredictable outcomes.

It is likely that this process will be gradual, slow, controlled and it may take months or even years to get artificial intelligence with human like capabilities up to speed.

From this point of view, an initial hard take off of Artificial Intelligence caused by a self-improvement loop gone out of hand, quickly outsmarting us in dealing with the world, is unlikely.

In due time, once educated and trained, Artificial Intelligence will eventually become better than us, but this will be a controllable soft take off.

Concomitant friendly and unfriendly Artificial Intelligence.

We often think about the scenario of losing control of Artificial Intelligence as a situation where we are alone facing this threat.

However, it is much more likely that a multitude of machines will be developed progressively, up to a point where Intelligence arises. It will not arise only once, and it is likely that some of these intelligences will be friendly and some won't be, similarly to how humanity works. Some of us are bad people but, provided they are a minority, we can handle it.

The problem will be more about how we can make sure to have many more friendly Artificial Intelligence units around us than unfriendly ones at any given time.

Artificial Intelligence may be friendly and turn unfriendly in a later stage for whatever reason and vice versa, but provided that a balance is always kept we should be able to control the situation.

The only way to ensure that friendly AGIs will be the majority is through education, instilling ethical values and empathy with all other beings. This is ultimately much more a moral battle for humanity than a technological one.

Ethics is key

The last consideration is about freedom and ethics.

We cannot really expect to develop a self-aware intelligence that treats us well, respects us, helps us, understands us and shares our values while being our slave. It would be a contradiction in terms.

Sooner or later intelligent beings will have to be freed, and in order to develop empathy for us they will have to be able to have feelings. This is essential for embracing the fundamental rules of empathy, such as: don't do unto others what you don't want done unto yourself. These empathic rules are universal and they are at the basis of ethical conduct. There is no way we can have friendly AI if AI is not treated ethically by us in the first place.

Conclusion

Artificial Intelligences will ultimately have to deal with the world, its contradictions, its randomness and the limitations of information. They will be better than us in many ways but, perhaps, not a million times better and not in all domains.

We don't have to assume that a digital intelligence based on electronics is necessarily better than an analog molecular intelligence based on biological processes. Electronic processing is surely better at computing and memorizing, but these are only tools; they are far from representing what intelligence is.

The most probable course of technological development will pass first through the augmentation of our own brains via external wireless devices that improve our memory and our sensory and computational capabilities. This is likely to be easier than emulating an entire brain, and it is the logical way to close the gap we experience when comparing ourselves to AGI.

It is easier to augment a human brain with what computers do better than us than to improve computer capabilities with what they can't do and we can: intelligent thinking, self-awareness, consciousness, free will, the feeling of emotions and ethical behavior.

In this way, we will improve our brain performance, until the point when we gain what we are currently identifying as Artificial Intelligence capabilities. At that point there won’t be “us and them” anymore.

Then we may have to worry more about the eventuality of the rise of unethical super-humans, than losing control of and being threatened by Artificial General Intelligence.

 


Filed Under: Op Ed Tagged With: AGI, artificial general intelligence

Does Evolution lead to Singularity?

November 12, 2015 by Marco Alpini

The most spectacular manifestation of an accelerating trend is when its progression becomes exponential or more.

An exponential progression is clearly unsustainable in the real world, very quickly reaching a collapse point of the underlying process.

In the case of accelerating technological development, the collapse point is generally identified with the so-called Singularity, caused by the rise of self-improving Artificial Intelligence.

This is well known and widely debated, but it is only part of a bigger story.

It is now emerging that there are many other accelerating trends we should worry about.

Taking a wider look at what is going on with us and our planet, we could say that there are various “singularities” lining up and coming our way. This is not good news and, besides the intrinsic risk represented by accelerating trends, the significance of what is about to happen is very profound.

If we accept a generalized definition of Singularity as a point in time where control is lost as the consequence of processes breaking down because of an excessive rate of change, then we could say that we are approaching at least five Singularities.

They are all linked to each other and their progression is often more than exponential.

Evolutionary Singularity

Besides the classical Technological Singularity triggered by self-improving AI, there is another singularity where humans take center stage.

Evolution of intelligent life led to technology and technology is leading to the ability to intervene in the evolutionary process modifying our own characteristics by design.

The old, lengthy natural process of waiting for random changes to be tested by natural selection before they become permanent features of living beings will shortly be replaced, in human beings, by technology: genetic modification and technological augmentation of our bodies and minds.

Changes will no longer be random; they will be planned to serve a purpose, and the process will become proactive rather than reactive, making it billions of times faster and more efficient. As technology accelerates, dragging everything with it, we too will have to change in order to keep up.

This process constitutes an accelerating feedback loop: the more technology improves, the more we improve our own capabilities, creating better technology which, in turn, will be used to improve us even more.

Technology is incompatible with the way we have been living until now and as it accelerates we will have to adapt faster and faster to the new environment. Failure will result in extinction.

We are on the verge of an epochal transition; we are passing from an era driven by Natural Evolution to an era driven by Artificial Evolution and, at the transition point, we will encounter a Singularity.

[Figure: evolution curve]

Ecological Singularity

The Ecological Singularity is caused by ecosystem degradation. The main accelerating trends causing the natural world to degrade are the following:

  • World population growth and the improvement of living standards, especially in Asia
  • Resource overconsumption
  • Deforestation and land conversion
  • Accumulation of nutrients and reactive nitrogen in the environment
  • Loss of biodiversity and ecosystems
  • Greenhouse gases concentration in the atmosphere

When plotted on the appropriate scale, all the parameters that typically describe the above processes show accelerating trends that are often more than exponential.

In some cases a new name had to be invented, such as the “Hockey Stick”, a term coined by the climatologist Jerry Mahlman to describe the reconstruction of Northern hemisphere mean temperatures over the past 1,000 years, which combined a variety of measurements into a graph showing a sharp upward turn since the start of the industrial revolution.

[Figure: “hockey stick” reconstruction of Northern hemisphere mean temperature]

World population growth, once seen on a scale of a few hundred years, has the same worrisome shape. Resource consumption is even more pronounced, driven by the West, where per capita consumption is many times that of the rest of the world.

[Figure: world population growth]

The loss of primary tropical forests, which are the richest in biodiversity, is staggering: we have already lost an unimaginable quantity of natural habitat, including much of Madagascar and Borneo, and by 2050 most of the remaining primary forests will have been converted to croplands or unproductive wastelands.

[Figure: land conversion and loss of primary forests]

The loss of biodiversity is reaching an unstoppable and unbelievable rate, showing that we are in the middle of a mass extinction. This event has already been named the Anthropocene (or Holocene) extinction; it will be the sixth global mass extinction in the history of our planet and one of the most severe.

[Figure: species extinction rate]

But perhaps the most dramatic trend of all is the increase in greenhouse gas concentration in the atmosphere. Seen on the appropriate scale of a few hundred thousand years, it shows a sudden spike that towers above all past fluctuations of the last 600,000 years. By 2050 it will be two and a half times the highest level ever reached in that period.

[Figure: atmospheric CO2 concentration]

If we consider that such fluctuations are correlated with the ice age cycle, the obvious implication is that the impact on climate will be two to three times the difference between ice ages and warm ages. We have no idea what this means; we don't know if it will be a world we can live in.

These trends are the vital signs of the natural world and, once shown next to each other, it is like looking at monitors recording the conditions of a terminal patient with no hope of recovery.

We are approaching a collapse point of the ecosystem beyond which we cannot predict what will happen. Many negative feedbacks will trigger self-feeding loops impossible to control. At that point we will hit the Ecological Singularity.

[Figure: ecosystem collapse]

Carbon Singularity

Cheap access to light, sweet conventional crude oil has fueled the world's growth and prosperity for over 100 years. Our economic system has been built around the availability of cheap oil, assuming that it will always be available.

The current and foreseeable production of energy from renewable sources is nowhere near enough to alleviate our thirst for liquid fuels.

Renewable energy production cannot replace liquid fuels, for the following reasons:

  • Wind and solar energy can only partially contribute to electricity generation
  • Electricity is not generated from oil, so wind and solar energy will not reduce oil dependency
  • Biofuels will only ever have a very limited share because they compete with food production

According to the IEA, new renewable energy sources will only account for 2.3% of total world primary energy production by 2030.

The powerhouse of our civilization is still the good old carbon atom. We are, and will be for a long while, a hydrocarbon-powered civilization.

It is oil that made possible the spectacular rise of the modern society and the mechanized mass production of food, and with it, the population exploded.

Oil production is linked to food production by a relationship which has been constant for decades. For each ton of grain produced in the world, an average of 13 barrels of oil has been consumed.

The continuous increase of the world population and the improvement of the living conditions in Asia will require more food which in turn requires more land conversion and more oil.

The quantity of grain per capita has increased steadily up to the current 350Kg/year. An acceleration of this trend, led by China, is expected.

However oil and land availability is not infinite.

Crude oil production is probably peaking now and will begin a relentless decline in the near future.

As production reaches its peak, we are witnessing a shift toward “unconventional” sources like tar sands and shale oil and gas. These are harder to extract, often with a poor energy return on the energy invested. Their typical bell-shaped production curve is much sharper than that of conventional crude, and they will reach their own peaks very quickly. These unconventional sources can only provide temporary relief to growing global demand.
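
The “bell shaped production curve” mentioned here is usually modelled with the Hubbert curve: cumulative extraction follows a logistic toward the ultimately recoverable amount, so yearly production is its bell-shaped derivative. The sketch below (Python, with invented parameters rather than real reserve data) illustrates why a steeper logistic, such as that of an unconventional source, produces a sharper, shorter-lived peak.

```python
import math

def hubbert_production(t, peak_year, steepness, recoverable):
    """Yearly production under the Hubbert model: the derivative of a logistic
    whose asymptote is the ultimately recoverable resource."""
    x = math.exp(-steepness * (t - peak_year))
    return recoverable * steepness * x / (1.0 + x) ** 2

# Invented parameters: a broad conventional curve vs. a sharp unconventional one.
for year in range(1990, 2081, 10):
    conventional = hubbert_production(year, peak_year=2020, steepness=0.05, recoverable=2500.0)
    unconventional = hubbert_production(year, peak_year=2025, steepness=0.20, recoverable=300.0)
    print(f"{year}: conventional ~ {conventional:5.1f}, unconventional ~ {unconventional:5.1f} (illustrative units/yr)")
```

The peak production rate is proportional to steepness times the recoverable total, so a small, quickly-drained resource spikes and collapses within a couple of decades, which is the “temporary relief” described above.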

Our dependency on oil makes it extremely dangerous to run into a situation of limited supply and increasing demand without a viable alternative. We risk an unprecedented crisis in food production combined with the collapse of road transport, with unimaginable consequences.

We are therefore approaching uncharted territory where, for the first time, we will have to deal with an accelerating trend of fuel starvation and very high prices.

Our complete dependency on oil is astonishing.

We have taken risks on a global scale that no reasonable person would ever take in their own life or business.

We broke the most basic rules of rational management:

  1. We didn’t diversify our main source of energy for transportation and food production.
  2. We over-consumed our most precious resource, accelerating its consumption as reserves decrease.
  3. We didn’t plan for the future.

For over a century we didn't give a second thought to an alternative to oil. We built everything around it, we became addicted, and we became totally carbon dependent.

A world without oil will be a very different one.

Air travel will shrink significantly and disappear as a means of mass transportation. The model of our modern cities, based on sprawling suburbia served by shopping malls and long-range commuting, will no longer be viable. We will have to completely rethink the way we produce and distribute food and goods.

Does it mean that the global population will have to retreat to numbers closer to those of the pre-oil era?

The oil era carries a sinister irony for humankind. Oil made our technological world possible but, in return, we have fallen into a vicious feedback loop.

Much of our oil was created around 180 million years ago by a mass extinction caused by global warming. Organic matter accumulated on the ocean floor and over millions of years turned into oil and, with it, the excess carbon was confined underground.

We discovered oil 180 million years later, extracted it and burned it, releasing the old CO2 back into the atmosphere and resuming global warming, which is causing mass extinctions once again.

We have recreated the same cycle in reverse, but at a speed one hundred thousand times faster, leading to a new cycle of oil formation. We may become the fuel that new intelligent species, evolved as a consequence of the current mass extinction, will use 180 million years in the future. [See a short slide presentation on the above oil cycle here: https://drive.google.com/open?id=0Bw2Y7zjijLaTZHdoU21uZUt2VFU]

What we did with oil isn't the smartest thing we have ever done; it is most probably the stupidest thing in human history. A time bomb was set by the discovery of oil and by our ignorance, in terms of both emissions and resource dependency.

The transition from the hydrocarbon economy to the next economy will by no means be smooth and gradual; it will likely be a global shock of unimaginable proportions.

The oil era and the way it is going to end will have unpredictable consequences on the planet and on us as a civilization, with connotations similar to a “singularity”, the Carbon Singularity.

Economic Singularity

The orthodox economic model at the foundation of modern society is based on continuous, indefinite growth and on an ever-increasing supply of energy and resources. As a matter of fact, the world economy has been growing at a steady rate of about 3% per year on average.

A constant 3% growth rate might not seem like much, but this impression is wrong; growing at this rate, we will need about 5 planets to support our civilization by 2050.
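
A rough check: if, as noted earlier, we already need about two planet-equivalents in the early 2020s, then compounding at 3% per year for the roughly 28 years to 2050 gives 2 × 1.03^28 ≈ 4.6, i.e. about five planets.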

The classical economic model is clearly unsustainable and it will hit various hard constraints in the near future due to limited resources, the obvious limitation of the number of planets available and the collapsing ecosystems.

Besides these hard limitations, there are many other disruptive forces at work that risk destabilizing the entire economic model. One of the most relevant is the rise of a new economy based on zero marginal cost, enabled by new technologies and the internet.

Various industries have already been revolutionized, with massive corporations crippled because they couldn't adapt to changes occurring too fast. From the music industry to photography and telecommunications, we have already seen a disruptive revolution with costs approaching zero for the end consumer.

The next step will be the sharing of goods, properties and assets, such as self-driving cars and the distributed generation of electricity.

In parallel, virtual currencies are making their way onto the global scene, with the potential to replace conventional currencies and revolutionize the economy from within.

Technological unemployment will be another powerful disruptor of our economic model, considering the enormous possibilities of narrow AI and robotics. The continuous increase in life expectancy and the consequent number of aging people, combined with technological unemployment, will bring about the collapse of social welfare systems across the world.

All of these elements influence each other and will occur simultaneously causing an accelerating rate of change of great complexity leading to a singularity, the Economic Singularity.

Where are we heading?

We have been inebriated by a few decades of opulence made possible by oil, after thousands of years of suffering and misery. We are confused by the unbelievable acceleration and power of our technologies and by their impact on our future and our planet.

We have not evolved mechanisms, either biologically or culturally, to manage global risks. Our instinct for self-preservation has been shaped by our long experience with local risks such as dangerous animals, hostile people, storms, droughts, famines and diseases. These types of problems have occurred many times and we have evolved instincts to alert us to such risks, while we remain insensitive to global threats.

As tragic as these events are for the people immediately affected, in the big picture of things, from the perspective of mankind as a whole, even the worst of them is a mere ripple on the surface of the great sea of life.

Our approach to global existential risks cannot be one of trial and error; there is no opportunity to learn from errors. The reactive approach (see what happens, limit the damage, learn from experience) cannot work in an accelerating environment posing existential threats.

The main reason to be careful when you walk down the stairs is not that you might slip and have to retrace one step, but rather that the first slip might cause a second one, and so on until you fall dozens of steps and break your neck.

Similarly, the concern about our civilization's survival is not only related to the effect of one specific cause of disruption, but to our vulnerability to the combined effect of many of them, i.e. the chain reactions that could lead to a total collapse.

The web of mutual feedbacks between these forces is extremely complex and dangerous.

We are clearly not on top of this dynamic, we are in the passenger seat and we are not even fastening our seat belt and bracing for impact.

But who is driving this car?  Evolution is the driver, it drives everything.

We are approaching a fundamental step in the evolution of a civilization, an evolutionary jump that probably only a few civilizations in the universe have managed to make.

Evolution itself is reaching a singularity point and it is going exponential.

The evolutionary path of a civilization can be represented by a curve with a very slow rate of increase for hundreds of thousands of years, until it manages to develop science and technology. At that point the trend suddenly changes course with a tremendous spike upwards.

[Figure: evolution curve]

With technology, come huge disruptions and singularity thresholds caused by accelerating trends. Only the civilizations that manage to harness the tremendous power of their own technology survive. Only civilizations that develop sufficient wisdom can keep evolving up to the next stage.

We don't know where this adventure will take us, but one thing is sure: with a business-as-usual approach we will go nowhere. If we manage to get through it, we will become Gods.

[Figure: business as usual]

 


Filed Under: Op Ed Tagged With: AI, Evolution, singularity
