
Practopoiesis: How cybernetics of biology can help AI

By creating any form of AI, we must copy from biology. The argument goes as follows. A brain is a biological product. And so must be its products, such as perception, insight, inference, logic, mathematics, etc. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want an AI system to be similar to a human—e.g., to get a better grade on the Turing test—the more we need to copy the biology.

When it comes to describing living systems, traditionally, we assume the approach of different explanatory principles for different levels of system organization. One set of principles is used for “low-level” biology such as the evolution of our genome through natural selection, which is a completely different set of principles than the one used for describing the expression of those genes. A yet different type of story is used to explain what our neural networks do. Needless to say, the descriptions at the very top of that organizational hierarchy—at the level of our behavior—are made by concepts that again live in their own world.

But what if it were possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into the mental operations underlying logical inferences on one hand and the relation between phenotype and genotype on the other? This request is not so outrageous. After all, all those phenomena are biological.

One can argue that such an all-embracing theory of the living would be beneficial also for further developments of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are. And does my software have enough of them?

For more inspiration, let us look into the Shannon-Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (store) certain contents. Also, we can easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to DNA located within the nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information about a human person to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
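The DNA-over-satellite comparison is easy to check with a back-of-the-envelope calculation. The figures below are assumptions of mine, not the article's: a haploid human genome of roughly 3.1 billion base pairs, 2 bits per base (four symbols), and a 2 Mbit/s deep-space downlink, roughly what modern Mars orbiters achieve.

```python
# Back-of-the-envelope check of the DNA-over-satellite example.
# All numbers are illustrative assumptions, not from the article.

BASE_PAIRS = 3.1e9      # assumed haploid human genome length
BITS_PER_BASE = 2       # log2(4) for the alphabet A, C, G, T
LINK_RATE_BPS = 2e6     # assumed satellite downlink, bits per second

genome_bits = BASE_PAIRS * BITS_PER_BASE
transfer_seconds = genome_bits / LINK_RATE_BPS

print(f"Genome size: {genome_bits / 8 / 1e6:.0f} MB")          # ~775 MB
print(f"Transfer time: {transfer_seconds / 3600:.1f} hours")   # under an hour
```

Under these assumptions the whole genome fits on a DVD and crosses the link in well under a day, which is why the article can answer "yes, easily."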

So, how about intelligence? Wouldn’t it be good to come into possession of a similarly general theory for adaptive intelligent behavior? Maybe we could use certain quantities, other than bits, that could tell us why the intelligence of plants lags behind that of primates. We might also better understand the essential ingredients that distinguish human intelligence from that of a chimpanzee. Using the same theory, we could compare an abacus, a hand-held calculator, a supercomputer, and a human intellect.

The good news is that such an overarching biological theory now exists, and it is called practopoiesis. Derived from the Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the fundamental presumption about the common property that can be found across all the different levels of organization of biological systems: gene expression mechanisms act; bacteria act; organs act; organisms as a whole act.

Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it must deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.

The key novelty of practopoiesis is that it introduces mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create the anatomy of the nervous system, or how anatomy creates behavior.

When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.

To understand those, we need to introduce the concept of the practopoietic traverse. Without going into details on what a traverse is, let us just say that it is a quantity with which one can compare the different capabilities of systems to adapt. A traverse is a kind of practopoietic equivalent to the bit of information in the Shannon-Wiener theory. Just as we can compare two communication channels by the number of bits of information transferred, we can compare two adaptive systems by the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that, the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example, when new circumstances emerge in the surrounding world.

To the best of my knowledge, no artificial intelligence algorithm in use today has more than two traverses. That means that these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and, at another level of organization, the categories into which it should learn to classify those images. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, including Homo sapiens) operate with three traverses.
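The two-traverse structure of an ordinary supervised learner can be sketched in a few lines of code. The mapping of feedback loops to traverses below is my reading of the text, not a formalization from the article: one loop applies stored knowledge to an input (classification), and a second loop adjusts that knowledge from labelled feedback (learning).

```python
# A minimal sketch of a "two-traverse" learner in the article's sense.
# Traverse 1: use existing knowledge to act on an input.
# Traverse 2: adjust the knowledge itself based on external feedback.

def classify(weights, x):
    """Apply current knowledge (weights) to an input vector."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0

def learn(weights, x, label, lr=0.1):
    """Adjust the knowledge from labelled feedback (perceptron rule)."""
    error = label - classify(weights, x)
    return [w + lr * error * xi for w, xi in zip(weights, x)]

# Tiny demo: learn the OR function (first input component is a bias of 1).
data = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 1)]
weights = [0.0, 0.0, 0.0]
for _ in range(20):
    for x, label in data:
        weights = learn(weights, x, label)

print([classify(weights, x) for x, _ in data])  # expected: [0, 1, 1, 1]
```

What such a system cannot do, no matter how long it trains, is modify the learning rule itself; that third loop is exactly what the article claims biological systems add.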

This makes a whole lot of difference in adaptive intelligence. Two-traversal systems can be super-fast and omni-knowledgeable, and their tech specs may list peta-everything, as they sometimes already do, but these systems nevertheless remain comparatively dull when compared to three-traversal systems, such as a three-year-old girl, or even a domestic cat.

To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC without any advanced AI algorithms installed.

This computer is already far faster than I am at calculations, much better at memory storage, and beats me at spell checking without its processor even getting warm. And yet, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.

Importantly, this same relationship “me vs. the computer” holds for “me vs. a modern advanced AI algorithm”. I am still the more intelligent one, although the computer may have more computational power. The relationship also holds for “AI algorithm vs. non-AI computer”. Even a small AI algorithm, implemented, say, on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy of adaptive intelligence that is determined not by memory size or the number of floating-point operations executed per second, but by the ability to learn and adapt to the environment.

A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.
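The idea of "adapting the ways in which we adapt" can be made concrete with a second feedback loop stacked on top of a first one. The sketch below is an illustrative analogy of mine, not the article's formal definition of a traverse: an inner loop adapts an estimate toward a target, while an outer loop tunes how aggressively the inner loop adapts.

```python
# Sketch of stacked adaptation: an outer loop that adapts the
# adaptation process itself. Treating hyperparameter tuning as an
# extra adaptive level is an illustrative assumption of this sketch.

def inner_update(estimate, target, lr):
    """Inner loop: adapt an estimate toward a target."""
    return estimate + lr * (target - estimate)

def outer_update(lr, prev_error, new_error):
    """Outer loop: adapt the adaptation (tune the learning rate)."""
    # Speed up when the error shrinks, back off sharply when it grows.
    return lr * 1.1 if abs(new_error) < abs(prev_error) else lr * 0.5

estimate, lr, target = 0.0, 0.3, 10.0
prev_error = target - estimate
for _ in range(50):
    estimate = inner_update(estimate, target, lr)
    new_error = target - estimate
    lr = outer_update(lr, prev_error, new_error)
    prev_error = new_error

print(estimate)  # converges toward 10.0
```

A system with only the inner loop is stuck with whatever learning rate it was given; the outer loop lets it recover when that setting turns out to be wrong, which is a small taste of what each additional traverse buys.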

We can go even one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book. Provided that the book is large enough, it can contain all of the knowledge about the world, and yet it is not adaptive: it cannot, for example, rewrite itself when something changes in that world. Typical computer software can do much more and administer many changes, but much still cannot be adjusted without a programmer. A modern AI system is smarter still and can reorganize its knowledge to a much higher degree. Nevertheless, these systems are incapable of certain types of adjustments that a human person, or an animal, can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information-processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment, also referred to as traverses.

We can thus make the following hierarchical list of the best exemplars in each adaptive category:

A book: dumbest; zero traverses

A computer: somewhat smarter; one traverse

An AI system: much smarter; two traverses

A human: rules them all; three traverses

Most importantly for the creation of strong AI, practopoiesis tells us in which direction technological developments should be heading: engineering creativity should be geared toward empowering machines with one more traverse. To match a human, a strong AI system has to have three traverses.

Practopoietic theory also explains what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used much more efficiently than in two-traversal systems. This general knowledge can be applied to the interpretation of specific novel situations, such that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique to T3-systems, is referred to as anapoiesis, and can be generally described as a capability to reconstruct cybernetic knowledge that the system once had and to use this knowledge efficiently in a given novel situation.

If biology has invented T3-systems and anapoiesis and has made good use of them, there is no reason why we should not be able to do the same in machines.

 

About the Author: 

Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here.

 


  • Carvalko

    Years ago I worked with Golay and others on a pattern recognition scheme (PRS) using, as a class discriminator, a topological operator which, using nearest-neighbor logic, expanded or contracted the subjects to be separated (imagine a rectangle and a circle, each stretched like a rubber sheet; they would distort differently). Once distorted, morphological measurements were made (perimeter, area, vacancies, etc. in the pattern). The two distinct sets of measures were then separated using statistical tests (Fisher F), generating the equation of the line of best separation. After successfully learning to ID the different subjects (acquisition, human verification, and finally machine verification), the algorithms were used to ID unknown subjects, and once ID’d an action would be taken. A hypothetical use might be to ID the two classes water and shore and, if water, classify whether the water is vacant or has a boat on it. Once the boat was identified, an action could be taken, such as sending a surveillance plane. The plane’s PRS verifies whether the ID was correct and either corrects or fortifies the initial classification. So, there is the initial loop (with feedback to establish correct ID), and when placed into operation, another feedback loop to verify and improve performance. Isn’t this an example of T3? Here is more on the idea:

    http://carvalko.com/wp-content/uploads/2013/05/PARADIGMS_IN_CELLULAR_AUTOMATA.pdf

  • Novica

    Brain = API, right?

  • Danko Nikolic

    Dear Carvalko,

    Your question is interesting, but I cannot provide a definite answer. To determine with certainty whether a system is T2 or T3, one needs to know the system very well, and I do not have enough information. It is indeed a possibility that you built a T3-system. In that case it would almost certainly be a “low-variety” T3-system, which would mean that it still does not match a human, but this time for a more ordinary reason: not enough knowledge has been given to the system.

    To determine yourself whether it is a T2- or T3-system, you can inspect it in light of the three main requirements needed to form a practopoietic hierarchy. You can find those requirements in the original article on page 2. If you can show that there are three traverses that satisfy those requirements, you have a T3-system.


  • Valerian Takashi Seethaler

    That’s why we breed.

