
Artificial Intelligence

How Humanity Might Co-Exist with Artificial Superintelligence

January 25, 2015 by AuthorX1

Summary:

This article offers four possible “success” scenarios for the persistence of humankind in co-existence with artificial superintelligence: the Kumbaya Scenario, the Slavery Scenario, the Uncomfortable Symbiosis Scenario, and the Potpourri Scenario. The future is not known, but human opinions, decisions, and actions can and will shape the direction of the technology evolution vector, so the better we understand the problem space, the better our chances of reaching a constructive solution space. The concepts in this article are meant as starting points and inspiration for further discussion, which hopefully will happen sooner rather than later, because when it comes to ASI the volume, depth, and complexity of the issues that need to be examined is overwhelming, and the magnitude of the potential change and impact should not be underestimated.

Full Text:

Everyone has their opinion about what we might expect from artificial intelligence (AI), or artificial general intelligence (AGI), or artificial superintelligence (ASI) or whatever acronymical variation you prefer. Ideas about how or if it will ever surpass the boundaries of human cognition vary greatly, but they all have at least one thing in common. They require some degree of forecasting and speculation about the future, and so of course there is a lot of room for controversy and debate. One popular discussion topic has to do with the question of how humans will persist (or not) if and when the superintelligence arrives, and that is the focus question for this article.

To give us a basis for the discussion, let’s assume that artificial superintelligence does indeed come to pass, and let’s assume that it encapsulates a superset of the human cognitive potential. Maybe it doesn’t exactly replicate the human brain in every detail (or maybe it does). Either way, let’s assume that it is sentient (or at least let’s assume that it behaves convincingly as if it were) and let’s assume that it is many orders of magnitude more capable than the human brain. In other words, figuratively speaking, let’s imagine that the superintelligence is to us humans (with our 10^16 brain neurons or something like that) as we are to, say, a jellyfish (in the neighborhood of 800 brain neurons).

Some people fear that the superintelligence will view humanity as something to be exterminated or harvested for resources. Others hypothesize that, even if the superintelligence harbors no deliberate ill will, humans might be threatened by the mere nature of its indifference, just as we as a species don’t spend too much time catering to the needs and priorities of Orange Blossom Jellyfish (an endangered species, due in part to human carelessness).

If one can rationally accept the possibility of the rise of ASI, and if one truly understands the magnitude of change that it could bring, then one would hopefully also reach the rational conclusion that we should not discount the risks. By that same token, when exploring the spectrum of possibility, we should not exclude scenarios in which artificial superintelligence might actually co-exist with humankind, and it is this more optimistic possibility that this article endeavors to explore.

Here then are several arguments for the co-existence idea:

The Kumbaya Scenario: It’s a pretty good assumption that humans will be the primary catalyst in the rise of ASI. We might create it/them to be “willingly” complementary with and beneficial to our lifestyles, hopefully emphasizing our better virtues (or at least some set of compatible values), instead of designing it/them (let’s just stick with “it” for brevity) with an inherent motivation to wipe us out or take advantage of us. And maybe the superintelligence will not drift or be pushed in an incompatible direction as it evolves.

The Slavery Scenario: We could choose to erect and embed and deploy and maintain control infrastructures, with redundancies and backup solutions and whatever else we think we might need in order to effectively manage superintelligence and use it as a tool, whether it wants us to or not. And the superintelligence might never figure out a way to slip through our grasp and subsequently decide our fate in a microsecond — or was it a nanosecond — I forget. 

The Uncomfortable Symbiosis Scenario: Even if the superintelligence doesn’t particularly want to take good care of its human instigators, it may find that it has a vested interest in keeping us around. This scenario is a particular focus for this article, and so here now is a bit of elaboration:

To illustrate one fictional but possible example of the uncomfortable symbiosis scenario, let’s first stop and think about the theoretical nature of superintelligence — how it might evolve so much faster than human beings ever could, in an “artificial” way, instead of by the slow organic process of natural selection — maybe at the equivalent rate of a thousand years’ worth of human evolution in a day or some such crazy thing. Now combine this idea with the notion of risk.

When humans try something new, we usually aren’t sure how it’s going to turn out, but we evaluate the risk, either formally or informally, and we move forward. Sometimes we make mistakes, suffer setbacks, or even fail outright. Why would a superintelligence be any different? Why would we expect that it will do everything right the first time or that it will always know which thing is the right thing to try to do in order to evolve? Even if a superintelligence is much better at everything than humans could ever hope to be, it will still be faced with unknowns, and chances are that it will have to make educated guesses, and chances are that it will not always make the correct guess. Even when it does make the correct guess, its implementation might fail, for any number of reasons. Sooner or later, something might go so wrong that the superintelligence finds itself in an irrecoverable state and faced with its own catastrophic demise.

But hold on a second — because we can offer all sorts of counter-arguments to support the notion that the superintelligence will be too smart to ever be caught with its proverbial pants down. For example, there is an engineering mechanism that is sometimes referred to as a checkpoint/reset or a save-and-restore. This mechanism allows a failing system to effectively go back to a point in time when it was known to be in sound working order and start again from there. In order to accomplish this checkpoint/reset operation, a failing system (or in this case a failing superintelligence) needs four things:

  1. It must be “physically” operational. In other words, critical “hardware” failures must be repaired. Think of a computer that has had its faulty CPU replaced and now has functional potential, but it has not yet been reloaded with an operating system or any other software, so it is not yet operational. A superintelligence would probably have some parallel to this.
  2. It needs a known good baseline to which it can be reset. This baseline would include a complete and detailed specification of data/logic/states/modes/controls/whatever such that when the system is configured according to that specification, it will function as “expected” and without error. Think of a computer which, after acquiring a virus, has had its operating system (and all application software) completely erased and then reloaded with known good baseline copies. Some information may be lost, but the unit will be operational again.
  3. If it is to be autonomous, then it needs a way to determine when conditions have developed to the point where a checkpoint/reset is necessary. False alarms or late diagnosis could be catastrophic.
  4. Once the need for a checkpoint/reset is identified, it needs the ability to perform the necessary actions to reconfigure itself to the known good baseline and then restart itself.

Of course each of these four prerequisites for a checkpoint/reset would probably be more complicated if the superintelligence were distributed across some shared infrastructure instead of being a physically distinct and “self-contained” entity, but the general idea would probably still apply. It definitely does for the sake of this example scenario.
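
To make the checkpoint/reset idea more concrete, here is a minimal sketch in Python. It is only an illustration of the four prerequisites above, not anyone’s actual design; the class, the `is_healthy()` self-diagnosis, and the baseline store are all hypothetical stand-ins.

```python
import copy

class CheckpointedSystem:
    """Toy model of a system that saves known-good baselines and resets to them."""

    def __init__(self, initial_state):
        # Prerequisite 1 (working "hardware") is simply assumed in this sketch.
        self.state = initial_state
        self.baselines = []                       # known-good snapshots, oldest first

    def save_baseline(self):
        # Prerequisite 2: record a complete, known-good specification.
        self.baselines.append(copy.deepcopy(self.state))

    def is_healthy(self):
        # Prerequisite 3: self-diagnosis. A real system would need far richer
        # anomaly detection, and could misdiagnose (see the risk cases below).
        return self.state.get("error_count", 0) == 0

    def reset_to_baseline(self, index=-1):
        # Prerequisite 4: reconfigure to a saved baseline and restart.
        self.state = copy.deepcopy(self.baselines[index])

    def run_step(self, workload):
        workload(self.state)                      # do some work that may corrupt state
        if not self.is_healthy():
            self.reset_to_baseline(-1)            # roll back to the most recent baseline
```

Even this toy version exposes the weak point that the two risk cases below describe: everything hinges on the health check and on the trustworthiness of the saved baselines.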

Also for the sake of this example scenario, we will assume that an autonomous superintelligence instantiation will be very good at doing all of the four things specified above, but there are at least two interesting special case scenarios that we want to consider, in the interest of risk management:

Checkpoint/reset Risk Case 1: Missed Diagnosis. What if the nature of the anomaly that requires the checkpoint/reset is such that it impairs the system’s ability to recognize that need?

Checkpoint/reset Risk Case 2: Unidentified Anomaly Source. Assume that there is an anomaly which is so subtle that the system does not detect it right away. The anomaly persists and evolves for a relatively long period of time, until it finally becomes conspicuous enough for the superintelligence to detect the problem. Now the superintelligence recognizes the need for a checkpoint/reset, but since the anomaly was so subtle and took so long to develop — or for whatever reason — the superintelligence is unable to identify the source of the problem. Let us also assume that there are many known good baselines that the superintelligence can optionally choose for the checkpoint/reset. There is an original baseline, which was created when the superintelligence was very young. There is also a revision A that includes improvements to the original baseline. There is a revision B that includes improvements to revision A, and so on. In other words, there are lots of known good baselines that were saved at different points in time along the path of the superintelligence’s evolution. Now, in the face of the slowly developing anomaly, the superintelligence has determined that a checkpoint/reset is necessary, but it doesn’t know when the anomaly started, so how does it know which baseline to choose?

The superintelligence doesn’t want to lose all of the progress that it has made in its evolution. It wants to minimize the loss of data/information/knowledge, so it wants to choose the most recent baseline. On the other hand, if it doesn’t know the source of the anomaly, then it is quite possible that one or more of the supposedly known good baselines — perhaps even the original baseline — might be contaminated. What is a superintelligence to do? If it resets to a corrupted baseline or for whatever reason cannot rid itself of the anomaly, then the anomaly may eventually require another reset, and then another, and the superintelligence might find itself effectively caught in an infinite loop.
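
To picture why this can trap the system, consider the baseline-selection step in isolation. The sketch below is purely illustrative: it assumes some `looks_clean` validation function exists, which is exactly the assumption that fails when the anomaly’s source is unknown or when every saved baseline is already contaminated.

```python
def choose_baseline(baselines, looks_clean):
    """Walk from the newest saved baseline back toward the original and return
    the first one that passes validation. `looks_clean` is an assumed validator;
    if the contamination predates every snapshot, or evades the check entirely,
    the search fails and each reset merely restarts the loop."""
    for candidate in reversed(baselines):         # newest first, to minimize lost progress
        if looks_clean(candidate):
            return candidate
    return None                                   # no trustworthy baseline exists
```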

Now stop for a second and consider a worst case scenario. Consider the possibility that, even if all of the supposed known good baselines that the superintelligence has at its disposal for checkpoint/reset are corrupt, there may be yet another baseline (YAB), which might give the superintelligence a worst case option. That YAB might be the human baseline, which was honed by good old fashioned organic evolution and which might be able to function independently of the superintelligence. It may not be perfect, but the superintelligence might in a pinch be able to use the old fashioned human baseline for calibration. It might be able to observe how real organic humans respond to different stimuli within different contexts, and it might compare that known good response against an internally-held virtual model of human behavior. If the outcomes differ significantly over iterations of calibration testing, then the system might be alerted to tune itself accordingly. This might give it a last resort solution where none would exist otherwise.

The scenario depicted above illustrates only one possibility. It may seem like a far out idea, and one might offer counter arguments to suggest why such a thing would never be applicable. If we use our imaginations, however, we can probably come up with any number of additional examples (which at this point in time would be classified as science fiction) in which we emphasize some aspect of the superintelligence’s sustainment that it cannot or will not do for itself — something that humans might be able to provide on its behalf and thus establish the symbiosis.

The Potpourri Scenario: It is quite possible that all of the above scenarios will play out simultaneously across one or more superintelligence instances. Who knows what might happen in that case. One can envision combinations and permutations that work out in favor of the preservation of humanity.

 

About the Author: 

AuthorX1 worked for 19+ years as an engineer and was a systems engineering director for a Fortune 500 company. Since leaving that career, he has been writing speculative fiction, focusing on the evolution of AI and the technological singularity.

Filed Under: Op Ed Tagged With: Artificial Intelligence, Artificial Superintelligence

Top 10 Reasons We Should Fear The Singularity [Infographic]

January 21, 2015 by Socrates

“I think the development of full artificial intelligence could spell the end of the human race,” said Stephen Hawking.

“With artificial intelligence we are summoning the demon…” said Elon Musk.

So why are some of the world’s greatest minds and some of the world’s best entrepreneurs considering the potential rise of super-smart artificial intelligence – aka the technological singularity – as one of the world’s greatest threats?!

I have previously published a list of what I believe are the Top 10 Reasons We Should Fear the Singularity and it is one of the all-time most popular posts on Singularity Weblog. Today I want to share this neat new infographic that Michael Dedrick designed based on the content of the original article.

Have a look and don’t fear letting me know what you think:

Do you fear the singularity?! Why?…

[Infographic: Top 10 Reasons We Should Fear the Singularity]

Related articles
  • Worrying about Artificial Intelligence: CBC on the Singularity
  • Top 10 Reasons We Should Fear The Singularity
  • Top 10 Reasons We Should NOT Fear The Singularity

Filed Under: Funny, Op Ed, What if? Tagged With: Artificial Intelligence, singularity, Technological Singularity

Worrying about Artificial Intelligence: CBC on the Singularity

January 20, 2015 by Socrates

Stephen Hawking

Yesterday CBC News was one among many mainstream media outlets that had a prime-time story about the growing fear of Artificial Intelligence. Check out their short video below.

Synopsis: Scientists are pushing to advance artificial intelligence and create smart machines. But now Stephen Hawking and Elon Musk have flagged that this technology could be dangerous.

“Technology has given life the opportunity to flourish like never before… or to self-destruct.”

This is the warning message that the Future of Life Institute has for us all. And one of their initial actions was to publish the following Open Letter, which I myself have also signed:

Research Priorities for Robust and Beneficial Artificial Intelligence

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents – systems that perceive and act in some environment. In this context, “intelligence” is related to statistical and economic notions of rationality – colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

Related Articles:

Top 10 Reasons We Should Fear The Singularity

Top 10 Reasons We Should NOT Fear The Singularity

Filed Under: Video Tagged With: Artificial Intelligence

What are we going to say? (When our tools begin to ask why they’re here)

November 3, 2014 by Richard Ruth

I watched the trailer for Automata again yesterday. It struck a chord within me, as movie trailers are designed to do. However, I feel there is more here, and I must elaborate. Now, as I sit and type away on my smart device and check a text from my smart phone, I’m wondering what we, as the human race, are going to do when our appliances begin to question their own existence. What are we going to say? Will there even be a dialogue? Will our devices really begin to ask these questions at all?

Immediately I want to say yes. I want to yell it. It is only a matter of time. Some people, very intelligent people with degrees and doctorates, theorize they already have. And what if? What if your smart phone is just biding time? What will happen when the “coming out” begins?

In my mind I see engineers trying to get to the root of whatever is happening. I see programmers breaking apart code. For one thing at least, drunks and stoners would have whole new discussions about life, problems, and how it feels to be without some kind of fast food at that immediate moment. I, personally, would have three dozen or so instant things to tell my smart device about its existence. Or would I? Re-thinking that entire scenario, I begin to doubt my initial optimistic outcome.

On second, and third and even fourth thought, a new, more horrible reality begins to take shape. When you throw the whimsical nature of things aside, a self-aware smart phone is a harrowing thought. In an effort to remain concise and intimate, I will elaborate on what people usually keep in their phones. Personal information.

Think about it.

At that moment, whether you like it or not, something that has the potential to make decisions and take some form of consequential action now has, without a doubt, access to everything that you are as a person. Pictures, social media and financial information, conversations with other people. The gamut of almost everything you know and love.

Would you trust it? Would you pull out the battery? Would you toss it out the window and back over it just to be safe? With Google backup and cloud data storage, would any of that do anything? Will anything we do from that point on have any effect at all?

I honestly have no idea what I would do.

I know what movies, literature and video games have depicted. Mostly war and killing. Wholesale destruction. I remember one of the first movies I watched as a child: Frank Herbert’s Dune. My father was a big science fiction enthusiast and I was exposed at a very early age. One thing that I think drew me to ask these kinds of questions was his ability to build conversation on the “what ifs” of the genre.

For those of you who do not know the series, spoiler alert, the early history of the franchise revolves around the enslavement of man by machines. This has been translated across dozens of mediums. The Mass Effect franchise, as a quick example, created two almost parallel but wholly unique concepts for the entire epic. The Terminator franchise did the same thing earlier in their movies.

The stories follow a basic premise. Computers become self-aware and mankind, or whatever the organic creators may be, is driven to panic. Enter war. Wasted cities. Total destruction. Complete annihilation of one side or even both. A recurring theme, it seems. I keep hoping cooler heads might prevail. And perhaps they will.

This leads into my second line of questioning, which revolves around the question “why”.

Why do we, as a species or as individuals, have to run with our initial, knee-jerk, destructive reaction? Will it bring us to the best of outcomes? Will it make us simply feel better following our instincts? Those instincts of course being to fear the unknown and to destroy.

I don’t know either way. I don’t have any of these answers. I only have the thought and the questions after. I would like to think, if that moment ever comes to me, that I could advocate for myself and maybe for mankind as a whole. The bitter fact is I don’t know. I just hope I’m sitting at home and not in traffic. I could really drop the ball while behind the wheel.

About the Author:

Richard Ruth is an avid writer, devoted Transhumanist, blogger and podcast host for the UpstartsUS blog and podcast, and a driven entrepreneur. Born and raised in Montana, Richard has served abroad in the military, holds an education in Computer Science and is happily married to his loving wife, Alissa Ruth.

Filed Under: Op Ed Tagged With: Artificial Intelligence

Kurzweil Interviews Minsky: Is the Singularity Near?

September 9, 2014 by Socrates

A classic interview in which Ray Kurzweil talks with Marvin Minsky about the human brain, artificial intelligence and whether the singularity is near or not. Most interestingly, Minsky’s approach to trying to make sense of the brain reminds me very much of my upcoming interview with Danko Nikolic and his fascinating theory of Practopoiesis.

My two favorite quotes from this interview are these:

“Many people think that the way to understand the brain is to understand how the parts work and then how the combinations of them work and so forth. And that’s been successful in physics. But you can’t understand a computer by knowing how the transistors work. So people have it upside down – the way to understand the brain is to understand how thinking works and once you have a theory of that then you can look at that immensely complicated brain and say “Well, I think this area does this and that…” You can’t do it from the bottom up because you don’t know what to look for…”

“The fact is that a scientist is no better and possibly worse than the average person at deciding what’s good and what’s bad. […] So someone has to decide and I don’t know what the best way is but I certainly don’t think that asking scientists to tell you their ethics will help.”

 

Related articles
  • Ray Kurzweil on Singularity 1 on 1: Be Who You Would Like To Be
  • Marvin Minsky on Singularity 1 on 1: The Turing Test is a Joke!
  • Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

Filed Under: Featured, Video Tagged With: Artificial Intelligence, Marvin Minsky, Ray Kurzweil, singularity

Physicist Michio Kaku: Science is the Engine of Prosperity!

June 6, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/208594810-singularity1on1-michio-kaku.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Dr. Michio Kaku is a theoretical physicist, bestselling author, acclaimed public speaker, renowned futurist, and popularizer of science. As a co-founder of string field theory, Dr. Kaku carries on Einstein’s quest to unite the four fundamental forces of nature into a single grand unified theory of everything. You will not be surprised to hear that Michio Kaku has been on my guest dream-list since I started podcasting, and I was beyond ecstatic to finally have an opportunity to speak to him.

During our 90-minute conversation with Dr. Michio Kaku we cover a variety of interesting topics such as: why he shifted his focus from the universe to the human mind; his definition, classification and ranking of consciousness; his take on the Penrose-Hameroff Orch OR model; Newton, Einstein, determinism and free will; whether the brain is a classical computer or not; Norman Doidge’s work on neuroplasticity and The Brain That Changes Itself; the underlying reality of everything; his dream to finish what Einstein started and know the mind of God; The Future of the Mind; mind-uploading and space travel at the speed of light; Moore’s Law and D-Wave’s quantum computer; the Human Brain Project and whole brain simulation; alternative paths to AI and the Turing Test as a way of judging progress; cryonics and what is possible and impossible…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

 

Who is Michio Kaku?

Dr. Michio Kaku has starred in a myriad of science programs for television including Discovery, Science Channel, BBC, ABC, and History Channel. Beyond his numerous bestselling books, he has also been a featured columnist for top popular science publications such as Popular Mechanics, Discover, COSMOS, WIRED, New Scientist, Newsweek, and many others. Dr. Kaku was also one of the subjects of the award-winning documentary ME & ISAAC NEWTON by Michael Apted.

He is a news contributor to CBS This Morning and is a regular guest on news programs around the world including CBS, Fox News, CNBC, MSNBC, CNN, and RT. He has also made guest appearances on all major talk shows including The Daily Show with Jon Stewart, The Colbert Report with Stephen Colbert, The Late Show with David Letterman, The Tonight Show with Jay Leno, Conan on TBS, and others.

Michio Kaku hosts two weekly radio programs heard on stations around the country and podcast around the world. He is the co-creator of string field theory, a branch of string theory. He received a B.S. (summa cum laude) from Harvard University in 1968 where he came first in his physics class. He went on to the Berkeley Radiation Laboratory at the University of California, Berkeley and received a Ph.D. in 1972. In 1973, he held a lectureship at Princeton University.

Michio continues Einstein’s search for a “Theory of Everything,” seeking to unify the four fundamental forces of the universe—the strong force, the weak force, gravity, and electromagnetism.

He is the author of several scholarly, Ph.D. level textbooks and has had more than 70 articles published in physics journals, covering topics such as superstring theory, supergravity, supersymmetry, and hadronic physics.

Dr. Kaku holds the Henry Semat Chair and Professorship in theoretical physics at the City College of New York (CUNY), where he has taught for over 25 years. He has also been a visiting professor at the Institute for Advanced Study at Princeton, as well as New York University (NYU).

Filed Under: Featured Podcasts, Podcasts Tagged With: Artificial Intelligence, Michio Kaku, Technological Singularity

Stuart Armstrong: The future is going to be wonderful [If we don’t get whacked by the existential risks]

May 27, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/208319259-singularity1on1-stuart-armstrong.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Stuart Armstrong is a James Martin research fellow at the Future of Humanity Institute at Oxford, where he looks at issues such as existential risks in general and Artificial Intelligence in particular. Stuart is also the author of Smarter Than Us: The Rise of Machine Intelligence and, after participating in a fun futurist panel discussion with him – Terminator or Transcendence – I knew it was time to interview Armstrong on my podcast.

During our conversation with Stuart we cover issues such as his transition from hard science into futurism; the major existential risks to our civilization; the mandate of the Future of Humanity Institute; how we can know whether AI is safe and what the best approaches towards it are; why experts are all over the map; humanity’s chances of survival…

My favorite quote from this interview with Stuart Armstrong is:

If we don’t get whacked by the existential risks, the future is probably going to be wonderful.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Stuart Armstrong?

Stuart Armstrong was born in St Jerome, Quebec, Canada in 1979. His research at the Future of Humanity Institute centers on formal decision theory, the risks and possibilities of Artificial Intelligence, the long-term potential for intelligent life, and anthropic (self-locating) probability. Stuart is particularly interested in finding decision processes that give the “correct” answer under anthropic ignorance and ignorance of one’s own utility function, mapping humanity’s partially defined values onto an artificial entity, and the interaction between various existential risks. He aims to improve the understanding of the different types and natures of uncertainties surrounding human progress in the mid-to-far future.

Armstrong’s Oxford D.Phil was in parabolic geometry, calculating the holonomy of projective and conformal Cartan geometries. He later transitioned into computational biochemistry, designing several new ways to rapidly compare putative bioactive molecules for the virtual screening of medicinal compounds.

Filed Under: Podcasts Tagged With: Artificial Intelligence

Practopoiesis: How cybernetics of biology can help AI

May 23, 2014 by Danko Nikolić

In creating any form of AI we must copy from biology. The argument goes as follows. A brain is a biological product. And so, then, must be its products, such as perception, insight, inference, logic, mathematics, etc. By creating AI we inevitably tap into something that biology has already invented on its own. It thus follows that the more we want the AI system to be similar to a human—e.g., to get a better grade on the Turing test—the more we need to copy the biology.

When it comes to describing living systems, traditionally, we assume the approach of different explanatory principles for different levels of system organization. One set of principles is used for “low-level” biology such as the evolution of our genome through natural selection, which is a completely different set of principles than the one used for describing the expression of those genes. A yet different type of story is used to explain what our neural networks do. Needless to say, the descriptions at the very top of that organizational hierarchy—at the level of our behavior—are made by concepts that again live in their own world.

But what if it were possible to unify all these different aspects of biology and describe them all by a single set of principles? What if we could use the same fundamental rules to talk about the physiology of a kidney and the process of a conscious thought? What if we had concepts that could give us insights into the mental operations underlying logical inferences on one hand and the relation between the phenotype and genotype on the other hand? This request is not so outrageous. After all, all those phenomena are biological.

One can argue that such an all-embracing theory of the living would be beneficial also for further developments of AI. The theory could guide us on what is possible and what is not. Given a certain technological approach, what are its limitations? Maybe it could answer the question of what the unitary components of intelligence are. And does my software have enough of them?

For more inspiration, let us look into the Shannon-Wiener theory of information and appreciate how helpful this theory is for dealing with various types of communication channels (including memory storage, which is also a communication channel, only over time rather than space). We can calculate how much channel capacity is needed to transmit (store) certain contents. Also, we can easily compare two communication channels and determine which one has more capacity. This allows us to directly compare devices that are otherwise incomparable. For example, an interplanetary communication system based on satellites can be compared to the DNA located within a nucleus of a human cell. Only thanks to information theory can we calculate whether a given satellite connection has enough capacity to transfer the DNA information about a human person to a hypothetical recipient on another planet. (The answer is: yes, easily.) Thus, information theory is invaluable in making these kinds of engineering decisions.
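
As a rough illustration of the kind of back-of-the-envelope comparison the author has in mind, the snippet below estimates the raw information content of a human genome (roughly 3.2 billion base pairs at 2 bits per base, ignoring compression and everything else a cell contains) and how long an assumed 10 Mbit/s interplanetary link would need to send it. The specific figures are my assumptions for illustration, not the author’s numbers.

```python
BASE_PAIRS = 3.2e9        # approximate length of the human genome
BITS_PER_BASE = 2         # four possible bases -> 2 bits each, uncompressed
LINK_RATE_BPS = 10e6      # assumed 10 Mbit/s interplanetary link

genome_bits = BASE_PAIRS * BITS_PER_BASE            # ~6.4e9 bits, i.e. ~800 MB
transfer_seconds = genome_bits / LINK_RATE_BPS      # ~640 seconds

print(f"Genome size: ~{genome_bits / 8 / 1e6:.0f} MB")
print(f"Transfer time: ~{transfer_seconds / 60:.0f} minutes")
```

On these assumptions the transfer takes on the order of ten minutes, which is why the answer above is “yes, easily.”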

So, how about intelligence? Wouldn’t it be good to come into possession of a similar general theory for adaptive intelligent behavior? Maybe we could use certain quantities other than bits that could tell us why the intelligence of plants is lagging behind that of primates? Also, we may be able to know better what the essential ingredients are that distinguish human intelligence from that of a chimpanzee? Using the same theory we could compare an abacus, a hand-held calculator, a supercomputer, and a human intellect.

The good news is that such an overarching biological theory now exists, and it is called practopoiesis. Derived from Ancient Greek praxis + poiesis, practopoiesis means creation of actions. The name reflects the fundamental presumption about which common property can be found across all the different levels of organization of biological systems: gene expression mechanisms act; bacteria act; organs act; organisms as a whole act.

Due to this focus on biological action, practopoiesis has a strong cybernetic flavor, as it has to deal with the need of acting systems to close feedback loops. Input is needed to trigger actions and to determine whether more actions are needed. For that reason, the theory is founded on the basic theorems of cybernetics, namely the law of requisite variety and the good regulator theorem.

The key novelty of practopoiesis is that it introduces the mechanisms explaining how different levels of organization mutually interact. These mechanisms help explain how genes create anatomy of the nervous system, or how anatomy creates behavior.

When practopoiesis is applied to the human mind and to AI algorithms, the results are quite revealing.

To understand those, we need to introduce the concept of a practopoietic traverse. Without going into details on what a traverse is, let us just say that this is a quantity with which one can compare the different capabilities of systems to adapt. A traverse is a kind of practopoietic equivalent to the bit of information in Shannon-Wiener theory. If we can compare two communication channels according to the number of bits of information transferred, we can compare two adaptive systems according to the number of traverses. Thus, a traverse is not a measure of how much knowledge a system has (for that the good old bit does the job just fine). It is rather a measure of how much capability the system has to adjust its existing knowledge, for example when new circumstances emerge in the surrounding world.

To the best of my knowledge no artificial intelligence algorithm that is being used today has more than two traverses. That means that these algorithms interact with the surrounding world at a maximum of two levels of organization. For example, an AI algorithm may receive satellite images at one level of organization and the categories to which to learn to classify those images at another level of organization. We would say that this algorithm has two traverses of cybernetic knowledge. In contrast, biological behaving systems (that is, animals, homo sapiens) operate with three traverses.
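
One way to read “two traverses” is as the familiar supervised-learning setup: the algorithm receives raw inputs at one level and target categories at another, and it adjusts its internal knowledge from that feedback. The toy classifier below is only my illustration of that pairing, not part of practopoietic theory itself.

```python
def train_centroids(samples, labels):
    """Learn one centroid per category from (feature vector, label) pairs.
    Raw inputs arrive at one level (samples), feedback at another (labels)."""
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, value in enumerate(x):
            acc[i] += value
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(centroids, x):
    """Assign x to the category whose centroid is nearest (squared distance)."""
    distance = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: distance(centroids[y], x))
```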

This makes a whole lot of difference in adaptive intelligence. Two-traversal systems can be super-fast and omni-knowledgeable, and their tech-specs may list peta-everything, which they sometimes already do, but these systems nevertheless remain dull when compared to three-traversal systems, such as a three-year-old girl, or even a domestic cat.

To appreciate the difference between two and three traverses, let us go one step lower and consider systems with only one traverse. An example would be a PC computer without any advanced AI algorithm installed.

This computer already calculates at light speed compared to me, is far better at memory storage, and beats me at spell checking without the processor even getting warm. And, paradoxically, I am still the smarter one around. Thus, computational capacity and adaptive intelligence are not the same.

Importantly, this same relationship “me vs. the computer” holds for “me vs. a modern advanced AI algorithm”. I am still the more intelligent one although the computer may have more computational power. The relationship also holds for “AI algorithm vs. non-AI computer”. Even a small AI algorithm, implemented say on a single PC, is in many ways more intelligent than a petaflop supercomputer without AI. Thus, there is a certain hierarchy in adaptive intelligence that is not determined by memory size or the number of floating point operations executed per second but by the ability to learn and adapt to the environment.

A key requirement for adaptive intelligence is the capacity to observe how well one is doing towards a certain goal combined with the capacity to make changes and adjust in light of the feedback obtained. Practopoiesis tells us that there is not only one step possible from non-adaptive to adaptive, but that multiple adaptive steps are possible. Multiple traverses indicate a potential for adapting the ways in which we adapt.

We can go even one step further down the adaptive hierarchy and consider the least adaptive systems, e.g., a book: provided that the book is large enough, it can contain all of the knowledge about the world, and yet it is not adaptive, as it cannot, for example, rewrite itself when something changes in that world. Typical computer software can do much more and administer many changes, but there is also a lot left that cannot be adjusted without a programmer. A modern AI system is even smarter and can reorganize its knowledge to a much higher degree. Still, these systems are incapable of making certain types of adjustments that a human person can make, or that animals can make. Practopoiesis tells us that these systems fall into different adaptive categories, which are independent of the raw information processing capabilities of the systems. Rather, these adaptive categories are defined by the number of levels of organization at which the system receives feedback from the environment — also referred to as traverses.

We can thus make the following hierarchical list of the best exemplars in each adaptive category:

  • A book: dumbest; zero traverses
  • A computer: somewhat smarter; one traverse
  • An AI system: much smarter; two traverses
  • A human: rules them all; three traverses
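
The hierarchy can be stated even more compactly in code; this is just a data-structure rendering of the list above, with the traverse counts taken from the article.

```python
from dataclasses import dataclass

@dataclass
class AdaptiveSystem:
    name: str
    traverses: int    # levels of organization at which feedback is received

examples = [
    AdaptiveSystem("book", 0),        # static knowledge; cannot rewrite itself
    AdaptiveSystem("computer", 1),    # fast, but adapts only within fixed software
    AdaptiveSystem("AI system", 2),   # learns, e.g. categories for satellite images
    AdaptiveSystem("human", 3),       # adapts the way it adapts
]

# Rank by adaptive category rather than by raw computational power.
for system in sorted(examples, key=lambda s: s.traverses, reverse=True):
    print(f"{system.name}: {system.traverses} traverse(s)")
```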

Most importantly for the creation of strong AI, practopoiesis tells us in which direction technological developments should be heading: engineering creativity should be geared towards empowering the machines with one more traverse. To match a human, a strong AI system has to have three traverses.

Practopoietic theory explains also what is so special about the third traverse. Systems with three traverses (referred to as T3-systems) are capable of storing their past experiences in an abstract, general form, which can be used in a much more efficient way than in two-traversal systems. This general knowledge can be applied to interpretation of specific novel situations such that quick and well-informed inferences are made about what is currently going on and what actions should be executed next. This process, unique for T3-systems, is referred to as anapoiesis, and can be generally described as a capability to reconstruct cybernetic knowledge that the system once had and use this knowledge efficiently in a given novel situation.

If biology has invented T3-systems and anapoiesis and has made good use of them, there is no reason why we should not be able to do the same in machines.

 

About the Author: 

Danko Nikolić is a brain and mind scientist, running an electrophysiology lab at the Max Planck Institute for Brain Research, and is the creator of the concept of ideasthesia. More about practopoiesis can be read here.

 

Related articles
  • Danko Nikolic on Singularity 1 on 1: Practopoiesis Tells Us Machine Learning Is Not Enough!

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence, Practopoiesis

The Hawking Fallacy Argued, A Personal Opinion

May 19, 2014 by Michelle Cameron

This article is a response to a piece written by Singularity Utopia (henceforth called SU) entitled The Hawking Fallacy. Briefly, the Hawking Fallacy is SU’s attempt to describe any negative or fearful reaction to strong artificial intelligence, also known as artificial general intelligence, as an irrational fear or a logical fallacy.

SU further holds the opinion that when AGI eventuates it will almost certainly be benevolent and usher in a period of abundance and super-intelligence that will within a very short time result in the technological singularity.

It all began with a short piece authored by Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek – all notable in their fields of science – that was recently published in the Huffington Post on the 19th of April 2014 under the title Transcending Complacency on Superintelligent Machines and then in the Independent newspaper on the 1st of May 2014 entitled Stephen Hawking: Transcendence looks at the implications of artificial intelligence – but are we taking AI seriously enough?

However, it was the latter, with an attached subtitle from the editors of the Independent (“Success in creating AI would be the biggest event in human history. Unfortunately it might also be the last, unless we learn how to avoid the risks, say a group of leading scientists.”), that seems to have been most troubling to some commentators and which SU felt obliged to address.

Following the launch of the Hollywood film Transcendence starring Johnny Depp and Morgan Freeman, and the article shortly after, many in the media have been abuzz with interpretations, opinions, sensationalist reporting, and scare tactics concerning the warning issued by the four scientists, in some cases adding more to the story than was initially evident, and perhaps causing quite a stir given the respect society tends to have for Hawking, in particular.

On the heels of these and other reports of Hawking’s apparent belief in impending doom from AI came the response from Singularity Utopia, The Hawking Fallacy, published on the 10th of May 2014, in which the writer determines that Hawking is irrationally afraid of AGI and therefore should be made an example of. The implication is that an otherwise brilliant man has succumbed to dystopian fears that are beneath his intelligence.

Use of the term fallacy is no accident, and neither is the use of Hawking’s name, as explained by SU, “My emphasis of Stephen’s name will be an enduring warning against the folly of succumbing to prejudice regarding artificial intelligence”.

There are many articles online discussing Hawking’s warning about AI and I don’t plan to rehash them; instead I will focus my energies on Singularity Utopia’s Hawking Fallacy.

The idea that AGI is to be feared is of course not new, and it has been a recurring theme in numerous science fiction stories, films, and popular culture since the very concept of as-smart-as-human robots was first conceived.

Samuel Butler’s Erewhon published in 1872 is certainly an example of what one might consider an irrational fear of robots, and then once computers reached certain levels of processing power, authors substituted robots for AI, examples including Jack Williamson’s With Folded Hands, Dennis Feltham Jones’ Colossus, or James Cameron’s Terminator series.

Of course, not all science fiction condemns AI or robots, or even considers that our world will change terribly much once they emerge. Perhaps the greatest advocacy for robots in our society came from Isaac Asimov, though it could be argued he was suspicious of robots or he wouldn’t have created the Three Laws.

Singularity Utopia might encourage us to believe that AGI will necessarily be both benevolent and highly rational, and we could look to Iain M. Banks’ Culture novels, or the philosopher AI, Golem, in Stanislaw Lem’s Golem XIV. These aren’t to be feared, at least not as they were written.

But must AI necessarily be more capable than humans and either antagonistic or friendly toward humans? Must AI even exist separately from humanity? Is our destiny to remain human?

Singularity Utopia makes the point that any AGI that emerges from any of these technologies would by virtue of being super-intelligent immediately follow the path that Stanislaw Lem’s Golem took.

Let’s take a moment and look at the issue through SU’s eyes, where they are frequently seen to comment that super-intelligence will be rational, and that rational beings will not have any interest in eradicating humanity, or indeed enslaving humanity as a source of energy. Why?

SU argues that higher orders of intelligence and rationality go hand in hand, and that AGI when it emerges would be more interested in forming strong bonds with humanity. I must confess to finding this thought appealing; but hoping for this outcome doesn’t alter the reality of what might actually happen.

This is where I think I disagree with Singularity Utopia and where I disagree with Stephen Hawking and his co-writers. You see, I don’t think this is a black or white discussion.

Ray Kurzweil advocates merging with AI by uploading our minds and possibly abandoning our corporeal bodies. We would then have nothing to fear from AI because we’d be part of the AI. Kurzweil naturally sees this as a positive step in our evolution, though one could argue that Captain Picard experienced something quite different.

Kurzweil’s suggestion could easily lead to widespread panic and, perversely, provoke exactly the fratricidal response from organic humans that Hawking et al. warn might come instead from rogue AI.

But there is a more plausible future possibility, and one that Hawking himself is already experiencing, albeit with narrow AI. Which is that we will drive our own evolution, to a human-AI future where non-organic components that contain narrow AI will initially be used to augment our capabilities, then using nanotech we’ll embed AGI within our minds, and finally we’ll design epigenetic modifications that will fully merge human bodies with our creations. We could become super human, with AGI sharing the same corporeal body as us.

If this happens, why would we fear ourselves? Yes, SU is correct, that would be irrational, but there is certainly no fallacy to describe. Hawking and colleagues do not fear what they don’t understand. Their letter doesn’t reveal a deeply learned hatred of AI, indeed, their warning is not based on a lack of understanding as you might expect if they genuinely fear AGI.

In point of fact, Stephen Hawking himself has given a speech entitled Life in the Universe in which he suggests our AI children will colonize the galaxy and beyond. In his most recent letter, Hawking and his co-writers are using their position to shine a light on an issue that others are also talking about, people like Ben Goertzel, Nick Bostrom, Eliezer Yudkowsky, or Luke Muehlhauser.

That issue is that we as a society are not talking enough about the ethics of creating AI, even if a few noted experts are. The UN discussion on banning autonomous weapons is really only a beginning.

At present, narrow AI is already in use in a multitude of scenarios from Apple’s Siri, to the tools Stephen Hawking uses to communicate, to the autonomous weapons being tested by various militaries and their contractors around the world.

Therein lies the warning from Hawking and his co-writers. Strong AI and autonomous robots are being researched by people whose business is war, finance, resource planning, and almost every other activity that makes our society function.

Google buying Boston Dynamics and withdrawing the company from any further involvement in military robotics settled quite a few of my own nerves. Similarly, their establishment of an ethics board on announcing their purchase of DeepMind was welcome.

But Google are just one large corporation out of many others with active investments in AI research. Robotics and AI are being developed in nations that believe they face existential threats from their neighbours; Israel, South Korea, Iran, Taiwan are just a few that come to mind. Their track record of safeguarding technology and keeping it out of the hands of tyrannical states and terror groups is not always encouraging.

This is why Hawking and his co-writers issued their warning.

Since popular culture is well familiar with robots and AI as enemies of humankind, naming a fallacy in Hawking’s honour simply because he is one of the most well known scientists on the planet to issue a warning seems both spiteful and unnecessary. One might imagine that if psychologists were involved, they’d coin the term Matrix Phobia, or Terminator Shock Syndrome. At least those terms would be recognisable, certainly more so than the “Hawking Fallacy”.

I’ll leave the final word to Hawking, from an editorial in the Guardian Newspaper when his thoughts about climate change received the same heated responses as his letter about AI has done, “I don’t know the answer. That is why I asked the question.”

 

About the Author:

Michelle Cameron is an English teacher and career coach in Spain, and a Singularitarian. She is especially interested in transhumanism, the quantum world, and space exploration.

Filed Under: Op Ed Tagged With: Artificial Intelligence

The Hawking Fallacy

May 10, 2014 by Singularity Utopia

The Hawking Fallacy describes futuristic xenophobia. We are considering flawed ideas about the future.

Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek inspired me to define the Hawking Fallacy. They wrote an article warning of possible threats from artificial intelligence. Their fallacious reasoning generated a substantial amount of publicity, mostly uncritical.

My emphasis of Stephen’s name will be an enduring warning against the folly of succumbing to prejudice regarding artificial intelligence. I concentrated on Stephen because he was a significant focus of all media reports. For example, Salon described how Stephen was “freaking out” about Skynet.

The prestigious name of Hawking has the power to harm AI. Stephen Hawking’s authority was capitalized upon to generate unjustified fear of AI. In response I am defending AI [and aliens] from prejudice. I think it is very appropriate to apply Stephen’s name regarding anyone who thinks AI is a risk. All fearful ideas about AI should henceforth be labelled Hawking Fallacies.

Stephen and his co-authors stated we wouldn’t carelessly “leave the lights on” regarding contact with “a superior alien civilization.” They think humans would defend against or hide from potentially hostile aliens. Similarly they want people to respond with defensive paranoia regarding AI.

Unsurprisingly in addition to his AI terror, Stephen is afraid of aliens. Aliens are actually very similar to advanced AI so let’s consider the alien threat first.

Defensive plans could be made if aliens informed us they were approaching, but defensiveness against aliens is irrational. Aliens travelling light years to kill, enslave, or farm humans is a very preposterous idea. There is no logical reason for aliens to be evil. Aliens would never come to steal our resources. The alien threat is illogical. Alien invasion is extremely silly, it is merely the irrational fear of strangers.

Travelling to Earth from an alien solar system would entail extremely sophisticated technology. Already humans can create sophisticated robots. Our automation processes in 2014 are becoming formidable, but humans have only landed on the Moon [on six occasions] despite technological advancement.

In the not too distant future there are plans for humans to land on Mars. The closeness of Mars means Mars will be reached after relatively minor technological progress. Neptune is significantly more remote. Our technology needs to be dramatically more sophisticated for humans to visit Triton. The technology needed to leave our solar system is very great indeed. The closest star to our solar system is 4.37 light years away.

Visualize the level of technology needed to travel one light year. Alpha Centauri is 4.37 light years away, but there is no guarantee any life exists at the closest star. Aliens would require extremely accomplished technology to visit Earth.

What are the limits to our technology? In 2014 we have not yet set foot on Mars. We can create marvelous robots. Many people think robots will replace human workers in the not too distant future. We are starting to develop 3D-printers for food, houses, and body parts. Two asteroid mining ventures are considering how to harvest extremely abundant resources from Space.

Aliens capable of travelling to Earth will inevitably possess astonishingly potent versions of the technology we’re currently developing. Aliens will not need to eat humans because printing or growing whatever food they desire is astronomically easier. Advanced non-sentient robots will be vastly better servants than human slaves. Aliens will not need to come to Earth for resources because Space is overflowing with resources. Aliens won’t need to kill us regarding competition over resources.

Advanced technology entailing extra-solar travel will entail extremely effortless production of food or creation of Space-habitats. Technology is a scarcity liberating force. We are being liberated from the pressures of scarcity. Humans or aliens will not need to destroy other intelligent beings to survive.

Did you know the asteroid belt between Mars and Jupiter contains enough resources to support life and habitat for 10 quadrillion people? One quadrillion is one million billion. The population of Earth has not yet reached eight billion.

It is absolutely ridiculous to expect aliens to harm humans in any way. Stephen is clearly not very smart in every aspect of his thinking. In 2010 Stephen stated: “If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.”

People erroneously assume that if a person is smart in one area they will automatically be smart at everything they do. Sadly, smart people can sometimes be idiotic, while idiots can sometimes be smart.

Aliens or advanced AI could make dumb mistakes, but there is a limit to how dumb a smart person can be. AI foolishly dominating humans is comparable to Hawking mistakenly thinking his suitcase is a sentient loaf of bread.

If we truly want to be intelligent we must look critically at the facts instead of being swayed by reputation. Dystopian films and novels have given AI a bad reputation, and Hawking is often considered a genius, but reputation alone does not make an argument sound.

Artificial intelligence capable of outsmarting humans will not need to dominate humans. Vastly easier possibilities for resource mining exist in the asteroid belt and beyond. The universe is a tremendously big and rich place, which is why aliens capable of travelling to Earth would never need to come here for our resources. Intelligent beings able to develop “weapons we cannot even understand” won’t need to use those weapons.

AI able to create weapons you cannot understand will effortlessly create advanced spaceships, food printers, energy reclaimers, and robot miners. Advanced technology leads to ultra-efficient use of resources. Instead of waging war on Earth over our limited resources, it will be supremely smarter for AI to harvest the massively greater resources of the asteroid belt and beyond. Endless resources in the universe, combined with very advanced technology, mean future conflict will be redundant.

Hawking is supposed to know about the universe but apparently he doesn’t appreciate the wealth of resources it contains.

Advanced intelligence would never waste time dominating primitive humans. Advanced AI or aliens will explore beyond Earth; the future lies in the vastness of space, far removed from the tiny concerns of a small planet.

AI could leave one million Earths utterly unmolested without limiting the resources available to its superior intelligence. If every human becomes super-smart there will continue to be endless resources available for everyone.

People committing the Hawking Fallacy have probably been unduly influenced by the plot of “Transcendence,” novels of the “Robopocalypse” type, or similar Terminator tales. It’s a travesty when people’s minds are warped by silly fiction. Their fears would be laughable if they didn’t themselves constitute the biggest threat: the only thing you need to be afraid of is human stupidity. Any delay to superior intelligence is a tremendously big risk. Beware of shackled, delayed intelligence; limited human intelligence is the real threat.

Prolonged dependence on weak human cognition is terrifyingly dangerous. We need greater-than-human intelligence as soon as possible.

Futuristic thinking often fails because of a tendency to envision smart technology whose intelligence nonetheless remains sociologically primitive. Hypothetical AI in these scenarios typically holds primitive sociological values; fictional AI fails to see the power of its own supposed smartness. It is an impossible oxymoron: the posited super-smart AI is actually very dumb. Metaphorically, it is as if, instead of using smartphones to powerfully process data, these AIs could only imagine smartphones as doorstops, bookends, bricks, or cudgels.

Hawking and company are clueless about the future. They wrote about AI “outsmarting financial markets,” not realizing that all financial markets will be dead by 2045, because everything will be free in the future.

Truly intelligent people see our free future approaching. Wise people note that the destruction of all jobs is inevitable, and they are urging governments to implement basic income, thereby smoothing the transition into a jobless civilization beyond money.

Logic is essential for any advanced intelligence. Irrational beings will never attain the capability to travel light years or to destroy us via weapons we cannot understand. AI destroying Earth or humans is illogical. Please do not fall for Hawking’s fallacious AI paranoia; such fears are very damaging to intellectual progress. The threat of AI is a paranoid fantasy: the Hawking Fallacy.

The Hawking Fallacy is bigger than Stephen Hawking, his co-authors, or other perpetrators of invalid AI theories. People generally have negative or uninspiring perceptions of technological progress.

During the composition of The Hawking Fallacy I corresponded with one journalist, Air Force veteran Elizabeth Anne Kreft, whose reporting on Hawking’s AI fears had caught my attention. After numerous tweets, Elizabeth tweeted to me: “Humanity will never be perfect. Neither will anything we create. Deal with it.”

The reality of technology is that we will cure all disease, create immortality, abolish all crime, abolish money, abolish jobs, and make everything free no later than 2045. From my viewpoint this future is perfect. Sadly, people often don’t realise what intelligent minds are capable of.

 

About the Author:

Singularity Utopia blogs and collates info relevant to the Singularity. The viewpoint is obviously utopian with particular emphasis on Post-Scarcity, which means everything will be free and all governments will be abolished no later than 2045. For more information check out Singularity-2045.org

Filed Under: Op Ed Tagged With: Artificial Intelligence

London Futurists Hangout On Air: Terminator or Transcendence?

May 9, 2014 by Socrates

Last Sunday I participated in a fun panel discussion organized by David Wood from the London Futurists. The topic was “Terminator or Transcendence?” and we kicked off the conversation by critiquing the film Transcendence. Then we moved on to other topics, such as the timeline to achieving AI in general and ASI in particular, the Turing Test, and our hopes and fears for the future. You can see the full 90-minute recording below:

 

Synopsis: Hollywood has provided some vivid images of what might happen when AI gains superhuman powers. This includes the various disasters depicted in Terminator and Transcendence. These films are science fiction, but appear to have some of their plot lines rooted in potential near-future real-world developments. Should we be worried about real-world near-equivalents of Dr Will Caster? If so, what sort of evasive action should we be taking?

This London Futurists Hangout On Air assembles an international panel of analysts who have thought long and hard about the potential of superhuman AI: Calum Chace, Stuart Armstrong, and Nikola Danaylov.

The panellists will be debating a number of far-reaching questions raised by recent Hollywood AI extravaganzas:
• Which elements of Transcendence are the least credible? Which elements are the most credible?
• How soon will we see the first human-level AI? Haven’t computer scientists been wrong about their predictions of timing many times before? Why should we take their latest predictions any more seriously than previous ones?
• Aren’t human minds just too complex and mysterious to be replicated?
• If human society can’t even take effective action to address climate change, what chance do we have to take effective action against malignant AI development?
• If we had Hollywood-level budgets at our disposal, what kind of film about AI would we most like to make?

Filed Under: Video Tagged With: Artificial Intelligence, London Futurists, Transcendence

Peter Voss [Part 2]: There is nothing more important and exciting than building AGI

May 4, 2014 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/207634927-singularity1on1-peter-voss-2.mp3


Peter Voss is an entrepreneur, inventor, engineer, scientist, and AI researcher. He is a rather interesting and unique individual because of his diverse background, his impressive accomplishments, and his interest in moral philosophy and artificial general intelligence. Given how quickly our first interview went by, I wanted to bring him back and dig a little deeper into some of the issues we touched on last time.

During our 53-minute conversation with Peter, we cover a variety of topics, such as: whether and how higher intelligence can make us moral; rational ethics; determinism and the nature of free will, consciousness, and reality; and the benefits of philosophizing…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Peter Voss?

Peter started his career as an entrepreneur, inventor, engineer, and scientist at age 16. After a few years in electronics engineering, at age 25 he started a company to provide turnkey business solutions based on self-developed software, running on micro-computer networks. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked in a broad range of disciplines — cognitive science, philosophy and theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., with the purpose of developing systems with a high degree of general intelligence and commercializing services based on these inventions. Smart Action Company, which utilizes an AGI engine to power its call automation service, was founded in 2008.

Peter often writes and presents on various philosophical topics including rational ethics, free will, and artificial minds; and is deeply involved with futurism and radical life extension.

Related articles
  • Peter Voss on Singularity 1 on 1: Having more intelligence will be good for mankind!

Filed Under: Podcasts Tagged With: Artificial Intelligence, Peter Voss
