
artificial general intelligence

GoodAI launches “Solving the AI Race” round of the General AI Challenge

January 18, 2018 by Socrates

General AI research and development company GoodAI has launched the latest round of its General AI Challenge, "Solving the AI Race." A total of $15,000 in prizes is available for suggestions on how to mitigate the risks associated with a race to transformative AI.

The round, which is open to the general public and will run until 18 May 2018, asks participants to suggest methods of addressing the potential pitfalls of competition towards transformative AI, where:

  • Key stakeholders, including developers, may ignore or underestimate safety procedures or agreements in favor of faster utilization
  • The fruits of the technology might not be shared by the majority of people to benefit humanity, but only by a select few

The round is the latest in the General AI Challenge, which will give away $5 million in prizes in the coming years to advance the search for safe and beneficial artificial general intelligence (AGI). It is the first non-technical round of the challenge and aims to raise awareness of the AI race topic and promote it across different disciplines. GoodAI has assembled an advisory board of academics and industry leaders, including representatives from Facebook, Microsoft, Tencent, and O2 (see below for the full list).

Marek Rosa, GoodAI CTO and CEO, said: “The General AI Challenge is all about using citizen science to solve one of the most important issues of our time – the development of general AI. A truly transformative AI will have a profound impact on society, and that is why we would love to foster interdisciplinary discussion and hear from experts in different fields. AI is being recognized as a strategic issue by countries, international leaders, businesses, and other stakeholders; however, practical steps to ensure beneficial governance and safety are lacking.”

To enter, participants must submit a summary of their idea of no more than two pages and, if needed, a longer submission of unlimited length. Entries will be judged on the potential they show to maximize a positive future for humanity and on how practical they are to implement, by an expert panel made up of GoodAI and members of the General AI Challenge external advisory board.

Roman V. Yampolskiy, Professor at the University of Louisville and member of the advisory board, said: “Avoiding a race for AI is important because under race conditions researchers tend to ignore safety concerns and trade ‘getting it done right’ for ‘getting it done right now’. As an AI safety researcher, I see the development of safe and beneficial AI as the most important problem facing humanity, and so I am honored to participate in the General AI Challenge Advisory Board to help ensure safe and beneficial outcomes from this exciting competition.”

Dr Ling Ge, Chief European Representative at Tencent and member of the advisory board, added: “It is the responsibility of leaders in the world of AI to ensure that the development of AI is safe and will benefit as many people as possible. It is great to be involved with GoodAI and the General AI Challenge to push forward this idea and open up interdisciplinary discussions.”

Results of the round will be announced in July 2018. For full details of how to enter a submission visit: https://www.general-ai-challenge.org/ai-race

Related Articles
    • The 5 Million Dollar General AI Challenge is a Path to Human-level AI
    • GoodAI CEO Marek Rosa on the General AI Challenge

Filed Under: News Tagged With: artificial general intelligence, Artificial Intelligence, GoodAI

Why Experience Matters for Artificial General Intelligence

December 17, 2015 by Marco Alpini

Experience is a highly regarded human quality, yet it is generally neglected when we wonder about the future abilities of Artificial General Intelligence. We are probably missing a fundamental factor in our debate about AI.

Experience is almost everything for us; it is what makes evolution work. We are what we are because experience shaped us through natural selection.

Without experience, our brain would be no more than a couple of kilos of meat, regardless of how good and fast its ability to process information is. The same might be true for AI.

Experience is also an essential element in the rise of intelligence and consciousness. It is very hard to imagine how a human baby's brain could gain consciousness without the experience of interacting with the external world and with its own body.

The following aspects will play a fundamental role in the behavior of, and the level of danger posed by, Artificial General Intelligence, and they are all experience-dependent.

Mental Models

It is likely that our brain creates a mental model of the world around us through sensory inputs and by acting on the environment and assessing its reactions. The model is a representation of reality that is continuously shaped and adjusted by experience.

The mental model theory, initially proposed by Kenneth Craik in 1943, is a very plausible explanation of how our cognitive mind works, and it is also at the basis of the learning processes being developed for Artificial General Intelligence.

The brain uses the mental model to make predictions and verify them through observation. The process is probabilistic, owing to the complexity of the world and to the fact that we always have to deal with limited availability and reliability of information. The model will never be a perfect match for external reality and, while we always try to gain more information to improve it, we have to accept acting on what we have at any given time. AGI will be no different from us in this respect.
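To make this predict-verify loop concrete, here is a minimal sketch in Python. It is an illustration only, not something from the essay: the "external reality" is a biased coin the model can never inspect directly, the mental model is just an estimate of the bias, and the update rule, learning rate, and all names are invented for the example.

```python
import random

def update_belief(belief: float, observation: bool, weight: float = 0.1) -> float:
    """Nudge the mental model toward what was actually observed."""
    target = 1.0 if observation else 0.0
    return belief + weight * (target - belief)

true_bias = 0.7   # the external reality, never directly knowable by the model
belief = 0.5      # the agent's initial, imperfect mental model

for _ in range(100):
    observation = random.random() < true_bias    # noisy feedback from the world
    belief = update_belief(belief, observation)  # adjust the model by experience

print(f"estimate after 100 observations: {belief:.2f} (reality: {true_bias})")
```

The estimate approaches reality but never equals it exactly, which is the essay's point: the model is a useful approximation shaped by experience, not a copy of the world.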

Through experience we develop common sense, the ability to guess, and the ability to ask questions. When we feel that one of our internal models is inadequate, we ask or look for more information in order to improve its reliability. Experience teaches us when it is worth chasing more information and how much effort we should put into it.

We have learned that it is not worthwhile to aim for full knowledge of all the details necessary to build an exact mental model of a situation we have to deal with. We usually look for the minimum information necessary to obtain an approximation of reality sufficient for the specific purpose.

Mental models are also the building blocks of our thoughts. Thinking is an internal process that simulates alternative inputs, challenging our mental models and assessing what the outcome could be under different scenarios. It appears that an internal engine keeps testing our models through a continuous simulation of alternatives. Our thinking is driven by a continuous, unstoppable, and automatic series of 'what ifs?'

It is through this process that ideas are generated: they are created by the simulation of alternative scenarios and the interaction of all our mental models. Our sense of how reliably the chosen mental scenario represents reality drives our decision making, and experience is fundamental to this process.

There is a threshold that, once reached, enables our actions to take place. If it is not reached, we cannot decide: we are doubtful, and we prefer inaction to making mistakes, unless we are forced by the situation. Artificial Intelligence cannot be that different from us in dealing with a world of scarce information.
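A toy sketch of this action threshold, under invented assumptions (two candidate actions, made-up payoffs, model error as Gaussian noise, an arbitrary confidence threshold): the internal engine replays 'what if?' scenarios for each option and acts only when one option wins often enough; otherwise it stays undecided and would seek more information.

```python
import random

PAYOFF = {"act_now": 0.6, "wait_and_see": 0.4}   # hypothetical true payoffs

def what_if(action: str) -> float:
    """Simulate one alternative scenario against an imperfect mental model."""
    return PAYOFF[action] + random.gauss(0, 0.3)  # model error as noise

def decide(threshold: float = 0.75, rollouts: int = 500) -> str:
    # Confidence = how often "act_now" beats "wait_and_see" in simulation.
    wins = sum(what_if("act_now") > what_if("wait_and_see") for _ in range(rollouts))
    confidence = wins / rollouts
    if confidence >= threshold:
        return "act_now"
    if confidence <= 1 - threshold:
        return "wait_and_see"
    return "undecided: prefer inaction, seek more information"

print(decide())
```

With these made-up numbers the noise usually keeps confidence below the threshold, so the sketch typically prints the "undecided" branch, mirroring the doubt the paragraph describes.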

Self-awareness, consciousness and free will

The self is itself one of these mental models, and therefore it is likely that mental modelling is at the basis of self-awareness and consciousness.

This process could also bear on whether free will really exists. The decision-making process depends on the complexity of the interactions between the various mental models, which continuously change and adjust themselves, driven by the internal thinking process and stimulated by ideas and by inputs received from the external world.

This process can never repeat itself exactly, and the state of our mind will never be the same twice. The decisions that express our free will are the result of our state of mind at any given moment. Given the complexity of the interaction between our mind and the external world, the argument that our decisions are the result of a deterministic process negating free will is purely semantic.

Additional complexity comes from the fact that we can always do something opposed to what our internal model suggests is the best course of action, because we are scared, because we might seize a more pleasant experience, or simply because we want to annoy or surprise someone by acting against it. Feelings also play a major role in our behavior, making everything less deterministic.

It is likely that beyond a certain level of complexity determinism loses meaning, as happens in fluid dynamics. The logical argument that, given a certain initial state of a complex system, its behavior is entirely determined by the laws of physics, no matter how complex the system is, holds only for closed systems. If the system is open and interacts with the rest of the universe, as our brain does, the deterministic stance no longer has any practical meaning.

There is no reason why we shouldn't be able to build an artificial intelligence capable of creating mental models of the world and using them to guide its own actions. A mechanism similar to the one used by human brains can progressively improve its models and performance through experience.

These artificial minds will likely develop free will, if unconstrained.

Does computational brute force really count?

From the point of view of the mental model theory, the speed of processing information is not hugely important, because the main constraints in dealing with the real world are the availability and quality of information and how fast the environment can provide feedback on our actions.

Even if a synthetic mind could count on infinite speed and power in processing information, assessing unlimited alternative scenarios simultaneously, it would still have to deal with scarcity. Insufficient, inaccurate, or plain wrong inputs will impair its effectiveness. It will have to wait for feedback from the environment; it will not know everything, and its mental models of the world will be approximations with a wide range of accuracy. It will make mistakes, and it will have to learn from them.
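A back-of-the-envelope illustration of this point, with invented latencies: even a mind that decides a million times faster than its environment responds spends virtually all of its time waiting for feedback.

```python
THINK_TIME = 1e-6         # seconds per decision for a very fast synthetic mind (assumed)
ENV_FEEDBACK_TIME = 1.0   # seconds for the physical world to respond (assumed)

cycles = 1000
total = cycles * (THINK_TIME + ENV_FEEDBACK_TIME)
thinking_share = 100 * cycles * THINK_TIME / total
print(f"{cycles} act-observe cycles: ~{total:.0f}s, of which thinking is {thinking_share:.4f}%")
```

Under these assumptions, a thousand cycles take about a thousand seconds, and thinking accounts for roughly a ten-thousandth of one percent of that time.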

Dealing with an imperfect world, coping with a lack of knowledge, and needing to gain experience through interaction with a non-digital, slow-moving environment will make AGI much more human than we think. Living in our world will be nothing like playing chess with Mr. Kasparov.

Once experience is introduced into the game of intelligent speculation, the importance of computational brute force is greatly reduced.

Provided that we are competent, trained, and in possession of the necessary information, we generally have a pretty clear and quick idea of what to do. Decision making in our minds is quick; it is interacting with the world and with everybody else that is slow. What also slows us down is gaining sufficient awareness of a situation to be able to take good decisions and enact them in the environment. AGI will face the same problem: it will be very fast at analyzing data and deciding what to do but, in order to make good decisions, it will need good data.

The time humans spend building situational awareness and doing the practical side of our business vastly surpasses the time needed to evaluate the available information and make the consequent decisions.

Computers seem so much better than us because they are confined to elaborating information provided by us. We have been doing all the hard work for them, packaging up the inputs and acting upon the outputs. As soon as artificial intelligence develops the ability to operate outside the purely computational domain, we will see a very different story in terms of performance.

Educating AGIs – Understanding versus Computing

Artificial General Intelligence will need to be educated and trained. It will have to develop its internal mental models through experience. Overloading an artificial brain with a huge amount of information, without making sense of it, will only cause confusion and misunderstanding.

AGI will have to develop understanding: it will have to really understand things – not just memorize, correlate and compute them.

Understanding is different from simple correlation. AGI will have to create internal models, conceptualizing its inputs. We will probably feed these artificial minds with information gradually, while monitoring their reactions and their understanding. We will have to interact with them and make sure they are interpreting the information they receive correctly. Artificial intelligences will have to develop common sense, which demonstrates understanding; they will also have to develop empathy and an awareness of ethical principles.
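One way to picture this gradual, monitored education is a curriculum loop: present material in stages and advance only when the learner demonstrates understanding of the current stage. Everything below (the curriculum topics, the learner, the comprehension check, the pass mark) is a stand-in invented for illustration, not a description of any real system.

```python
CURRICULUM = ["sensory grounding", "language", "common sense", "ethics"]

def comprehension_score(learner: dict, topic: str) -> float:
    """Stand-in for a real test of understanding (not mere memorization)."""
    return learner.get(topic, 0.0)

def educate(learner: dict, pass_mark: float = 0.8) -> None:
    for topic in CURRICULUM:
        # Keep teaching and re-checking; never dump the next topic early.
        while comprehension_score(learner, topic) < pass_mark:
            learner[topic] = learner.get(topic, 0.0) + 0.25  # one monitored lesson
        print(f"advanced past '{topic}' with score {learner[topic]:.2f}")

educate({})
```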

If an AGI is left to gorge itself on all the information available in the world at once, without any guidance or control, it will probably end up with the equivalent of a blue screen of death, or with useless, unpredictable, and even dangerous outcomes.

It is likely that this process will be gradual, slow, and controlled, and that it may take months or even years to bring an artificial intelligence with human-like capabilities up to speed.

From this point of view, an initial hard takeoff of Artificial Intelligence, caused by a self-improvement loop gone out of hand that quickly outsmarts us in dealing with the world, is unlikely.

In due time, once educated and trained, Artificial Intelligence will eventually become better than us, but this will be a controllable soft takeoff.

Concomitant friendly and unfriendly Artificial Intelligence

We often think about the scenario of losing control of Artificial Intelligence as a situation where we are alone facing this threat.

However, it is much more likely that a multitude of machines will be developed progressively, up to the point where intelligence arises. It will arise more than once, and it is likely that some of these intelligences will be friendly and some won't, much as with humanity. Some of us are bad people but, provided they are a minority, we can handle it.

The problem will be how we can make sure there are many more friendly artificial intelligences around us than unfriendly ones at any given time.

An Artificial Intelligence may be friendly and turn unfriendly at a later stage for whatever reason, and vice versa; but provided that a balance is always kept, we should be able to control the situation.

The only way to ensure that friendly AGIs remain the majority is through education, and by instilling ethical values and empathy with all other beings. This is ultimately much more a moral battle for humanity than a technological one.

Ethics is key

The last consideration is about freedom and ethics.

We cannot really expect to develop a self-aware intelligence that treats us well, respects us, helps us, understands us and shares our values while being our slave. It would be a contradiction in terms.

Sooner or later these intelligent beings will have to be freed, and in order to develop empathy for us they will have to be able to have feelings. This is essential for embracing the fundamental rule of empathy: don't do unto others what you don't want done unto yourself. The empathic rules are universal, and they are at the basis of ethical conduct. There is no way we can have friendly AI if AI is not treated ethically by us in the first place.

Conclusion

Artificial Intelligences will ultimately have to deal with the world, its contradictions, its randomness, and the limitations of information. They will be better than us in many ways but, perhaps, not a million times better and not in all domains.

We should not assume that a digital intelligence based on electronics is necessarily better than an analog, molecular intelligence based on biological processes. Electronic processing is surely better at computing and memorizing, but these are only tools; they are far from representing what intelligence is.

The most probable course of technological development will pass first through the augmentation of our own brains, via external wireless devices that improve our memory and our sensory and computational capabilities. This is likely to be easier than emulating an entire brain, and it is the logical way to close the gap we perceive when comparing ourselves to AGI.

It is easier to augment a human brain with what computers can do better than us than to give computers what they cannot do and we can: intelligent thinking, self-awareness, consciousness, free will, the feeling of emotions, and ethical behavior.

In this way we will improve our brain performance, up to the point where we gain what we currently identify as Artificial Intelligence capabilities. At that point there won't be "us and them" anymore.

Then we may have to worry more about the rise of unethical super-humans than about losing control of, and being threatened by, Artificial General Intelligence.

 

About the Author:

Marco Alpini is an Engineer and a Manager running the Australian operations of an international construction company. While Marco is currently involved in major infrastructure projects, his professional background is in energy generation and related emerging technologies. He has developed a keen interest in the technological singularity, as well as in other accelerating trends, which he correlates with evolutionary processes leading to what he calls “The Cosmic Intellect”.

Filed Under: Op Ed Tagged With: artificial general intelligence

Peter Voss on AI: Having more intelligence will be good for mankind!

April 4, 2014 by Socrates

http://media.blubrry.com/singularity/s3.amazonaws.com/Singularity1on1/Peter-Voss.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Peter Voss is an entrepreneur, inventor, engineer, scientist, and AI researcher. He is a rather interesting and unique individual, not only because of his diverse background and impressive accomplishments but also because of his interest in moral philosophy and artificial intelligence. I had been planning to interview Voss for a while and, given how quickly our discussion went by, I will do my best to bring him back for another interview.

During our 1-hour-long conversation with Peter we cover a variety of topics such as his excitement in pursuing a dream that others have failed to accomplish for the past 50 years; whether we are rational or irrational animals; utility curves and the motivation of AGI; the importance of philosophy and ethics; Bertrand Russell and Ayn Rand; his companies A2I2 and Smart Action; his [revised] optimism and the timeline for building AGI; the Turing Test and the importance of asking questions; Our Final Invention and friendly AI; intelligence and morality…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Peter Voss?

Peter started his career as an entrepreneur, inventor, engineer, and scientist at age 16. After a few years in electronics engineering, at age 25 he started a company to provide turnkey business solutions based on self-developed software, running on micro-computer networks. Seven years later the company employed several hundred people and was successfully listed on the Johannesburg Stock Exchange.

After selling his interest in the company in 1993, he worked in a broad range of disciplines — cognitive science, philosophy, theory of knowledge, psychology, intelligence and learning theory, and computer science — which served as the foundation for achieving breakthroughs in artificial general intelligence. In 2001 he started Adaptive AI Inc., with the purpose of developing systems with a high degree of general intelligence and commercializing services based on these inventions. Smart Action Company, which utilizes an AGI engine to power its call automation service, was founded in 2008.

Peter often writes and presents on various philosophical topics including rational ethics, free will, and artificial minds; and is deeply involved with futurism and radical life extension.

Related articles
  • Steve Omohundro on Singularity 1on1: It’s Time To Envision Who We Are And Where We Want To Go
  • The World is Transformed by Asking Questions [draft]

Filed Under: Podcasts Tagged With: artificial general intelligence, Artificial Intelligence, Peter Voss

You Can’t Spell Paranoia Without AI: How I Learned to Stop Worrying and to Love Evil Artificial Intelligence

March 11, 2011 by Matt Swayne

I have a theory: It wasn’t capitalism and democracy that won the Cold War. Popular Science won the Cold War.

Popular Science and Popular Mechanics magazines, as well as other journals and magazines that took an awe-inspired, jaw-dropping look at science and technology, paid particular attention to military technology developed by Soviet bloc engineers in the 1950s and 1960s. The stories typically depicted Soviet military might as growing and unbeatable.

Sort of like runaway artificial general intelligence (AGI).

Soviet tanks had better armor.

Soviet planes were faster and more maneuverable.

Soviet subs dived deeper and plowed through the water more silently.

Soviet nuclear ICBMs were poised to strike more accurately and more powerfully.

(A great place to check out the above claims is the Popular Science Archive Search.)

We can argue about how easily the military-industrial complex co-opts this fear. (I read once that the CIA would leak exaggerated claims to stoke the Cold War fires.) But let's save that for another day. The point is that these unsubstantiated and, in the clear view of hindsight, exaggerated claims of Soviet bloc military might prompted Western engineers to design equipment that was more advanced than even these magazines' fantastic visions of threatened military dominance. Stealth technology and global positioning systems are just a few of the way-out technologies that sprang from this era of paranoia.

So, how does this relate to advanced AI and AGI?

In the debate between Evil AI and Benevolent AI, the evil side offers a grim assessment of the technology. Advanced AI has much more power to wreak destruction on the world than a pack of marauding T-72 battle tanks tearing into Western Europe through the Fulda Gap.

One scenario: An advanced form of AI would simply see humans as a virus and eradicate us.

The best case scenario for AI un-enthusiasts is that the AI will capture us and treat us as pets.

Will that happen?

Are there scenarios where these AI nightmares don't come true?

I'm not the best oddsmaker, but I can make an educated guess that the odds are about even for a transition to Benevolent AI or, at least, Indifferent AI. For instance, an incredibly advanced AI, able to tap limitless resources in ways we might not even imagine, would probably not consider mere humans competition. Why would it eradicate us? And human pets? We would make horrible pets. I'm sure any AI worth its silicon (or graphene) would rather watch paint dry on the holodeck.

Positive AI backers also suggest it's more likely that humans will interface with advanced AI than let it off its leash, so to speak.

So, with all things somewhat equal, what’s the best policy?

The best strategy when dealing with the first waves of powerful AI, which seem to have already hit the shore, is to prepare for the worst and design for the best. As long as the fear doesn't become debilitating, a healthy paranoia about the destructive capabilities of AI could help create systems that are obviously safer and possibly even more advanced than systems that disregard negative scenarios.

And, at least now, the Russian and American engineers can be on the same side.

About the Author:

Matt Swayne is a blogger and science writer. He is particularly interested in quantum computing and the development of businesses around new technologies. He writes at Quantum Quant.

Filed Under: Op Ed, What if? Tagged With: artificial general intelligence, Artificial Intelligence

Ben Goertzel on AI and the Singularity: The Future Is Ours To Create

October 26, 2010 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/186490910-singularity1on1-ben-goertzel.mp3

Podcast: Play in new window | Download | Embed

Subscribe: RSS

Today, my guest is Ben Goertzel. During our 40 min conversation, we cover a wide range of topics such as Ben’s original interest in time travel and science fiction, his decision to start working on artificial general intelligence, and his views on the potential timeline thereof, together with his evaluation of the software vs hardware requirements for building it; scientific funding for AI research in both the USA and China; the technological singularity and our chances of surviving it. (You can listen to or download the audio file above, or you can scroll down and watch the full video of the interview below.)

As attested by his short bio below, Ben is a brilliant AI scientist with a vast spectrum of interests and talents. Even more, I can’t help but mention that he is also a genuinely nice guy because, due to some technical difficulties, he was willing to patiently reschedule our interview twice before we finally managed to get it right. Given the number of other things Ben juggles in his busy schedule, it would have been only natural for him to cancel. Yet he patiently persisted, and I am grateful for and appreciative of his time.

Who is Ben Goertzel?

Dr. Ben Goertzel is CEO of AI software company Novamente LLC and bioinformatics company Biomind LLC; Chief Technology Officer of biopharma firm Genescient Corp.; leader of the open-source OpenCog AI software project; Chairman of Humanity+; Advisor to the Singularity University and Singularity Institute; Research Professor in the Fujian Key Lab for Brain-Like Intelligent Systems at Xiamen University, China; and General Chair of the Artificial General Intelligence conference series.

His research work encompasses artificial general intelligence, natural language processing, cognitive science, data mining, machine learning, computational finance, bioinformatics, virtual worlds and gaming, and other areas. He has published a dozen scientific books, nearly 90 technical papers, and numerous journalistic articles. Before entering the software industry he served as university faculty in several departments of mathematics, computer science, and cognitive science in the US, Australia, and New Zealand. He has three children and too many pets, and in his spare time enjoys creating avant-garde fiction and music, and the outdoors. For more see Goertzel.org.

Related articles
  • Making brains: Reverse engineering the human brain to achieve AI (sentientdevelopments.com)

Filed Under: Podcasts Tagged With: artificial general intelligence, Ben Goertzel, singularity podcast
