The 5 Million Dollar General AI Challenge is a Path to Human-level AI

Marek Rosa / Posted on: February 18, 2017 / Last Modified: February 18, 2017

Summary:

● Working towards my goal of creating general AI; it will be a tool that leverages discovery in every domain
● Using game development to bootstrap GoodAI
● The General AI Challenge is a way to crowdsource and diversify our search for human-level AI

***

Since my childhood, I have been working towards my goal of building human-level AI.

But I had two main areas of interest – game programming and artificial intelligence.

I spent my days and nights developing various real-time algorithms for computer graphics, aiming for a career in game development.

This period deepened my understanding of programming and computer architectures. I learned how to invent new algorithms, optimize code at the machine level, use math and physics to simulate the world, develop game engines, and do art, game design, and sound design.

When I got a bit older, I started to realize that I could combine my passion for creating games with my goal of creating general AI. If I made a successful game and got rich, I could reinvest in building general AI. The opposite direction (from AI to games) didn’t make sense.

So I founded a game studio, Keen Software House. After overcoming the initial learning curve and conducting a few experiments, my team and I developed Space Engineers. The game became a big hit and enabled me to start GoodAI and dedicate more of my time to researching and developing general AI.

I am currently transitioning from an active role in games to one of a mentor so that I can focus on AI full-time.

Game development, programming, and entrepreneurship taught me that I can achieve challenging goals by reducing the complexity of problems and their solutions. That means looking for efficient shortcuts, hacks and approximations, seeking general principles behind special cases, and investing and diversifying resources into more promising areas. I also learned that “impossible” problems can be reworked into “improbable” ones and then solved just by being persistent and relentless.

Why general AI?

Human-level AI has always been the center of my attention for a single reason – it will be a tool that leverages every other tool.

Whenever I looked into other fields of human endeavor, I felt that they would all be revolutionized by AI one day. Then I asked myself, “Why should I work in that field if I know this?”

I am always looking for ways to make a maximum impact with as few resources as possible. AI seems to fit this perfectly. AI is a natural-born optimizer.

With quantitatively and qualitatively higher, scalable, automated intelligence, we can revolutionize space exploration, medicine, science, and more, as well as solve many issues affecting society today. It doesn’t work the other way – better medicine won’t give you human-level AI.

We could use general AI to augment our natural intelligence, improve the life of every person, and truly understand this universe. And I don’t think these things are achievable without upgrading our intelligence first. Or maybe they are, but not without much more time and effort.

What is intelligence?

Viewed a certain way, all problems can be seen as search and optimization problems, and I see intelligence as a tool for searching for their solutions. The principles guiding our research center on an AI that can accumulate skills gradually and in a self-improving manner. Such an AI would be able not only to reuse the skills it has learned, but also to improve them as it acquires new ones.

Each new skill works like a heuristic that both guides and narrows the search for solutions to a given problem. Some heuristics improve the ability to search for better heuristics. Some skills work like building blocks for new, more complex skills.

Equipped with useful skills (heuristics, shortcuts, tricks), intelligence will be capable of searching for solutions in dynamic, complex, and uncertain environments. The goal of intelligence (or an intelligent agent), then, is to narrow the search space in order to find the best available solution with as few resources as possible.

Gradual and guided learning helps narrow the agent’s search space. When the agent builds skills on top of each other and is able to follow the clues given by the environment, it can reduce the number of candidate solutions and the complexity of the search space. Limited resources force the agent to focus on the most useful skills and the most promising paths.

Without the gradual and guided principle, the agent would be doomed to traverse the search space forever in the hope of finding the best solution completely by chance.
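To make the “skills as heuristics” idea concrete, here is a minimal, hypothetical Python sketch – not GoodAI’s actual architecture. The toy task (reach a target number by composing operations) and the skill representation (reusing past solutions as search prefixes) are my own assumptions for illustration: the agent keeps each solution as a reusable skill and tries those skills first on later tasks, so the number of candidates it explores shrinks as its skill set grows.

```python
# Hypothetical sketch of skills acting as heuristics that narrow a search space.
# The toy task and skill representation are assumptions for illustration only.
from itertools import product

OPS = {"+1": lambda x: x + 1, "*2": lambda x: x * 2, "*3": lambda x: x * 3}

def solve(start, target, skills, max_len=8):
    """Search over operation sequences, trying learned skills (previously
    successful sequences) as prefixes before falling back to raw search."""
    explored = 0
    prefixes = [list(s) for s in skills] + [[]]  # skills first, blank slate last
    for prefix in prefixes:
        for length in range(max_len - len(prefix) + 1):
            for tail in product(OPS, repeat=length):
                seq = prefix + list(tail)
                explored += 1
                value = start
                for op in seq:
                    value = OPS[op](value)
                if value == target:
                    return seq, explored
    return None, explored

# Gradual learning: tasks are ordered from simple to harder, and every solved task
# adds a building block that narrows the search on the tasks that follow.
skills = []
for target in [7, 14, 43]:
    seq, explored = solve(1, target, skills)
    skills.append(tuple(seq))
    print(f"target={target:>3} explored={explored:>5} solution={seq}")
```

In this toy run the second and third tasks are solved after exploring far fewer candidates than the first, which is the essence of the narrowing described above; without the reused skills, every task would be searched from scratch.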

To make the process more effective, we have been building the School for AI, an environment where our general AI agents will learn required skills in a gradual and guided manner. The School allows us to fine-tune individual learning tasks to teach the AI desired behaviors.

The School gives us more control over the kind of behaviors and skills that the AI will acquire and keep reusing in future tasks, letting us create a more predictable AI and teach it positive human biases. This will be useful for solving one of the most important AI safety challenges: value alignment between AI and humans.
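As a rough illustration of the gradual and guided idea behind the School, a curriculum can be expressed as an ordered list of learning tasks, each with a pass criterion that gates progress to the next one. The task interface, thresholds, and episode budget below are invented for this sketch; nothing here is the actual School for AI.

```python
# Hypothetical sketch of a gradual, guided curriculum: tasks are presented in order
# and the agent advances only after mastering each one.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LearningTask:
    name: str
    run_episode: Callable[[object], float]  # runs one episode, returns a score
    pass_score: float                       # mastery threshold before advancing

def run_curriculum(agent, tasks: List[LearningTask], max_episodes: int = 1000) -> bool:
    """Present tasks in order so that the skills acquired early remain available
    for the later, harder tasks."""
    for task in tasks:
        for episode in range(max_episodes):
            if task.run_episode(agent) >= task.pass_score:
                print(f"passed '{task.name}' after {episode + 1} episodes")
                break
        else:
            print(f"stuck on '{task.name}' – this task may need re-tuning")
            return False
    return True
```

Fine-tuning individual tasks then amounts to adjusting their content, ordering, and pass criteria so that the behaviors acquired in one task are exactly the ones reused in the next.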

We still have many open issues and questions to resolve, including:

  • ensuring that a system that keeps improving itself will remain on its designated path of positive self-improvement, increasing its capacity to improve with every new iteration, instead of devolving into a negative future where any further self-improvement eventually becomes impossible.
  • ensuring that the system only amplifies human intelligence and doesn’t replace it with an unpredicted set of goals.
  • being able to postpone or stop the AI’s recursive self-improvement once it has started.
  • understanding how different superintelligence will actually be when it arrives – will it be just quantitatively different, or will some threshold in quantity deliver a new qualitative change? How will this manifest? Can there be something more than intelligence?

Our main principles are described in our Framework document.

GoodAI

I founded GoodAI in January 2014 and since then we have grown to about 20 researchers and engineers.

My colleagues come from various fields – computer science, machine learning, AI, deep learning, neuroscience, psychology, behavioral sciences, etc.

GoodAI’s mission is to develop general artificial intelligence – as fast as possible – to help humanity and understand the universe.

In the early phases of our project, we were mapping the AI landscape and designing a set of potential general AI architectures.

This helped us to identify key milestones that we used to create a set of roadmaps.

These roadmaps can be compared and analyzed to identify potential roadblocks, dead ends, or large unmapped gaps.

We published one of our summarized roadmaps – however, this one mostly covers the learning phase and focuses less on the architecture building phase.

During this process, we realized the importance of big-picture thinking, and so we established the independent AI Roadmap Institute. The institute’s next task is to organize a workshop where AI researchers will propose, design, and compare a set of roadmaps, and then publish a comparison report that will serve the wider community in their research.

We currently have three groups in our team working on different general AI architectures, each with its own short-term and long-term advantages. Everyone is free to work on what they want, however they want, but we track and measure progress through the Roadmap Institute’s processes to see whether the planned path really is the most viable, promising, shortest, etc.

To support our research and development, we have built a set of tools and made them available to the public – Brain Simulator and Arnold Simulator.

We also have our GoodAI soundtrack 🙂 We are probably the only AI R&D group in the known universe that has this, as a result of me being a game developer 🙂

Next Steps

As another way to diversify the search for general AI, we have launched the first “gradual learning” round of the General AI Challenge. We are offering a total of $5mil in prizes in this multi-year challenge.

We have identified “gradual learning” as an architecture property that will enable the efficient inclusion of additional properties, making it a logical starting point to build on. The ability to reuse your existing knowledge makes you a more efficient problem-solver compared to someone who always has to start from scratch.

The goal of the gradual learning round is not to create an agent that is best at solving a particular task or at scoring the highest numbers. Gradual learning is about how efficient an agent is at learning to solve new and unseen tasks (with as little training data and as few computational resources as possible). Essentially, such an agent will be more like us humans in the way it learns.
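The sketch below shows, in hedged form, how such an evaluation could be phrased: score an agent by the number of training interactions it needs to reach a success criterion on tasks it has never seen, rather than by its peak score on any single task. The agent interface, criterion, and budget are my own assumptions for illustration, not the official challenge rules.

```python
# Hypothetical evaluation sketch: learning efficiency on unseen tasks, lower is better.
# The task/agent interface, criterion, and budget are invented for this illustration.
def evaluate_gradual_learner(agent, unseen_tasks, criterion=0.95, budget=10_000):
    """Return the total number of training interactions the agent needed to reach
    the success criterion on each previously unseen task."""
    total = 0
    for task in unseen_tasks:
        interactions = 0
        while task.success_rate(agent) < criterion and interactions < budget:
            agent.learn(task.next_example())  # the agent may reuse earlier skills here
            interactions += 1
        total += interactions
    return total
```

Two agents that eventually reach the same scores can still differ enormously on this measure, and it is the efficient one – the one that reuses what it already knows – that this round is meant to reward.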

I believe we (the AI research community in general) are on track to getting to general AI. People sometimes ask how quickly I think we will get there, and my honest answer is I don’t know. We could get there in 3 years or in 30 years. But I do believe that it will happen in this century.

Last year we set up a countdown clock in our office. It reads 3,482 days and 11 hours to general AI 🙂

It’s there to keep me and my team focused on the goal, and also to remind ourselves that we don’t have all the time in the world. Each day without general AI means another day in humanity’s struggle with many hard problems. The bottom line: human-level AI is finally within reach, and a united, driven effort will make it a reality, redefining the line between the possible and impossible.

 

About the Author:

Marek Rosa is CEO/CTO at GoodAI, a general artificial intelligence R&D company, and the CEO at Keen Software House, an independent game development studio best known for their best-seller Space Engineers (2mil+ copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. Marek started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10mil. GoodAI started in January 2014 and has grown to an international team of 20 researchers.
