
GoodAI CEO Marek Rosa on the General AI Challenge

Marek Rosa is the founder and CEO of Keen Software House, an independent video game development studio. After the success of titles such as Space Engineers, Marek founded and funded GoodAI with a $10 million personal investment, finally able to pursue his lifelong dream of building artificial general intelligence. Most recently, Marek launched the General AI Challenge, with $5 million in prizes to be awarded over the next few years.

During our 80-minute discussion with Marek Rosa, we cover a wide variety of interesting topics, such as: why curiosity is his driving force; his desire to understand the universe; his journey from game development into artificial general intelligence (AGI); his goal of maximizing humanity’s future options; the mission, people, and strategy behind GoodAI; his definitions of intelligence and AI; teleology and the direction of the universe; adaptation, intelligence, evolution, and survivability; roadmaps, milestones, and obstacles on the way to AGI; the importance of a theory of intelligence; the hard problem of creating good AI; and the General AI Challenge.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes or make a donation.

Who is Marek Rosa?

Marek Rosa is CEO/CTO at GoodAI, a general artificial intelligence R&D company, and CEO at Keen Software House, an independent game development studio best known for its best-seller Space Engineers (over 2 million copies sold). Both companies are based in Prague, Czech Republic. Marek has been interested in artificial intelligence since childhood. He started his career as a programmer but later transitioned to a leadership role. After the success of the Keen Software House titles, Marek was able to personally fund GoodAI, his new general AI research company building human-level artificial intelligence, with $10 million. GoodAI started in January 2014 and has grown to an international team of 20 researchers.

 


  • Frank Buss

    Thanks for the interesting interview. I’ve written a simple webserver so that everyone can try to solve the tasks of the AI challenge with a webbrowser, instead of installing Python etc.:

    http://www.frank-buss.de/ai/index.html

    Can you manage to solve the second task? How many tasks can you solve? I couldn’t even solve the first task without looking at the source code. I hope (or not?) there are some smarter AIs out there.

  • Steve Giess

    Dear Frank,

    Your post raises an interesting question. If a human of average (or even above-average) intelligence may be unable to solve a particular task, then is that task really a suitable objective measure of whether or not we have achieved AI? Are we even testing the right things? Many people are not very good at solving puzzles, for example, and yet we have to accept that, as human beings, those same people do in fact possess “generalised intelligence” in the human sense. Does this imply that if we were to achieve the goal of general AI equivalent to human-level intelligence, the AI might actually not be very good at solving puzzles, or at solving the kinds of math problems that are trivial for computers? Are our fundamental ideas of “intelligence” the problem, and is this why we have not yet achieved generalised AI despite all our advances in computing and processing speed?

    Is the question of generalised AI a bit like Douglas Adams’s “Ultimate Question” posed to the “Deep Thought” computer in The Hitchhiker’s Guide to the Galaxy? In relation to generalised human-level AI, are we asking for an answer without really having defined exactly what the “question” is? Are we in effect saying, when asked to clearly define the question, “you know, how do we make human-type generalised artificial intelligence, that THING we all have, you know, the ULTIMATE holy grail of AI thingy”? Would we even recognise true generalised AI if we saw it, or would we just think “this software’s crap, it can’t even solve this relatively simple puzzle” without stopping to consider that many humans couldn’t solve the puzzle either? Meanwhile the AI is musing on the nature of life, the universe and everything, then spends the rest of the morning thinking about lunch before falling asleep in the afternoon. You know, just like a human would.

  • Frank Buss

    Good questions. I wrote some of my questions in the comments at the end of this blog posting:
    http://blog.marekrosa.org/2017/02/first-round-of-general-ai-challenge_15.html
    In summary: I think the tasks can’t be solved by a computer program, and even many humans fail, as I did. There are just too many possibilities and not enough feedback. Usually we learn best by example, definitions, asking a teacher, and other interactive processes. With such a process, e.g. showing an example for each task, they would be trivial to solve, as demonstrated with the first task. And an AI program which learns by example would be more human-like, too.

    I think this round of the challenge encourages creating specialized AIs like Watson. Such an AI might be perfect at solving this kind of puzzle, better than many humans, but it can’t do many other useful things (the same as Watson, which can win Jeopardy but can’t drive a car), and it doesn’t help much with developing a general AI.

    Regarding asking the right question, there is a good definition of general AI on Wikipedia ( https://en.wikipedia.org/wiki/Artificial_general_intelligence ): “Artificial general intelligence (AGI) is the intelligence of a machine that could successfully perform any intellectual task that a human being can.” So I think the right question would be a challenge where an AI can drive a car (maybe in a simulated 3D world), play chess, solve some simple puzzles, win Jeopardy, etc., but where it can’t do anything at the beginning and you can train it to do any of these tasks, and any other task a human can do. This would be easy to verify. You could describe some of the tasks as examples, so that the programmers can test their AIs, but then in the challenge train it on non-public tasks which are still easy for humans. I guess the next rounds of the challenge might go in this direction, with simulated 3D environments, bots, etc. (just look at their games 🙂)
