
Suzanne Gildert on Kindred AI: Non-Biological Sentiences are on the Horizon

Suzanne Gildert is co-founder and CTO of Kindred AI – a company pursuing the modest vision of “building machines with human-like intelligence.” Her startup just came out of stealth mode and I am both proud and humbled to say that this is the first ever long-form interview that Suzanne has done. Kindred AI has raised 15 million dollars from notable investors and currently employs 35 experts in its offices in Toronto, Vancouver and San Francisco. Even better, Suzanne is a long-time Singularity.FM podcast fan, total tech geek, Gothic artist, PhD in experimental physics and former D-Wave quantum computer maker. Right now I honestly can’t think of a more interesting person to have a conversation with.

During our 100-minute discussion with Suzanne Gildert we cover a wide variety of interesting topics such as: why she sees herself as a scientist, engineer, maker and artist; the interplay between science and art; the influence of Gothic art in general and the images of angels and demons in particular; her journey from experimental physics into quantum computers and embodied AI; building tools to answer questions versus intelligent machines that can ask questions; the importance of a massively transformative purpose; the convergence of robotics, the ability to move large amounts of data across networks, and advanced machine learning algorithms; her dream of a world with non-biological intelligences living among us; whether she fears AI or not; the importance of embodying intelligence and providing human-like sensory perception; whether consciousness is a classical, Newtonian emergent property or a quantum phenomenon; ethics and robot rights; self-preservation and Asimov’s Laws of Robotics; giving robots goals and values; the magnifying mirror of technology and the importance of asking questions.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes or make a donation.

Who is Suzanne Gildert?

Suzanne Gildert is co-founder and CTO of Kindred. She oversees the design and engineering of the company’s human-like robots and is responsible for the development of cognitive architectures that allow these robots to learn about themselves and their environments. Before founding Kindred, Suzanne worked as a physicist at D-Wave, designing and building superconducting quantum processors, and as a researcher in quantum artificial intelligence software applications.

Suzanne likes science outreach, retro tech art, coffee, cats, electronic music and extreme lifelogging. She is a published author of a book of art and poetry. She is passionate about robots and their role as a new form of symbiotic life in our society.

Suzanne received her Ph.D. in experimental physics from the University of Birmingham (UK) in 2008, specializing in quantum device physics, microfabrication techniques, and low-temperature measurements of novel superconducting circuits.



  • Dean Pomerleau

    Nikola,

    I hate to say it, but I think you let Suzanne off much too easy. Not the best interview I’ve seen you do.

For someone who claims to be very concerned about AI ethics and AI rights, Suzanne seems pretty nonchalant about endowing her robots with analogs of pain, especially if she believes consciousness will naturally and automatically emerge in her robot creations when they build rich models of themselves and their environment (a perspective I agree with).

She subscribes to the idea that the AIs her company is building will one day be able to recursively self-improve, but seems very unconcerned about potential problems of control and AI safety. She seemed pretty oblivious to the idea that intelligence and ethics/empathy might be completely orthogonal and independent. Instead she said she *hopes* superintelligent AIs will naturally also be super-ethical, as long as we give them embodied form (as robots) and model them on human beings. She seems to naively expect that if we send AI robots out into the world to interact with and learn from people, they’ll develop/adopt compassion and empathy, because that’s how (many? most? some? a few?) people act. You pointed out that this didn’t work out very well for Tay, Microsoft’s AI chatbot that was supposed to learn from the people it interacted with. We saw how much of a debacle that turned out to be (anti-semitism etc.), but she seemed oblivious to the potential problems with doing something similar with humanoid robots. What am I (or is she) missing?

All these perspectives seem pretty naive to me, and I was a bit surprised you didn’t push her harder about it, given how probing and hard-hitting I’ve seen you be with other guests. Why such softball questions? You even seemed to be feeding her some of the answers – many of which she didn’t even seem to pick up on.

    She clearly hasn’t thought through a lot of the potential implications – including all of the following:

    * Technological unemployment and “uberization”/off-shoring of jobs as a result of the robot avatars her company is trying to develop.

    * AI slavery and the ethics of creating robots that are sentient (even intelligent) and feel pain, but that we nonetheless force to do our dirty jobs.

    * AI rights – What happens when we create digital minds with robot bodies that are as intelligent as us, but can be easily duplicated? Does each of them get a vote? Does their owner, or the company that made them, or the government, have the right to shut them down? Do we really have a moral obligation to pay them, as Suzanne suggests?

    * Value alignment and safety concerns – how to prevent AGIs from going rogue. Heck, she literally says she wants to give her robots a sense of pain, and a survival instinct as one of their basic imperatives. Doesn’t that strike you or anyone else as a recipe for potential disaster?
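
    Nothing public specifies how Kindred would implement such drives, but in standard reinforcement-learning terms a “pain” signal is typically modeled as a negative reward term, and a “survival instinct” as a penalty on states that risk shutdown. A minimal, purely hypothetical sketch of that framing (all names and weights are invented for illustration, not Kindred’s actual design):

    ```python
    # Hypothetical sketch: "pain" and "self-preservation" as reward-shaping
    # terms in an RL loop. Names and weights are illustrative assumptions,
    # not anything Kindred has published.
    def shaped_reward(task_reward: float,
                      damage_signal: float,    # e.g. collision force, torque overload
                      shutdown_risk: float,    # e.g. estimated chance of being switched off
                      pain_weight: float = 10.0,
                      survival_weight: float = 100.0) -> float:
        """Combine task progress with 'pain' and 'survival' penalties."""
        return task_reward - pain_weight * damage_signal - survival_weight * shutdown_risk

    # The worry in one line: if survival_weight dominates, the agent is
    # rewarded for resisting shutdown -- the classic corrigibility problem.
    print(shaped_reward(task_reward=1.0, damage_signal=0.02, shutdown_risk=0.01))
    ```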

Don’t get me wrong – I liked her, and she seemed like a nice person. I got the impression you felt the same way. But if she is the CTO of a company that is getting millions of dollars from Silicon Valley venture capitalists to combine telepresence, big data and deep learning / ML that will crack the problem of AGI and build sentient, humanoid robots, I think humanity may be in quite a bit of trouble…

  • I have not seen any conclusive evidence that it can’t be done. Plus, we have seen tremendous progress in the field for the past couple of decades. And it is accelerating too.

  • Kubrickguy

I wish I were as bright and go-getting… Some people just seem to have it all effortlessly and it all falls into place, whilst the rest of us struggle. She is clearly quite brilliant… Wonderful stuff…

  • Dalton

To those criticizing Nikola: “Let him without fault cast the first stone!”

The fact of the matter is that no one has done a good job of covering what’s transpiring in the world of AI. Take all the worries about a future AI revolt against humanity. For an AI to be responsible for an attack on the human race and its possible demise, the AI would have to be autonomous, sentient, and have a motive. There are reasons to fear AI, but it should be a fear of human-directed aggression by AI in the form of automated smart weapons.

Sure, there are a lot of questions Nikola could have asked, but consider that he was interviewing a person, not a technology or a philosophy. Were he to have asked a lot of questions about the technology, he might have painted himself into a corner and ended up looking foolish. To have been harder on her he would have had to be adversarial. As anyone who has researched neural net approaches to AI and brain science in an effort to comprehend what it would take to emulate the human mind knows, this is not a topic for the faint of heart. It integrates nearly all human experience and disciplines at a depth incomparable to almost any other challenge. Nikola has demonstrated considerable breadth; he may even have all the parts to do such an interrogation, but has yet to assemble them into a comprehensible structure. Assembling everything that is needed to understand what it will take to meet this challenge is just the first step. Integrating it all into an architecturally sound structure which functionally performs feats of human cognitive prowess is the big one. There’s just so much to know and so much to implement that it’s difficult to even know where to start. As it was, Nikola’s interview ate up an hour and forty minutes.

Everyone clamors for forecasts about the future. Well, seers and fortune tellers are lucky to see 40 minutes into the future, and rarely with 20/20 foresight. It’s an exercise in futility. I suspect that Suzanne appeared nonchalant because she lacks any nefarious intent. At least DARPA was not a source of her funding! Focus your fear on the military-industrial complex, for that’s where the real danger lies – they have no interest in producing an AI that’s empathetic or has a conscience. The military has a saying: “Mine is not to ask why, mine is but to do or die.”

On the potential of super intelligent AI, I’ll begin with – what does that mean? Heck, no one can agree on what intelligence is, much less what would constitute a SUPER AI. Like everyone else, I have an opinion on what it means to be super intelligent and even on what it would take to achieve it – of course, you know what they say about opinions (like a______s, everyone has one). In comparison with all other species found on this earth, mankind would be considered a super intelligence. I don’t bear malevolence against any other species. Do you bear malevolence against other species? Based on emulating and expanding human intelligence, malevolence towards others without perceived cause is rare and transient. Fundamentally, malevolence stems from the biological drives for survival and procreation. Why would a developer endow an artificial intelligence with fundamental biological drives? Drives are needed as a source of motivation, but they need not be ones that conflict with our own.

Ya know, everyone seems to think that absorbing all the tomes of knowledge mankind has generated would produce a super intelligence. The foundation of literature produced by mankind is littered with crap. Just imagine a super intelligence that believed in alchemy and astrology, numerology, and tarot cards for predicting the future. It takes experience to discriminate between what’s real and true. The body of literature on UFOs and aliens having landed in New Mexico would convince the inexperienced that those things were real. And once the super AI had accumulated all of human knowledge, would that really make it any smarter than humanity? To be smarter, it would have to acquire knowledge that humanity doesn’t know, and that would subject its acquisition to the same constraints that humanity faces – time, resources, and funding.

Recursive self-improvement: My immediate question would be, “why would it want to?” Better to be one of the few than one of the many. There are potential benefits and value in being unique. I suspect that the production of an AI population will be accomplished by machine intelligences limited to task-specific activities. In Suzanne’s case, given the need to be profitable in order to support the continued pursuit of a human-level sentient AI, dedicated task-related AIs will be the bulk of what they produce. Achieving the higher goal of sentience will be a very long path, and she may not even see the kind of success she dreams about before the end arrives. Another reason she may appear unconcerned.

A chatbot is hardly what Suzanne’s talking about. There is no way to grow real intelligence without interacting with the environment. Knowledge without experience and assimilation lacks context and has no meaning. It’s called socialization. Children go through this, and their inexperience exemplifies the before and after. Children can be little monsters until volitional inhibitory connections finally get established. At least we can endow an artificial intelligence with those connections before setting it loose on the world. If memory is structured along multiple emotional dimensions, the resources allocated to each can go a long way towards biasing demeanor and personality traits. We have no control over that sort of thing in our offspring, but can do a lot with artificial lifeforms.
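
    One toy way to read “memory structured along multiple emotional dimensions” is as retrieval weighting: each memory carries scores on a few emotional axes, and a fixed per-robot temperament vector biases which memories dominate recall. A minimal sketch of that reading (the dimensions, memories and numbers are all invented for illustration):

    ```python
    import numpy as np

    # Toy illustration: a fixed "temperament" vector decides how much weight
    # each emotional dimension gets when ranking memories for recall.
    # Dimensions are [fear, joy, curiosity]; everything here is invented.
    memories = [
        ("dropped a glass",  np.array([0.8, 0.0, 0.1])),
        ("solved a puzzle",  np.array([0.0, 0.6, 0.9])),
        ("met a new person", np.array([0.2, 0.7, 0.8])),
    ]
    timid   = np.array([2.0, 0.5, 0.5])  # over-weights fear
    curious = np.array([0.3, 0.8, 2.0])  # over-weights curiosity

    def recall_order(temperament):
        """Rank memories by temperament-weighted emotional salience."""
        return sorted(memories, key=lambda m: -float(m[1] @ temperament))

    print([m[0] for m in recall_order(timid)])    # fear-laden memory ranks first
    print([m[0] for m in recall_order(curious)])  # exploratory memories rank first
    ```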

AI slavery? What’s the difference between AI slavery and economic slavery? I loved the comment made in a movie long ago (His Girl Friday) in which Rosalind Russell, addressing her news reporter colleagues, bid them farewell by calling them wage slaves. Admittedly, there’s a difference, in that humans do not own each other these days (in this country), though in bad economies the difference seems to diminish (Tennessee Ernie Ford once sang a song about coal mining and owing his soul to the company store – anyone supporting a family knows this feeling!). I think the way to look at the potential for AI slavery is to recognize that there will be a complementary relationship between humans and robots. Each will require some form of reward, but I’m at a loss to predict what a sentient AI automaton would find rewarding. I’m sure it will tell me.

    All in all, cut Nikola a little slack. You can’t cover everything in an hour and 40 minutes.

  • Muz

I know I’m more than a little late to this discussion, but what I believe you and more than a few others get confused by is the difference between computation and various types of neuronal modelling. After all, the human brain could be seen as a biological substrate running the model of a sentient neuronal structure. There is no proof that all that the human brain does is necessary for sentience. There is no proof that the neuronal model implemented in the human brain can’t also be modeled in other substrates to produce sentience. There is no proof that the essential neuronal (brain) elements necessary for sentience couldn’t be modeled within a classical (though powerful) computer.
To lump all modelling of a brain (e.g. human) at various resolutions (including deep learning) into the ‘computation bin’ and discount the possibility of future success at sentience requires proof that sentience is disallowed by physics… which it obviously is not. At the very least, modelling of the brain will improve with the real-time understanding (through mapping and scanning) of the brain until the models produced are of sufficient resolution to produce sentience as a byproduct. This is without any clever tricks or leaps forward, and even if it takes another 70 years.
In the meantime all these different approaches have a chance of success until proven otherwise.

  • Muz

I liked this episode a lot as it lined up with the thinking I’ve had over the last 2-3 years. The only confusion I had was with the nature of the training she proposed for the AI-empowered robot/android. Training an ‘immature’ AI with the basics of vision, hearing, touch and movement (action with sense feedback) must eventually empower it with an internal physical world view/understanding of sorts (similar to a child). This can then be built on with higher-abstraction understanding gained by further experience with action and sense (i.e. with the telepresence).
I was wondering at which point their AI can learn by being given verbal or textual instruction while stringing together concepts from previous situations which can be applied in new contexts.
This would then seem to not require the mounds of training needed when starting an immature AI on a new physical activity.
For example, my son can drive a car pretty well after a few lessons (with his existing physical-world understanding as a 16-year-old), but AI-driven cars require 100,000s of kms of driving training to do less well.
I wonder if Suzanne (Kindred) envisages that an AI with a physical-world understanding, grown from an AI in a body, could be the basis for all AIs that interact in the physical world regardless of the subsequent specialties or situations, e.g. as a car and driving, as a plane and flying, etc.
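
    Muz’s question maps onto what is usually called transfer learning: pretrain a general physical-world encoder once, then train only a small task-specific head (driving, flying) per new specialty. A minimal PyTorch sketch of that idea, assuming such a pretrained encoder exists (all module names and sizes are hypothetical):

    ```python
    import torch
    import torch.nn as nn

    class WorldEncoder(nn.Module):
        """Stand-in for a pretrained, embodied physical-world model."""
        def __init__(self, obs_dim=128, embed_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                     nn.Linear(256, embed_dim))
        def forward(self, obs):
            return self.net(obs)

    encoder = WorldEncoder()        # in reality: load pretrained weights here
    for p in encoder.parameters():  # freeze the shared world understanding
        p.requires_grad = False

    driving_head = nn.Linear(64, 3)  # e.g. steer / throttle / brake
    optimizer = torch.optim.Adam(driving_head.parameters(), lr=1e-3)

    obs = torch.randn(32, 128)       # a batch of (fake) sensor observations
    target = torch.randn(32, 3)      # (fake) supervised driving demonstrations
    loss = nn.functional.mse_loss(driving_head(encoder(obs)), target)
    loss.backward()
    optimizer.step()
    # Only the small head is trained, which is why far fewer "lessons" would
    # be needed than when training the whole system from scratch.
    ```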

  • Colin Hales

    All I can do for you here is reveal the same meme running throughout your comment and get you to cogitate on it a bit.

    Throughout the entire history of science we made artificial versions of nature by replicating the physics involved in the natural original. The physics became re-jigged but the essential physics always remained. The whole of science history is a tale of such processes.

    Except where we get to the brain.

    In the brain, for historical reasons of accident only, because the brain appears to be processing ‘information’, we have confused computed models of what the brain is doing with what the brain is doing.

If this were flight then we’d all be in flight simulators. I am not saying it is impossible for a computed model to be sufficiently similar to the natural original in the case of the brain. I am saying that this is the first time in history that we have assumed equivalence and forgotten what the actual scientific process involves.

    I wrote about this here:
    https://theconversation.com/the-modern-phlogiston-why-thinking-machines-dont-need-computers-7881

There are two signalling systems in the brain: action potentials and electromagnetic field coupling. If this were normal science we’d all be replicating this signalling, just like we replicate air/flight-surface physics to make artificial flight. We have never even started doing this!
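
    For concreteness, here is a toy model of what replicating both signalling systems might begin to look like: two leaky integrate-and-fire neurons coupled both by a conventional spike-triggered synapse and by a weak continuous field term (what neuroscientists call ephaptic coupling). This is a minimal illustrative sketch, not anything from Hales’ article; all parameters are arbitrary:

    ```python
    import numpy as np

    # Two leaky integrate-and-fire neurons with (a) a spike-triggered synapse
    # standing in for action-potential signalling and (b) a weak continuous
    # coupling term standing in for electromagnetic/ephaptic field effects.
    dt, steps = 0.1, 2000
    tau, v_rest, v_thresh, v_reset = 10.0, 0.0, 1.0, 0.0
    syn_weight, field_gain = 0.3, 0.05

    v = np.zeros(2)      # membrane potentials of neurons 0 and 1
    spikes = [[], []]

    for t in range(steps):
        i_ext = np.array([0.12, 0.0])          # only neuron 0 is driven externally
        i_field = field_gain * (v[::-1] - v)   # (b) continuous field coupling
        v += dt * ((v_rest - v) / tau + i_ext + i_field)
        for n in range(2):
            if v[n] >= v_thresh:
                spikes[n].append(t * dt)
                v[n] = v_reset
                v[1 - n] += syn_weight         # (a) spike-triggered synaptic kick

    print(f"neuron 0 spiked {len(spikes[0])} times, neuron 1 {len(spikes[1])} times")
    ```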

Instead we have assumed none of this physics is essential (with no proven physical principle that this is the case; if it were, then computed models of flight would fly!), thrown it all out from day 1 in the 1950s, and we are still not making artificial intelligence by the original scientific process of replicating the physics.

    That is all I want to see: normalisation of the science.

    So your ‘no proof’ statements are all backwards. What you do is replicate what the brain does and THEN eliminate it in a reductive fashion to see what goes missing. What you do not do, for 70 years non-stop, is assume the physics is all irrelevant and then compute your way around in circles, never matching the capabilities of nature, and forget what real science is supposed to have done. 🙂
