
Unanimous AI CEO Dr. Louis Rosenberg on Human Swarming

How can humans ever hope to push back against, or at least delay, the ever-growing power of software-based artificial intelligence? Dr. Louis Rosenberg, founder and CEO of Unanimous AI, believes that human swarming may amplify human intelligence and buy us some time, if not change the outcome entirely.

During our 86-minute conversation with Louis Rosenberg we cover a variety of interesting topics such as: how and why he got interested in AI, virtual and augmented reality; his motivation and main goals; why groups are smarter than individuals; his dream of amplifying human intelligence; the dangers of software-based AGI; being a vegan and respecting [i.e. not consuming] other intelligences; his latest company Unanimous AI; the definition of a swarm and its differences from both a herd and a crowd; his platform computer interface for human swarming called UNU; predicting the Oscars with a human swarm; flocks of birds and schools of fish versus swarms of bees; swarms of experts as a super-expert; Miguel Nicolelis's brain-to-brain computers; collective intelligence and the Borg…

(You can listen to/download the audio file above or watch the video interview in full. If you want to help me produce more episodes like this one please make a donation!)

Who is Louis Rosenberg?

Rosenberg founded Unanimous AI to pursue his interests in collective intelligence and human-computer interaction. He attended Stanford University, where he earned his Bachelor's, Master's, and PhD degrees. His doctoral work focused on robotics, virtual reality, and HCI. While conducting research at the U.S. Air Force's Armstrong Labs in the early '90s, Rosenberg created the Virtual Fixtures platform, the first immersive augmented reality system ever built. He then founded Immersion Corp to pursue virtual reality technologies; the company went public on NASDAQ in 1999. Rosenberg also founded Microscribe, maker of the world's first desktop 3D digitizer – the Microscribe 3D – which has been used in the production of many feature films, including Shrek and Ice Age. Rosenberg has also worked as a tenured professor at California State University (Cal Poly), teaching design and entrepreneurship. He has been awarded more than 300 patents for his technological efforts. He currently lives in California as a longtime vegan, animal rights supporter, and friend of squirrels.

 


  • Elling Borgersrud

    Great interview! Keep on giving us those great insights and ideas. I hope they will continue to develop their product. Is it possible to organize “swarmy” polls outside of the site? If they could be embedded, I think it could be a great tool for chats and forums.
    As I am watching, it strikes me as a technology that could do wonders for parts of the public debate, perhaps even in newspapers.

  • Adam Peri

    Louis/community:

    I’ll preface my question by saying that I did not complete the entire podcast. If I missed the answer, my apologies… The idea of swarm intelligence (as well as herd/crowd) is fascinating, particularly the analogy to honey bees and similar animals. That being said, the intelligence level of bees has certainly evolved and amplified over hundreds of thousands of years, but it hasn’t grown exponentially. Do you (or anyone in the community, readers, et al) believe that human swarm intelligence has the ability to achieve exponential growth and some sort of singularity? Or perhaps is it an augmentative step to a technological/software driven one?

  • 1PricePerspective1

    Louis, great interview. Would you agree with the following statements and conclusion:

    P1 A swarm intelligence can grow exponentially without consideration for humankind.

    P2 A swarm intelligence can grow exponentially with intent and with value for humankind.

    P3 Humans (we, now) retain a sufficient degree of self-control that we might at least arrest the exponential growth of harmful AI swarms.

    P4 Humans (we, now) retain a sufficient degree of self-control that we might do more than arrest the exponential growth of harmful AI swarms: we might influence them constructively (if we can arrest the potentially destructive growth in time).

    C1 By keeping humankind in the rapid AI development loop, and by arresting that growth now, we increase the likelihood that such a swarm intelligence will value (and not slaughter) humankind.

    In summary, UNU is an effort to put the genius of swarm intelligence into any hand/mind with access to the cloud and the will to try.

    Some observations:

    Finding purpose, getting a following, and finding an adequate user interface remain challenges. Interestingly, speeding UNU into a VR/AR environment (as market forces might urge) is an example of “run-away AI”. VR/AR helmets and thin-plastic walls are about to flood the market. Brains unleashed.

    In an effort not to speed (human) self-destruction, Unanimous AI and UNU would, virtuously, eschew market forces for a paced good. But by staying out of the AR/VR froth now, Unanimous AI, UNU, and the underlying good/right idea get left behind in flatland, playing horseshoes. The minds most affected are those who have already “crawled in”: the swarm has (disruptively) already moved to AR/VR, and mouse inputs are like 4-inch Unix-orange monochrome screens.

    It is a paradox, I understand. Move the value of UNU-logic too quickly, and you add to the potentially self-destructive din, not being mindful. On the other hand, the minds that you hope to save (the swarm) will have moved to VR/AR, and UNU will be looking for “mouse dwellers” in a “plasma ball” world.

    The pace in the face of your challenge will pick up soon. Build in or face being left out. The ideas, ideals, and virtues of UNU are correct; near-term market forces subject it to disruption.

  • Pingback: Will Robots Take Over By Swarm?

  • Quite a few themes in here that have fascinated me for over 40 years.

    Q1 – To what extent is human thought controlled by the values of a market-based system of exchange? It seems that markets are a great mechanism for allocating scarce resources, but cannot deliver a non-zero value to universal abundance of anything. This characteristic places market values in direct conflict with individual human interest, in a world where automation and robotic technology delivers an exponentially increasing set of goods and services that could be delivered in universal abundance. And before anyone says “but everything has a cost”, consider oxygen in the air – arguably the single most important thing to any human being, yet of zero market value because of its universal abundance. We have the technical ability to develop systems that would deliver housing, food, water, energy, education, information, communication, sanitation, transportation and medical services in universal abundance. Yet to do so would destroy the foundation of the current economic system, and the social systems derived therefrom, and will therefore generate resistance, even though such abundance is clearly in the long-term self-interest of every individual (even the wealthiest within the current system).

    Q2 – To what extent is human behaviour controlled by deliberately maintaining ignorance of the increasing role of cooperation in complex evolved systems like ourselves? Looked at from a game-theoretic perspective (as first popularised by Axelrod), it is clear that all major advances in the complexity of living systems are characterised by the emergence of new levels of cooperation; and as raw cooperation is always vulnerable to cheating strategies, cooperative strategies must adopt secondary strategies that work to prevent cheating, or perish. Axelrod demonstrated a simple class of such strategies (the retaliator class), Elinor Ostrom showed how some variations on that class have worked over long periods in human societies, and Wolfram has shown that there may in fact be an infinite set of classes of such strategies in the deeper realms of strategy space.

    Q3 – If we value individual life and individual freedom, doesn't that compel us to go beyond market-based economic systems into a systemic space that empowers everyone to do whatever they responsibly choose? (Responsibility here is defined as taking reasonable steps to mitigate the adverse effects of one's actions on the life and liberty of others, which entails derivative responsibilities to care for the environments that support us all, and to accept and cater for the exponentially expanding diversity that must result from such freedom.)

    Q4 – To what extent do we allow the information content of our past, as expressed in our genetic and cultural dispositions to feel pleasure or displeasure in specific conditions or actions, to determine our future? Or put more simply, why should happiness be important? In a time of exponential change, how likely is it that our past, particularly our deep genetic or cultural past, will be a good predictor of our future? I strongly suspect that the answer is that the degree of reliability or utility of such systems is exponentially decreasing with time.

    Q5 – David Snowden has developed the “SenseMaker” app, an approach that has many similarities to this one; but rather than using the measurements taken to localise to a single outcome, he uses them to display the probabilistic landscape of system response to any set of parameters. This gives one an in-depth view of the sorts of system drivers that are localising in any population, and allows focus of attention on outliers, rather than on the well-explored middle ground. Does this system do such analysis, and simply not expose it to the users?

    If not, could an interface be developed that could allow participants to view whatever dimensions interest them?

    Q6 – Is there any intention to explicitly include anything similar to Snowden's Cynefin framework for the management of complexity? It is a highly simplified set of heuristics, and it gives a very useful set of tools to any participant working in a complex environment.

    Summary – The initial characterisation of polls and polarisation is exactly counter to what complexity theory tells us. Averaging the guesstimates of independent experts is a very powerful tool, but it is powerful only to the extent that the judgements are independent. If the judgements are not independent, the effect is lost: there is a strong tendency to cluster around the first estimate given, as social interaction overrides independent judgement (the Social Influence Bias Rosenberg later acknowledges).

    Rosenberg in the later part of the interview seems to keep these aspects clearly separate, but in the early part he seems to conflate three distinctly different effects into one, and loses clarity as a result:

    1 the ability of the average of independently polled experts to deliver judgements that are very accurate;

    2 the ability of groups who take the time to uncover common values and build understanding of each other and respect for each other, and share their expert knowledge, to reach decisions that deliver high utility to all; and

    3 the tendency of crowds in emotionally charged situations to move to the lowest common denominator, and invoke low level reactions from all participants.

    Certainly there is power in groups negotiating outcomes in complex situations, but that is a completely different domain of both judgement and complexity from the “wisdom of crowds” or crowd behaviour, and confusing or conflating the three very different domains of process does not help anyone.

    I am all for consensus decision-making in groups, and it takes a long time to build both the trust and the understanding that allow such processes to work effectively. It requires a long time for the information and value sets of all participants to emerge and be understood (insofar as such understanding is possible) from all the different paradigms present. I have spent the last 10 years in such a process in coastal fisheries management. It took 5 years of monthly meetings to get to the point that we had a shared set of values, and a shared set of working understandings, that allowed us to make real progress towards specific strategic outcomes.

    Crowds as groups of people in emotionally charged situations tend to be very simple and very dangerous entities – capable of destruction on a massive scale. To be avoided.

    Rosenberg acknowledges Social Influence Bias, and in the later parts of the interview acknowledges that the participants in the collective decision have to be experienced and knowledgeable about some significant aspect of the subject. This is true, and it is not at all like his earlier claims about the behaviours of swarms and crowds.
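
    [Editor's note] The independence point above can be illustrated with a small simulation. This is only a sketch: the sample sizes, the noise level, and the simple “anchor on the first estimate” model of Social Influence Bias are illustrative assumptions, not anything drawn from UNU or Rosenberg's work.

    ```python
    import random
    import statistics

    def crowd_error(true_value, n_experts=200, influence=0.0, trials=500, seed=42):
        """Mean absolute error of the crowd's average estimate.

        influence=0.0 -> fully independent judgements
        influence=0.9 -> each later expert anchors 90% on the first estimate
        """
        rng = random.Random(seed)
        errors = []
        for _ in range(trials):
            first = true_value + rng.gauss(0, 10)  # first publicly stated estimate
            estimates = [first]
            for _ in range(n_experts - 1):
                own = true_value + rng.gauss(0, 10)  # expert's private judgement
                # Social Influence Bias: blend own judgement with the first estimate
                estimates.append((1 - influence) * own + influence * first)
            crowd_average = statistics.fmean(estimates)
            errors.append(abs(crowd_average - true_value))
        return statistics.fmean(errors)

    independent = crowd_error(100, influence=0.0)
    anchored = crowd_error(100, influence=0.9)
    print(f"independent: {independent:.2f}, anchored: {anchored:.2f}")
    ```

    With independent judgements, the crowd average's error shrinks roughly as 1/√n; with strong anchoring, the error stays close to that of the first estimate no matter how many experts are added, which is exactly the clustering effect described above.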

  • Much AI tech is also applicable to human intelligence augmentation. That we are already 100+ IQ points ahead of AI means that continuous application of AI tech to augmentation lets us maintain the lead into the future, until full human uploads can occur. The myth of AI takeover is based on a static view of human development.

  • Art Toegemann

    Two major mental health authorities, the American Psychiatric Association (APA) and the World Health Organization, attack parapsychology, claiming it is only schizophrenic psychosis. This position must be obstructive to attempts to swarm human intelligence.
    The American Psychological Association issued a statement of “no confidence” in the APA's latest edition of its Diagnostic and Statistical Manual of Mental Disorders (DSM-5), for unspecified reasons.

  • Pingback: Are we destined to be out-played by A.I.?
