Strong AI

Dr. Joscha Bach: Build Strong AI that Bridges Humanity and the Powers that Be

November 5, 2015 by Daniel Faggella

Dr. Joscha Bach of the MIT Media Lab and the Harvard Program for Evolutionary Dynamics has dedicated much of his research to figuring out how the mind works. Over a decade ago he founded the MicroPsi project, in which virtual agents are constructed in a computer model to discover and describe the interactions of emotion, motivation, and cognition in situated agents. Bach’s mission to build a model of the mind is bedrock research for the creation of Strong AI, i.e. cognition on par with that of a human being.

Building Stronger AI with Reinforcement Learning

Reinforcement learning drives much of the agent interaction in MicroPsi. Though it is usually classed as a type of machine learning, Bach points out that reinforcement learning is “different from machine learning, in that it involves interaction with the world and becoming more intelligent as a consequence, something that AI is not yet smart enough to do on its own”.
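
To make the learning-by-interaction idea concrete, below is a minimal sketch of tabular Q-learning in Python. It is purely illustrative and is not MicroPsi’s architecture: a hypothetical agent on a five-cell corridor discovers, through nothing but trial, error, and reward, that walking toward the goal is the better policy.

import random

# A toy Q-learning agent (illustrative only, not MicroPsi): it learns by
# interacting with a five-cell corridor that it should walk right toward the goal.

N_STATES = 5                            # positions 0..4; position 4 is the goal
ACTIONS = (-1, +1)                      # step left or step right
EPSILON, ALPHA, GAMMA = 0.1, 0.5, 0.9   # exploration rate, learning rate, discount

# Q-table: the agent's current estimate of future reward for each (state, action)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Environment dynamics: move, stay inside the corridor, reward only at the goal."""
    next_state = max(0, min(N_STATES - 1, state + action))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # Mostly exploit what has been learned so far, occasionally explore at random
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward = step(state, action)
        # Update the estimate from experience -- the interaction with the world
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = next_state

# After training, the learned policy steps right (+1) from every non-goal state
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})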

Humans and other intelligent beings look for external features that they can organize into hierarchies of information. The recognition that takes place in “the mind” of a computer is, in part, based on a similar model. Scientists use a “grammatically simplified” model that allows the computer to recognize patterns, images, and even actions and events. The agents make sense of their surroundings by identifying suitable behavior, which in turn lets them make more sense of the world. They are able to organize the world just by observing it and identifying its statistical dimensions.

Humans do this constantly by taking in signals from multiple dimensions – time, space, color, etc. – in real time, forming neural networks that help us learn how, and how not, to behave in the world. We then organize these concepts into categories, such as features, object permanence, and the mental states of other people, and come up with models and theories to explain these abstractions.
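
As a loose illustration of how categories can fall out of statistical structure alone (a toy sketch, not a claim about how MicroPsi or human cognition actually works), the following Python snippet groups unlabeled two-dimensional signals into two categories with a bare-bones k-means loop.

import random

# Toy sketch: unlabeled points in a two-dimensional "signal space" get organized
# into categories purely from their statistical structure, via a minimal k-means loop.
random.seed(0)

# Two clouds of synthetic signals with different statistical centers
data = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)] \
     + [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]

def dist_sq(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

centers = [data[0], data[1]]          # start with two arbitrary points as prototypes

for _ in range(10):
    groups = [[], []]
    for point in data:                # assign each signal to its nearest prototype
        nearest = min(range(2), key=lambda i: dist_sq(point, centers[i]))
        groups[nearest].append(point)
    for i, g in enumerate(groups):    # move each prototype to the mean of its category
        if g:
            centers[i] = (sum(p[0] for p in g) / len(g),
                          sum(p[1] for p in g) / len(g))

# The prototypes settle near (0, 0) and (5, 5): two categories discovered without labels
print(centers)

The point of the sketch is only that no labels or prior categories are supplied; the groupings emerge from the statistics of the data itself.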

Joscha and other cognitive scientists believe that figuring out how the mind works will be the catalyst for success in reverse engineering a strong AI mind. “One of the big questions”, says Joscha, “is how much do we have to put into the machine in the first place?”

The Moneyed Hands that Pave the AI Road

On the road to stronger AI, Bach offers a no-frills perspective on how the direction of AI will be influenced in the decades ahead. Joscha thinks the strongest winds may not always come from the ‘makers’ in the lab and field, but from the ‘shakers’ at the top of the economic totem pole – the ones providing the funding. Right now, if you want to make a quick and lasting change on the planet, you have a far better chance of doing so as an organization than as an individual. “AI is likely to come from the top as an extension of business intelligence…it’s super dangerous if we make corporations more powerful, as they don’t always act in the best interest of humans”. In some ways, corporations already have more rights than human beings, and they operate much like agents do – efficiently and in a goal-oriented manner, but often without a whole, unified consciousness.

Bach’s prediction should be discussed publicly and privately amongst tech companies, private investors, and policy makers. Precedents in the grey areas of morality and progress have been set before, and that history cannot be brushed away or ignored. For example, though the evidence against burning fossil fuels is rather clear today, companies are still extracting and burning them. Why? The commodity does well in the stock markets; if companies stopped supplying and using fossil fuels, both they and the markets would incur huge losses.

This approach by the corporate world ties into another looming concern – that AI will soon replace an unprecedented number of jobs. “If a person is no longer fulfilling your organization’s goal, and you (as an organization) want to survive, you’re going to replace that person by somebody else who is better aligned with the goals of the organization. There’s a kind of evolution going on among these organizations, and part of this evolution is how well they perform in the marketplaces, the financial systems, how well they are able to better other organizations to persist and stay…in power,” says Bach.

Companies in a wide range of industries are already starting to use AI, and at some point may not need humans. “…Companies don’t care about externalities, they care about the parts that make it more efficient and help it survive”, says Joscha. He believes the challenge of making the world safe against AI is embedded in whether we make economic decisions that are in line with the best human outcomes.

Though AI poses risks – now almost synonymous with the Future of Life Institute open letter presented to the United Nations and signed by Musk, Gates, Hawking, and other leading thinkers – funding for AI will likely not stop anytime soon. Bach believes the only motivation for stopping funding would be the belief that a certain goal is not possible, not fear of AI.

But if one day humans are no longer primary agents on the planet, and those “artificial” agents have goals that are not like ours, we might find ourselves in a hard spot. As Joscha explained, “Musk gave generously, for good purposes, to support AI research; at the same time, research into (both) development and risk are important. It makes technical sense for him to be concerned about the dangers as it relates to probabilities of our demise by AI”.

 

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and on the pressing issues and opportunities of augmenting consciousness. His articles and interviews with philosophers and experts can be found at SentientPotential.com.

Filed Under: Op Ed Tagged With: Joscha Bach, Strong AI

Human Rights for Artificial Intelligence: What is the Threshold for Granting (Human) Rights?

February 4, 2011 by CMStewart

It is the year 2045. Strong artificial intelligence (AI) is integrated into our society. Humanoid robots with non-biological brain circuitries walk among people in every nation. These robots look like us, speak like us, and act like us. Should they have the same human rights as we do?

The function and reason of human rights are similar to the function and cause of evolution. Human rights help develop and maintain functional, self-improving societies. Evolution perpetuates the continual development of functional, reproducible organisms. Just as humans have evolved, and will continue to evolve, human rights will continue to evolve as well. Assuming strong AI will eventually develop strong sentience and emotion, the AI experience of sentience and emotion will likely be significantly different from the human experience.

But is there a definable limit to the human experience? What makes a human “human”? Do humans share a set of traits which distinguish them from other animals?

Consider the following so-called “human traits” and their exceptions:

Emotional pleasure / pain – People with dissociative disorder have a disruption or elimination of awareness, identity, memory, and / or perception. This can result in the inability to experience emotions.

Physical pleasure / pain – People with sensory system damage may have partial or full paralysis. Loss of bodily control can be accompanied by inability to feel physical pleasure, pain, and other tactile sensations.

Reason – People with specific types of brain damage or profound mental retardation may lack basic reasoning skills.

Kindness – Those with psychopathy or certain other personality disorders may be unable to feel empathy, and in turn, are unable to feel and show kindness.

Will to live – Many suicidal individuals lack the will to live. Some people suffering from severe depression and other serious mental disorders also lack this will.

So what is the human threshold for granting human rights? Consider the following candidates:

A person with a few non-organic machine body parts.

A human brain integrated into a non-organic machine body.

A person with non-biological circuitry integrated into an organic human brain.

A person with more non-biological computer circuitry mass than organic human brain mass.

The full pattern of human thought processes programmed into a non-biological computer.

A replication of a human’s thought processes in an inorganic matrix.

Which of these should be granted full “human rights”? Should any of these candidates be granted human rights while conscious and cognitive non-human animals (cats, dogs, horses, cows, chimpanzees, et cetera) are not? When do consciousness and cognition manifest within a brain, or within a computer?

If consciousness and, in turn, cognition are irreducible properties, these properties must have thresholds, before which the brain or computer is devoid of them. For example, imagine the brain of a developing human fetus is non-conscious one day, then the next day has at least some level of rudimentary consciousness. This rudimentary consciousness, however, could not manifest without specific structures and systems already present within the brain. These specific structures and systems are precursors to further developed structures and systems, which would be capable of possessing consciousness. Therefore, the precursor structures which will possess full consciousness – and the precursors to consciousness itself – must not be irreducible. A system may be more than the sum of its parts, but it is not less than the sum of its parts. If consciousness and cognition are not irreducible properties, then all matter must be panprotoexperientialistic at the least. Reducible qualities are preserved and enhanced through evolution. So, working backward through evolution from humans to fish to microbes, organic compounds, and elements, all matter, at minimum, exists in a panprotoexperientialistic state.

Complex animals such as humans possess sentience and emotion through the evolution of reactions to internal stimuli. Sentience and emotion – like consciousness – are reproduction-enhancing tools which have increased in complexity over evolutionary time. An external stimulus triggers an internal stimulus (emotional pleasure or pain). This internal stimulus, coupled with survival-enhancing reactions to it, generally increases the likelihood of reproduction. Just as survival-appropriate reactions to physical pleasure and pain increase our likelihood of reproduction, survival-appropriate reactions to emotional pleasure and pain do the same.

Obviously, emotions may be unnecessary to continue reproduction in a post-strong AI world. But they will still likely be useful in preserving human rights. We don’t yet have the technology to prove whether a strong AI experiences sentience. Indeed, we don’t yet have strong AI. So how will we humans know whether a computer is strongly intelligent? We could ask it. But first we have to define our terms, and therein exists the dilemma. Paradoxically, strong AI may be best at defining these terms.

Definitions as applicable to this article:*

Human Intelligence – Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving within a Homo sapiens template.

Artificial Intelligence (AI) – Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving within a non-biological template.

Emotion – Psychophysiological interaction between internal and external influences, resulting in a mind-state of positivity or negativity.

Sentience – Internal recognition of a direct internal response to an external stimulus.

Human Rights – Legal liberties and considerations automatically granted to functional, law-abiding humans in peacetime cultures: life, liberty, the pursuit of happiness.

Strong AI – Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving above the general human level, within a non-biological template.

Panprotoexperientialism – Belief that all entities, inanimate as well as animate, possess precursors to consciousness.

* Definitions provided are not necessarily standard to intelligence- and technology-related fields.

About the Author:

CMStewart is a psychological horror novelist, a Singularity enthusiast, and a blogger. You can follow her on Twitter @CMStewartWrite or go check out her blog CMStewartWrite.

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence, Strong AI
