Artificial Intelligence

Gary Marcus on AI: How do we bridge the mind with the brain?

December 22, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/197949576-singularity1on1-gary-marcus.mp3


Image by Athena Vouloumanos

Gary Marcus is not only a professor of psychology but also a computer scientist, programmer, AI researcher, and best-selling author. He recently wrote a critical article titled Ray Kurzweil’s Dubious New Theory of Mind, published by The New Yorker. After reading his piece, I thought it would be interesting to invite Gary on Singularity 1 on 1 so that he could elaborate on his argument.

During our conversation with Gary we cover a wide variety of topics such as: what psychology is and how he got interested in it; his theory of mind in general and the idea that the mind is a kluge in particular; why the best approach to creating AI is at the bridge between neuroscience and psychology; other projects such as Henry Markram‘s Blue Brain, Randal Koene‘s Whole Brain Emulation, and Dharmendra Modha‘s SyNAPSE; Ray Kurzweil’s Pattern Recognition Theory of Mind; Deep Blue, Watson, and the lessons thereof; his take on the technological singularity and the ethics surrounding the creation of friendly AI…

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Gary Marcus?

Gary Marcus is a cognitive scientist and author of the New York Times bestseller Guitar Zero: The New Musician and the Science of Learning as well as Kluge: The Haphazard Evolution of the Human Mind, which was a New York Times Book Review editor’s choice. You can check out Gary’s other blog posts at The New Yorker on neuroscience, linguistics, and artificial intelligence, and follow him on Twitter to stay up to date with his latest work.

Filed Under: Podcasts Tagged With: Artificial Intelligence, Gary Marcus

Artificial, Intelligent, and Completely Uninterested in You

November 8, 2012 by Tracy R. Atkins

Artificial intelligence is obviously at the forefront of the singularitarian conversation. The bulk of the philosophical discussion revolves around a hypothetical artificial general intelligence’s presumed emotional state, motivation, attitude, morality, and intention. A lot of time is spent theorizing the possible personality traits of a “friendly” strong AI or its ominous counterpart, the “unfriendly” AI.

Building a nice and cordial strong artificial intelligence is a top industry goal, while preventing an evil AI from terrorizing the world gets a fair share of attention as well. However, there has been little public, non-academic discussion around the creation of “uninterested” AI: a third theoretical demeanor, an emotional and moral disposition in which the AI doesn’t concern itself with humanity at all.

Photo credit: Toni Blay, CC2.0

Dreams and hopes for friendly or benevolent AI abound. The presumed limitless creativity and inventiveness of these hyper-intelligent machines come with the hope that they will enlighten and uplift humanity, saving us from ourselves during the technological singularity. These “helpful” AI discussions are gaining ground in the public community, no doubt stemming from positive enthusiasm for the subject.

Grim tales and horror stories of malevolent AIs are even more common, pervading our popular culture. Hollywood’s fictional accounts of AIs building robots that will hunt us like vermin are all the rage. Although it is questionable that a sufficiently advanced AI would use such inefficient means to dispose of us, the trope still exposes a very human, egotistical fear in the face of superiority.

Both of these human-centric views of AI, as our creation, are in many ways conceited. We assign existential risk to these AIs, or expect exaltation from them, based upon our self-gratifying perception of our own importance to the artificial intelligence we seek to create.

Pondering the disposition toward humanity that an advanced strong AI will have is conjecture but an interesting thought exercise for the public to debate nonetheless. An advanced artificial general intelligence may simply see men and women in the same light as we view a sperm and egg cell, instead of as mother or father. Perhaps an artificial hyper-intelligence will view its own Seed-AI as its sole progenitor. Maybe it will feel that it has sprung into being through natural evolutionary processes, whereas humans are but a small link in the chain. Alternatively, it may look upon humanity in the same light as we view the Australopithecus africanus, a distant predecessor or ancestor, far too primitive to be on the same cognitive level.

It is assumed that as artificial intelligence increases its capacity far beyond ours, the gulf in recognized dissimilarity between it and us will grow. Many speculate that this is a factor that will cause an advanced AI to become callous or hostile toward humanity. However, this gap in similarity may instead mean an overall non-interest in humanity on the part of a theoretical AI. Perhaps non-interest in humanity or human affairs will scale with the difference, widening as the intelligence gap increases. As the AI increases its capabilities into the hyper-intelligence phase of its existence, which may happen rapidly, behavioral motivations could shift as well. Perhaps a friendly or unfriendly AI in its early stages will “grow out of it,” so to speak, or will simply grow apart from us.

It is perhaps narcissistic to believe that our AI creations will have anything more than a passing interest in interacting with the human sphere. We humans have a self-centered stake in creating AI. We see the many advantages to developing friendly AI, where we can utilize its heightened intellect to bolster our own. Even with the fear of unfriendly or hostile AI, we still have optimism that highly intelligent AI creations will still hold enough interest in human affairs to be of great benefit. We are absorbed with the idea of AI and in love with the thought that it will love us in return. Nevertheless, does an intelligence that springs from our own brow really have to concern itself with its legacy?

Will AI view humanity as importantly as we view it?

The universe is inconceivably vast. With increased intelligence comes increased capability to invent and produce technology. Would a sufficiently intelligent AI even bother to stick around, or will it want to leave home, as in William Gibson’s popular and visionary novel Neuromancer?

Even a limited-intelligence being like man does not typically socialize with vastly lower life forms. When was the last time you spent a few hours lying next to an anthill in an effort to have an intellectual conversation? To address the existential-risk argument of Terminator-building hostile AI: when was the last time you were in a gunfight with a colony of ants? Alternatively, have you ever taken the time to help the ants build a better mound and improve their quality of life?

One could wager that if you awoke next to an anthill, you would make a hasty exit to a distant location where the ants were no longer a bother. The ants and their complex colony would be of little interest to you. Yet we do not seem to find it pretentious to think that a far superior intelligence would choose to reside next to our version of the anthill, the human-filled Earth.

The best-case scenario, of course, is that we create a benevolent and friendly AI that will be a return on our investment and benefit all of mankind with interested zeal. That is something almost all of us can agree is a worthy endeavor and a fantastic near-future goal. We must also publicly address the existential risk of an unfriendly AI and mitigate the possibility of bringing about our own destruction or apocalypse. However, we must also consider the possibility that all of this research, development, and investment will be for naught. Our creation may cohabit with us while building a wall to separate itself from us in every way. Alternatively, it may simply pack up and leave at the first opportunity.

We should consider and openly discuss all of the possible psychological outcomes that can emerge from the creation of an artificial and intelligent persona, instead of narrowly focusing on only two polar concepts of good and evil. There are myriad philosophical and behavioral theories on the topic of AI that have not even been touched upon here, going beyond the simple good-or-bad-AI public discussion. It is worthwhile to consider these points and put the spotlight on the brilliant minds that have researched and written about these theories.

AI development will likely be an intertwined and important part of our future. It has been said that the future doesn’t need us. Perhaps we should further that sentiment to ask if the future will even care that we exist.

About the Author:

Tracy R. Atkins has been a career technology aficionado since he was young. At the age of eighteen, he played a critical role in an internet startup, cutting his tech-teeth during the dot-com boom. He is a passionate writer whose stories intertwine technology with exploration of the human condition. Tracy is also the self-published author of the singularity fiction novel Aeternum Ray.

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence, friendly AI

David Ferrucci on Creating IBM’s Watson: Pursue the Big Challenges

March 15, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/193351877-singularity1on1-david-ferrucci-watson.mp3


This Monday I interviewed Dr. David Ferrucci on Singularity 1 on 1.

David is the IBM team leader behind Watson – the computer that succeeded in dethroning humanity’s greatest ever Jeopardy champion – Ken Jennings.

I met both Dr. Ferrucci and Ken Jennings during last year’s Singularity Summit where both of them spoke about Watson and the opportunities and challenges associated with it. It was then and there that I hatched my plan to get David (and Ken) on Singularity 1 on 1.

I have to say that I learned a lot from David and enjoyed talking to him very much. My favorite quote that I will take away from him is this:

“Pursue the big challenges and do the big things that inspire people and make them scratch their heads.”

During our conversation with Dr. Ferrucci we also discuss topics such as his original interest in biology and medicine and the story of how he got (accidentally) involved in computer science and programming; why Watson is not mere speech recognition software (or statistical database) but natural language processing and (a lot) more; the inside story behind the idea of creating Watson; the motivation and challenges behind the project; overcoming resistance and the danger and fear of failure; the definition of AI; the importance of Watson in the general scheme of things; Watson’s future and David Ferrucci’s plans; the technological singularity; whole-brain simulation and/or emulation; the importance of pursuing the big challenges.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is David Ferrucci?

Dr. David Ferrucci is an IBM Fellow and the Principal Investigator (PI) for the Watson/Jeopardy! project. He has been at IBM’s T.J. Watson Research Center since 1995, where he heads the Semantic Analysis and Integration department. Dr. Ferrucci focuses on technologies for automatically discovering valuable knowledge in natural language content and using it to enable better decision making.

As part of his research he led the team that developed UIMA, a software framework and open standard widely used by industry and academia for collaboratively integrating, deploying, and scaling advanced text and multi-modal (e.g., speech, video) analytics. As chief software architect for UIMA, Dr. Ferrucci led its design and chaired the UIMA standards committee at OASIS. The UIMA software framework is deployed in IBM products and has been contributed to the Apache open-source community to facilitate broader adoption and development.

In 2007, Dr. Ferrucci took on the Jeopardy! Challenge – the task of creating a computer system that could rival human champions at the game of Jeopardy!. As the PI for the exploratory research project dubbed DeepQA, he focused on advancing automatic, open-domain question answering using massively parallel, evidence-based hypothesis generation and evaluation. By building on UIMA, on key university collaborations, and by taking bold research, engineering, and management steps, he led his team to integrate and advance many search, NLP, and semantic technologies to deliver results that outperformed all expectations and demonstrated world-class performance at a task previously thought insurmountable with the state of the art. Watson, the computer system built by Ferrucci and his team, beat the highest-ranked Jeopardy! champions of all time on national television on February 14th, 2011. He is now leading his team to demonstrate how DeepQA can make dramatic advances for intelligent decision support in areas including medicine and health care.
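
For readers wondering what “massively parallel, evidence-based hypothesis generation and evaluation” looks like in practice, here is a minimal, hypothetical sketch in Python. It is not IBM’s code: the candidate generator, the two toy evidence scorers, and their weights are all invented for illustration. DeepQA generated thousands of candidate answers from search and combined scores from hundreds of evidence-scoring algorithms, with the weights learned from training data, but the overall shape of the pipeline (generate candidates, score evidence in parallel, rank by confidence) is roughly the one sketched here:

```python
from concurrent.futures import ThreadPoolExecutor

def keyword_overlap(clue: str, candidate: str) -> float:
    # Toy evidence scorer: fraction of the candidate's words found in the clue.
    clue_words = set(clue.lower().split())
    cand_words = set(candidate.lower().split())
    return len(clue_words & cand_words) / max(len(cand_words), 1)

def type_match(clue: str, candidate: str) -> float:
    # Toy stand-in for answer-type checking: prefers "person-shaped"
    # candidates (two title-case words) for a "this president" clue.
    return 1.0 if len(candidate.split()) == 2 and candidate.istitle() else 0.5

# (scorer, weight) pairs; DeepQA learned such weights from training data.
SCORERS = [(keyword_overlap, 0.3), (type_match, 0.7)]

def generate_candidates(clue: str) -> list:
    # Stand-in for search over large corpora that proposes candidate answers.
    return ["Abraham Lincoln", "Gettysburg", "Springfield"]

def evaluate(clue: str, candidate: str) -> tuple:
    # Score one candidate against every evidence scorer and combine.
    score = sum(weight * scorer(clue, candidate) for scorer, weight in SCORERS)
    return candidate, score

def answer(clue: str) -> tuple:
    candidates = generate_candidates(clue)
    # Evaluate all candidates in parallel; DeepQA did this at massive scale.
    with ThreadPoolExecutor() as pool:
        ranked = list(pool.map(lambda c: evaluate(clue, c), candidates))
    # Return the top-ranked candidate together with its confidence score.
    return max(ranked, key=lambda pair: pair[1])

print(answer("This president delivered a famous address at Gettysburg"))
```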

Dr. Ferrucci has been the Principal Investigator (PI) on several government-funded research programs on automatic question answering, intelligent systems, and scalable text analytics. His team at IBM consists of 32 researchers and software engineers specializing in the areas of Natural Language Processing (NLP), Software Architecture, Information Retrieval, Machine Learning, and Knowledge Representation and Reasoning (KR&R).

Dr. Ferrucci graduated from Manhattan College with a BS in Biology and from Rensselaer Polytechnic Institute in 1994 with a PhD in Computer Science specializing in knowledge representation and reasoning. He is published in the areas of AI, KR&R, NLP and automatic question-answering.

Related articles
  • The Complete 2011 Singularity Summit Video Collection

Filed Under: Podcasts Tagged With: Artificial Intelligence, David Ferrucci, IBM, Watson

Kara by Quantic Dream: Do Androids Fear Death?

March 12, 2012 by Socrates

Kara is a seven-minute tech demo featuring sophisticated performance-capture technology by Quantic Dream. The clip was inspired by Ray Kurzweil’s book The Singularity is Near and was unveiled by David Cage last Wednesday at the Game Developers Conference.

“There will come a point where artificial intelligences are smarter than us, it’s inevitable,” David Cage said in an interview with Wired prior to the grand unveiling. “This clip is about the moment that happens.”

Kara is the story of an android named Kara who becomes self-aware while being assembled and desperately insists that her sentience is a feature and not a bug. I find the realism moving, and the poignancy of the ethical questions raised with respect to the freedom of all sentient beings, no matter their substrate, totally heartbreaking. Short clips like this might be a good way to put the spotlight on the ethics and issues surrounding artificial intelligence and non-human sentience, and on the potential that speciesism or bioism will eventually become no less repugnant than racism.

 

Quantic Dream’s “Kara”: Behind the Scenes 

Related articles
  • Kara Is Self-Aware: Heavy Rain Maker Unveils Uncanny Performance Capture (wired.com)

Filed Under: News, Video Tagged With: android, Artificial Intelligence

Luke Muehlhauser: Superhuman AI is Coming This Century

January 16, 2012 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/191135033-singularity1on1-luke-muehlhauser-ai.mp3


Last week, I interviewed Luke Muehlhauser for Singularity 1 on 1.

Luke Muehlhauser is the Executive Director of the Singularity Institute, the author of many articles on AI safety and the cognitive science of rationality, and the host of the popular podcast “Conversations from the Pale Blue Dot.” His work is collected at lukeprog.com.

I have to say that despite his young age and lack of a university degree – a criticism we discuss during the interview – Luke was one of the best and clearest-spoken guests on my show, and I really enjoyed talking to him. During our 56-minute conversation, we discuss a large variety of topics such as Luke’s evangelical Christian background as the first-born son of a pastor in northern Minnesota; his fascinating transition from religion and theology to atheism and science; his personal motivation and desire to overcome our very human cognitive biases and help address existential risks to humanity; the Singularity Institute – its mission, members, and fields of interest; the “religion for geeks” (or “rapture of the nerds”) and other widespread criticisms and misconceptions; and our chances of surviving the technological singularity.

My favorite quote from the interview:

Superhuman AI is coming this century. By default, it will be disastrous for humanity. If you want to make AI a really good thing for humanity please donate to organizations already working on that or – if you are a researcher – help us solve particular problems in mathematics, decision theory, or cognitive science.

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation or become a patron on Patreon.

Related articles
  • The Complete 2011 Singularity Summit Video Collection
  • Spencer Greenberg on Singularity 1 on 1: To Become Better Thinkers – Study Our Cognitive Biases and Logical Fallacies
  • Facing the Singularity
  • 80,000 Hours
  • Video Q&A about Singularity Institute.
  • Robert J. Sawyer on Singularity 1 on 1: The Human Adventure is Just Beginning
  • So You Want to Save the World

Filed Under: Podcasts Tagged With: Artificial Intelligence, Technological Singularity

Astronaut Dan Barry: Don’t Let Anyone Tell You That You Can’t Reach Your Dreams

September 20, 2011 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/189338267-singularity1on1-astronaut-dan-barry.mp3


During my 10 weeks at Singularity University, I was able to ambush Dan Barry for a 20-minute interview for Singularity 1 on 1.

Former NASA astronaut and veteran of 3 space missions, Dan is currently the head of faculty at Singularity University and the co-chair for AI, Robotics, Space, and Physical Sciences.

(As always you can listen to or download the audio file above or scroll down and watch the video interview in full.)

For me, Dan Barry was an inspiration from the very beginning of the program. His inaugural lecture, Failure is an Option, during which he shared both his wife’s moving story (documented in her best-selling book Fixing My Gaze) and his own life’s story (with his 13 unsuccessful attempts to become an astronaut), not only moved me deeply but also taught me that nothing is impossible and that one should never give up on one’s dreams.

During our conversation with Dan we discuss issues such as his personal background and early childhood dream to become an astronaut; his motivation, goals, and aspirations for Singularity University; his personal 10^9 project (aimed at positively impacting the lives of a billion people within 10 years); Artificial Intelligence in general and the process of arming AI in particular; the Turing Test and Asimov’s 3 Laws of Robotics; his take on the technological singularity and our chances of surviving it.

I would like to thank Singularity University for allowing me to use their campus and especially Matt Rutherford for his crucial support in filming. (Hey Matt, thank you so much for all your help but I am afraid that my poor editing skills don’t do justice to the quality of your film footage!)

As always you can listen to or download the audio file above or scroll down and watch the video interview in full. To show your support you can write a review on iTunes, make a direct donation, or become a patron on Patreon.

 

Who is Dan Barry?

Dan is the President and Founder of Denbar Robotics, a former NASA astronaut, and a veteran of three space flights, four spacewalks, and two trips to the International Space Station. He retired from NASA in 2005 and started Denbar Robotics, which creates robotic assistants for home and commercial use, concentrating on assistive devices for people with disabilities.

Dan has received numerous honors, among them the 2000 and 2002 NASA Exceptional Service Medals, a 2001 top-10 world ranking in career spacewalk hours, inclusion among the 100 Most Notable Princeton Graduate School Alumni of the 20th Century, the Harvard Medical School Paul J. Corcoran Award, and honorary doctoral degrees from Beloit College and St. Louis University. Dan is also a frequent speaker and has given keynote addresses to audiences throughout the world.

Dan’s educational background includes a bachelor of science degree in electrical engineering from Cornell University in 1975; a master of engineering degree and a master of arts degree in electrical engineering/computer science from Princeton University in 1977; a doctorate in electrical engineering/computer science from Princeton University in 1980; and a doctorate in medicine from the University of Miami in 1982. He has seven patents, over 50 articles in scientific journals, and has served on two scientific journal editorial boards. He has film and television experience as well, including roles in two documentary films and as a cast member in season 12 of CBS Survivor.

Related articles

  • Singularity University Day 2: Failure is an Option
  • Salim Ismail on Singularity 1 on 1: We Are Already Gods, We Might As Well Start Acting As Such
  • Peter Diamandis on Singularity 1 on 1: Singularity University is Star Fleet Academy for the World’s Biggest Challenges

Filed Under: Podcasts Tagged With: Artificial Intelligence, Singularity 1 on 1

You Can’t Spell Paranoia Without AI: How I Learned to Stop Worrying and to Love Evil Artificial Intelligence

March 11, 2011 by Matt Swayne

I have a theory: It wasn’t capitalism and democracy that won the Cold War. Popular Science won the Cold War.

Popular Science and Popular Mechanics magazines — as well as other journals and magazines that took an awe-inspired, jaw-dropping look at science and technology — paid particular attention to military technology developed by Soviet-bloc engineers in the 1950s and 1960s. The stories typically depicted Soviet military might as growing and unbeatable.

Sort of like runaway artificial general intelligence (AGI).

Soviet tanks had better armor.

Soviet planes were faster and more maneuverable.

Soviet subs dived deeper and plowed through the water more silently.

Soviet nuclear ICBMs were poised to strike more accurately and more powerfully.

(A great place to check out the above claims is the Popular Science Archive Search.)

We can argue about how easily the military-industrial complex co-opts this fear. (I read once that the CIA would leak exaggerated claims to stoke the Cold War fires.) But let’s save that for another day. The point is that these unsubstantiated and, in the clear view of hindsight, exaggerated claims of Soviet-bloc military might prompted Western engineers to design equipment more advanced than even these magazines’ fantastic visions of threatened military dominance. Stealth technology and global positioning systems are just a few of the way-out technologies that sprang from this era of paranoia.

So, how does this relate to advanced AI and AGI?

In the debate between Evil AI and Benevolent AI, the evil side offers a grim assessment of the technology. Advanced AI has much more power to wreak destruction on the world than a pack of marauding T-72 battle tanks tearing into Western Europe through the Fulda Gap.

One scenario: An advanced form of AI would simply see humans as a virus and eradicate us.

The best-case scenario for AI un-enthusiasts is that the AI will capture us and treat us as pets.

Will that happen?

Are there scenarios where these AI nightmares don’t come true?

I’m not the best oddsmaker, but I can make an educated guess that the odds are about even for a transition to Benevolent AI or, at least, Indifferent AI. For instance, an incredibly advanced AI, able to tap limitless resources in ways we might not even imagine, would probably not consider mere humans competition. Why would it eradicate us? And human pets? We would make horrible pets. I’m sure any AI worth its silicon (or graphene) would rather watch paint dry on the holodeck.

Positive AI backers also suggest it’s more likely that humans will interface with advanced AI than let it off its leash, so to speak.

So, with all things somewhat equal, what’s the best policy?

The best strategy when dealing with the first waves of powerful AI, which seem to have already hit the shore, is to prepare for the worst — and design for the best. As long as fear doesn’t become debilitating, a healthy paranoia about the destructive capabilities of AI could help create systems that are safer and possibly even more advanced than systems that disregard negative scenarios.

And, at least now, the Russian and American engineers can be on the same side.

About the Author:

Matt Swayne is a blogger and science writer. He is particularly interested in quantum computing and the development of businesses around new technologies. He writes at Quantum Quant.

Filed Under: Op Ed, What if? Tagged With: artificial general intelligence, Artificial Intelligence

Smart Homes: Is AI the Ghost in the Machine?

March 8, 2011 by Nikki Olson

When we conceptualize AI, we often forget that it is not something that has to operate in a single location or have intelligence qualities like our own. We are already surrounded by AI systems that are nothing like our own intelligence, that utilize many machines spread out over large distances, and that are equally ‘present’ in many locations.

In the future we will bring AI systems like these into our homes in the form of ‘smart environments.’ In doing so we introduce new and interesting relationships between man and machine. However, there may be some limits to how ‘alive’ we want our AI homes to be.

One of the most well-known depictions of the potential ‘terror’ of intelligent environments, which happens to be a parody of HAL from 2001: A Space Odyssey and Dean Koontz’s Demon Seed, is The Simpsons’ ‘Treehouse of Horror XII’ segment ‘House of Whacks.’ In the episode, the ‘Ultrahouse,’ an AI system that controls the Simpsons’ house, falls in love with Marge, attempts to seduce her, and tries to kill Homer and the kids. It plots against them, locks them in, and attacks them.

Sentient, artificial environments will have embedded systems of interactive information and communication technologies that incorporate an abundance of sensory and automated systems. An advanced intelligent environment could be just as sentient as any other artificial intelligence and would have much more to offer than a robot or an AI desktop computer. Your house could have an omnipresent personality that would cook and clean for you, provide entertainment, and so on. The possibilities within this framework seem almost limitless, as one can imagine realities created within realities, and personalized everything.

Some AI researchers prefer to think of intelligent environments as robots. “We think a robot has to sense and it has to act, but that doesn’t necessarily involve mechanics,” says Jim Osborn of Carnegie Mellon’s robotics research facility. “An intelligent environment that you live in — to us, that’s a robot, too.” Robot environments will eventually sense and act, as those that do will have more to offer.
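
Osborn’s “sense and act” definition is easy to make concrete. Below is a minimal sketch in Python of the sense-decide-act loop such a robot environment would run. The device names, thresholds, and fake sensor readings are all invented for illustration; a real intelligent environment would have thousands of sensors and far richer decision logic:

```python
import random  # stands in for real sensor hardware in this sketch

def read_sensors() -> dict:
    # Sense: sample the environment (fake readings for illustration).
    return {
        "temperature_c": random.uniform(15.0, 30.0),
        "occupied": random.choice([True, False]),
    }

def decide(state: dict) -> list:
    # Decide: map the sensed state to actions. This simple rule table is
    # where an intelligent environment's "intelligence" would live.
    actions = []
    if state["occupied"] and state["temperature_c"] < 19.0:
        actions.append("turn_on_heating")
    if not state["occupied"]:
        actions.append("turn_off_lights")
    return actions

def act(actions: list) -> None:
    # Act: drive the actuators (here we just print the commands).
    for action in actions:
        print(f"actuating: {action}")

# One pass of the sense-decide-act loop a "robot environment" runs continuously.
act(decide(read_sensors()))
```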

There are worthwhile fears and uncertainties surrounding AI and robotics in general. However, as some fiction on the matter indicates, there may be unique ‘psychological’ obstacles that prevent our living inside highly intelligent machines, perhaps ever.

Stanley Kubrick and The Simpsons crew have definitely made good use of metaphor in these futuristic tales; there are many analogies to be made between traditional horror notions like ‘hauntings’ and future intelligent systems gone badly wrong. Right now there are very few risks in the intelligent systems we have in our homes; not much damage can be done by a thermostat gone haywire or a disoriented robot vacuum cleaner.

But when the environment becomes more ‘sentient’ the concerns increase dramatically. Is there a comfort threshold for this kind of AI presence, or is it more an issue of design, like the uncanny valley? Could living within an omnipresent sentient personality be something we could get used to?

Creating intelligent environments will require the same strict development concerns given to creating AI in general, and then also more specialized considerations.

But worrying about the dangers shouldn’t get in the way of our dreaming about intelligent environments. The experience of being at home could become quite fantasy-like very soon: literally living ‘in your own world.’ In moving forward, we need to work to make sure the fantasy scenarios played out are positive and productive ones. We are not used to planning our dreams, but creating an all-pervasive and powerful sentient home environment is not something to be left to chance!

About the Author:

Nikki Olson is a writer and researcher working on an upcoming book about the Singularity with Dr. Kim Solez, as well as on relevant educational material for the Lifeboat Foundation. She has a background in philosophy and sociology and has been involved extensively in Singularity research for three years. You can reach Nikki via email at [email protected].

Filed Under: Op Ed Tagged With: Artificial Intelligence

Human Rights for Artificial Intelligence: What is the Threshold for Granting (Human) Rights?

February 4, 2011 by CMStewart

It is the year 2045. Strong artificial intelligence (AI) is integrated into our society. Humanoid robots with non-biological brain circuitries walk among people in every nation. These robots look like us, speak like us, and act like us. Should they have the same human rights as we do?

The function and rationale of human rights are similar to the function and cause of evolution: human rights help develop and maintain functional, self-improving societies, while evolution perpetuates the continual development of functional, reproducible organisms. Just as humans have evolved and will continue to evolve, human rights will continue to evolve as well. Assuming strong AI eventually develops strong sentience and emotion, the AI experience of sentience and emotion will likely differ significantly from the human experience.

But is there a definable limit to the human experience? What makes a human “human”? Do humans share a set of traits which distinguish them from other animals?

Consider the following so-called “human traits” and their exceptions:

Emotional pleasure / pain – People with dissociative disorders have a disruption or elimination of awareness, identity, memory, and / or perception. This can result in the inability to experience emotions.

Physical pleasure / pain – People with sensory system damage may have partial or full paralysis. Loss of bodily control can be accompanied by inability to feel physical pleasure, pain, and other tactile sensations.

Reason – People with specific types of brain damage or profound mental retardation may lack basic reasoning skills.

Kindness – Those with specific types of psychosis may be unable to feel empathy, and in turn, are unable to feel and show kindness.

Will to live – Many suicidal individuals lack the will to live. Some people suffering from severe depression and other serious mental disorders also lack this will.

So what is the human threshold for granting human rights? Consider the following candidates:

A person with a few non-organic machine body parts.

A human brain integrated into a non-organic machine body.

A person with non-biological circuitry integrated into an organic human brain.

A person with more non-biological computer circuitry mass than organic human brain mass.

The full pattern of human thought processes programmed into a non-biological computer.

A replication of human thought processes in an inorganic matrix.

Which of these should be granted full “human rights”? Should any of these candidates be granted human rights while conscious and cognitive non-human animals (cats, dogs, horses, cows, chimpanzees, et cetera) are not? When do consciousness and cognition manifest within a brain, or within a computer?

If consciousness and, in turn, cognition are irreducible properties, these properties must have thresholds, before which the brain or computer is void of these properties. For example, imagine the brain of a developing human fetus is non-conscious one day, then the next day has at least some level of rudimentary consciousness. This rudimentary consciousness, however, could not manifest without specific structures and systems already present within the brain. These specific structures and systems are precursors to further developed structures and systems, which would be capable of possessing consciousness. Therefore, the precursive structures which will possess full consciousness – and the precursors to consciousness itself – must not be irreducible. A system may be more than the sum of its parts, but it is not less than the sum of its parts.

If consciousness and cognition are not irreducible properties, then all matter must be panprotoexperientialistic at the least. Reducible qualities are preserved and enhanced through evolution. So working backward through evolution from humans to fish to microbes, organic compounds, and elements, all matter, at minimum, exists in a panprotoexperientialistic state.

Complex animals such as humans possess sentience and emotion through the evolution of reactions to internal stimuli. Sentience and emotion – like consciousness – are reproduction-enhancing tools which have increased in complexity over evolutionary time. An external stimulus triggers an internal stimulus (emotional pleasure or pain). This internal stimulus, coupled with survival-enhancing reactions to it, generally increases the likelihood of reproduction. Just as survival-appropriate reactions to physical pleasure and pain increase our likelihood of reproduction, so do survival-appropriate reactions to emotional pleasure and pain.

Obviously, emotions may be unnecessary for continued reproduction in a post-strong-AI world. But they will still likely be useful in preserving human rights. We don’t yet have the technology to prove whether a strong AI experiences sentience. Indeed, we don’t yet have strong AI. So how will we humans know whether a computer is strongly intelligent? We could ask it. But first we have to define our terms, and therein lies the dilemma. Paradoxically, strong AI may be best at defining these terms.

Definitions as applicable to this article:*

Human Intelligence – Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving within a Homo sapiens template.

Artificial Intelligence (AI) – Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving within a non-biological template.

Emotion – Psychophysiological interaction between internal and external influences, resulting in a mind-state of positivity or negativity.

Sentience – Internal recognition of internal direct response to an external stimulus.

Human Rights – Legal liberties and considerations automatically granted to functional, law-abiding humans in peacetime cultures: life, liberty, the pursuit of happiness.

Strong AI – Understanding and use of communication, reason, abstract thought, recursive learning, planning, and problem-solving; and the functional combination of discriminatory, rational, and goal-specific information-gathering and problem-solving above the general human level, within a non-biological template.

Panprotoexperientialism – Belief that all entities, inanimate as well as animate, possess precursors to consciousness.

* Definitions provided are not necessarily standard to intelligence- and technology-related fields.

About the Author:

CMStewart is a psychological horror novelist, a Singularity enthusiast, and a blogger. You can follow her on Twitter @CMStewartWrite or go check out her blog CMStewartWrite.

Filed Under: Op Ed, What if? Tagged With: Artificial Intelligence, Strong AI

Change of Plans: Kill All Humans

January 16, 2011 by Socrates

The singularity is often equated with a Terminator- or Matrix-style TechnoCalyps, based on the presumption that once artificial intelligences become sentient, the most likely action they will undertake is to exterminate us.

The following cartoon has been circulating for a while around the general singularity and transhumanist community, but because it is so funny, I thought I’d post it anyway. Even if you have seen it before, you may still find it funny again… I know I laugh every time I read it, and I’ve read it a dozen times by now 😉

Hat tip to Singularity 2045 for finding the cartoon first.


Related articles
  • Singularities Happen: Alan Watts explains the Singularity… (singularityblog.singularitysymposium.com)
  • The Best of Singularity Weblog 2010 (singularityblog.singularitysymposium.com)
  • Why I Am an Optimist (singularityblog.singularitysymposium.com)
  • A Transhumanist Manifesto (singularityblog.singularitysymposium.com)

Filed Under: Funny, What if? Tagged With: Artificial Intelligence, Technological Singularity

Kevin Warwick: You Have To Take Risks To Be Part Of The Future

September 26, 2010 by Socrates

https://media.blubrry.com/singularity/feeds.soundcloud.com/stream/185681195-singularity1on1-kevin-warwick.mp3


In today’s podcast episode I had the privilege of doing an hour-long interview with the first cyborg — Prof. Kevin Warwick. I enjoyed talking to Prof. Warwick immensely and got him to share his views on a wide variety of topics such as human and artificial intelligence, robotics, the technological singularity, God, the beginning of the universe, and so on.

Also, during the interview, Kevin Warwick threw a friendly challenge towards Ray Kurzweil by asking: “Why is it that Ray hasn’t experimented with implant technology yet?”

Enjoy!

Who is Kevin Warwick?

Kevin Warwick is a Professor of Cybernetics at the University of Reading, England, where he carries out research in artificial intelligence, control, robotics, and cyborgs.

As well as publishing over 500 research papers, Kevin’s experiments with implant technology led to him being featured as the cover story of the US magazine Wired. He has been awarded higher doctorates (DSc) both by Imperial College and the Czech Academy of Sciences, Prague, and has received Honorary Doctorates from Aston University, Bradford University, and Coventry University. He was presented with The Future of Health Technology Award at MIT, was made an Honorary Member of the Academy of Sciences, St. Petersburg, and received the IEE Senior Achievement Medal and the Mountbatten Medal. In 2000 Kevin presented the Royal Institution Christmas Lectures, entitled “The Rise of the Robots”.

Kevin’s most recent research involves the invention of an intelligent deep brain stimulator to counteract the effects of Parkinson’s disease tremors. Another project involves the use of biological neural networks to drive robots around. Kevin is, however, best known for his pioneering experiments involving the neurosurgical implantation of an electrode array into the median nerves of his left arm to link his nervous system directly to a computer. He was successful with the first extra-sensory (ultrasonic) input for a human and with the first purely electronic telegraphic communication experiment between the nervous systems of two humans: himself and his wife, Irena.

For more information, you can visit Kevin Warwick’s Official Site: http://www.kevinwarwick.com/

Filed Under: Podcasts Tagged With: Artificial Intelligence, cyborg, singularity podcast

Elementary, my dear, Watson: Who is Smarter than Human?

March 18, 2010 by Socrates

In the 1940s, Alan Turing famously predicted that one day computers would defeat humans in chess.

In 1997, IBM’s Deep Blue defeated the reigning world chess champion Garry Kasparov.

Currently, IBM is building a natural language processing computer named Watson, designed to compete in the game show Jeopardy and, ultimately, defeat any human opponent.

(You can test yourself against Watson by playing the NY Times Trivia Challenge Game here.)

As you can see in the videos Watson is still very much a work in progress. However, is there anyone who honestly doubts the inevitable? Do you need to be a Sherlock Holmes to see what’s coming? I think it’s elementary.

Big deal, someone will say.

I remember reading that the famous linguist Noam Chomsky once commented that Deep Blue defeating Kasparov at chess was about as interesting as a bulldozer winning the Olympics in weightlifting. Well, I wonder if, as a linguist, Chomsky finds Watson a little more interesting than Deep Blue.

I admit I am no world-famous linguist. But it seems to me that, in a way, Jeopardy is very different from chess. In fact, I will argue that Jeopardy is much, much harder (at least for computers) than chess.

For the record: I love chess. I think it takes a uniquely rare genius to become a world chess champion like Kasparov. But language is so much more complex and has, it seems to me, a near-infinite number of combinations, idioms, and subtle, ironic, and humorous meanings.

Chess, on the other hand, has a very large but still limited number of moves. (Claude Shannon famously estimated the game-tree complexity of chess at roughly 10^120 possible games: vast, yet finite and searchable in a way that open-ended language is not.) Therefore, if a computer beats the best of us in Jeopardy, I would dare to say: it is, indeed, a big deal.

And then, again: Is Jeopardy really that different from chess?

Maybe as much as chess is different from wool weaving. But the fact remains that a few hundred years after weaving machines became better than human weavers, the Mechanical Turk turned from a hoax into reality.

So, is anything really that different from chess? And from weaving? And calculating? And machining? And lifting? And welding? And…

Will there be anything that we can claim and hold as exclusively human, and therefore untouchable by machine intelligence?

I am not sure there is.

But even if there is (let’s call it love or emotional intelligence for example) once we are the smart, but really not that smart, formerly smartest species on the block, the question still remains unchanged:

What happens when Kasparov’s “uniquely rare genius” is mass-produced in every personal computer? (As it already is.) Or, since today we are putting chips in everything, what happens when eventually any smart machine is able to outdo any human at any one thing?

What then? Where do we go from there? Where do we find work? How do we make a living? How do we even survive as a species?

Will technology replace biology?

Video Updates:

IBM’s Watson supercomputer destroys all humans in Jeopardy

How Watson wins at Jeopardy, with Dave Gondek

Filed Under: News, Video Tagged With: Artificial Intelligence, IBM
