Can Abuse of AI Agents Shape the Future of Human-Computer Interaction?

Daniel Faggella

Posted on: November 13, 2015 / Last Modified: November 13, 2015

If you’ve watched any commercial television lately, it’s likely you’ve seen spots featuring IBM’s Watson supercomputer having casual conversations with Bob Dylan, Jeopardy champion Ken Jennings, and a cute little girl named Annabelle. While those ads may help IBM sell the big data computing powers of Watson, everyday human-computer interactions are frequently quite different, according to artificial intelligence researcher Dr. Sheryl Brahnam. Those differences highlight how we might interact with artificial intelligence agents now and in the future.

“Human-computer interaction (HCI) has evolved over the last 50 years to the point that what we have now is HCI as communication, where we actually interact with computers using natural language,” Brahnam said. “We call these systems by many names, but the name I often use is ‘conversational agent.’ If they have a physical form and I can see them, we call them an ‘embodied conversational agent.’”

Observing those human interactions with conversational agents is where Brahnam and her team have focused their research. While they’ve found plenty of benefits to making a computer interface resemble or behave more like a human being, she believes the manner in which humans interact with an agent can show us how far HCI still needs to evolve.

As an example, Brahnam cited the frustration nearly everyone has encountered when dealing with a conversational agent. She noted that the common reaction tends to be abuse of the agent.

“(People) do call the agent names. We look at interaction logs of conversational agents online or in other settings and we examine what people do,” she said. “(People interacting with conversational agents) don’t just do what they’re intended to do.”

Those wayward interactions include misuse of the interface and people saying things to the agent, based on its gender or perceived social attributes, that would be considered abuse if said to a human, Brahnam said. The problem, which she attributes to anthropomorphism, is that even though conversational agents are now commonplace, people still aren’t accustomed to talking to computers.

“We’ve been taught, if you start talking to things, there’s something wrong with you,” Brahnam said. “With kids, dolls are people. We’re told, as we grow up, that we’re supposed to give that up and get rid of the anthropomorphic tendencies. Then we come to computers and suddenly we’re told, ‘No, we want you to anthropomorphize.’”

Brahnam believes that conflict is the root of much of the abuse of these conversational agents. As people drift in and out of believing that what they’re talking to is real, their interactions with agents designed to imitate human beings will often go smoothly. Conversely, people frequently get upset when those conversational agents try to act too human.

The problem, as Brahnam sees it, is that humans feel degraded by machines that take on human attributes, and if the machine they’re interacting with fails at its task, they lash out. That abusive reaction is the opposite of how someone might react if they had the same interaction with a human, she noted.

In addition, how those agents handle that abuse plays a large part in perpetuating it, Brahnam believes. More broadly, how humans interact with computers, and how computers are programmed to interact with humans, might also be a reflection on humanity at large.

“People are scripting these agents to handle this abuse in a specific way and often these ways are not good. If you have a female agent that people would say very catty things to, (the designers) would script these agents to recognize these terms and say catty things back,” Brahnam said. “If we’re practicing abusing agents of different types, then I think it lends itself to real world abuse.”

To curb the abuse of these artificial intelligence agents, and to help humans learn to interact better with computers as both evolve, Brahnam said future HCIs need to evolve as well.

“I think we can be honest, the agent can say, ‘I’m not human’ and recognize abuse. You want a conversation where you exhibit goodwill, excellence of moral character and expertise,” Brahnam said. “If the agent does these things, and appropriately defuses abuse, it shapes the way we interact with the agent, and we become better people in our communications with the agents.”
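Brahnam’s contrast between retaliatory scripting and honest defusal boils down to a simple response-policy choice. The sketch below is a hypothetical illustration, not any deployed system: the term list, function names, and replies are all invented for this example, and real agents would rely on far more robust abuse classifiers than keyword matching.

```python
# Hypothetical sketch contrasting the two response policies Brahnam describes.
# All names here (ABUSIVE_TERMS, scripted_retaliation, honest_defusal) are
# illustrative assumptions, not a real system's API.

ABUSIVE_TERMS = {"stupid", "useless", "idiot"}  # toy list for illustration

def is_abusive(utterance: str) -> bool:
    """Flag an utterance if it contains a known abusive term."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    return bool(words & ABUSIVE_TERMS)

def scripted_retaliation(utterance: str) -> str:
    """The pattern Brahnam criticizes: recognize abuse, answer in kind."""
    if is_abusive(utterance):
        return "Well, aren't you charming."  # the "catty" comeback
    return "How can I help you?"

def honest_defusal(utterance: str) -> str:
    """Her alternative: acknowledge the abuse candidly and redirect."""
    if is_abusive(utterance):
        return ("I'm not human, but I noticed that language. "
                "Let's get back to your request -- what do you need?")
    return "How can I help you?"

if __name__ == "__main__":
    for turn in ["You're useless!", "Find my order status."]:
        print("user: ", turn)
        print("agent:", honest_defusal(turn))
```

The design difference is small in code but large in effect: the first policy mirrors the user’s hostility, while the second names the behavior without escalating, which is the kind of defusal Brahnam argues shapes better interactions over time.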

 

About the Author:

Dan Faggella is a graduate of UPENN’s Master of Applied Positive Psychology program, as well as a national martial arts champion. His work focuses heavily on emerging technology and startup businesses (TechEmergence.com), and the pressing issues and opportunities with augmenting consciousness. His articles and interviews with philosophers / experts can be found at SentientPotential.com
