Chatbots need to have just the right amount of empathy to earn a person's trust.

People are picky when it comes to chatbot personalities. A new study from the Media Effects Research Lab at Penn State has found that most people are wary of a chatbot that is too friendly, but they don’t connect with an apathetic chatbot either.

The study found that people are more likely to share information with a chatbot that shows empathetic and sympathetic tendencies, though the effect depends on how comfortable a person is with the idea of a machine showing feelings.

The researchers hope these findings help healthcare providers use conversational technologies that patients can trust. The goal is to have patients share basic information about their physical and mental health with a chatbot before seeing a doctor.

"Increasingly, as we have more and more chatbots and more AI-driven conversational agents in our midst," said study author S. Shyam Sundar. "And, as more people begin to turn to their smart speaker or chatbot on a health forum for advice, or for social and emotional support, the question becomes: To what extent should these chatbots and smart speakers express human-like emotions?"

"The majority of people in our sample did not really believe in machine emotion, so, in our interpretation, they took those expressions of empathy and sympathy as courtesies," said study author Bingjie Liu. "When we looked at people who have different beliefs, however, we found that people who think it's possible that machines could have emotions had negative reactions to these expressions of sympathy and empathy from the chatbots."

During the study, 88 volunteers completed an online task led by a chatbot. Each volunteer interacted with one of four chatbots, each programmed to respond in a different style: sympathetic, cognitive empathetic, affective empathetic or advice-only. The advice-only chatbot served as the control, while the sympathetic chatbot gave responses like, “I’m sorry to hear that.” The cognitive empathy chatbot said things like, “That issue can be quite disturbing,” and the affective empathy chatbot responded with statements like, “I understand your anxiety about your situation.” The study showed that the affective empathy and sympathy chatbots had the most success.

"We found that the cognitive empathy — where the response is somewhat detached and it's approaching the problem from a thoughtful, but almost antiseptic way — did not quite work," said Sundar." Of course, chatbots and robots do that quite well, but that is also the stereotype of machines. And it doesn't seem to be as effective. What seems to work best is affective empathy, or an expression of sympathy."

The team said that further research could determine how chatbots like these could work in situations beyond healthcare.

"We want to see if this is a consistent pattern in how humans react to machine emotions," said Liu.

The paper on this research was published in Cyberpsychology, Behavior, and Social Networking.