Believe me, I’m a chatbot

PICTURE: In this research study, test subjects chatted with a chatbot – but only half knew it was a non-human interlocutor.
Credit: Mozafari
More and more companies are using chatbots in customer service. Thanks to advances in artificial intelligence and natural language processing, chatbots are often indistinguishable from humans in conversation. But should businesses let their customers know that they are communicating with machines rather than people? Researchers at the University of Göttingen investigated this question. They found that consumers tend to react negatively when they learn that their conversation partner is, in fact, a chatbot. However, if the chatbot makes mistakes and cannot resolve a customer’s issue, the disclosure triggers a positive reaction. The results of the study were published in the Journal of Service Management.
Previous studies have shown that consumers react negatively when they learn they are communicating with chatbots – it seems consumers are inherently averse to the technology. In two experimental studies, the team at the University of Göttingen investigated whether this is still the case. Each study had 200 participants, each placed in a scenario in which they had to contact their energy supplier via online chat to update the address on their electricity contract following a move. In the chat, they encountered a chatbot – but only half of them were told they were chatting with a non-human contact. The first study examined the impact of this disclosure depending on how important the customer perceived the resolution of their service request to be. The second study examined the impact of the disclosure depending on whether or not the chatbot was able to resolve the customer’s request. To analyze the effects, the team used statistical methods such as analysis of covariance and mediation analysis.
The result: when service issues are perceived as particularly important or critical, customers react negatively upon learning that their conversation partner is a chatbot. This scenario weakens customer trust. Interestingly, however, the results also show that disclosing that the contact was a chatbot leads to positive customer reactions in cases where the chatbot cannot resolve the customer’s issue. “If their problem is not resolved, revealing that they were speaking with a chatbot makes it easier for the consumer to understand the root cause of the error,” says first author Nika Mozafari of the University of Göttingen. “A chatbot is more likely to be forgiven for making a mistake than a human.” In this scenario, customer loyalty may even improve.
###
Original publication: Mozafari, Nika, Weiger, Welf H. and Hammerschmidt, Maik (2021), “Trust me, I’m a bot – repercussions of chatbot disclosure in different service frontline settings”, Journal of Service Management. https: /
Contact:
Nika Mozafari, M.Sc.
University of Göttingen
Marketing and innovation management
Platz der Göttinger Sieben 3, 37073 Göttingen, Germany
Phone: +49 (0) 551 39-39-26546
Email: [email protected]