Qian Chen, Yeming Gong, Yaobin Lu and Xin (Robert) Luo
Abstract
Purpose
The purpose of this study is twofold: first, to identify the categories of artificial intelligence (AI) chatbot service failures in frontline service, and second, to examine how the intensity of the emotion exhibited by AI chatbots affects the effectiveness of their autonomous service recovery process.
Design/methodology/approach
We adopt a mixed-methods research approach, starting with a qualitative study to identify specific categories of AI chatbot service failures. In the second stage, we conduct experiments to investigate the impact of AI chatbot service failures on consumers' psychological perceptions, focusing on the moderating influence of the chatbot's emotional expression. This sequential approach enables us to incorporate both qualitative and quantitative evidence for a comprehensive research perspective.
Findings
The analysis of the interview data suggests that AI chatbot service failures fall into four main categories: failure to understand, failure to personalize, lack of competence, and lack of assurance. The results also reveal that AI chatbot service failures positively affect dehumanization and increase customers' perceptions of service failure severity. However, AI chatbots can autonomously remedy service failures through moderate emotional expression. We identified a golden zone of AI emotional expression in chatbot service failures: extremely weak or extremely strong emotional expression can be counterproductive.
Originality/value
This study contributes to the burgeoning AI literature by identifying four types of AI service failure, developing dehumanization theory in the context of smart services, and demonstrating the nonlinear effects of AI emotion. The findings also offer valuable insights for organizations that rely on AI chatbots in terms of designing chatbots that effectively address and remediate service failures.
Qian Chen, Changqin Yin and Yeming Gong
Abstract
Purpose
This study investigates how artificial intelligence (AI) chatbots persuade customers to accept their recommendations in the online shopping context.
Design/methodology/approach
Drawing on the elaboration likelihood model, this study establishes a research model to reveal the antecedents and internal mechanisms of customers' adoption of AI chatbot recommendations. The authors tested the model with survey data from 530 AI chatbot users.
Findings
The results show that in the AI chatbot recommendation adoption process, central and peripheral cues significantly affected a customer's intention to adopt an AI chatbot's recommendation, and a customer's cognitive and emotional trust in the AI chatbot mediated the relationships. Moreover, a customer's mind perception of the AI chatbot, including perceived agency and perceived experience, moderated the central and peripheral paths, respectively.
Originality/value
This study has theoretical and practical implications for AI chatbot designers and provides management insights for practitioners to enhance a customer's intention to adopt an AI chatbot's recommendation.
Research highlights
The study investigates customers' adoption of AI chatbots' recommendations.
The authors develop a research model based on the elaboration likelihood model (ELM) to reveal central and peripheral cues and paths.
The central and peripheral cues are generalized according to cooperative principle theory.
Central cues include recommendation reliability and accuracy, and peripheral cues include human-like empathy and recommendation choice.
Central and peripheral cues affect customers' adoption of recommendations through trust in AI.
Customers' mind perception positively moderates the central and peripheral paths.