Search results

1 – 10 of over 2000
Access Restricted
Book part
Publication date: 4 April 2024

Caroline Fell Kurban and Muhammed Şahin

Details

The Impact of ChatGPT on Higher Education
Type: Book
ISBN: 978-1-83797-648-5

Access Restricted
Article
Publication date: 4 March 2025

Irene Roozen, Mariet Raedts, Christel Claeys and Giulia Di Gennaro


Abstract

Purpose

This study explores whether the gender of a financial chatbot influences how competent potential users perceive the chatbot to be and whether they would choose to use the chatbot themselves.

Design/methodology/approach

The study had a between-subjects design: participants (N = 420, ages between 18 and 75) viewed and evaluated either a male or a female financial chatbot. Data were collected via an online questionnaire.
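As a sketch of method, a two-condition between-subjects comparison like the one described can be analyzed with Welch's two-sample t statistic. The ratings below are hypothetical, and the study's actual analysis may well have differed:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    se = (va / na + vb / nb) ** 0.5    # standard error of the mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical 7-point competence ratings for the two chatbot conditions
male_chatbot = [6, 5, 6, 7, 5, 6, 6, 4, 7, 5]
female_chatbot = [5, 4, 5, 6, 4, 5, 5, 3, 6, 4]

print(round(welch_t(male_chatbot, female_chatbot), 2))  # → 2.36
```

A positive t here would indicate higher mean ratings in the male-chatbot condition; degrees of freedom and the p-value would come from the Welch–Satterthwaite approximation, omitted for brevity.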

Findings

Male chatbots led to a significantly higher willingness to consult the service and were perceived as more competent. Furthermore, AI literacy and sensitivity to gender perspectives significantly influenced these findings.

Practical implications

The findings offer actionable insights for financial institutions to optimise chatbot interactions by considering user preferences for male versus female chatbots, potentially guiding the development of more effective AI-driven financial services. Companies can use these insights to tailor chatbot gender strategies to meet user expectations better and enhance service satisfaction.

Originality/value

This study provides novel empirical evidence on the impact of chatbot gender in male-dominated financial services, revealing how AI literacy and gender sensitivity influence consumer behaviour and perceptions. Additionally, it contributes to the theoretical understanding of AI gendering and its societal implications.

Details

International Journal of Bank Marketing, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-2323


Access Restricted
Article
Publication date: 4 March 2025

Ahmed Mostafa Abdelwaged Elayat and Reem Mohamed Elalfy


Abstract

Purpose

This study aims to provide empirical evidence to verify the dimensional structure of artificial intelligence (AI) Chatbot quality and examine the impact of these dimensions on consumer satisfaction and brand advocacy among Gen Z in the fast food industry in Egypt.

Design/methodology/approach

The empirical data was obtained with an electronic self-administered survey instrument from 397 young consumers who had prior experience using AI Chatbots across multiple fast food brands in Egypt. Structural equation modeling was used to analyze the formulated hypotheses.

Findings

The results showed that AI Chatbot quality dimensions, specifically information authenticity and system compliance, significantly enhance young consumers’ satisfaction. In addition, the information authenticity dimension significantly influenced young consumers’ advocacy. In contrast, the relationship between satisfaction and advocacy was insignificant, and the mediating role of consumer satisfaction was not established.

Practical implications

Given that Gen Z is more technology savvy and computer literate, marketers and practitioners of fast food brands should invest in AI tools to respond to young consumers’ expectations and improve their perception of their services.

Originality/value

This study uses stimulus-organism-response theory to understand the mediating effect of young consumers’ satisfaction in the relationship between AI Chatbot quality and consumer brand advocacy within the fast food industry. Also, it introduced two novel main constructs of AI Chatbot quality, namely, information authenticity and system compliance.

Details

Young Consumers, vol. 26 no. 2
Type: Research Article
ISSN: 1747-3616


Open Access
Article
Publication date: 29 January 2025

Marialuisa Saviano, Asha Thomas, Marzia Del Prete, Daniele Verderese and Pasquale Sasso


Abstract

Purpose

This paper aims to contribute to the discussion on integrating humans and technology in customer service within the framework of Society 5.0, which emphasizes the growing role of artificial intelligence (AI). It examines how effectively new generative AI-based chatbots can handle customer emotions and explores their impact on determining the point at which a customer–machine interaction should be transferred to a human agent to prevent customer disengagement, referred to as the Switch Point (SP).

Design/methodology/approach

To evaluate the capabilities of new generative AI-based chatbots in managing emotions, ChatGPT-3.5, Gemini and Copilot are tested using the Trait Emotional Intelligence Questionnaire Short Form (TEIQue-SF). A reference framework is developed to illustrate the shift in the SP.
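The TEIQue-SF administered to the chatbots is a 30-item, 7-point Likert instrument whose global trait-EI score is the mean of the items after reverse-scoring the negatively keyed ones. A minimal scoring sketch — the reverse-keyed item set below is illustrative, not the official scoring key:

```python
def score_teique_sf(responses, reverse_keyed):
    """Global trait-EI score: mean of 30 items rated 1-7,
    with negatively keyed items reversed (1 <-> 7, 2 <-> 6, ...)."""
    if len(responses) != 30:
        raise ValueError("TEIQue-SF has 30 items")
    adjusted = [
        8 - r if i + 1 in reverse_keyed else r  # items numbered from 1
        for i, r in enumerate(responses)
    ]
    return sum(adjusted) / len(adjusted)

# Illustrative reverse-keyed set and responses (NOT the published key)
REVERSE = {2, 4, 5, 7, 8, 10, 12, 13, 14, 16, 18, 22, 25, 26, 28}
answers = [4] * 30  # all-neutral responses
print(score_teique_sf(answers, REVERSE))  # → 4.0
```

Each chatbot's answers to the 30 items would be scored this way, letting its global score be compared against published human norms.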

Findings

Using the four-intelligence framework (mechanical, analytical, intuitive and empathetic), this study demonstrates that, despite advancements in AI’s ability to address emotions in customer service, even the most advanced chatbots—such as ChatGPT, Gemini and Copilot—still fall short of replicating the empathetic capabilities of human intelligence (HI). The concept of artificial emotional awareness (AEA) is introduced to characterize the intuitive intelligence of new generative AI chatbots in understanding customer emotions and triggering the SP. A complementary rather than replacement perspective of HI and AI is proposed, highlighting the impact of generative AI on the SP.

Research limitations/implications

This study is exploratory in nature and requires further theoretical development and empirical validation.

Practical implications

The study is only exploratory with respect to the possible real impact of introducing new generative AI-based chatbots on collaborative approaches to integrating humans and technology in Society 5.0.

Originality/value

Customer Relationship Management managers can use the proposed framework as a guide to adopt a dynamic approach to HI–AI collaboration in AI-driven customer service.

Details

European Journal of Innovation Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1460-1060


Access Restricted
Article
Publication date: 26 September 2024

Lidia Plotkina and Subramaniam Sri Ramalu


Abstract

Purpose

Recent advances in coaching technology have enhanced its accessibility and affordability for a broader population. Amid economic growth and the demand for extensive coaching interventions for executives, artificial intelligence (AI)-based coaching is one possible solution. While the evidence of AI coaching effectiveness is expanding, a comprehensive understanding of the field remains elusive. In particular, the true potential of AI coaching tools, ethical considerations and their current functionality are subjects of ongoing investigation.

Design/methodology/approach

A systematic literature review was conducted to extract experimental results and concepts on utilizing AI in coaching practice. The paper presents the primary capabilities of state-of-the-art coaching tools and compares them with human coaching.

Findings

The review shows that AI coaching chatbots and tools are effective for narrow tasks such as goal attainment, support for various psychological conditions and induction of reflection processes. In contrast, deep long-term coaching, working alliance and an individualized approach remain beyond current AI coaching competence. In their current state, AI coaching tools serve as complementary aids that cannot replace human coaching. However, they have the potential to enhance the coach’s performance and serve as valuable assistants in intricate coaching interventions.

Originality/value

The review offered insights into the current capabilities of AI coaching chatbots, aligned with the International Coaching Federation’s set of competencies. It outlined the drawbacks and benefits of chatbots and their areas of application in coaching.

Details

Journal of Management Development, vol. 43 no. 6
Type: Research Article
ISSN: 0262-1711


Access Restricted
Article
Publication date: 15 September 2023

Curtis C. Cain, Carlos D. Buskey and Gloria J. Washington


Abstract

Purpose

The purpose of this paper is to demonstrate the advancements in artificial intelligence (AI) and conversational agents, emphasizing their potential benefits while also highlighting the need for vigilant monitoring to prevent unethical applications.

Design/methodology/approach

As AI becomes more prevalent in academia and research, it is crucial to explore ways to ensure ethical usage of the technology and to identify potentially unethical usage. This manuscript uses a popular AI chatbot to write the introduction and parts of the body of a manuscript discussing conversational agents, the ethical usage of chatbots and ethical concerns for academic researchers.

Findings

The authors reveal which sections were written entirely by the AI using a conversational agent. This serves as a cautionary tale highlighting the importance of ethical considerations for researchers and students when using AI and how educators must be prepared for the increasing prevalence of AI in the academy and industry. Measures to mitigate potential unethical use of this evolving technology are also discussed in the manuscript.

Originality/value

As conversational agents and chatbots increase in society, it is crucial to understand how they will impact the community and how we can live with technology instead of fighting against it.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 4
Type: Research Article
ISSN: 1477-996X


Access Restricted
Article
Publication date: 5 June 2020

Alyson Gamble


Abstract

Purpose

For decades, artificial intelligence (AI) has been utilized within the field of mental healthcare. This paper aims to examine AI chatbots, specifically as offered through mobile applications for mental healthcare (MHapps), with attention to the social implications of these technologies. For example, AI chatbots in MHapps are programmed with therapeutic techniques to assist people with anxiety and depression, but the promise of this technology is tempered by concerns about the apps' efficacy, privacy, safety and security.

Design/methodology/approach

Utilizing a social informatics perspective, a literature review covering MHapps, with a focus on AI chatbots, was conducted from January to April 2019. A borrowed theory approach pairing information science and social work was applied to analyze the literature.

Findings

Rising needs for mental healthcare, combined with expanding technological developments, indicate continued growth of MHapps and chatbots. While an AI chatbot may provide a person with a place to access tools and a forum to discuss issues, as well as a way to track moods and increase mental health literacy, AI is not a replacement for a therapist or other mental health clinician. Ultimately, if AI chatbots and other MHapps are to have a positive impact, they must be regulated, and society must avoid techno-fundamentalism in relation to AI for mental health.

Originality/value

This study adds to a small but growing body of information science research into the role of AI in the support of mental health.

Details

Aslib Journal of Information Management, vol. 72 no. 4
Type: Research Article
ISSN: 2050-3806


Access Restricted
Article
Publication date: 7 February 2023

Rajasshrie Pillai, Yamini Ghanghorkar, Brijesh Sivathanu, Raed Algharabat and Nripendra P. Rana


Abstract

Purpose

AI-based chatbots are revamping employee communication in organizations. This paper examines the adoption of AI-based employee experience chatbots by employees.

Design/methodology/approach

The proposed model is developed using behavioral reasoning theory and empirically validated by surveying 1,130 employees; the data were analyzed with PLS-SEM.

Findings

This research presents the “reasons for” and “reasons against” the acceptance of AI-based employee experience chatbots. The “reasons for” are personalization, interactivity, perceived intelligence and perceived anthropomorphism, and the “reasons against” are perceived risk, language barrier and technological anxiety. It is found that “reasons for” have a positive association with attitude and adoption intention, while “reasons against” have a negative association. Employees’ openness-to-change values are positively associated with “reasons for” but do not affect attitude or “reasons against”.

Originality/value

This is the first study exploring employees' attitude and adoption intention toward AI-based EEX chatbots using behavioral reasoning theory.

Details

Information Technology & People, vol. 37 no. 1
Type: Research Article
ISSN: 0959-3845


Access Restricted
Article
Publication date: 17 January 2023

Qian Chen, Yaobin Lu, Yeming Gong and Jie Xiong


Abstract

Purpose

This study investigates whether and how the service quality of artificial intelligence (AI) chatbots affects customer loyalty to an organization.

Design/methodology/approach

Based on the sequential chain model of service quality loyalty, this study first classifies AI chatbot service quality into nine attributes and then develops a research model to explore the internal mechanism of how AI chatbot service quality affects customer loyalty. The analysis of survey data from 459 respondents provided insights into the interrelationships among AI chatbot service quality attributes, perceived value, cognitive and affective trust, satisfaction and customer loyalty.

Findings

The results show that AI chatbot service quality positively affects customer loyalty through perceived value, cognitive trust, affective trust and satisfaction.

Originality/value

This study captures the attributes of the service quality of AI chatbots and reveals the significant influence of service quality on customer loyalty. This study develops research on service quality in the information system (IS) field and extends the sequential chain model of quality loyalty to the context of AI services. The findings not only help an organization find a way to improve customers' perceived value, trust, satisfaction and loyalty but also provide guidance in the development, adoption, and post-adoption stages of AI chatbots.

Details

Internet Research, vol. 33 no. 6
Type: Research Article
ISSN: 1066-2243


Access Restricted
Article
Publication date: 15 August 2024

Qian Chen, Yeming Gong, Yaobin Lu and Xin (Robert) Luo


Abstract

Purpose

The purpose of this study is twofold: first, to identify the categories of artificial intelligence (AI) chatbot service failures in frontline service and, second, to examine the effect of the intensity of AI emotion exhibited on the effectiveness of the chatbots’ autonomous service recovery process.

Design/methodology/approach

We adopt a mixed-methods research approach, starting with a qualitative study to identify specific categories of AI chatbot service failures. In the second stage, we conduct experiments to investigate the impact of AI chatbot service failures on consumers’ psychological perceptions, with a focus on the moderating influence of the chatbot’s emotional expression. This sequential approach enabled us to incorporate both qualitative and quantitative aspects for a comprehensive research perspective.

Findings

Analysis of the interview data suggests that AI chatbot service failures fall into four main categories: failure to understand, failure to personalize, lack of competence, and lack of assurance. The results also reveal that AI chatbot service failures positively affect dehumanization and increase customers’ perceptions of service failure severity. However, AI chatbots can autonomously remedy service failures through moderate AI emotion. An interesting golden zone of AI’s emotional expression in chatbot service failures was discovered, indicating that extremely weak or strong intensity of AI’s emotional expression can be counterproductive.
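The “golden zone” describes an inverted-U relationship: recovery effectiveness peaks at moderate emotional intensity. One way to locate such a peak is to fit a quadratic through observed (intensity, effectiveness) points and take its vertex; the data points below are hypothetical, purely to illustrate the shape:

```python
def quadratic_peak(p1, p2, p3):
    """Fit y = a*x^2 + b*x + c exactly through three points and return
    the vertex x = -b / (2a), i.e. the intensity where the effect peaks."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Newton's divided differences give the quadratic coefficients
    d1 = (y2 - y1) / (x2 - x1)
    d2 = (y3 - y2) / (x3 - x2)
    a = (d2 - d1) / (x3 - x1)   # a < 0 for an inverted U
    b = d1 - a * (x1 + x2)
    return -b / (2 * a)

# Hypothetical intensity (0 = none, 1 = maximal) vs. recovery effectiveness
print(quadratic_peak((0, 2.0), (0.5, 4.0), (1, 2.0)))  # → 0.5
```

With real experimental data one would instead fit the quadratic by least squares across all conditions and test the significance of the negative quadratic term.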

Originality/value

This study contributes to the burgeoning AI literature by identifying four types of AI service failure, developing dehumanization theory in the context of smart services, and demonstrating the nonlinear effects of AI emotion. The findings also offer valuable insights for organizations that rely on AI chatbots in terms of designing chatbots that effectively address and remediate service failures.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243

