Search results

1 – 2 of 2
Article
Publication date: 26 February 2025

Xiaoxiao Meng and Jiaxin Liu


Abstract

Purpose

This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, are willing to share personal information while using AI chatbots. Departing from previous research, which approached AI chatbots primarily from a non-anthropomorphic perspective, this paper contends that AI chatbots are taking on an emotional role for humans. The study therefore examines the topic from both rational and non-rational perspectives, providing a more comprehensive understanding of user behavior in digital environments.

Design/methodology/approach

Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users’ willingness to disclose information.

Findings

Findings show that the cognitive, emotional and behavioral dimensions all positively influence the perceived benefits of using ChatGPT, which in turn enhance privacy disclosure. Although all three dimensions are negatively related to perceived risk, only the emotional and behavioral dimensions affect it significantly, and perceived risk in turn negatively influences privacy disclosure. Notably, the cognitive dimension’s lack of a significant mediating effect suggests that users’ awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions, with users more likely to disclose personal information on the basis of positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox.
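
To make the reported path structure concrete, the sketch below estimates a parallel-mediation privacy-calculus model with ordinary least squares in Python. It is a minimal illustration, not the authors’ analysis: the DataFrame `df` and all column names (cognitive, emotional, behavioral, benefits, risks, disclosure) are hypothetical stand-ins for the survey scales.

```python
# Minimal sketch of the parallel-mediation privacy-calculus model.
# `df` and every column name are hypothetical; not the authors' code.
import pandas as pd
import statsmodels.formula.api as smf

def privacy_calculus_paths(df: pd.DataFrame) -> dict:
    antecedents = "cognitive + emotional + behavioral"
    # a-paths: parasocial dimensions -> each mediator
    m_benefit = smf.ols(f"benefits ~ {antecedents}", data=df).fit()
    m_risk = smf.ols(f"risks ~ {antecedents}", data=df).fit()
    # b-paths and direct effects: mediators -> privacy disclosure
    m_outcome = smf.ols(f"disclosure ~ benefits + risks + {antecedents}",
                        data=df).fit()
    # Indirect effect of each dimension through each mediator is a * b
    return {
        dim: {
            "via_benefits": m_benefit.params[dim] * m_outcome.params["benefits"],
            "via_risks": m_risk.params[dim] * m_outcome.params["risks"],
        }
        for dim in ("cognitive", "emotional", "behavioral")
    }
```

Under this reading, a near-zero `via_risks` product for the cognitive dimension alongside sizable emotion-driven paths would reproduce the paradox pattern described above.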

Research limitations/implications

This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including elderly users, to understand how different age groups engage with AI chatbots. Additionally, although the study was conducted within the Chinese context, the findings have broader applicability, highlighting the potential for cross-cultural comparisons. Differences in user attitudes toward AI chatbots may arise due to cultural variations, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems compared to Western cultures. This cultural distinction—rooted in Eastern philosophies such as animism in Shintoism and Buddhism—suggests that East Asians are more likely to anthropomorphize technology, unlike their Western counterparts (Yam et al., 2023; Folk et al., 2023).

Practical implications

The findings of this study offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies.

First, regarding technology design, the study suggests that AI chatbot developers should not focus solely on functional aspects but also consider emotional and social dimensions in user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021).

Second, there is a pressing need for comprehensive user education programs. As users tend to prioritize perceived benefits over risks, it is essential to raise awareness about privacy risks while also emphasizing the positive outcomes of responsible information sharing. This can help foster a more informed and balanced approach to user engagement (Vimalkumar et al., 2021).

Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies like China, users may prioritize emotional satisfaction and societal harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. Furthermore, AI systems should communicate privacy policies clearly to users, addressing potential vulnerabilities and ensuring that users are aware of the extent to which their data may be exposed (Wu et al., 2024).

Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussions on privacy norms and trust in AI systems. This research prompts a reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks to ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023).

Originality/value

The study’s findings draw deeper theoretical insights into the role of emotions in generative artificial intelligence (gAI) chatbot engagement, enriching the emotional research orientation and framework concerning chatbots. They also contribute to the literature on human–computer interaction and technology acceptance within the framework of privacy calculus theory, offering practical insights for developers, policymakers and educators navigating the evolving landscape of intelligent technologies.

Details

Online Information Review, vol. ahead-of-print, no. ahead-of-print
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 23 September 2024

Hsiu-Yu Teng and Chien-Yu Chen


Abstract

Purpose

Recognition of the complexity of job embeddedness in the work environment has grown, highlighting the need for a deeper understanding of the factors that contribute to this phenomenon. This study analyzed how and when job crafting and leisure crafting are linked to job embeddedness by investigating employee resilience as a mediator and employee adaptivity as a moderator.

Design/methodology/approach

Data were gathered from 568 Taiwanese hotel employees. The PROCESS macro was used to test all hypotheses.

Findings

Both job crafting and leisure crafting increased job embeddedness. Employee resilience mediated the impacts of job and leisure crafting on job embeddedness. The positive relationship between employee resilience and job embeddedness was stronger when employee adaptivity was high. Employee adaptivity moderated the indirect impacts of job and leisure crafting on job embeddedness through employee resilience.
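
For readers unfamiliar with the technique, the sketch below shows the moderated-mediation logic in Python; the paper itself reports results from Hayes’ PROCESS macro, so this is only an illustrative reconstruction. The conditional indirect effect is a × (b1 + b3·w), where a is the crafting-to-resilience path, b1 and b3 come from an outcome model with a resilience × adaptivity interaction, and w is a chosen level of adaptivity. The DataFrame `df` and all column names are hypothetical.

```python
# Minimal sketch of a PROCESS-style moderated-mediation test (moderator on
# the mediator-to-outcome path). `df` and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def conditional_indirect_effect(df: pd.DataFrame, w: float) -> float:
    # a-path: job crafting -> employee resilience
    med = smf.ols("resilience ~ job_crafting", data=df).fit()
    a = med.params["job_crafting"]
    # b-path, moderated: resilience -> embeddedness, conditional on adaptivity
    out = smf.ols("embeddedness ~ job_crafting + resilience * adaptivity",
                  data=df).fit()
    b1, b3 = out.params["resilience"], out.params["resilience:adaptivity"]
    return a * (b1 + b3 * w)  # indirect effect when adaptivity equals w

def bootstrap_ci(df, w, n_boot=5000, alpha=0.05, seed=42):
    # Percentile bootstrap of the conditional indirect effect
    rng = np.random.default_rng(seed)
    draws = [
        conditional_indirect_effect(
            df.sample(n=len(df), replace=True, random_state=rng), w)
        for _ in range(n_boot)
    ]
    return np.percentile(draws, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

Estimating the interval at low and high adaptivity (for example, one standard deviation below and above its mean) and finding a stronger, clearly nonzero indirect effect at high adaptivity would mirror the moderation pattern reported above.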

Practical implications

Hotel managers should foster a workplace culture that encourages employees to engage in job crafting. Additionally, managers can offer employee assistance programs to proactively encourage workers to participate in leisure crafting. Providing training and wellness programs to strengthen employee resilience, along with allocating resources and designing learning programs to enhance employee adaptivity, can further promote job embeddedness.

Originality/value

This research contributes to the literature by constructing a moderated mediation model that explains how and when job and leisure crafting affect job embeddedness.

Details

Journal of Hospitality and Tourism Insights, vol. 8, no. 3
Type: Research Article
ISSN: 2514-9792
