Xi Chen, Maomao Wu, Chen Cheng and Jian Mou
Abstract
Purpose
With the widespread collection and utilization of user data, privacy security has become a crucial factor influencing online engagement. In response to the growing concern about privacy security issues on social media, this research aims to examine the key causes of social media users' privacy calculus and how the balance between perceived privacy risks and benefits affects users' privacy concerns and their subsequent willingness to disclose personal information.
Design/methodology/approach
The characteristics of the privacy calculus were extracted through semi-structured interviews. A research model grounded in privacy calculus theory was then constructed, and latent variable modeling was employed to test the proposed hypotheses.
Findings
Information sensitivity, experiences of privacy violations, social influence and the effectiveness of privacy policies influence users' privacy calculus. Privacy risk positively influences privacy concerns. Personal information disclosure willingness is positively influenced by privacy benefits and negatively influenced by privacy concerns, with both paths moderated by social media identification.
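Moderated paths like these are conventionally examined with an interaction term. The sketch below is purely illustrative: it uses simulated data and hypothetical variable names, not the study's measures or its latent variable modeling.

```python
# Minimal sketch of a moderation test via an interaction term, e.g.
# "social media identification moderates the effect of privacy concerns
# on disclosure willingness". All data here are simulated; variable
# names are illustrative assumptions, not the study's constructs.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
concern = rng.normal(size=n)         # privacy concerns (standardized)
identification = rng.normal(size=n)  # social media identification

# Simulate a weaker negative effect of concerns when identification is high
disclosure = (-0.5 * concern + 0.3 * identification
              + 0.4 * concern * identification
              + rng.normal(scale=0.5, size=n))

# Design matrix: intercept, main effects, and the interaction term
X = np.column_stack([np.ones(n), concern, identification,
                     concern * identification])
beta, *_ = np.linalg.lstsq(X, disclosure, rcond=None)
print(beta.round(2))  # a sizable interaction coefficient signals moderation
```

A significant interaction coefficient (here recovered near its simulated value of 0.4) is what indicates that the moderator changes the strength of the focal path.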
Originality/value
This study explores the key antecedents of users' privacy calculus and how these factors influence privacy concerns and subsequent willingness to disclose information on social media. It offers new insights into the privacy paradox observed within social media by validating the moderating role of social media identification on users' information disclosure willingness.
Hao Xin, FengTao Liu and ZiXiang Wei
Abstract
Purpose
This paper proposes that the trade-off between medical benefits and privacy concerns among mHealth users extends to their disclosure intentions, manifested as individuals simultaneously intending to disclose in the near future and to reduce disclosure in the distant future. This paper therefore aims to explore the privacy decision-making process of mHealth users from a dual trade-off perspective.
Design/methodology/approach
This paper constructs the model using the privacy calculus theory and the antecedent-privacy concern-outcome framework. It employs the construal level theory to evaluate the impact of privacy calculus on two types of disclosure intentions. The study empirically tests the model using a data sample of 386 mHealth users.
Findings
The results indicate that perceived benefits positively affect both near-future and distant-future disclosure intentions, whereas perceived risks negatively affect only distant-future disclosure intention. Perceived benefits and both disclosure intentions, in turn, positively affect disclosure behavior. The findings also reveal that privacy management perception positively affects perceived benefits; personalized services and privacy invasion experience positively affect both perceived benefits and perceived risks, while trust negatively affects perceived risks.
Originality/value
This paper treats the trade-off in the privacy calculus phase as a first trade-off, which then extends to disclosure intention; these two successive trade-offs between privacy concerns and medical benefits constitute the dual trade-off perspective. This paper is the first to use this perspective to explore the privacy decision-making process of mHealth users. It employs construal level theory to evaluate the impact of privacy calculus on both types of disclosure intention in mHealth, extending the theory's applicability. Moreover, it introduces antecedents of privacy calculus from the platform, societal and individual perspectives, enhancing the study's realism. The findings provide a basis for mHealth platforms to better cater to users' privacy needs.
Yafei Feng, Yongqiang Sun, Nan Wang and Xiao-Liang Shen
Abstract
Purpose
Sharing co-owned information on social network platforms has become a common and inevitable phenomenon. However, due to the uniqueness of co-owned information, the privacy calculus theory based on a single information owner cannot explain co-owned information disclosure. Therefore, this study tries to investigate the underlying mechanism of users’ co-owned information disclosure from a collective privacy calculus perspective.
Design/methodology/approach
Through a survey of 740 participants, covariance-based structural equation modeling (CB-SEM) was used to verify the proposed model and hypotheses.
Findings
The results show that personal benefit, others' benefit and relationship benefit promote users' co-owned information disclosure by positively affecting personal distributive fairness and others' distributive fairness perception. Meanwhile, personal privacy risk and others' privacy risk prevent users' co-owned information disclosure by negatively affecting personal distributive fairness and others' distributive fairness perception. In addition, others' information ownership perception enhances the positive effect of others' distributive fairness perception on co-owned information disclosure intention and strengthens the mediating role of others' distributive fairness.
Research limitations/implications
The findings of this study enrich the research scope of information disclosure and privacy calculus theory and help social network platform developers design collective privacy protection functions.
Originality/value
This study develops a collective privacy calculus model to understand users’ co-owned information disclosure on social network platforms, confirming the mediating role of collective distributive fairness and the moderating role of others’ information ownership perception in the process of collective privacy calculus.
Abstract
Purpose
This study aims to explain the privacy paradox, wherein individuals, despite privacy concerns, are willing to share personal information while using AI chatbots. Departing from previous research that primarily viewed AI chatbots from a non-anthropomorphic approach, this paper contends that AI chatbots are taking on an emotional component for humans. This study thus explores this topic by considering both rational and non-rational perspectives, thereby providing a more comprehensive understanding of user behavior in digital environments.
Design/methodology/approach
Employing a questionnaire survey (N = 480), this research focuses on young users who regularly engage with AI chatbots. Drawing upon the parasocial interaction theory and privacy calculus theory, the study elucidates the mechanisms governing users’ willingness to disclose information.
Findings
Findings show that the cognitive, emotional and behavioral dimensions all positively influence the perceived benefits of using ChatGPT, which in turn enhance privacy disclosure. Although all three dimensions were expected to reduce perceived risk, only the emotional and behavioral dimensions have a significant negative effect; perceived risk in turn negatively influences privacy disclosure. Notably, the cognitive dimension's lack of a significant mediating effect suggests that users' awareness of privacy risks does not deter disclosure. Instead, emotional factors drive privacy decisions, with users more likely to disclose personal information based on positive experiences and engagement with ChatGPT. This confirms the existence of the privacy paradox.
Research limitations/implications
This study acknowledges several limitations. While the sample was adequately stratified, the focus was primarily on young users in China. Future research should explore broader demographic groups, including elderly users, to understand how different age groups engage with AI chatbots. Additionally, although the study was conducted within the Chinese context, the findings have broader applicability, highlighting the potential for cross-cultural comparisons. Differences in user attitudes toward AI chatbots may arise due to cultural variations, with East Asian cultures typically exhibiting a more positive attitude toward social AI systems compared to Western cultures. This cultural distinction—rooted in Eastern philosophies such as animism in Shintoism and Buddhism—suggests that East Asians are more likely to anthropomorphize technology, unlike their Western counterparts (Yam et al., 2023; Folk et al., 2023).
Practical implications
The findings of this study offer valuable insights for developers, policymakers and educators navigating the rapidly evolving landscape of intelligent technologies. First, regarding technology design, the study suggests that AI chatbot developers should not focus solely on functional aspects but also consider emotional and social dimensions in user interactions. By enhancing emotional connection and ensuring transparent privacy communication, developers can significantly improve user experiences (Meng and Dai, 2021). Second, there is a pressing need for comprehensive user education programs. As users tend to prioritize perceived benefits over risks, it is essential to raise awareness about privacy risks while also emphasizing the positive outcomes of responsible information sharing. This can help foster a more informed and balanced approach to user engagement (Vimalkumar et al., 2021). Third, cultural and ethical considerations must be incorporated into AI chatbot design. In collectivist societies like China, users may prioritize emotional satisfaction and societal harmony over privacy concerns (Trepte, 2017; Johnston, 2009). Developers and policymakers should account for these cultural factors when designing AI systems. Furthermore, AI systems should communicate privacy policies clearly to users, addressing potential vulnerabilities and ensuring that users are aware of the extent to which their data may be exposed (Wu et al., 2024). Lastly, as AI chatbots become deeply integrated into daily life, there is a growing need for societal discussions on privacy norms and trust in AI systems. This research prompts a reflection on the evolving relationship between technology and personal privacy, especially in societies where trust is shaped by cultural and emotional factors. Developing frameworks to ensure responsible AI practices while fostering user trust is crucial for the long-term societal integration of AI technologies (Nah et al., 2023).
Originality/value
The study’s findings draw deeper theoretical insights into the role of emotions in generative artificial intelligence (gAI) chatbot engagement, enriching the emotional research orientation and framework concerning chatbots. They also contribute to the literature on human–computer interaction and technology acceptance within the framework of privacy calculus theory, providing practical insights for developers, policymakers and educators navigating the evolving landscape of intelligent technologies.
Yafei Feng, Yan Zhang and Lifu Li
Abstract
Purpose
The privacy calculus based on a single stakeholder failed to explain users' co-owned information disclosure owing to the uniqueness of co-owned information. Drawing on collective privacy calculus theory and impression management theory, this study attempts to explore the co-owned information disclosure of social network platform users from a collective perspective rather than an individual perspective.
Design/methodology/approach
The proposed model and hypotheses were tested using survey data from 740 social network platform users.
Findings
This study finds that self-presentation and others' presentation directly and positively affect users' co-owned information disclosure. Self-presentation, others' presentation and relationship presentation also indirectly and positively affect co-owned information disclosure via relationship support, while personal privacy concern, others' privacy concern and relationship privacy concern indirectly and negatively affect it via relationship risk.
Originality/value
The findings develop the theory of collective privacy calculus and impression management, which offer insights into the design of the collective privacy protection function of social network platform service providers.
Yongqiang Sun, Fei Zhang and Yafei Feng
Abstract
Purpose
This paper aimed to explain why individuals still tend to disclose private information even when privacy risks are high, and whether individuals follow the same logic when disclosing and withholding information.
Design/methodology/approach
This study develops a configurational decision tree model (CDTM) for precisely understanding individuals' decision-making process of privacy disclosure. A survey of location-based social network service (LBSNS) users was conducted to collect data, and fuzzy-set qualitative comparative analysis (fsQCA) was adopted to validate the hypotheses.
Findings
This paper identified two configurations for high and low disclosure, respectively, and found that benefits and risks functioned not independently but interdependently, with perceived justice playing a crucial role when both were high. Furthermore, the mechanisms for high and low disclosure were asymmetric: males focused more on perceived usefulness, while females were more concerned about perceived enjoyment, privacy risks and perceived justice.
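The configurational logic of fsQCA rests on set-theoretic measures rather than net effects. Below is a minimal sketch of the standard fuzzy-set consistency measure (Ragin's formula for sufficiency); the membership scores are invented for illustration and are not the study's data.

```python
# Illustrative sketch of fsQCA's consistency measure for sufficiency:
# consistency(X -> Y) = sum(min(x_i, y_i)) / sum(x_i).
# All membership scores below are hypothetical examples.

def configuration_membership(*conditions):
    """Fuzzy AND: membership in a configuration is the minimum of the
    memberships in its component conditions."""
    return [min(vals) for vals in zip(*conditions)]

def consistency(x, y):
    """Degree to which configuration X is a sufficient condition for
    outcome Y, per Ragin's formula."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

# Hypothetical fuzzy membership scores for five respondents
high_benefit = [0.9, 0.8, 0.7, 0.6, 0.9]
low_risk     = [0.8, 0.9, 0.6, 0.7, 0.8]
disclosure   = [0.9, 0.9, 0.7, 0.5, 0.95]

config = configuration_membership(high_benefit, low_risk)
print(round(consistency(config, disclosure), 3))  # -> 0.972
```

Configurations whose consistency exceeds a chosen threshold (commonly around 0.8) are retained as candidate sufficient paths to the outcome, which is how distinct "recipes" for high versus low disclosure can emerge.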
Originality/value
This paper further extends privacy calculus model (PCM) and deepens the understanding of the privacy calculus process from a configurational perspective. In addition, this study also provides guidance for future research on how to adopt the configurational approach with qualitative comparative analysis (QCA) to revise and improve relevant theories for information systems (IS) behavioral research.
Sophia Xiaoxia Duan and Hepu Deng
Abstract
Purpose
Understanding the privacy concerns of individuals in the adoption of contact tracing apps is critical for the successful control of pandemics like COVID-19. This paper explores the privacy paradox in the adoption of contact tracing apps in Australia.
Design/methodology/approach
A comprehensive review of the related literature has been conducted, leading to the development of a conceptual model based on the privacy calculus theory and the antecedent-privacy concern-outcome framework. Such a model is then tested and validated using structural equation modelling on the survey data collected in Australia.
Findings
The study shows that perceived benefit, perceived privacy risk and trust significantly influence the adoption of contact tracing apps, and that personal innovativeness and trust have significant negative influences on perceived privacy risk. It further finds that personal innovativeness has no significant influence on perceived benefit, and that perceived ease of use has no significant influence on perceived privacy risk in the adoption of contact tracing apps.
Originality/value
This study is the first attempt to use the privacy calculus theory and the antecedent–privacy concern–outcome framework for exploring the privacy paradox in adopting contact tracing apps. This leads to a better understanding of the privacy concerns of individuals in the adoption of contact tracing apps. Such an understanding can help formulate targeted strategies and policies for promoting the adoption of contact tracing apps and inform future epidemic control through effective contact tracing for better emergency management.
Ching-Hsuan Yeh, Yi-Shun Wang, Shin-Jeng Lin, Timmy H. Tseng, Hsin-Hui Lin, Ying-Wei Shih and Yi-Hsuan Lai
Abstract
Purpose
Considering that users’ information privacy concerns may affect the development of e-commerce, the purpose of this paper is to explore what drives internet users’ willingness to provide personal information; further, the paper examines how extrinsic rewards moderate the relationship between users’ information privacy concerns and willingness to provide personal information.
Design/methodology/approach
Data collected from 345 valid internet users in the context of electronic commerce were analyzed using the partial least squares approach.
Findings
The results showed that agreeableness, risk-taking propensity and experience of privacy invasion were the three main antecedents of information privacy concerns among the seven individual factors examined. Additionally, information privacy concerns did not significantly affect users’ willingness to provide personal information in the privacy calculation mechanism; however, extrinsic rewards directly affected users’ disclosure intention. The authors also found that extrinsic rewards did not moderate the relationship between users’ information privacy concerns and their willingness to provide personal information.
Originality/value
This study is an exploratory effort to develop and validate a model for explaining why internet users were willing to provide personal information. The results of this study are helpful to researchers in developing theories of information privacy concerns and to practitioners in promoting internet users’ willingness to provide personal information in an e-commerce context.
Abstract
Purpose
Digital device recycling platforms (DDRPs) are customer-to-business online marketplaces that allow consumers to trade in or sell their used electronics, like smartphones and laptops, directly to a business for cash or credit. Guaranteed data destruction is a service provided by most DDRPs to securely erase all data on devices being recycled or traded in. Perceived credibility of the service refers to the extent to which customers are confident in the effectiveness and reliability of the service offered by a given DDRP. Grounded in privacy calculus theory, the current study aims to explore the influence of perceived credibility of guaranteed data destruction service (GDDS) on one’s intention to use a DDRP.
Design/methodology/approach
An empirical study was conducted through an online survey of Chinese DDRP users. The proposed model was tested by analyzing the collected data using the structural equation modeling approach.
Findings
Our results indicate that perceived credibility of GDDS affects users’ intention to use DDRPs by decreasing privacy concerns and increasing perceived convenience and environmental benefits of these platforms.
Research limitations/implications
This study’s findings are based on data collected from Chinese DDRP users, which may limit the generalizability of the results to other cultural or market contexts.
Practical implications
This study provides practical guidance for DDRPs, emphasizing the importance of enhancing perceived credibility through transparent data destruction practices and certifications.
Originality/value
The findings of the current study offer implications for theory development in sustainable information technology and e-commerce as well as practical suggestions for increasing usage of DDRPs.
Teresa Fernandes and Marta Costa
Abstract
Purpose
The COVID-19 pandemic represents a unique challenge for public health worldwide. In this context, smartphone-based tracking apps play an important role in controlling transmission. However, privacy concerns may compromise the population’s willingness to adopt this mobile health (mHealth) technology. Based on the privacy calculus theory, this study aims to examine what factors drive or hinder adoption and disclosure, considering the moderating role of age and health status.
Design/methodology/approach
A cross-sectional survey was conducted in a European country hit by the pandemic that has recently launched a COVID-19 contact-tracing app. Data from 504 potential users was analyzed through partial least squares structural equation modeling.
Findings
Results indicate that perceived benefits and privacy concerns impact adoption and disclosure, confirming the existence of a privacy paradox. However, for young and healthy users, only benefits have a significant effect. Moreover, older people value personal benefits more than societal ones, while for respondents with a chronic disease, privacy concerns outweigh personal benefits.
Originality/value
The study contributes to consumer privacy research and to the mHealth literature, where privacy issues have been rarely explored, particularly regarding COVID-19 contact-tracing apps. The study re-examines the privacy calculus by incorporating societal benefits and moving from a traditional “self-focus” approach to an “other-focus” perspective. This study further adds to prior research by examining the moderating role of age and health condition, two COVID-19 risk factors. This study thus offers critical insights for governments and health organizations aiming to use these tools to reduce COVID-19 transmission rates.