Publication date: 12 December 2024

Tao Zhou and Hailin Lu


Abstract

Purpose

The purpose of this study is to examine the effect of trust on user adoption of artificial intelligence-generated content (AIGC) based on the stimulus–organism–response (SOR) framework.

Design/methodology/approach

The authors conducted an online survey in China, a highly competitive AI market, and obtained 504 valid responses. Both structural equation modelling (SEM) and fuzzy-set qualitative comparative analysis (fsQCA) were used for data analysis.

Findings

The results indicated that perceived intelligence, perceived transparency and knowledge hallucination influence cognitive trust in the platform, whereas perceived empathy influences affective trust in the platform. Both cognitive trust and affective trust in the platform lead to trust in AIGC. Algorithm bias negatively moderates the effect of cognitive trust in the platform on trust in AIGC. The fsQCA identified three configurations leading to adoption intention.

Research limitations/implications

The main limitation is that additional factors, such as culture, were not included; future research should examine their possible effects on trust. The implication is that generative AI platforms need to improve their intelligence, transparency and empathy, and mitigate knowledge hallucination, in order to engender users' trust in AIGC and facilitate their adoption.

Originality/value

Existing research has mainly used technology adoption theories, such as the unified theory of acceptance and use of technology (UTAUT), to examine AIGC user behaviour and has seldom examined user trust development in the AIGC context. This research fills this gap by disclosing the mechanism underlying AIGC user trust formation.

Details

The Electronic Library, vol. ahead-of-print, no. ahead-of-print
Type: Research Article
ISSN: 0264-0473
