Ritika Chopra, Seema Bhardwaj, Park Thaichon and Kiran Nair
Abstract
Purpose
The present study undertakes an extensive review of the causes of service failures in artificial intelligence (AI) technology literature.
Design/methodology/approach
A hybrid review was employed, combining descriptive and bibliometric analyses with a content analysis of the literature to synthesize existing research on the topic. The study followed the SPAR-4-SLR protocol outlined by Paul et al. (2021). The search period covers the progression of service-failure research in AI from 2001 to 2023.
Findings
Theoretical implications are derived from the identified theories, and thematic maps direct future research toward topics such as data mining and smart factories. The key themes proposed incorporate technological elements, ethical deliberations and cooperative endeavours.
Originality/value
This study makes a valuable contribution to understanding and mitigating service failures in AI by providing insights that can inform future investigations and practical implementations. Six key future research directions are derived from the thematic and cluster discussions presented in the content analysis.
Yi Xiang, Chengzhi Zhang and Heng Zhang
Abstract
Purpose
Highlights in academic papers serve as condensed summaries of the author’s key work, allowing readers to quickly grasp the paper’s focus. However, many journals do not currently offer highlights for their articles. To address this gap, some scholars have explored using supervised learning methods to extract highlights from academic papers. A significant challenge in this approach is the need for substantial amounts of training data.
Design/methodology/approach
This study examines the effectiveness of prompt-based learning for generating highlights. We develop task-specific prompt templates, populate them with paper abstracts and use them as input for language models. We employ both pre-trained models that can run inference locally, such as GPT-2 and T5, and the ChatGPT model accessed via its API.
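The following is a minimal sketch of this prompt-based setup, assuming a hypothetical template wording and the standard openai Python client; the study's actual prompt templates, model choices and decoding parameters are not given in the abstract.

```python
# Sketch of prompt-based highlight generation from an abstract.
# The template text below is hypothetical, not the paper's actual prompt.
from openai import OpenAI  # assumes openai>=1.0 and OPENAI_API_KEY set

client = OpenAI()

PROMPT_TEMPLATE = (
    "Write 3-5 one-sentence highlights summarizing the key contributions "
    "of the following paper abstract:\n\n{abstract}"
)

def generate_highlights(abstract: str, model: str = "gpt-3.5-turbo") -> str:
    """Populate the task-specific template with an abstract and query the model."""
    prompt = PROMPT_TEMPLATE.format(abstract=abstract)
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0.0,  # keep output stable for evaluation
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_abstract = "We propose a method for ..."
    print(generate_highlights(sample_abstract))
```

A few-shot variant would prepend a handful of abstract-highlights pairs to the prompt, mirroring the small-sample setting whose effect is reported in the Findings below.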
Findings
By evaluating the model’s performance across three datasets, we find that the ChatGPT model performed comparably to traditional supervised learning methods, even in the absence of training samples. Introducing a small number of training samples further enhanced the model’s performance. We also investigate the impact of prompt template content on model performance, revealing that ChatGPT’s effectiveness on specific tasks is highly contingent on the information embedded in the prompts.
Originality/value
This study advances the field of automatic highlights generation by pioneering the application of prompt learning. We employ several mainstream pre-trained language models, including the widely used ChatGPT, to facilitate text generation. A key advantage of our method is its ability to generate highlights without the need for training on domain-specific corpora, thereby broadening its applicability.