Search results

1 – 3 of 3
Article
Publication date: 25 February 2020

Isabella Seeber, Lena Waizenegger, Stefan Seidel, Stefan Morana, Izak Benbasat and Paul Benjamin Lowry


Abstract

Purpose

This article reports the results from a panel discussion held at the 2019 European Conference on Information Systems (ECIS) on the use of technology-based autonomous agents in collaborative work.

Design/methodology/approach

The panelists (Drs Izak Benbasat, Paul Benjamin Lowry, Stefan Morana, and Stefan Seidel) presented ideas related to affective and cognitive implications of using autonomous technology-based agents in terms of (1) emotional connection with these agents, (2) decision-making, and (3) knowledge and learning in settings with autonomous agents. These ideas provided the basis for a moderated panel discussion (the moderators were Drs Isabella Seeber and Lena Waizenegger), during which the initial position statements were elaborated on and additional issues were raised.

Findings

Through the discussion, a set of additional issues was identified. These issues related to (1) the design of autonomous technology-based agents in terms of human–machine workplace configurations, as well as transparency and explainability, and (2) the unintended consequences of using autonomous technology-based agents in terms of de-evolution of social interaction, prioritization of machine teammates, psychological health, and biased algorithms.

Originality/value

The key issues identified, relating to the affective and cognitive implications of using autonomous technology-based agents, their design, and their unintended consequences, highlight contemporary research challenges and compelling questions that can guide further research in this field.

Open Access
Article
Publication date: 18 July 2024

Christine Dagmar Malin, Jürgen Fleiß, Isabella Seeber, Bettina Kubicek, Cordula Kupfer and Stefan Thalmann


Abstract

Purpose

How to embed artificial intelligence (AI) in human resource management (HRM) is one of the core challenges of digital HRM. Despite regulations demanding humans in the loop to ensure human oversight of AI-based decisions, it is still unknown how much decision-makers rely on information provided by AI and how this affects (personnel) selection quality.

Design/methodology/approach

This paper presents an experimental study using vignettes of dashboard prototypes to investigate the effect of AI on decision-makers’ overreliance in personnel selection, particularly the impact of decision-makers’ information search behavior on selection quality.

Findings

Our study revealed decision-makers’ tendency towards status quo bias when using an AI-based ranking system, meaning that they paid more attention to applicants ranked higher than to those ranked lower. We identified three information search strategies with different effects on selection quality: (1) homogeneous search coverage, (2) heterogeneous search coverage and (3) no information search. Both when applicants were searched about equally often (homogeneous coverage) and when certain applicants received more search views than others (heterogeneous coverage), search intensity was higher, resulting in higher selection quality. No information search was characterized by low search intensity and low selection quality. Priming decision-makers to take responsibility for their decisions or explaining potential AI shortcomings had no moderating effect on the relationship between search coverage and selection quality.
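
As an illustration of the search strategies described above, the short Python sketch below shows one possible way to operationalize search coverage and search intensity from per-applicant view counts. It is a hypothetical sketch only: the function name, thresholds and dispersion measure are assumptions made for illustration and are not taken from the study.

```python
# Hypothetical sketch (not from the study): one way to operationalize
# search coverage and search intensity from per-applicant view counts.
from statistics import mean, pstdev

def classify_search_strategy(view_counts, intensity_threshold=1.0, spread_threshold=0.5):
    """Classify a decision-maker's information search behavior.

    view_counts: how often each applicant's details were opened.
    Both thresholds are illustrative assumptions, not values reported in the study.
    """
    if not view_counts or sum(view_counts) == 0:
        return "no information search"           # low intensity, low selection quality
    intensity = mean(view_counts)                # average views per applicant
    if intensity < intensity_threshold:
        return "no information search"
    spread = pstdev(view_counts) / intensity     # relative dispersion of views
    if spread <= spread_threshold:
        return "homogeneous search coverage"     # applicants searched about equally often
    return "heterogeneous search coverage"       # some applicants viewed far more than others

print(classify_search_strategy([3, 3, 2, 3, 3]))  # homogeneous search coverage
print(classify_search_strategy([6, 4, 1, 0, 0]))  # heterogeneous search coverage
print(classify_search_strategy([0, 0, 0, 0, 0]))  # no information search
```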

Originality/value

Our study highlights the presence of status quo bias in personnel selection given AI-based applicant rankings, emphasizing the danger that decision-makers over-rely on AI-based recommendations.

Details

Business Process Management Journal, vol. 30 no. 8
Type: Research Article
ISSN: 1463-7154

Article
Publication date: 29 May 2024

Elena Mazurova and Willem Standaert


Abstract

Purpose

This study aims to uncover the constraints of automation and the affordances of augmentation related to implementing artificial intelligence (AI)-powered systems across different task types: mechanical, thinking and feeling.

Design/methodology/approach

A qualitative study involving 45 interviews with various stakeholders in artistic gymnastics, for which AI-powered systems for the judging process are currently being developed and tested. Stakeholders include judges, gymnasts, coaches and a technology vendor.

Findings

We identify perceived constraints of automation, such as excessive mechanization and preciseness and the system’s inability to evaluate artistry or to provide human interaction. Moreover, we find that the complexity and imprecision of the rules prevent automation. In addition, we identify affordances of augmentation, such as faster, fault-free, more accurate and objective evaluation. Moreover, augmentation affords the provision of explanations, which in turn may reduce the number of decision disputes.

Research limitations/implications

While the unique context of our study is revealing, the generalizability of our specific findings still needs to be established. However, the approach of considering task types is readily applicable in other contexts.

Practical implications

Our research provides useful insights for organizations considering AI-based evaluation, covering the possible constraints, risks and implications of automation for organizational practices and human agents, and it suggests augmented AI-human work as the more beneficial approach in the long term.

Originality/value

Our granular approach provides a novel point of view on AI implementation, as our findings challenge the notion of full automation of mechanical and partial automation of thinking tasks. Therefore, we put forward augmentation as the most viable AI implementation approach. In addition, we developed a rich understanding of the perception of various stakeholders with a similar institutional background, which responds to recent calls in socio-technical research.

Details

Information Technology & People, vol. 37 no. 7
Type: Research Article
ISSN: 0959-3845
