Artur Modliński and Rebecca K. Trump
Abstract
Purpose
The marketplace is becoming increasingly automated, with consumers frequently expected to interact with machines. Not all consumers are receptive to this trend. We examine how the individual difference of speciesism impacts consumer reactions to automation in the marketplace.
Design/methodology/approach
We conducted three studies: an exploratory correlational survey and two two-factor experiments.
Findings
Study 1 provides survey evidence of a positive relationship between individuals’ level of speciesism and their belief that customer service automation is justified. Study 2 finds that speciesists have more favorable attitudes toward brands using automated (vs human) customer service. Study 3 finds that the more speciesists perceive the tasks they are required to perform at their own work as illegitimate (i.e. unreasonable), the more favorably they react to automation, supporting our theorizing that speciesists appreciate automation’s ability to relieve humans of such work tasks.
Practical implications
We recommend that marketers target speciesists as early adopters of chatbots. Further, brands targeting customers likely to be high in speciesism can benefit from adopting chatbots for routine tasks, as doing so can improve this segment’s brand attitudes.
Originality/value
This research identifies that speciesists, people who strongly subscribe to the belief that humans are superior to other species, are particularly receptive to automation in customer service (in the form of chatbots). We provide evidence suggesting that speciesists appreciate that automation relieves their fellow humans of automatable tasks.
Marcin Lukasz Bartosiak and Artur Modlinski
Abstract
Purpose
The importance of artificial intelligence in human resource management has grown substantially. Previous literature discusses the advantages of AI implementation in the workplace and its various, often adverse, consequences for employees. However, there is little empirical research on the topic. The authors address this gap by studying whether individuals oppose biased algorithmic recommendations regarding disciplinary actions in an organisation.
Design/methodology/approach
The authors conducted an exploratory experiment evaluating 76 subjects across five scenarios in which a biased algorithm gave strict recommendations regarding disciplinary actions in the workplace.
Findings
The authors’ results suggest that biased suggestions from intelligent agents can influence individuals who make disciplinary decisions.
Social implications
The authors’ results contribute to the ongoing debate on applying AI solutions to HR problems. The authors demonstrate that biased algorithms may substantially change how employees are treated and show that human conformity towards intelligent decision support systems is broader than expected.
Originality/value
The authors’ paper is among the first to show that people may accept recommendations that provoke moral dilemmas, lead to adverse outcomes, or harm employees. The authors introduce the problem of “algorithmic conformism” and discuss its consequences for HRM.
Artur Modliński, Joanna Kedziora and Damian Kedziora
Abstract
Techno-empowerment refers to giving intelligent technology decision-making power. It is a growing trend, with algorithms being developed to handle tasks like ordering products or investing in stocks without human consent. Nevertheless, people may feel averse to transferring decision-making autonomy to technology. Unfortunately, little attention has been paid in the literature to which tasks people exclude from being performed autonomously by non-human intelligent actors. Our chapter presents two qualitative studies: the first examining what decisions people think autonomous technology (AT) should not make, and the second asking workers which tasks they would not transfer to AT. Results show that people oppose AT making decisions when the task is perceived as (a) requiring empathy, (b) requiring human experience, (c) requiring intuition, (d) complex, (e) potentially harmful to human life, (f) having long-term effects, (g) affecting personal space, or (h) leading to loss of control. Workers are unwilling to delegate to AT tasks they perceive as (1) time-consuming, (2) demanding social interaction, (3) providing pleasure, (4) difficult, (5) risky, and (6) entailing responsibility. Exclusions are driven by three types of perceived risk: material, contextual, and competitive.