Abstract
Purpose
Firms have already begun integrating artificial intelligence (AI) as a replacement for conventional performance management systems owing to its technological superiority. This transition has sparked a growing interest in determining how employees perceive and respond to performance feedback provided by AI as opposed to human supervisors.
Design/methodology/approach
A 2 x 2 between-subjects experimental design was employed, yielding four experimental conditions: AI algorithm, AI data, highly experienced human supervisor and less experienced human supervisor. A one-way ANOVA and Welch's t-test were used to analyze the data.
Findings
Our findings revealed that when a predefined fixed formula was employed for performance feedback, employees exhibited higher levels of trust in AI algorithms, held greater performance expectations and showed stronger intentions to seek performance feedback from AI algorithms than from highly experienced human supervisors. Conversely, when performance feedback was provided in a discretionary manner, employees perceived feedback from human supervisors, even less experienced ones, more favorably than similar feedback generated from AI data. Moreover, additional analyses indicated that combined AI-human performance feedback led to higher levels of employees' perceptions than performance feedback provided solely by AI or humans.
Practical implications
The findings of our study advocate the incorporation of AI into performance management systems and the implementation of combined AI-human feedback approaches as a potential strategy to alleviate employees' negative perceptions, thereby increasing firms' return on AI investment.
Originality/value
Our study represents one of the initial endeavors exploring the integration of AI in performance management systems and AI-human collaboration in providing performance feedback to employees.
Citation
Biswas, M.I., Talukder, M.S. and Khan, A.R. (2024), "Who do you choose? Employees' perceptions of artificial intelligence versus humans in performance feedback", China Accounting and Finance Review, Vol. 26 No. 4, pp. 512-532. https://doi.org/10.1108/CAFR-08-2023-0095
Publisher
Emerald Publishing Limited
Copyright © 2024, Mohammad Islam Biswas, Md. Shamim Talukder and Atikur Rahman Khan
License
Published in China Accounting and Finance Review. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
1. Introduction
In today’s dynamic and technologically sophisticated workplace, the integration of artificial intelligence (AI) has reshaped different aspects of performance management systems (Commerford, Dennis, Joe, & Ulla, 2022; Luo, Fang, & Peng, 2022; Tong, Jia, Luo, & Fang, 2021). An emerging field of AI integration in performance management systems is to carry out job evaluation processes and provide employees with performance feedback. AI algorithms can analyze huge amounts of data, provide real-time feedback and make customized suggestions for improvement, potentially enhancing the trust, accuracy and transparency of performance feedback (Fountaine, McCarthy, & Saleh, 2019; Jarrahi, 2018; Tong et al., 2021). As a result, certain leading companies in the world, such as Alibaba, Amazon, IBM and Microsoft, have incorporated AI into their performance management systems (Heaven, 2020; Marr, 2019; Roose, 2020). Compared to conventional performance management systems, AI is an advanced technology that has the capability to analyze both unstructured (such as audio, video and text) and structured big data pertaining to employee behaviors (Luo et al., 2022; Rivera, Qiu, Kumar, & Petrucci, 2021). By doing so, it is able to identify intricate and concealed patterns of employee performance feedback that may not be discernible through traditional systems.
However, there exists a contentious debate in the literature and practice about the replacement of humans by AI in performance management systems within the workplace. For example, some scholars posit that AI has superior data analytics capabilities over humans, enabling it to assess employee objective performance and provide personalized suggestions with greater accuracy and transparency, resulting in more precise performance feedback (Luo, Tong, Fang, & Qu, 2019; Mahmud, Islam, Ahmed, & Smolander, 2022; Tong et al., 2021). Others argue that humans have unique soft skills, such as interpersonal communication proficiency, and a greater propensity for managing uncertainty and ambiguity in organizational decision-making, resulting in higher employee trust (Fehrenbacher, Schulz, & Rotaru, 2018; Leicht-Deobald et al., 2019). On the other hand, feedback provided by AI could be subject to debate due to its potential lack of subjective judgment capability, limited experience of emotions and physical sensations (e.g. pleasure, hunger and pain) and moderate agency in thinking, planning and acting (Gray, Gray, & Wegner, 2007; Yam et al., 2021). Prior research in both psychology and accounting has consistently demonstrated that individuals exhibit algorithmic aversion and are hesitant to rely solely on algorithms, preferring human judgment (Dietvorst, Simmons, & Massey, 2015; Mahmud et al., 2022). However, firms have already started integrating AI as a replacement for conventional performance management systems owing to its technological superiority. Therefore, the transition from humans to AI has sparked a growing interest in determining how employees perceive and respond to performance feedback provided by AI as opposed to human supervisors.
The ongoing public interest surrounding AI-human collaboration in decision-making processes is indicative of a burgeoning discourse shaped by the intersection of technology and human interaction (Harrell, 2023; Lancaster, 2023). In an era characterized by rapid advancements in AI and machine learning, the question of how individuals perceive AI-human collaboration in critical decision-making contexts has become increasingly salient. As organizations across diverse sectors integrate AI-driven systems into their operations, the implications for workforce dynamics and human-machine collaboration are profound (Castelo, Bos, & Lehmann, 2019; Makarius, Mukherjee, Fox, & Fox, 2020). The public’s interest in understanding the dynamics of AI-human collaboration in decision-making stems from concerns regarding job displacement, the ethical implications of AI algorithms and the potential for augmenting human capabilities through technological innovation (Lancaster, 2023; Makarius et al., 2020). This multifaceted dialog underscores the need for empirical research to elucidate employees' perceptions of AI versus humans in performance feedback scenarios, shedding light on the evolving landscape of human-AI interaction and its ramifications for organizational effectiveness and societal progress.
We aim to address this research gap. First, drawing on existing research on the extensive capabilities of AI in data mining and analytics (Jarrahi, 2018; Luo, Qin, Fang, & Qu, 2021; Tong et al., 2021), we argue that, compared with human supervisors, the implementation of a predefined fixed formula by AI algorithms for providing employee performance feedback enables a more consistent consideration of a substantial volume of data with heightened precision. Consequently, this augmentation in data processing fosters an enhanced perception among employees concerning trust, performance expectancy and intention to receive feedback generated by AI algorithms compared with highly experienced human supervisors. Second, drawing on AI's potential lack of subjective judgment capability, limited experience of emotions and physical sensations and moderate levels of agency (Gray et al., 2007; Yam et al., 2021), we argue that discretionary performance feedback provided by AI data, compared with that from human supervisors, even those with less experience, fosters a lower level of employee perception concerning trust, performance expectancy and intention to receive performance feedback. Moreover, we expect that a combination of AI-human performance feedback could lead to enhanced employees' perceptions compared with performance feedback solely by AI or humans.
We conducted two field experiments to test the research hypotheses. A 2 x 2 between-subjects experimental design was employed to achieve our research goal. We manipulated AI supervisors by classifying them into AI algorithm and AI data conditions, while human supervisors were categorized as highly experienced and less experienced. In experimental Study 1, we used a male name (Mr. Robert) for the human supervisor, compared against the AI supervisor, to elicit employees' perceptions of trustworthiness, performance expectancy and intention to receive feedback by asking a single question regarding supervisor feedback. Given that about 20% of top decision-makers are female (Adams, Barber, & Odean, 2018; Zhang, Pentina, & Fan, 2021), in experimental Study 2 we replaced the male name (Mr. Robert) with a female name (Mrs. Emma) to ensure the robustness of the results of Study 1, while all other measures remained the same.
Our findings revealed that when a predefined fixed formula was used for performance feedback, employees exhibited greater trust in AI algorithms in the evaluation process, held higher performance expectations and showed stronger intentions to seek performance feedback from AI algorithms than from highly experienced human supervisors. Conversely, when discretionary performance feedback was provided by AI data, employees expressed lower levels of trust, performance expectancy and intention to receive performance feedback than for similar performance feedback provided by human supervisors, even those with less experience. Moreover, additional analyses indicated that a combination of AI-human performance feedback led to higher levels of employees' perceptions than combined performance feedback from AI alone or from human supervisors alone.
Our study adds to the growing body of literature exploring the impact of AI integration in performance management systems. To the best of our knowledge, this study represents one of the initial investigations into the novel and significant areas of employees' trust, performance expectancy and intention to receive performance feedback provided by AI. Advances in deep learning and neural network techniques enable AI to assume managerial responsibilities (Chen, Biswas, & Talukder, 2022; Tong et al., 2021), particularly in the area of providing feedback. This includes not only monitoring employee performance but also generating customized performance evaluations and personalized recommendations to improve employees' job skills on a large scale. This involvement of AI in performance management systems presents a remarkable opportunity to create firm value (Tong et al., 2021). Thus, this study takes an initial step to extend existing research on AI integration in performance management systems (Commerford et al., 2022; Jarrahi, 2018; Luo et al., 2022; Tong et al., 2021) by investigating employees' perceptions of AI performance feedback.
Second, this study extends the current body of literature by introducing an AI-human combination in performance management systems. Our additional analyses indicated that a combination of AI-human performance feedback led to higher levels of employees' trust, performance expectancy and intention to receive feedback than combined performance feedback generated by AI algorithms and AI data. Furthermore, AI-human combined performance feedback resulted in higher employees' perceptions than combined performance feedback provided by human supervisors, whether highly experienced or less experienced. We empirically and theoretically disentangle the coexistence of the two performance feedback sources, thereby revealing the novel aspect of combined AI-human feedback. Thus, we contribute to the existing literature by proposing an AI-human combined performance feedback process within performance management systems.
Third, our findings hold significant implications for firms. Firms have already invested hundreds of millions of dollars in AI development and implementation, and there are plans to invest even more in the future (Bloomberg Tax, 2020). Our findings suggest that a predefined fixed formula employed by AI algorithms to generate performance feedback raises employees' perceptions beyond those of human feedback, suggesting substantial returns from investing in AI in a firm's performance management system. However, our findings pertaining to the negative employee perceptions of discretionary performance feedback by AI data underscore the importance for firms of being cognizant of these adverse employee perceptions. The findings of our study advocate an AI-human combined performance feedback process as a potential strategy to alleviate employees' negative perceptions. Thus, the combined performance feedback process may improve employee perceptions, thereby increasing firms' return on AI investment.
Finally, our study offers significant implications for public policy-making. The increasing prevalence of AI in the workplace has piqued the interest of policymakers, who are concerned about its potential impact on employee well-being (Commerford et al., 2022; Tong et al., 2021). As a result, regulations have been enacted to increase the transparency of AI implementation. Prior studies claim that individuals are reluctant to rely on algorithms; they prefer human judgment and exhibit algorithmic aversion (Dietvorst et al., 2015; Mahmud et al., 2022). The findings of this study advocate that combining AI-generated feedback with human feedback might be a viable solution to address the concerns of algorithm aversion that optimize the performance feedback process. Thus, this study provides valuable insights for policymakers to develop a strategy aimed at reducing employees' negative perceptions of algorithm aversion in the performance feedback process.
2. Theoretical background and hypotheses development
2.1 Artificial intelligence in performance management systems
The rapid development of technology has resulted in a notable increase in the automation of various aspects of a firm's performance management systems (Jarrahi, 2018; Luo et al., 2019), especially the employee performance feedback process. As firms look to improve the efficacy and precision of the performance feedback process, the integration of AI into the performance management system has become a prevalent practice. This shift toward automation represents the increasing recognition of AI’s potential to expedite and optimize the evaluation of performance procedures, thereby providing a more data-driven and objective method for assessing managerial effectiveness (Chen et al., 2022; Tong et al., 2021; Zhang et al., 2021). As a result, the adoption of AI-based performance feedback processes is a crucial research topic in the field of firm performance management.
Prior research in the field of AI integration in firms' management has primarily explored the perceptions and attitudes of employees toward AI implementation in organizations. However, a notable gap in the literature is the lack of comparative studies that directly assess employees' perceptions of AI supervisors compared to their perceptions of human supervisors. For example, Danner, Hadžić, Weber, Zhu, and Rätsch (2023) claim that AI is capable of analyzing large amounts of data in a fair and unbiased way, leading to employees' perceptions of fairness and reliability. Similarly, the studies conducted by Ding, Lev, Peng, Sun, and Vasarhelyi (2020) and Li, Hou, Yu, Lu, and Yang (2017) argue that AI systems are designed to provide explanations for their decisions, thereby enhancing employees' perceptions of transparency. Similarly, Tong et al. (2021) reported that the high data-driven capability of AI enhances employees' perceived AI-based feedback as more accurate and reliable. They extended the AI feedback research from the customer level to the employee level through a field experiment at a fintech company and focused on the deployment and disclosure effects of using AI in the workplace to provide feedback on employee performance.
Reynolds and Beatty (1999) argue that human satisfaction is fundamentally driven by the social interaction between people and their points of agreement with organizations. The use of AI in the workplace presupposes an interaction between managers and AI supervisors without any direct human involvement. As AI lacks emotional intelligence and human-like characteristics, it may be difficult for managers to interact on a personal level, and this limitation may affect their overall satisfaction and engagement with AI-based systems (Gray et al., 2007; Yam et al., 2021). Previous research has proposed a framework for adopting AI technology in performance management systems by examining issues that affect managers' intention to adopt AI (Jarrahi, 2018; Luo et al., 2019). In contrast, some employees may still prefer human supervisors who can provide more explicit justifications and context-specific insights regarding evaluations (Hu, Kaplan, Wei, & Vega, 2014). Others argue that AI integration in firms' management systems increases employees' concern about the potential impact on their job stability and future career prospects (Kirkos, Spathis, & Manolopoulos, 2007). Existing research also indicates that workers harbor anxiety or fear about potential displacement by AI technology at the individual level (Beare, O'Raghallaigh, McAvoy, & Hayes, 2020; Marginson, McAulay, Roush, & van Zijl, 2014). Moreover, certain managers deliberately refrain from adopting AI because of concerns related to reduced social interaction, privacy issues and a perceived lack of personal control (Cao, Duan, Edwards, & Dwivedi, 2021; Elliott & Soifer, 2022; Meuter, Bitner, Ostrom, & Brown, 2018). However, some scholars argue that the most substantial performance improvements are attained by organizations when AI and humans collaborate synergistically (Trunk, Birkel, & Hartmann, 2020; Wang, Khosla, Gargeya, Irshad, & Beck, 2016; Wilson & Daugherty, 2018).
However, the debate on the complete replacement of humans with AI in firms' management systems remains unsettled among academics and practitioners. This divide has led researchers and practitioners to hold differing views on the effectiveness of AI in replacing humans within performance management systems. To contribute to this discourse, our research endeavors to address this debate by conducting a comprehensive investigation into the distinct impacts of AI and human supervisors on employee trust, performance expectancy and intention to receive feedback in the context of performance management systems.
2.2 Employee perceptions of AI-human collaboration
In contemporary organizational contexts, the integration of AI into collaborative processes between humans and machines has garnered considerable scholarly attention, particularly concerning employee perceptions and interactions within such AI-human collaborations. Studies underscore the pivotal role of employees' trust in AI technologies, which fundamentally shapes their perceptions of collaborative endeavors involving human-machine interactions (Zhang et al., 2021). Research findings reveal that factors such as the transparency, explainability and reliability of AI-human collaboration exert significant influence on employees' willingness to embrace and engage with AI-human collaboration processes (Li, Pan, Xin, & Deng, 2020; Liu, Li, Wang, & Li, 2023). Moreover, organizational initiatives focusing on communication and training, aimed at elucidating AI functionalities and alleviating employee concerns, are instrumental in cultivating a culture of trust and acceptance toward AI-human collaboration within the workplace (Chui et al., 2018; Zhang et al., 2021).
Furthermore, employee perceptions of AI-human collaboration are deeply influenced by concerns revolving around job security, autonomy and professional identity. Within organizational settings, employees may perceive AI technologies as potential threats to their roles and responsibilities, particularly in contexts where automation is envisaged to supplant human labor (Davenport & Ronanki, 2018; Jia, Luo, Fang, & Liao, 2023). Conversely, empirical evidence suggests that employees' perceptions of AI-human collaboration as facilitators of task efficiency, skill development and work-life balance positively impact overall job satisfaction and psychological well-being (Fisher, 2019; Wilson & Daugherty, 2018). However, apprehensions regarding heightened workload, job stress and diminished job control in AI-driven environments may contribute to adverse perceptions and employee burnout (Dietvorst et al., 2015; Mahmud et al., 2022; Marsh, Vallejos, & Spence, 2022).
Moreover, research underscores the significance of transparent and ethical AI governance frameworks in bolstering employee trust and confidence in decision-making processes driven by AI technologies (Jarrahi, 2018; Luo et al., 2019; Tong et al., 2021). Organizational commitments toward addressing biases, promoting diversity and ensuring algorithmic accountability further augment employees' perceptions of fairness and social responsibility within the context of AI-human collaboration (Castelo et al., 2019; Jarrahi et al., 2021; Jia et al., 2023). Such endeavors are integral to fostering an environment conducive to effective and harmonious collaboration between humans and AI systems, thereby maximizing the potential benefits of AI-driven initiatives while mitigating associated challenges and concerns within organizational settings.
The integration of AI into performance management systems represents a paradigm shift in organizational practices, reshaping the dynamics of employee evaluation, feedback and development. While AI technologies offer unprecedented opportunities for data-driven decision-making and talent optimization, they also raise complex ethical, social and psychological considerations that necessitate careful scrutiny and deliberation. As organizations navigate the complexities of AI-human collaboration, research needs to focus on conducting a comprehensive investigation into the distinct impacts of AI-human collaboration on employee trust, performance expectancy and the intention to receive feedback in the context of performance management systems.
2.3 Hypotheses development
2.3.1 Trust
Trust refers to an individual's willingness to rely on, have confidence in and believe in the reliability, security and effectiveness of a particular system (McKnight, Carter, Thatcher, & Clay, 2011). Trust is regarded as a necessary antecedent of employees' acceptance of AI integration in performance management systems (Johnson, Bardhi, & Dunn, 2008; Zhang et al., 2021). Trust in the feedback provider influences employees' willingness to accept and act upon feedback. Trust in AI feedback can be affected by factors such as the reliability of algorithms and a perceived lack of bias. AI feedback is designed to objectively, impartially and methodically process data by leveraging algorithms (Jarrahi, 2018; Luo et al., 2022; Tong et al., 2021). AI is capable of analyzing large amounts of data and generating objective performance feedback using logic, mathematics and a predefined fixed formula (Jarrahi et al., 2021), a condition we call AI algorithms. Similarly, objective performance feedback provided by highly experienced human supervisors is based on logic, mathematics and a predefined fixed formula. When AI algorithms generate performance feedback, they consider all available data and are free from the potential biases that human supervisors, even highly experienced ones, might inadvertently introduce (Tong et al., 2021). Furthermore, compared with highly experienced human supervisors, AI algorithms possess the capability to process extensive datasets in real time, enabling more prompt and accurate feedback, ultimately leading to improved employee performance and increased trust.
AI algorithms' transparency, which can provide insight into the criteria used for evaluation, contributes further to fostering employee trust in the feedback process (Jarrahi et al., 2021). Moreover, the objectivity and impartiality of AI algorithms compared with human supervisors, even highly experienced ones, can foster a sense of equity among employees, who believe their performance is being evaluated without bias or favoritism. In contrast, highly experienced human supervisors frequently lack the ability to appropriately weigh or combine relevant information when relying on human judgment to make decisions (Hilbert & López, 2011; Rahwan et al., 2019). We argue that although highly experienced human supervisors possess beneficial insights and knowledge, their feedback lacks the vast data processing capacity of AI, which potentially leads to lower levels of perceived trust. Thus, we propose that AI algorithms, with their high data-driven capability, have the potential to foster higher levels of employee trust in the objective-based performance feedback process because of their perceived objectivity, transparency and fairness compared with feedback from highly experienced human supervisors.
H1a. Performance feedback generated by the AI algorithm will lead to higher employee trust than feedback provided by highly experienced human supervisors.
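For intuition, the "predefined fixed formula" that the AI algorithm condition presupposes can be thought of as a deterministic weighted aggregation of objective metrics. The sketch below is purely illustrative; the metric names and weights are our assumptions, not the instrument actually shown to participants in the study.

```python
# Illustrative sketch of a predefined fixed scoring formula.
# Metric names and weights are hypothetical; the study does not
# disclose the actual formula used in the experimental materials.

WEIGHTS = {
    "tasks_completed": 0.5,  # share of assigned tasks finished, 0-1
    "quality_score": 0.3,    # audited output quality, 0-1
    "timeliness": 0.2,       # on-time delivery rate, 0-1
}

def feedback_score(metrics: dict) -> float:
    """Deterministic score in [0, 100]: identical inputs always yield
    identical feedback, with no discretionary adjustment by the rater."""
    if set(metrics) != set(WEIGHTS):
        raise ValueError("unexpected or missing metrics")
    return round(100 * sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS), 1)

score = feedback_score(
    {"tasks_completed": 0.9, "quality_score": 0.8, "timeliness": 1.0}
)
print(score)  # 0.5*0.9 + 0.3*0.8 + 0.2*1.0 = 0.89 -> 89.0
```

The point of the sketch is the contrast the hypotheses draw: such a formula applies the same weights to every employee, which is precisely the consistency argument behind H1a, whereas discretionary (AI data or human) feedback has no such fixed mapping from inputs to scores.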
Trust in supervisors is also built on their interpersonal communication skills, leadership abilities and emotional interactions with employees. When AI relies purely on data for employees' subjective evaluation, such as the performance data of other participants or historical data pertaining to similar tasks, a condition we call AI data, employees may perceive the feedback as lacking fairness. On the other hand, humans' subjective performance feedback is based on their judgment, knowledge and discretion (Bol, 2008; Fehrenbacher et al., 2018), even if they possess less experience. We argue that subjective performance feedback from humans, even those with less experience, takes into account employees' distinct capabilities and individual circumstances that AI data might overlook. Moreover, the interpersonal communication skills of humans, even those with less experience, may foster an environment in which employees feel more at ease discussing their concerns and requesting clarifications, leading to increased trust in the feedback procedure, which is absent in feedback generated by AI data. This individualized approach could foster greater levels of employee trust in feedback provided by humans with less experience compared with AI data. Thus, we propose the following hypothesis:
H1b. Performance feedback generated by AI data will lead to lower employee trust than performance feedback provided by less experienced human supervisors.
2.3.2 Performance expectancy
The degree to which people believe that technology can help them execute a task is known as performance expectancy (Zuiderwijk, Janssen, & Dwivedi, 2015). Performance expectancy has a significant impact on AI integration in performance management systems (Zhang et al., 2021). The integration of AI in the performance feedback process is influenced by several factors, including the reliability and accuracy of the feedback (Jarrahi, 2018; Tong et al., 2021), which serve as important indicators of performance expectancy. We argue that the ability of AI algorithms, compared with human supervisors, even highly experienced ones, to process large amounts of information objectively and impartially through logical reasoning and mathematical computations, conveying an impression of accuracy and reliability, enhances employees' performance expectations.
When employees obtain performance feedback from an AI algorithm, they may perceive it as a comprehensive assessment that takes into account all available data, leading to higher levels of accuracy and reliability without bias. As a result, employees may develop a stronger belief in the credibility of the feedback and become more motivated to boost their performance, thereby increasing their performance expectancy. In contrast, human supervisors, even highly experienced ones, are constrained by limits on large-scale data processing, which may introduce variability and potential biases, resulting in lower levels of accuracy and reliability. Consequently, employees may take a less favorable view of highly experienced human supervisors' feedback and hold lower performance expectations compared with AI algorithms. Thus, the following hypothesis is proposed:
H2a. Performance feedback generated by the AI algorithm will lead to higher levels of employee performance expectancy than performance feedback provided by highly experienced human supervisors.
The interpersonal communication skills of human supervisors, even those with less experience, might foster a supportive environment where employees feel encouraged and motivated to enhance their performance, leading to higher performance expectancy (Zhang et al., 2021). We argue that when employees receive subjective performance feedback based on AI data, they may find it lacking in contextual insights compared with feedback from less experienced human supervisors, resulting in lowered performance expectancy. The perceived generic nature of AI data may undermine employees' belief in the feedback's accuracy and relevance, resulting in lower levels of performance expectancy. Therefore, we propose the following hypothesis:
H2b. Performance feedback provided by AI data will lead to lower levels of employee performance expectancy than performance feedback provided by less experienced human supervisors.
2.3.3 Intention to receive performance feedback
Intention refers to a person’s plan or willingness to accept and utilize technology in the future (Venkatesh, Morris, Davis, & Davis, 2003). Intention to receive feedback refers to an individual’s willingness and readiness to actively seek and accept performance-related feedback with the aim of enhancing their skills and job performance. AI feedback and human feedback often offer employees different values. For example, AI feedback is characterized by reliability, credibility and efficiency (Adam, Wessel, & Benlian, 2021; Agrawal, Gans, & Goldfarb, 2019; Luo et al., 2019), while human feedback is characterized by social interaction, responsiveness, adaptability and flexibility (Meuter, Ostrom, Roundtree, & Bitner, 2000; Zhang et al., 2021). The technical advantages of AI algorithms enable the processing of vast amounts of data faster than human supervisors, even those with high experience, resulting in feedback that employees may perceive as more accurate and trustworthy (Jarrahi, 2018; Tong et al., 2021). In contrast, human supervisors with high experience, no matter how knowledgeable they may be, struggle to effectively process massive amounts of data (Hilbert & López, 2011; Rahwan et al., 2019). Employees may assume that AI-generated feedback is devoid of the potential biases inherent to human evaluations, which may increase their intention to seek feedback from AI algorithms. Moreover, the superior capability of AI algorithms compared with humans, even highly experienced human supervisors, to provide real-time feedback could have a positive effect on employees' motivation and commitment to continuous improvement, resulting in improved job performance and a greater intention to participate in the AI algorithm feedback process. Thus, we propose the following hypothesis:
H3a. The intention to receive performance feedback from AI algorithms will be greater among employees than their intention to receive feedback from highly experienced human supervisors.
While AI can process vast amounts of data, the absence of subjective judgment in AI data-based feedback may lead employees to perceive it as less tailored to their individual circumstances (Tong et al., 2021). Human supervisors, even less experienced ones, may be able to provide more pertinent and actionable feedback because their interpersonal communication skills give them a better grasp of each employee's unique strengths, limitations and development needs. This deeper understanding of employees fosters greater trust and encourages employees to seek feedback from less experienced human supervisors rather than from AI data. Thus, we propose the following hypothesis:
H3b. The intention to receive performance feedback from AI data will be lower among employees than their intention to receive feedback from less experienced human supervisors.
3. Methodology
We conducted two experimental studies to examine our hypotheses. The goal of our study was to compare employees' perceptions of performance feedback from AI supervisors with those from human supervisors. Accordingly, the participants recruited for this study were required to hold at least a bachelor's degree and have a minimum of three years of work experience. Participants were recruited from Bangladesh, a South Asian country [1]; a total of 320 individuals completed the experiment. We reached out to senior management at different firms to inquire about the possibility of employee participation in the experiment. Upon obtaining consent from management, we contacted the employees directly to request information on their educational qualifications and work experience. Those who fulfilled the prerequisites were invited to participate in the experiment. To enhance participants' engagement in the experimental task, they were informed that their participation would be rewarded with a gift worth 5 RMB [2].
We employed a 2 x 2 between-subject experimental design to achieve our research goal. We manipulated AI supervisors by classifying them into AI algorithms and AI data, while human supervisors were categorized as highly experienced and low experienced. The performance feedback provided by the AI algorithm and the highly experienced human supervisor is based on logical reasoning, mathematical computation and a fixed formula that considers all available information. Performance feedback provided by AI data is based on the performance data of other participants or historical data pertaining to similar tasks, while performance feedback provided by low-experienced human supervisors is based on experience, judgment and discretion. A pilot test was conducted to examine perceptions of AI supervisors in the AI algorithm versus AI data scenarios. In total, 30 professional managers employed in the manufacturing industry were randomly assigned to either the AI algorithm or the AI data scenario. Half of the participants were informed that their objective performance would be evaluated by an AI supervisor based on logical reasoning, mathematical computation and a predefined fixed formula, considering all available information. This AI supervisor utilized advanced technologies such as deep learning neural network algorithms, natural language analysis and big data analysis capabilities to undertake managerial responsibilities. In contrast, the remaining participants were informed that their AI supervisor evaluated performance based on other participants' or historical performance data for similar tasks, generating subjective performance evaluations. Endowed with superior technical abilities, this supervisor exhibited the requisite attributes for effective subjective performance evaluation. Participants rated their perceived AI supervisor performance evaluation style on a 1–7 Likert scale, where 1 represented AI data and 7 represented AI algorithm.
The pilot test manipulation was successful (M AI algorithm = 5.47 versus M AI data = 3.32, F = 10.76, p = 0.003).
Human supervisors' experience manipulations were adapted and revised from previous studies by White (2005) and Zhang et al. (2021). A pilot test was conducted to validate the high- versus low-experience human supervisor scenarios. In total, 30 professional managers employed in the manufacturing industry were randomly assigned to either the high- or the low-experience scenario. Half of the participants were apprised that their supervisor possessed over a decade of experience evaluating employees' performance within the same industry. In addition to this extensive experience, the supervisor demonstrated commendable proficiency in leadership, management, communication and technical skills. Conversely, the remaining half of the participants were informed that their supervisor, responsible for evaluating employees' performance, held a university degree and had accrued two years of work experience at a small firm within the same industry. Endowed with commendable communication and technical abilities, this supervisor exhibited the requisite attributes for effective performance in the role. Participants read a description of the human supervisor and scored the supervisor's experience level on a 1–7 Likert scale, where 1 = low experienced and 7 = highly experienced. The pilot test manipulation was successful (M high = 6.12 versus M low = 3.98, F = 11.82, p = 0.000).
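The pilot-test manipulation checks above amount to a one-way ANOVA comparing the mean 1–7 Likert ratings of two randomly assigned groups (n = 15 each). A minimal sketch of such a check follows; the ratings are hypothetical stand-ins, not the authors' data, and with only two groups the ANOVA F equals the square of the pooled-variance t-statistic:

```python
# One-way ANOVA manipulation check for two pilot-test groups (illustrative data)
from scipy import stats

# Hypothetical 1-7 Likert ratings of perceived evaluation style, n = 15 per group
ai_algorithm = [6, 5, 6, 7, 5, 5, 6, 4, 6, 5, 6, 5, 5, 6, 5]
ai_data      = [3, 4, 3, 2, 4, 3, 3, 4, 3, 4, 2, 3, 4, 3, 4]

f_stat, p_value = stats.f_oneway(ai_algorithm, ai_data)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F with a clearly higher mean in the intended group (here, the AI algorithm group) would indicate the manipulation worked, mirroring the checks reported for both pilot tests.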
3.1 Study 1
3.1.1 Participants
For Study 1, a total of 164 participants were recruited, primarily drawn from the manufacturing and service-providing industries, constituting 82% and 18% of the sample, respectively. These participants held executive positions within their respective firms. Voluntary participation was emphasized throughout the recruitment process. The average age of the participants was 32.6 years, with females comprising 35.2% of the sample. Furthermore, 56.2% of the participants possessed graduate-level education, coupled with an average of 6.34 years of job experience.
3.1.2 Experimental task and procedure
Participants took on the role of a sales executive at a large company. They had been working for the company for a long time and were regarded as among the company's top-performing employees. The evaluation of their performance by their respective supervisor (Mr. Robert in the human supervisor conditions versus the AI supervisor) would determine their salary and additional bonus. As sales executives, their main duty was to make more sales to maximize their salary. Employees would receive a fixed salary plus a target bonus. In addition, they would receive another bonus based on their personality, work attitude, leadership, teamwork, creativity and communication skills.
The participants were randomly assigned to one of the four experimental groups. Specifically, one-fourth of the participants were assigned to the AI algorithm condition, another one-fourth to the AI data condition, a further one-fourth to the highly experienced human condition and the remaining participants to the low-experienced human condition. Participants in the AI algorithm and highly experienced human conditions were informed that their objective performance feedback would be determined using a predefined fixed formula. Participants in the AI data condition were informed that their AI supervisor evaluated performance based on data from other participants or historical performance data of similar tasks, generating subjective performance evaluations, while participants in the low-experienced human condition were informed that their subjective performance feedback would be determined at the supervisor's discretion. Participants responded to three questions about their perceptions of supervisors' performance feedback styles. After answering the experimental task questions, they were requested to complete a post-experimental questionnaire, including a manipulation check and demographic inquiries.
3.1.3 Measurement of variables
3.1.3.1 Independent variable
AI algorithm feedback = an indicator variable that takes the value of 1 if performance feedback is provided by AI algorithms and 0 if it is provided by highly experienced human supervisors.
AI data feedback = an indicator variable that takes the value of 1 if performance feedback is provided by AI data and 0 if it is provided by low-experienced human supervisors.
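The two indicator variables above can be coded directly from the experimental condition labels. A purely illustrative sketch (the condition names and the use of pandas are our assumptions, not the authors' implementation):

```python
# Hypothetical dummy coding of the two indicator variables from the four
# experimental conditions described above
import pandas as pd

df = pd.DataFrame({"condition": ["AI_algorithm", "human_high_exp",
                                 "AI_data", "human_low_exp"]})

# 1 = AI algorithm, 0 = highly experienced human; other conditions stay NaN
df["ai_algorithm_feedback"] = df["condition"].map(
    {"AI_algorithm": 1, "human_high_exp": 0})

# 1 = AI data, 0 = low-experienced human; other conditions stay NaN
df["ai_data_feedback"] = df["condition"].map(
    {"AI_data": 1, "human_low_exp": 0})

print(df)
```

Rows outside a given comparison are left missing, so each indicator only contrasts the pair of conditions named in its hypothesis.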
3.1.3.2 Dependent variable
Participants were asked to rate their trust in the supervisor's feedback (0 = no trust at all, 100 = 100% trust). Performance expectancy was measured by the likelihood that the supervisor's feedback would lead to a positive outcome (1 = very unlikely to 7 = very likely), and intention was measured by the likelihood that the firm should employ an AI/human supervisor for providing their performance feedback (1 = very unlikely to 7 = very likely).
3.1.4 Results and discussion
To assess the effective manipulation of the four performance feedback providers, the participants were asked about the individuals who were responsible for providing their performance feedback. After completing the experimental task, the participants were given four options to choose from: (1) AI algorithms, (2) AI data, (3) humans with high experience and (4) humans with low experience. It was noted that all participants provided accurate responses to this question.
Figure 1 depicts the distribution of trust, expectancy and intention scores for Study 1. Within the context of Study 1, a male supervisor is assumed for the scenarios involving humans with high experience (HH) and humans with low experience (HL), whereas AI supervisors are categorized as AI algorithms (AL) and AI data (AD). Notably, AL exhibited higher levels of trust, expectancy and intention than AD, HH and HL, and the trust, expectancy and intention scores attributed to AL far surpass those associated with HH. Remarkably, the first quartile of the trust distribution for AL even exceeds the median trust, expectancy and intention observed in scenarios involving HH. Conversely, employees exhibit a preference for receiving performance feedback from supervisors with lower levels of experience (HL) over AI data (AD), primarily for two reasons. First, when discretionary performance feedback is provided by HL, employees tend to view them as more empathetic and as possessing a greater capacity to understand their individual circumstances and concerns, exercising their judgment and discretion accordingly. Second, HL supervisors deliver extra effort to accommodate employees' needs owing to their limited experience in delivering performance feedback. This reasoning appears to contradict previous findings that emphasize the significance of expertise in shaping recipients' perceptions; for example, White (2005) and Zhang et al. (2021) suggest a positive relationship between agents' expertise and recipients' perceptions of their advice. Nevertheless, based on the Box and Whisker plots depicted in Figure 1, compelling evidence supports the hypotheses that employees manifested higher levels of trust in AL than in human supervisors with extensive experience, particularly when feedback was delivered by AL.
Table 1 reports the variation in mean trust, expectancy and intention across the four feedback providers. A one-way ANOVA was used to assess the variation in employee perceptions, with trust, expectancy and intention as dependent variables and the four feedback providers as the independent variable. The one-way ANOVA results in Table 1 show significant differences in trust (F(3, 160) = 9.154, p = 0.000), expectancy (F(3, 160) = 4.589, p = 0.004) and intention (F(3, 160) = 6.784, p = 0.000) among the four feedback providers. We then calculated t-values and significance levels using Welch t-tests to test our hypotheses. The results, summarized in Table 2, provide support for H1, H2 and H3.
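The analysis pipeline just described (an omnibus one-way ANOVA across the four providers, followed by pairwise Welch t-tests that do not assume equal variances) can be sketched as below. The trust scores are simulated placeholders on the 0–100 scale with 41 participants per cell (164/4), not the study's data:

```python
# Sketch of the Study 1 analysis: one-way ANOVA + pairwise Welch t-tests
# on illustrative trust scores for the four feedback-provider conditions
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trust = {
    "AL": rng.normal(75, 8, 41),  # AI algorithm (hypothetical scores)
    "AD": rng.normal(60, 8, 41),  # AI data
    "HH": rng.normal(68, 8, 41),  # human, high experience
    "HL": rng.normal(66, 8, 41),  # human, low experience
}

# Omnibus test: does trust differ across the four feedback providers?
f_stat, p_anova = stats.f_oneway(*trust.values())
df_within = sum(len(v) for v in trust.values()) - 4
print(f"ANOVA: F(3, {df_within}) = {f_stat:.3f}, p = {p_anova:.4f}")

# Welch t-tests for the directional hypotheses (equal_var=False)
t_h1a, p_h1a = stats.ttest_ind(trust["AL"], trust["HH"], equal_var=False)  # AL vs HH
t_h1b, p_h1b = stats.ttest_ind(trust["AD"], trust["HL"], equal_var=False)  # AD vs HL
print(f"AL vs HH: t = {t_h1a:.3f}, p = {p_h1a:.4f}")
print(f"AD vs HL: t = {t_h1b:.3f}, p = {p_h1b:.4f}")
```

The same two-step structure would be repeated for expectancy and intention; the positive and negative t-statistics correspond to the signed values reported in Table 2.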
The results of Study 1 showed that employees exhibited higher levels of trust in AI algorithms, had greater performance expectations and a stronger intent to receive performance feedback from AI algorithms than from highly experienced human supervisors when feedback was provided based on a predefined fixed formula (H1a, H2a and H3a). These findings reflect the perceived objectivity and reliability of AI algorithms (Jarrahi et al., 2021; Tong et al., 2021). When AI algorithms generate feedback based on a predefined formula, employees may perceive it to be devoid of the bias, personal judgment and favoritism typically associated with human supervisors. This perceived fairness and uniformity of AI algorithm-generated feedback could enhance trust levels. Additionally, the accuracy and precision of AI algorithms in providing feedback might contribute to higher performance expectations and a stronger intention to receive feedback from them. Feedback from AI algorithms based on a predefined fixed formula could engender a perception among employees that the feedback is reliable and unbiased, hence fostering heightened trust and positive expectations.
Conversely, when discretionary performance feedback was provided by human supervisors, even those with low experience, employees expressed higher trust, had greater performance expectations and showed stronger intentions to seek performance feedback from them than from AI data (H1b, H2b and H3b). The findings highlight the psychological and relational dimensions of human interaction. Employees might regard human supervisors as more empathetic and as having a greater capacity to comprehend their specific circumstances and issues (Gray et al., 2007; Yam et al., 2021). This human element may have contributed to the higher levels of trust observed, even though the supervisors possessed less experience. On the other hand, as a technological entity, AI may be perceived as lacking emotional comprehension and a personal touch, resulting in a lower level of perceived trust despite its potential accuracy. The familiarity and social connection associated with human supervisors may outweigh the expertise factor, which could explain the difference in trust levels between humans and AI, particularly when discretionary feedback is provided. Based on the findings of Study 1, it can be inferred that a combination of AI- and human-generated feedback (AI algorithms for feedback based on a predefined fixed formula and human supervisors, even with low experience, for discretionary feedback) is more favored than relying solely on either AI or human feedback.
3.2 Study 2
We examined our hypotheses in Study 1, which used a male name (Mr. Robert) for the human supervisor. Study 2 investigated whether these effects were consistent across the gender of the human supervisor. Firms are traditionally male-dominated, with females constituting under 20% of top decision-makers (Adams, Barber, & Odean, 2018; Zhang et al., 2021). To ensure that the results were robust regardless of the gender of the human supervisor, the human supervisor's name was replaced with a female name, Mrs. Emma.
3.2.1 Participants
Study 2 enlisted 156 participants, predominantly sourced from the manufacturing and service-providing sectors, comprising 85% and 15%, respectively. These individuals occupied executive roles within their respective organizations. Participation was voluntary. The average age of the participants was 31.8 years, with 38.2% being female. Moreover, 54.6% of the participants held graduate-level education, with an average of 6.73 years of job experience. The experimental task, procedure and variables remained the same as those in Study 1.
3.2.2 Results and discussion
In Study 2, the participants were asked the same manipulation check question as in Study 1. All the participants provided accurate responses to the manipulation check question.
Figure 2 presents the distribution of trust, expectancy and intention scores for Study 2. In this study, AI algorithms (AL) show higher trust than AI data (AD), and AL shows much higher trust than humans with high experience (HH); the first quartile of the trust distribution for AL is even higher than the median trust of HH. AL also shows a higher expectancy than HH. When the expectancy distributions for HL and AD are compared, the highest level of expectancy is depicted for HL and a lower degree of expectancy is observed for AD. Regardless of the gender of the supervisor, HL depicts an intention distribution whose first quartile is much higher than the median of the intention distributions for AD, AL and HH. Thus, employees show a stronger preference for receiving performance feedback from HL compared with the other conditions. This inclination stems from the perception that HL, given their limited experience in delivering performance feedback, may seek to please employees at times. According to the Box and Whisker plots illustrated in Figure 2, substantial evidence supports the hypotheses that employees demonstrated increased trust in AL in comparison to human supervisors with extensive experience, especially when the feedback originated from AL.
Table 3 reports the variation in mean trust, expectancy and intention across the four feedback providers. To check the robustness of the results, a one-way ANOVA was employed to test the hypotheses using the same dependent and independent variables as in Study 1. The one-way ANOVA results in Table 3 show significant differences in trust (F(3, 152) = 9.247, p = 0.000), expectancy (F(3, 152) = 7.070, p = 0.000) and intention (F(3, 152) = 5.285, p = 0.001) among the four feedback providers. Furthermore, we calculated t-values and significance levels using Welch t-tests to test our hypotheses. The results, summarized in Table 4, again provide support for H1, H2 and H3.
The findings of Study 2 were consistent with those of Study 1. Study 2 showed that employees trusted AI algorithms more and had greater performance expectations and a stronger intent to receive performance feedback from AI algorithms than from highly experienced human supervisors when feedback was provided based on a predefined fixed formula. Conversely, when discretionary performance feedback was provided by human supervisors, even those with low experience, employees expressed more favorable perceptions than of performance feedback generated by AI data. Study 2 also provides evidence that a combination of AI- and human-generated feedback is more favorable than relying solely on either AI or human feedback.
3.3 Additional analysis
The extant scholarly literature consistently illustrates the superiority of AI over human decision-making in optimizing objective-based decisions (Haesevoets, De Cremer, Dierckx, & Van Hiel, 2021; Jarrahi, 2018). Research emphasizes the crucial significance of employees' trust in AI technologies, which fundamentally molds their perspectives on collaborative initiatives involving human-machine interactions (Zhang et al., 2021). Recent studies indicate that elements such as transparency, explainability and reliability within AI-human collaboration significantly impact employees' readiness to adopt and participate in such collaborative processes (Li et al., 2020; Liu et al., 2023). Empirical studies indicate that employees' perceptions of AI-human collaboration, which fosters task efficiency, skill development and work-life balance, positively correlate with overall job satisfaction and psychological well-being (Fisher, 2019; Wilson & Daugherty, 2018). Moreover, a contingent of scholars posits that the most significant performance enhancements occur through synergistic collaboration between AI and humans (Trunk et al., 2020; Wang et al., 2016; Wilson & Daugherty, 2018). In line with prior research on AI-human collaboration, we conducted an additional experiment. In total, 89 participants were recruited, predominantly from the manufacturing and service-providing industries, comprising 88% and 12%, respectively. These participants held executive positions within their respective organizations. The mean age of participants was 34.3 years, with females constituting 29% of the sample. Furthermore, 66.2% of participants held graduate-level qualifications, with an average job tenure of 6.88 years. The experimental task, procedure and variables remained consistent with those of Study 1.
We hypothesize that performance feedback derived from AI-human collaboration will elicit more favorable employee perceptions compared to feedback generated by either AI or a human supervisor individually. To examine these predictions, we computed t-values and significance levels utilizing the results of the Welch t-test. The outcomes of this analysis, presented in Table 5, indicate that employees demonstrated elevated levels of trust in AI-human collaborative feedback, exhibited heightened performance expectations and expressed a stronger intent to receive performance feedback from AI-human collaboration as opposed to feedback solely from an AI supervisor or a human supervisor. These findings align with previous research suggesting that the most substantial performance improvements arise through the synergistic collaboration between AI and humans (Trunk et al., 2020; Wang et al., 2016; Wilson & Daugherty, 2018). This perceived fairness and consistency in AI-human collaborative feedback could contribute to heightened levels of trust. Furthermore, the precision and accuracy demonstrated by AI-human collaboration in delivering feedback may foster increased performance expectations and a more resolute intention to receive feedback from this collaborative source. The collaborative nature of AI-human feedback could instill a perception among employees that the feedback is reliable and impartial, consequently enhancing trust and cultivating positive expectations.
4. Conclusion
Our study employed an experimental approach to examine employees' perceptions of AI compared with humans regarding trust, performance expectancy and intention to receive feedback in the context of the performance feedback process. In Study 1, a male name was used for the human supervisor compared with the AI supervisor, whereas in Study 2 the male name was replaced with a female name. In both studies, a single question was used to gauge each of the employees' perceptions regarding their supervisors.
The two experimental studies align in their findings, indicating that employees exhibited higher levels of trust in AI algorithms, had greater performance expectations and showed stronger intentions to seek performance feedback from AI algorithms than from highly experienced human supervisors when feedback was provided based on a predefined fixed formula. These findings are consistent with previous studies (Luo et al., 2019; Mahmud et al., 2022; Tong et al., 2021), which show that the superior data analytics capabilities of AI enable it to assess employees' objective performance and provide personalized suggestions with greater accuracy and transparency, resulting in more precise performance feedback. On the other hand, when AI data were compared with human supervisors, even low-experienced ones, under the discretionary performance feedback process, employees expressed higher trust in the low-experienced human supervisors, had greater performance expectations and showed stronger intentions to seek performance feedback from them. These findings corroborate previous research (Davenport & Ronanki, 2018; Leicht-Deobald et al., 2019; Schrage, 2019) showing that the interpersonal communication skills and transformational leadership styles of humans can effectively convey performance-related information to employees, thereby fostering positive perceptions of human supervisors. Moreover, additional analysis indicated that combined AI-human performance feedback led to more favorable employee perceptions than performance feedback provided solely by AI or humans. These findings are consistent with previous studies (Trunk et al., 2020; Wang et al., 2016; Wilson & Daugherty, 2018) showing that organizations achieve the most significant performance improvements through a synergistic collaboration between AI and humans.
Our study makes a significant contribution to the field of management accounting research and practice. Our study takes an initial step to extend existing research on AI integration in performance management systems (Commerford et al., 2022; Jarrahi, 2018; Luo et al., 2022; Tong et al., 2021) to investigate employees' perceptions of AI performance feedback. The findings of our study advocate an AI-human combined performance feedback process as a potential strategy to alleviate the negative perception of employees. Thus, the combined performance feedback processes increase employee perception, thereby resulting in firms' return on AI investment. Moreover, this study provides valuable insights for policymakers to develop a strategy aimed at reducing employees' negative perceptions of algorithm aversion in the performance feedback process.
Our study has some limitations that offer several interesting avenues for future research. First, both of our studies were based on employees' perceptions rather than their actual behavior. Future research should explore real-world reactions to AI versus human supervisors to validate and calibrate our findings. Second, this study's data collection was restricted to a single country. Moreover, we employed a single question for each variable and an indirect questioning technique in which participants were asked to answer questions from the perspective of human supervisors (Mr. Robert and Mrs. Emma, as described in the scenarios) to compare with AI. However, it should be noted that responses obtained through a single question for each variable and indirect questioning may not fully represent participants' actual perceptions. Future research could use samples from other countries, ask more questions for each variable and employ a more appropriate sample (i.e. employees from firms that are fully automated) to examine employees' perceptions. Third, the manipulation of human supervisors' high versus low experience was predicated solely upon the supervisor's job tenure, communication skills and technical proficiency. Future investigations may explore alternative manipulations of human supervisors using other variables, including their educational qualifications, participation in formal training or professional development programs, prior occupational roles and their knowledge and expertise in the field.
Finally, future research could replicate our findings in other contexts involving decision-making (e.g. recruitment, employee target setting, privacy and law enforcement). Moreover, there is an opportunity for an investigation into the connections between employees' attitudes and the use of both human and AI-based technologies. It is also crucial to examine the sources and interactions of other criteria (such as advisor similarity, likeability and tangibility). The process underlying the preference between AI and human supervisors can be better understood with the help of such an investigation. Other factors that may be explored in influencing the adoption of AI-based services include perceived costs and risks, perceived integrity and dependability and perceived innovation. Expanding AI adoption research may necessitate the development of a separate theoretical structure that unifies the various settings, factors and mechanisms that have been reported in this emerging research domain.
Figures
Table 1. One-way ANOVA (Study 1)

| Measure | Variation source | DF | SS | MSS | F | p-value |
|---|---|---|---|---|---|---|
| Trust | Between | 3 | 1764 | 588.1 | 9.154 | 0.000*** |
|  | Within | 160 | 10279 | 64.2 |  |  |
| Expectancy | Between | 3 | 23.58 | 7.860 | 4.589 | 0.004*** |
|  | Within | 160 | 274.05 | 1.713 |  |  |
| Intention | Between | 3 | 37.87 | 12.624 | 6.784 | 0.000*** |
|  | Within | 160 | 297.76 | 1.861 |  |  |

Note(s): ***, ** and * indicate significance at the 1, 5 and 10% levels, respectively
Source(s): Table by authors
Table 2. Welch t-test (Study 1)

| Measure | Hypothesis | Statistical form | t-statistic | DF | p-value | Supports the hypothesis |
|---|---|---|---|---|---|---|
| Trust |  |  | 3.212 | 78 | 0.000*** | Yes |
|  |  |  | −3.611 | 77 | 0.000*** | Yes |
| Expectancy |  |  | 2.916 | 71 | 0.002*** | Yes |
|  |  |  | −2.131 | 79 | 0.018** | Yes |
| Intention |  |  | 3.035 | 78 | 0.001*** | Yes |
|  |  |  | −2.703 | 77 | 0.004*** | Yes |

Note(s): ***, ** and * indicate significance at the 1, 5 and 10% levels, respectively. A p-value below 0.10 indicates rejection of the null hypothesis stated against the corresponding hypothesis
Source(s): Table by authors
Table 3. One-way ANOVA (Study 2)

| Measure | Variation source | DF | SS | MSS | F | p-value |
|---|---|---|---|---|---|---|
| Trust | Between | 3 | 1844 | 614.7 | 9.247 | 0.000*** |
|  | Within | 152 | 10104 | 66.5 |  |  |
| Expectancy | Between | 3 | 31.87 | 10.622 | 7.070 | 0.000*** |
|  | Within | 152 | 228.36 | 1.502 |  |  |
| Intention | Between | 3 | 26.38 | 8.793 | 5.285 | 0.001*** |
|  | Within | 152 | 252.87 | 1.664 |  |  |

Note(s): ***, ** and * indicate significance at the 1, 5 and 10% levels, respectively
Source(s): Table by authors
Table 4. Welch t-test (Study 2)

| Measure | Hypothesis | Statistical form | t-statistic | DF | p-value | Supports the hypothesis |
|---|---|---|---|---|---|---|
| Trust |  |  | 3.752 | 75 | 0.000*** | Yes |
|  |  |  | −2.892 | 75 | 0.003*** | Yes |
| Expectancy |  |  | 3.523 | 75 | 0.000*** | Yes |
|  |  |  | −2.485 | 75 | 0.007*** | Yes |
| Intention |  |  | 3.441 | 75 | 0.000*** | Yes |
|  |  |  | −1.961 | 74 | 0.027** | Yes |

Note(s): ***, ** and * indicate significance at the 1, 5 and 10% levels, respectively. A p-value below 0.10 indicates rejection of the null hypothesis stated against the corresponding hypothesis
Source(s): Table by authors
Table 5. Welch t-test (additional analysis)

| Measure | Statistical form | t-statistic | DF | p-value |
|---|---|---|---|---|
| Trust |  | 8.361 | 139 | 0.000*** |
|  |  | 9.440 | 159 | 0.000*** |
| Expectancy |  | 7.447 | 125 | 0.000*** |
|  |  | 7.569 | 122 | 0.000*** |
| Intention |  | 5.842 | 131 | 0.000*** |
|  |  | 6.048 | 124 | 0.000*** |

Note(s): ***, ** and * indicate significance at the 1, 5 and 10% levels, respectively
Source(s): Table by authors
Notes
1. The key advantage of choosing managers from a specific region is that it helps mitigate the issue of sample heterogeneity (Chen, Jermias, & Nazari, 2021; Moores & Yuen, 2001).
2. 1 RMB = $0.14.
References
Adam, M., Wessel, M., & Benlian, A. (2021). AI-based chatbots in customer service and their effects on user compliance. Electronic Markets, 31(2), 427–445. doi: 10.1007/s12525-020-00414-7.
Adams, R. B., Barber, B. M., & Odean, T. (2018). STEM parents and women in Finance. Financial Analysts Journal, 74(2), 84–97. doi: 10.2469/faj.v74.n2.3.
Agrawal, A., Gans, J. S., & Goldfarb, A. (2019). Artificial intelligence: The ambiguous labor market impact of automating prediction. Journal of Economic Perspectives, 33(2), 31–50. doi: 10.1257/jep.33.2.31.
Beare, E. C., O’Raghallaigh, P., McAvoy, J., & Hayes, J. (2020). Employees’ emotional reactions to digitally enabled work events. Journal of Decision Systems, 29(1), 1–17. doi: 10.1080/12460125.2020.1782085.
Bloomberg Tax (2020). Big four invest billions in tech, reshaping their identities. Available from: https://news.bloombergtax.com/financial-accounting/big-four-invest-billions-in-tech-reshaping-their-identities
Bol, J. C. (2008). Subjectivity in compensation contracting. Journal of Accounting Literature, 27(2), 1–24.
Cao, G., Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2021). Understanding managers’ attitudes and behavioral intentions towards using artificial intelligence for organizational decision-making. Technovation, 106, 101–115. doi: 10.1016/j.technovation.2021.102312.
Castelo, N., Bos, M. W., & Lehmann, D. R. (2019). Task-dependent algorithm aversion. Journal of Marketing Research, 56(5), 809–825. doi: 10.1177/0022243719851788.
Chen, Y., Jermias, J., & Nazari, J. A. (2021). The effects of reporting frameworks and a company’s financial position on managers’ willingness to invest in corporate social responsibility projects. Accounting and Finance, 61(2), 3385–3425. doi: 10.1111/acfi.12706.
Chen, Y., Biswas, M. I., & Talukder, M. S. (2022). The role of artificial intelligence in effective business operations during COVID-19. International Journal of Emerging Markets, 18(12), 6368–6387. doi: 10.1108/IJOEM-11-2021-1666.
Chui, M., Manyika, J., Miremadi, M., Henke, N., Chung, R., & Nel, P. (2018). Notes from the AI frontier: Applications and value of deep learning. McKinsey Global Institute Discussion Paper. Available from: https://www.mckinsey.com/featured-insights/artificial-intelligence/notes-from-the-ai-frontier-applications-and-value-of-deep-learning (accessed 17 April 2023).
Commerford, B. P., Dennis, S. A., Joe, J. R., & Ulla, J. W. (2022). Man versus machine: Complex estimates and auditor reliance on artificial intelligence. Journal of Accounting Research, 60(1), 171–201. doi: 10.1111/1475-679X.12407.
Danner, M., Hadžić, B., Weber, T., Zhu, X., & Rätsch, M. (2023). Towards equitable AI in HR: Designing a fair, reliable, and transparent human resource management application. In Deep Learning Theory and Applications. DeLTA 2023 (pp. 308–325). doi: 10.4018/978-1-7998-1554-9.ch002.
Davenport, T. H., & Ronanki, R. (2018). Artificial intelligence for the real world. Harvard Business Review, 96(1), 108–116.
Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm aversion: People erroneously avoid algorithms after seeing them err. Journal of Experimental Psychology: General, 144(1), 114–126. doi: 10.1037/xge0000033.
Ding, K., Lev, B., Peng, X., Sun, T., & Vasarhelyi, M. A. (2020). Machine learning improves accounting estimates: Evidence from insurance payments. Review of Accounting Studies, 25(3), 1098–1134. doi: 10.1007/s11142-020-09546-9.
Elliott, D., & Soifer, E. (2022). AI technologies, privacy, and security. Frontiers in Artificial Intelligence, 5, 826737. doi: 10.3389/frai.2022.826737.
Fehrenbacher, D. D., Schulz, A. K.-D., & Rotaru, K. (2018). The moderating role of decision mode in subjective performance evaluation. Management Accounting Research, 41(3), 1–10. doi: 10.1016/j.mar.2018.03.001.
Fisher, A. (2019). Putting A.I. And algorithms to work for performance reviews. Fortune. Available from: https://fortune.com/2019/07/14/artificial-intelligence-workplace-ibm-annual-review/ (accessed 13 February 2023).
Fountaine, T., McCarthy, B., & Saleh, T. (2019). Building the AI-powered organization. Harvard Business Review, 97(4), 62–73.
Gray, H. M., Gray, K., & Wegner, D. M. (2007). Dimensions of mind perception. Science, 315(5812), 619. doi: 10.1126/science.1134475.
Haesevoets, T., De Cremer, D., Dierckx, K., & Van Hiel, A. (2021). Human-machine collaboration in managerial decision making. Computers in Human Behavior, 119, 106730. doi: 10.1016/j.chb.2021.106730.
Harrell, M. B. (2023). Office of the Mayor, City of Seattle releases generative artificial intelligence policy defining responsible use for city employees. The Guardian. Available from: https://www.theguardian.com/commentisfree/2023/aug/18/ai-society-humans-machines-culture-ethics
Heaven, W. D. (2020). This startup is using AI to give workers a ‘productivity score’. MIT Technology Review. Available from: https://www.technologyreview.com/2020/06/04/1002671/startup-ai-workers-productivity-score-bias-machine-learning-business-covid/ (accessed 10 December 2021).
Hilbert, M., & López, P. (2011). The world’s technological capacity to store, communicate, and compute information. Science, 332(6025), 60–65. doi: 10.1126/science.1200970.
Hu, X., Kaplan, S., Wei, F., & Vega, R. P. (2014). Do employees know how their supervisors view them? A study examining metaperceptions of job performance. Human Performance, 27(5), 435–457. doi: 10.1080/08959285.2014.956177.
Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. doi: 10.1016/j.bushor.2018.03.007.
Jarrahi, M. H., Newlands, G., Lee, M. K., Wolf, C. T., Kinder, E., & Sutherland, W. (2021). Algorithmic management in a work context. Big Data and Society, 8(2), 205395172110203. doi: 10.1177/20539517211020332.
Jia, N., Luo, X., Fang, Z., & Liao, C. (2023). When and how artificial intelligence augments employee creativity. Academy of Management Journal, 67(1), 5–32. doi: 10.5465/amj.2022.0426.
Johnson, D. S., Bardhi, F., & Dunn, D. T. (2008). Understanding how technology paradoxes affect customer satisfaction with self-service technology: The role of performance ambiguity and trust in technology. Psychology & Marketing, 25(5), 416–443. doi: 10.1002/mar.20218.
Kirkos, E., Spathis, C., & Manolopoulos, Y. (2007). Data Mining techniques for the detection of fraudulent financial statements. Expert Systems with Applications, 32(4), 995–1003. doi: 10.1016/j.eswa.2006.02.016.
Lancaster, A. (2023). The future of the workforce: How human-AI collaboration will redefine the industry. Available from: https://www.forbes.com/sites/forbestechcouncil/2023/05/04/the-future-of-the-workforce-how-human-ai-collaboration-will-redefine-the-industry/?sh=4c37b15d7983
Leicht-Deobald, U., Busch, T., Schank, C., Weibel, A., Schafheitle, S., Wildhaber, I., & Kasper, G. (2019). The challenges of algorithm-based HR decision-making for personal integrity. Journal of Business Ethics, 160(2), 377–392. doi: 10.1007/s10551-019-04204-w.
Li, B. H., Hou, B. C., Yu, W. T., Lu, X. B., & Yang, C. W. (2017). Applications of artificial intelligence in intelligent manufacturing: A review. Frontiers of Information Technology and Electronic Engineering, 18(1), 86–96. doi: 10.1631/fitee.1601885.
Li, C., Pan, R., Xin, H., & Deng, Z. (2020). Research on artificial intelligence customer service on consumer attitude and its impact during online shopping. Journal of Physics: Conference Series, 1575(1), 012192. doi: 10.1088/1742-6596/1575/1/012192.
Liu, M., Li, C., Wang, S., & Li, Q. (2023). Digital transformation, risk-taking, and innovation: Evidence from data on listed enterprises in China. Journal of Innovation & Knowledge, 8(1), 100332. doi: 10.1016/j.jik.2023.100332.
Luo, X., Tong, S., Fang, Z., & Qu, Z. (2019). Frontiers: Machines vs humans: The impact of artificial intelligence chatbot disclosure on customer purchases. Marketing Science, 38(6), 937–947. doi: 10.1287/mksc.2019.1192.
Luo, X., Qin, M. S., Fang, Z., & Qu, Z. (2021). Artificial intelligence coaches for sales agents: Caveats and solutions. Journal of Marketing, 85(2), 14–32. doi: 10.1177/0022242920956676.
Luo, X., Fang, Z., & Peng, H. (2022). When artificial intelligence backfires: The effects of dual AI-human supervision on employee performance. MIS Quarterly, forthcoming.
Mahmud, H., Islam, A. K. M. N., Ahmed, S. I., & Smolander, K. (2022). What influences algorithmic decision-making? A systematic literature review on algorithm aversion. Technological Forecasting and Social Change, 175, 121390. doi: 10.1016/j.techfore.2021.121390.
Makarius, E. E., Mukherjee, D., Fox, J. D., & Fox, A. K. (2020). Rising with the machines: A sociotechnical framework for bringing artificial intelligence into the organization. Journal of Business Research, 120(11), 262–273. doi: 10.1016/j.jbusres.2020.07.045.
Marginson, D., McAulay, L., Roush, M., & van Zijl, T. (2014). Examining a positive psychological role for performance measures. Management Accounting Research, 25(1), 63–75. doi: 10.1016/j.mar.2013.10.002.
Marr, B. (2019). How Chinese retailer JD.com uses AI, big data & robotics to take on Amazon. Forbes.
Marsh, E., Vallejos, E. P., & Spence, A. (2022). The digital workplace and its dark side: An integrative review. Computers in Human Behavior, 128, 107118. doi: 10.1016/j.chb.2021.107118.
McKnight, D. H., Carter, M., Thatcher, J. B., & Clay, P. F. (2011). Trust in a specific technology: An investigation of its components and measures. ACM Transactions on Management Information Systems, 2(2), 1–25. doi: 10.1145/1985347.1985353.
Meuter, M. L., Ostrom, A. L., Roundtree, R. I., & Bitner, M. J. (2000). Self-service technologies: Understanding customer satisfaction with technology-based service encounters. Journal of Marketing, 64(3), 50–64. doi: 10.1509/jmkg.64.3.50.18024.
Meuter, M. L., Bitner, M. J., Ostrom, A. L., & Brown, S. W. (2018). Choosing among alternative service delivery modes: An investigation of customer trial of self-service technologies. Journal of Marketing, 69(2), 61–83. doi: 10.1509/jmkg.69.2.61.60759.
Moores, K., & Yuen, S. (2001). Management accounting systems and organizational configuration: A life-cycle perspective. Accounting, Organizations and Society, 26(4-5), 351–389. doi: 10.1016/S0361-3682(00)00040-4.
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J. F., Breazeal, C., … Wellman, M. (2019). Machine behaviour. Nature, 568(7753), 477–486. doi: 10.1038/s41586-019-1138-y.
Reynolds, K. E., & Beatty, S. E. (1999). A relationship customer typology. Journal of Retailing, 75(4), 523–531. doi: 10.1016/s0022-4359(99)00016-0.
Rivera, M., Qiu, L., Kumar, S., & Petrucci, T. (2021). Are traditional performance reviews outdated? An empirical analysis on continuous, real-time feedback in the workplace. Information Systems Research, 32(2), 517–540. doi: 10.1287/isre.2020.0979.
Roose, K. (2020). A machine may not take your job, but one could become your boss. The New York Times.
Schrage, M. (2019). Does AI-flavored feedback require a human touch?. MIT Sloan Management Review, 49(3), 87–89.
Tong, S., Jia, N., Luo, X., & Fang, Z. (2021). The Janus face of artificial intelligence feedback: Deployment versus disclosure effects on employee performance. Strategic Management Journal, 42(9), 1600–1631. doi: 10.1002/smj.3322.
Trunk, A., Birkel, H., & Hartmann, E. (2020). On the current state of combining human and artificial intelligence for strategic organizational decision making. Business Research, 13(3), 875–919. doi: 10.1007/s40685-020-00133-x.
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly, 27(3), 425–478. doi: 10.2307/30036540.
Wang, D., Khosla, A., Gargeya, R., Irshad, H., & Beck, A. H. (2016). Deep learning for identifying metastatic breast cancer. ArXiv. doi: 10.48550/arXiv.1606.05718.
White, T. B. (2005). Consumer trust and advice acceptance: The moderating roles of benevolence, expertise, and negative emotions. Journal of Consumer Psychology, 15(2), 141–148. doi: 10.1207/S15327663JCP1502_6.
Wilson, J. H., & Daugherty, P. R. (2018). Human plus Machine: Reimagining work in the age of AI. Harvard Business Review.
Yam, K. C., Bigman, Y. E., Tang, P. M., Ilies, R., De Cremer, D., Soh, H., & Gray, K. (2021). Robots at work: People prefer—and forgive—service robots with perceived feelings. Journal of Applied Psychology, 106(10), 1557–1572. doi: 10.1037/apl0000834.
Zhang, L., Pentina, I., & Fan, Y. (2021). Who do you choose? Comparing perceptions of human vs robo-advisor in the context of financial services. Journal of Services Marketing, 35(5), 634–646. doi: 10.1108/JSM-05-2020-0162.
Zuiderwijk, A., Janssen, M., & Dwivedi, Y. K. (2015). Acceptance and use predictors of open data technologies: Drawing upon the unified theory of acceptance and use of technology. Government Information Quarterly, 32(4), 429–440. doi: 10.1016/j.giq.2015.09.005.