Abstract
Purpose
Starting from the relevance of ethics to the application of artificial intelligence (AI) in the context of employee recruitment and selection (R&S), in this article, we aim to provide a comprehensive review of the literature in light of the main ethical theories (utilitarian theories, theories of justice, and theories of rights) to identify a future research agenda and practical implications.
Design/methodology/approach
On the basis of the best-quality and most influential journals, we conducted a systematic review of 120 articles from two databases (Web of Science and Scopus) to provide descriptive results and adopt a framework for deductive classification of the main topics.
Findings
Inspired by the three ethical theories, we identified three thematic lines of enquiry for the debate on AI in R&S: (1) the utilitarian view: the efficient optimisation of R&S through AI; (2) the justice view: the perceptions of justice and fairness related to AI techniques; and (3) the rights view: the respect for legal and human rights requirements when AI is applied.
Originality/value
This article provides a detailed assessment of the adoption of AI in the R&S process from the standpoint of traditional ethics theories and offers an integrative theoretical framework for future research on AI in the broader field of HRM.
Keywords
Citation
Mori, M., Sassetti, S., Cavaliere, V. and Bonti, M. (2024), "A systematic literature review on artificial intelligence in recruiting and selection: a matter of ethics", Personnel Review, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/PR-03-2023-0257
Publisher
Emerald Publishing Limited
Copyright © 2024, Martina Mori, Sara Sassetti, Vincenzo Cavaliere and Mariacristina Bonti
License
Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode
Introduction
A February 2022 survey conducted by the Society for Human Resource Management found that 79% of employers use artificial intelligence (AI) and/or automation for recruitment and selection (R&S; Friedman, 2023). The potential benefits for organisations that implement this new technology in human resource management (HRM) have increased, especially under the pressure of the COVID-19 pandemic, along with the interest of researchers and practitioners in AI, culminating in a growing debate on this theme (Makarius et al., 2020).
Scholars have started to sort and systematise knowledge regarding the integration of AI into HRM (Gélinas et al., 2022; Kaushal et al., 2021; Qamar and Samad, 2022; Vrontis et al., 2022). These contributions have identified the R&S process, considered the backbone of any organisation's HRM system, as one of the most prominent applications of AI in HRM. In this regard, AI can deliver an enhanced candidate experience that is seamless, simple, and intuitive (Meister, 2019).
More specifically, a recent review contributed to the understanding of the antecedents and outcomes of the use of AI in staffing (Nguyen and Park, 2022) and suggested ethics as a future research avenue for this specific field. Similar conclusions and suggestions for future research were offered by Malik et al. (2023) in their recent review of the general relationship between AI and HRM. These authors considered research on the ethical aspects of adopting and implementing AI in human resources (HR) one of the main priorities in the field. Moreover, Hunkenschroer and Luetge (2022) directly investigated the ethical side of the application of AI in the R&S process, concluding that exploring the relevant aspects of AI in R&S is crucial and should be approached through the perspective of ethics theories. Indeed, scholars have noted that a comprehensive analysis of AI within the framework of traditional ethics theories is absent from this literature (Hunkenschroer and Luetge, 2022; Prikshat et al., 2023). Motivated by this research gap, the present study aims to answer the question: What are the key relevant aspects of AI in R&S in light of the main ethical theories?
Therefore, inspired by previous studies (Kaushal et al., 2021; Nguyen and Park, 2022; Vrontis et al., 2022), we adopted a systematic literature review approach (Kunisch et al., 2023; Paul et al., 2021; Simsek et al., 2023) to provide a comprehensive review of research on AI in the context of the R&S of candidates in light of ethical theories. Indeed, we systematise our review results using well-known ethical theories in the field of organisational theory and HRM (Cavanagh et al., 1981; Greenwood, 2002, 2013; Winstanley et al., 1996), namely utilitarian theories (which evaluate behaviour in terms of its social consequences), theories of justice (which focus on the distributional effects of actions or policies), and theories of rights (which emphasise the entitlements of individuals). Inspired by these three ethical theories, we propose three thematic lines of enquiry for the debate on the use of AI in R&S.
Accordingly, this review systematises the existing literature on the subject, revealing and exploring the significant theoretical and practical implications of AI in R&S. Moreover, the study offers an integrative framework for addressing ethical issues of AI within the broader field of HRM.
Artificial intelligence in R&S
In the literature, AI is defined as the implementation of digital technology to develop systems able to perform tasks that traditionally require human intelligence (Tambe et al., 2019). AI is constantly evolving, enabling the processing of large amounts of data, the identification of patterns, and the performance of repetitive tasks without human involvement or supervision. The literature uses various terms to refer to AI, including “algorithm”, “analytics”, and “digital” (Meijerink et al., 2021). When applied in the field of HRM, AI integrates the traditional people-orientated approach with a greater emphasis on data and analytics (Gélinas et al., 2022). One of the most prominent applications of this new tool is in R&S, considered the HRM backbone of any organisation. Recruiting is defined as the practices and activities carried out by the organisation to identify and attract a pool of potential applicants (Barber, 1998, p. 5), from which the best candidate is then identified through the subsequent selection process.
AI has undergone substantial advancements in R&S due to persistent research contributions. However, despite the increasing literature on this theme, scholars emphasise the need for meticulous scrutiny of the ethical underpinnings of this technology (Malik et al., 2023; Nguyen and Park, 2022; Qamar and Samad, 2022).
Research protocol
Consistent with recent trends in HRM systematic reviews (Sharma and Chillakuri, 2022; Sokolov and Zavyalova, 2022), we conducted a classifying literature review (Kunisch et al., 2023) to provide a comprehensive review of AI research in the context of R&S. We adopted the Scientific Procedures and Rationales for Systematic Literature Reviews (SPAR-4-SLR, Paul et al., 2021), a protocol suitable for the social sciences (Palumbo et al., 2023).
Method
Reliability and systematicity are cornerstones of our literature review’s architecture. Embracing the comprehensive SPAR-4-SLR protocol, our methodology incorporated the essential practical steps outlined by Simsek et al. (2023), underscoring that the two approaches are complementary and, when assembled together (Figure 1), significantly enhance the overall reliability of the adopted research protocol.
After envisioning our research question, the second step was to define the boundary conditions of the review (explicating). In this regard, following the suggestions of previous reviews on AI in the field of HRM (Kaushal et al., 2021) and earlier works (Qamar and Samad, 2022; Sharma and Chillakuri, 2022; Sokolov and Zavyalova, 2022), this review comprised a comprehensive search of two major databases: (1) the Web of Science (WoS) Social Science Citation Index; and (2) Scopus, focusing on business and management subject areas. To select papers of the highest relevance in terms of quality rating (Le Brocq et al., 2023; Sokolov and Zavyalova, 2022), we adopted the 2021 Academic Journal Guide provided by the Chartered Association of Business Schools (CABS, 2021) and focused on specific management subcategories, as shown in Figure 1.
Central to the subsequent executing step “is the development of a strategy that guides the keyword searches that constitute the bulk of the search process” (Simsek et al., 2023, p. 297). In this third step, based on the literature on the relationship between AI and R&S, we adopted an iterative process to select the keywords for the search string, so as to provide a focused and comprehensive peer-reviewed literature base on AI in R&S (Meijerink et al., 2021). Figure 1 shows the optimal combination of keywords used in WoS, cross-validated and integrated with the Scopus results. The study intentionally avoids ethics-related keywords to ensure a broad exploration of AI in R&S beyond articles specifically focused on ethical aspects. This deliberate omission allows the inclusion of studies addressing AI in R&S even if they do not explicitly discuss ethical issues, in line with the research objective. After merging the WoS and Scopus results and removing duplicates, we obtained a data set of 1,492 articles at the end of this step.
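For readers wishing to reproduce this merging and deduplication step, the following minimal sketch illustrates one way it could be scripted; the file names, column labels, and matching keys are our own illustrative assumptions, not part of the SPAR-4-SLR protocol.

```python
# Illustrative sketch: merging WoS and Scopus exports and removing duplicates.
# File names and column labels ("DOI", "Title") are hypothetical and would
# need to be mapped to the actual export formats of the two databases.
import pandas as pd

wos = pd.read_csv("wos_export.csv")
scopus = pd.read_csv("scopus_export.csv")
merged = pd.concat([wos, scopus], ignore_index=True)

# Normalise keys so formatting differences do not hide duplicates
merged["doi_key"] = merged["DOI"].str.lower().str.strip()
merged["title_key"] = (
    merged["Title"]
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", "", regex=True)
    .str.strip()
)

# Deduplicate on DOI first (keeping records without a DOI apart so their
# missing keys are not collapsed together), then on normalised title
has_doi = merged["doi_key"].notna()
by_doi = pd.concat([merged[has_doi].drop_duplicates("doi_key"), merged[~has_doi]])
deduped = by_doi.drop_duplicates("title_key")

print(f"{len(deduped)} unique articles")  # 1,492 at this stage of the review
```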
During the fourth step, we established the exclusion criteria by evaluating the relevance of the articles’ content against the definition of the application of AI in R&S. Three exclusion criteria guided the evaluating step, as shown in Figure 1: off-topic, off-scope and off-focus (Palumbo et al., 2023). A two-stage evaluating procedure was adopted (Simsek et al., 2023): each researcher manually selected documents to include in the analysis by reading the title and abstract, followed by a refined quality assessment based on a full-text review. During the review, some articles aligned with multiple perspectives, such as utilitarian aspects coexisting with discussions on justice and rights. In these cases, we adopted an “on balance” classification, prioritising the prevailing emphasis emerging from the article under review. To ensure the best fit of the papers included in our database, we compared the data sets, discussing and resolving any disagreements about the composition of the final data set.
To ensure that this work contains all the relevant previous review articles on R&S, we also searched for other reviews published in CABS journals regardless of the sub-research-field criteria. We found one additional relevant review on this theme (Kaushal et al., 2021), which we thus included in the final database. After a final screening of the entire corpus of selected articles to ensure the best relevance of the documents, the final data set comprised 120 papers.
The subsequent stage of our systematic literature review involved encoding. Aligned with our research question regarding the comprehension of pivotal facets of AI in R&S within prevalent ethical theories, we adhered to the methodology employed in previous investigations (Schumann, 2001). Specifically, the most effective way to grapple with ethical issues is to deductively apply a framework of the main theories (Simsek et al., 2023) that have been examined and used to analyse ethical issues in other aspects of human life (Hunkenschroer and Luetge, 2022).
In this regard, the literature on ethics in business in general, and in HRM in particular, can be summarised around three main ethical theories proposed by Cavanagh et al. (1981) and also discussed by Winstanley et al. (1996) and Greenwood (2002, 2013): utilitarian theories, theories of justice and theories of rights.
Utilitarian theory asserts that the virtue of actions or behaviours is established exclusively through their outcomes. It introduces the principle of generating maximal benefit for the largest portion of society (Legge, 1998). In the context of HRM, this ethical perspective is contingent upon demonstrating outcomes that maximise utility. Expanding on this, and building on Greenwood (2002), our approach to encoding articles from a utilitarian perspective centred on the utility of AI in R&S for those involved, namely the organisation, recruiters and, foremost, the candidates.
The theory of justice (Rawls, 1971) is based on principles such as equity, fairness, and impartiality. Within the realm of HRM, these principles offer a robust framework for evaluating the ethical underpinnings of organisational practices, ensuring equitable treatment among the employees (Cavanagh et al., 1981; Winstanley et al., 1996). Finally, the third main theory refers to the Kantian view of ethics. Based on the respect-for-persons principle, Kant’s ethical theory (1964) stipulates that individuals should always be treated as ends in themselves, not merely as a means to an end. This doctrine insists on respecting human beings due to their inherent moral dignity, transcending conditional value (Legge, 1998). Known as the theory of rights, it asserts that fundamental human rights, applicable in various contexts, including HRM, should be upheld in all decision-making (Cavanagh et al., 1981).
As for the elaborating step, we analysed and extracted themes from the articles under review, clustering them according to the above ethical perspectives and synthesising them (Paul et al., 2021; Simsek et al., 2023), as shown in Table 1.
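As a purely illustrative aid, the toy sketch below shows how such a coding record might be tallied into the theme counts reported in Table 1; the entries and labels are placeholders, and in our review the codes were assigned through full-text reading by the authors, not computed.

```python
# Toy sketch: tallying manually coded articles by ethical line of enquiry
# and dominant theme, as summarised in Table 1. Entries are placeholders.
from collections import Counter

coded_articles = [
    # (authors, year, dominant theme, ethical line) - illustrative examples
    ("Koenig et al.", 2023, "Optimising Selection process", "Utilitarianism"),
    ("Pessach and Shmueli", 2021, "AI bias", "Justice"),
    ("Yam and Skorburg", 2021, "Rights violation", "Rights"),
]

theme_counts = Counter((line, theme) for _, _, theme, line in coded_articles)
for (line, theme), n in sorted(theme_counts.items()):
    print(f"{line:15s} | {theme:30s} | {n} article(s)")
```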
Finally, the exposing step represents the culmination of our systematic literature review, providing a comprehensive delineation of our findings and insights while identifying gaps and delineating areas for future research.
Results
Descriptive results
Considering some descriptive results before presenting the literature review results provides a preliminary snapshot of the phenomenon under investigation. The analysis of the publication trend offers a picture of the evolution of research on AI in R&S and of the trends in this field (Figure 2).
Before 2019, few articles discussed AI in R&S. The pivotal year was 2020, marked by increased digitalisation due to the challenges posed by the COVID-19 pandemic. This shift prompted a surge in literature exploring new approaches to remote work and human resource management, resulting in a notable increase in publications in subsequent years.
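A minimal sketch of how this trend analysis could be reproduced from the final data set is shown below; the file name and the "Year" column are illustrative assumptions rather than part of our protocol.

```python
# Sketch of the publication-trend analysis behind Figure 2. The input file
# is hypothetical and assumed to hold the 120 screened articles.
import pandas as pd
import matplotlib.pyplot as plt

final_set = pd.read_csv("final_dataset.csv")  # assumed column: "Year"
per_year = final_set.groupby("Year").size()   # articles published per year

per_year.plot(kind="bar", xlabel="Publication year", ylabel="Number of articles")
plt.tight_layout()
plt.show()
```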
Figure 3 shows the distribution based on the CABS (2021) research fields adopted as selection criteria in our review.
AI in R&S is studied across diverse journal fields, with “Information Systems” leading at 32% of articles. “Psychology (Organisational)” ranks second, while “Human Resource Management and Employment Studies” ranks third, emphasising insights for HR professionals on the advantages and disadvantages of AI in R&S.
Review results: an interpretative framework
The theoretical approaches explained in the method section offer the opportunity to frame the literature about AI in R&S around three main lines of ethical enquiries: (1) the utilitarian view – the efficient optimisation of R&S through AI; (2) the justice view – the perceptions of justice and fairness related to AI techniques; and (3) the rights view – the respect for legal and human rights requirements when AI is applied.
According to the above thematic lines, we systematised the articles in our review to create a constructive debate on this topic. This systematisation is summarised in Table 1, which offers a comprehensive overview of the literature supporting each theme presented in the subsequent pages.
The utilitarian view: the efficient optimisation of R&S through AI
Some early applications of AI in R&S occurred in the military sector (Hooper et al., 1998). More than two decades after these initial applications, the debate about the benefits of applying AI in HRM (Gélinas et al., 2022; Malik et al., 2023; Vrontis et al., 2022) has become a trending topic. This includes the benefits of its application to R&S processes, discussed in the 29 articles in Table 1 (Allal-Chérif et al., 2021; Nguyen and Park, 2022; Ore and Sposato, 2022).
The literature agrees that, in the field of HRM, R&S is the dominant domain for the application of AI (Malik et al., 2023; Vrontis et al., 2022). The main benefits relate to cost reduction, access to more applicants, quicker responses, more positive perceptions of the company by applicants (Vrontis et al., 2022), and enhanced evaluation validity (Thompson et al., 2023). Specifically, Koenig et al. (2023) demonstrated that machine learning (ML) can assess candidates’ narrative responses to assessment questions as accurately as humans but with greater efficiency. Another study found that AI is believed to provide efficiency by automating routine screening tasks, allowing recruiters to spend more time on strategy formulation and implementation (Ore and Sposato, 2022). Moreover, Kot et al. (2021) demonstrated the significant relationship between perceived AI quality, AI adoption, and employer reputation.
Another critical topic, which explicitly emerged in five papers included in Table 1, is the context in which this technology is adopted. In this regard, Pan et al. (2022) confirmed the importance of government support, showed that relevant technological resources are essential for AI adoption, and encouraged simplifying AI’s technical complexity. In addition, research has called attention to the importance of contextual elements for understanding the impact of this technology in the complex sociotechnical system in which it is implemented (Bankins, 2021), such as global south economies (Kshetri, 2021) and developing countries (Islam et al., 2022).
Focusing on recruitment, Allal-Chérif et al. (2021) compared four case studies from different organisations adopting various digital technologies such as social networks, MOOCs, serious games, chatbots, and big data analysis matching systems for talent identification, selection, and retention purposes. Their findings suggest that integrating AI in recruitment facilitates a more comprehensive evaluation of emotional intelligence, fosters greater alignment with moral values, and enhances employee engagement. Consequently, this integration is posited to contribute to financial and social sustainability within organisations.
The above advantages have nurtured the interest of HRM researchers in AI-enabled recruiting, given its higher speed and efficiency in screening and assessment compared with traditional practices (Black and van Esch, 2020). The theme of “Optimising Recruitment Process” is explored in 25 articles in Table 1. This literature suggests that AI-enabled recruiting systems can help companies access a wider and more diverse talent pool (Black and van Esch, 2020; Van Esch and Black, 2019) and cheaply bypass search firm fees by accessing hundreds of millions of passive candidates with profiles on social media platforms (Vardarlier and Ozsahin, 2021).
However, most of the contributions to this topic come from the automation literature, which focuses on developing chatbots, machine learning, and mathematical modelling to support the best fit between the candidate and the position the organisation offers (Martinez-Gil et al., 2020). Automation techniques specialised in extracting information from résumés allow more candidates to be considered. They foster both person–job fit for any job position (Barducci et al., 2022) and person–team fit, namely the fit between an individual and the team members with whom the individual is supposed to work (Malinowski et al., 2008).
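To give a concrete, deliberately simplified flavour of the matching idea this literature describes, the sketch below ranks candidate texts against a job description by lexical similarity; the texts are invented, and real systems (e.g. Martinez-Gil et al., 2020) are far more sophisticated.

```python
# Toy person-job matching: rank resume texts against a job description by
# TF-IDF cosine similarity. Texts are invented; this is only an illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "data analyst with experience in sql python and reporting"
resumes = [
    "five years as a data analyst, strong sql and python skills",
    "marketing manager with social media campaign experience",
]

matrix = TfidfVectorizer().fit_transform([job_description] + resumes)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for resume, score in sorted(zip(resumes, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {resume}")
```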
Regarding optimisation of the selection process, a theme discussed in 28 articles in Table 1, the literature has mainly focused on applying AI to candidates’ interviews (Kim and Heo, 2022). Studies have compared digital and in-person interviews in terms of candidate reactions and rater evaluations, revealing similarities and differences in results (Langer et al., 2019; Suen et al., 2019). In general, applicants react negatively to digital interviews due to concerns about privacy, authenticity, limited interpersonal communication (Langer et al., 2017), and a perceived lack of control during this interview type (Langer et al., 2019). In addition, studies found that an asynchronous mode can decrease candidates’ perceptions of the impression they can make and of its effect on the evaluation of their competencies, thus penalising their chances of being hired (Suen et al., 2019). As a result, using asynchronous interviews to preselect applicants may still have negative consequences for organisations, which may be perceived as less attractive when using these interviews instead of online tests or online application documents (Basch et al., 2022).
Moreover, despite acknowledging the superior objectivity of AI evaluation, Mirowska and Mesnet (2022) demonstrated that participants expressed a desire for the maintenance of human elements in the evaluation process, seemingly preferring “the devil they know” (human biases and intuition) to the one they do not (the AI algorithm).
The above results confirm that applicants need to be informed and aware of the AI approach taken by the organisation (Köchling et al., 2023). In addition, organisations need to consider not only the kind of information they present but also the total amount of information offered to increase fairness and the perception of privacy being respected (Langer et al., 2021). These considerations open avenues for exploring the theme through the next lines of inquiry.
The justice view: the perceptions of justice and fairness related to AI techniques
The second ethical line of enquiry about the application of AI in the R&S process encompasses the potential biases of the algorithms implemented in these HR practices, raising justice and fairness concerns. Our review highlights AI bias as an emerging dominant theme through the justice lens, discussed in 13 articles in Table 1. Different algorithm pathways may influence the strategies used by HRM decision-makers (Rodgers et al., 2023). Like humans, AI algorithms might be affected by selection bias when they are trained with data from a privileged group only (i.e. high socio-economic status; Pessach and Shmueli, 2021). Consequently, this can lead to high levels of unfairness against candidates who belong to subgroups based on race (Köchling et al., 2021), gender (Pethig and Kroenung, 2023) and disabilities (Tilmes, 2022).
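As a minimal, invented illustration of the kind of disparity this literature measures, the sketch below computes the selection-rate gap between two applicant groups, one simple operationalisation of (un)fairness; it is not drawn from any of the reviewed studies.

```python
# Minimal fairness check: compare selection rates across two applicant
# groups (a demographic-parity-style gap). The data are invented.
decisions = [  # (group, hired)
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def selection_rate(group: str) -> float:
    outcomes = [hired for g, hired in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = selection_rate("A") - selection_rate("B")
print(f"Selection-rate gap: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```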
To overcome these AI biases, Soleimani et al. (2022) proposed a model of knowledge sharing between HR personnel and AI developers to tackle AI selection biases in recruitment systems. Indeed, to improve the ML models, AI developers need to engage with HR managers and employees in the same or similar roles, who thus are familiar with job functions and required criteria (Rodgers et al., 2023; Soleimani et al., 2022).
Another crucial aspect, explicitly emerging in seven papers listed in Table 1, is trustworthiness (Kares et al., 2023), encompassing reliability and credibility. Trust depends on more than just effectiveness and efficiency; it is rooted primarily in ethical (Langer et al., 2023) and moral (Feldkamp et al., 2023) considerations. By fostering trust in the application of AI in the staffing process, organisations can become more attractive and fulfilling workplaces (da Motta Veiga et al., 2023).
Finally, 13 studies in Table 1 have explored the theme of justice perceptions in AI-driven hiring processes. These investigations primarily focus on distributive justice, examining candidates’ perceptions of the fairness of AI hiring decisions. Procedural justice is addressed by studying the potential for discrimination and bias in AI algorithms during candidate evaluations (Bankins, 2021). Studies of interpersonal justice have dealt with the role of humans in the selection process (Noble et al., 2021), while informational justice researchers have focused on candidates’ perceptions of the explanations received about evaluation criteria, the interview process, and the resulting hiring decisions (Langer et al., 2021). In general, studies emphasise the impact of the type of interview, particularly two-way communication, and of justice dimensions on applicant reactions to AI in recruitment processes (Acikgoz et al., 2020; Noble et al., 2021).
The rights view: the respect for legal and human rights requirements when AI is applied
A final ethical line of enquiry about AI in R&S refers to the accountability of these technologies with respect to the protection of individual privacy and the transparency of staffing decisions, with particular attention to the legal effects that such decisions produce for candidates in terms of discrimination.
In this regard, an emerging topic addressed by four papers in Table 1 is employers’ use of informal online sources for decisions, known as cybervetting (da Motta Veiga and Figueroa-Armijos, 2022; Demir and Günaydın, 2023). Cybervetting practices highlight a shift in the social contract, which prescribes normative expectations for workers’ digital visibility and data usage (Berkelaar, 2014). While a Kantian approach promotes fulfilling expectations of mutual transparency, human dignity, and universal application, even in cybervetting, asymmetrical expectations of transparency exist. Candidates anticipate transparency in employers’ communication regarding cybervetting practices. However, they do not hold the same expectation of transparency for the cybervetting process itself, which they perceive as not ensuring ethical transparency (Berkelaar, 2014). On the other side, from the employers’ perspective, the strength of workers’ online information lies in the greater availability of work and non-work information, such as interests, hobbies, interpersonal interactions, religious/political views, relationship/parental status, and sexual orientation. However, this information leads to varied assessments of job candidates’ competence, character, and motivation (Berkelaar and Buzzanell, 2015).
A further relevant topic, addressed by two articles in Table 1 discussing AI in R&S, is rights violation. Yam and Skorburg (2021) suggested that organisations must identify the potential rights violations their hiring algorithms can cause against candidates. Among these, the authors extensively discussed the “Five Human Rights” of job applicants, including the rights to equality and non-discrimination, privacy, free expression, and free association.
Five papers (Table 1) explored the adjacent line of enquiry of “Data protection”. In this regard, Todolí-Signes (2019) analysed the safeguards against discrimination that the European Union’s General Data Protection Regulation (GDPR) establishes for employees. The author described the protections ensured by the GDPR and the transparency requirements it imposes on those who use AI to make hiring decisions. Nevertheless, the existing legal framework emphasises the individual legal protection of workers as citizens, a focus that might prove insufficient to guarantee the safeguarding of workers’ rights, especially considering the inherent power imbalance between employers and employees. Todolí-Signes (2019) also underlined that legal issues are particularly salient for AI-based interviews. At present, job-seekers have no right to demand disclosure of an algorithm’s working procedure, and developers of AI interviews have no obligation to comply with such disclosure norms, because no legal and institutional rules have been defined. In this regard, governmental regulations are needed to protect companies, developers and, especially, job-seekers.
Discussion
Building upon recent calls emerging from the literature, this work aimed to address the relevant aspects of AI in the R&S process through the lens of prominent ethical theories (Hunkenschroer and Luetge, 2022; Prikshat et al., 2023), namely the utilitarian theories, theories of justice and theories of rights (Cavanagh et al., 1981; Greenwood, 2002, 2013; Winstanley et al., 1996).
The consequent systematisation of our review into three lines of inquiry allowed us to debate AI in R&S through the main findings detailed in the results section. Table 2 summarises the key issues for each line of inquiry, along with their theoretical and practical implications, which will be discussed in this section. Finally, based on this discussion, we offer an integrative theoretical framework for future research on AI in the broader field of HRM.
The utilitarian view: main issues, theoretical and practical implications
Looking at the utilitarian point of view in Table 2, our results underline that AI contributes to the optimisation and efficiency of the R&S process through the faster and more efficient elaboration of massive amounts of candidate data. Nevertheless, the review results presented in the previous pages suggest that the related advantages reflect the organisation’s point of view, overlooking the main consequences of this technology for the other party involved in the process: the candidates. Studies have indicated candidates’ tendency to avoid applying for jobs when AI supports the R&S process (Mirowska and Mesnet, 2022). In addition, it is noteworthy that AI in recruitment often streamlines the process for the organisation by selecting a candidate pool that aligns with the defined criteria for the job, thereby excluding many potential candidates. This suggests that the efficient optimisation of these practices for organisations, thanks to AI, might come at the expense of candidates’ interests in job-seeking. In this regard, researchers and practitioners should consider the different interests at play in the process to advance the integration of AI in R&S. These technologies should ensure the optimisation of the techniques both for organisations’ interests and for the other entities involved in the process, namely the candidates, consistently valuing their potential.
From a theoretical point of view, the sociotechnical perspective represents a supporting line for future investigations of this topic because it highlights the advantages that can result from the combination of technology and people (Shrestha et al., 2019), as research demonstrated the same levels of trust in hybrid systems compared with human-only support (Kares et al., 2023). In this regard, it is essential to understand how AI affects organisational roles and relationships, which become more complex. Sociotechnical capital, the successful collaboration between AI technology and people, is critical to firms’ long-term competitiveness (Makarius et al., 2020).
Regarding the implications of this ethical approach, given the potential benefits of AI and organisations’ need to remain globally competitive, the adoption of automation in management practices will continue to increase. Nevertheless, there is a risk that businesses may pursue automation in R&S for short-term financial gain while ignoring greater macro-level effects on their main stakeholders, first of all the candidates (Koch-Bayram and Kaibel, 2023). Listening to the voices of potential employees can help organisations improve their image and reputation. More specifically, the attractiveness of an organisation implementing AI in the recruitment process influences applicants’ likelihood to apply. Candidates seem more accepting of AI support for CV and résumé screening if adequately informed in advance (Koch-Bayram and Kaibel, 2023; Köchling et al., 2023), as they see human recruiters as error-prone and biased in this phase. Nevertheless, their acceptance diminishes for AI assistance in interviews (Koch-Bayram and Kaibel, 2023; Köchling et al., 2023), whereby an error committed by an algorithm generated less acceptance and more negative feelings than a human error.
In general, implementing AI without further explanation to candidates, compared with a human-led condition, diminished organisational attractiveness and the intention to proceed with the application process. Therefore, showcasing and communicating how the organisation utilises AI in its R&S enhances candidates’ ethical perceptions of these practices, thus representing a lever to improve organisational attractiveness.
Moreover, because algorithms can learn from input data but are not capable of judging and making decisions, a necessity arises for collaboration between HR professionals and AI developers, which could benefit both in terms of improvement, adaptation, and learning to make better hiring decisions (Soleimani et al., 2022). Although AI is considered a tool that legitimises objective decision-making power over R&S, it does not feel the pressure of power as a human would; nor is it subject to human decision-making bias. Yet despite its potential benefits in mitigating human recruiter bias in favour of objectivity, AI introduces a distinct challenge concerning algorithmic bias. The technical tool cannot capture critical elements on its own but collects the information it needs from others. Therefore, the tool does not provide a neutral and perfectly objective basis for decision-making, especially regarding decision-making power. This is consistent with Cavanagh et al. (1981), who argued that “decision-makers may be only in partial control of a certain decision and thus unable to use a specific ethical criterion” (p. 371). Decisions based on AI processing entail only partial control over the information processed. It follows that, although managers make the final decision about candidates based on AI processing, designers generate the AI algorithm tool (Soleimani et al., 2022), set the processing criteria, and thus shape the consequent results. The consequence is that although AI legitimises the decision-making power of managers through the objectivity of algorithms in data analysis, the indirectly dominant power over the decision is that of the designers, who set the operating criteria of the algorithm for hiring decisions.
All the above considered, the collaboration between HR managers, who are familiar with job functions and required hiring criteria, and developers of AI, who design the criteria of AI processing, can contribute to the strengthening of valuable AI systems to support the creation of effective sociotechnical capital for the firm.
The justice view: main issues, theoretical and practical implications
Table 2 also suggests that using AI in the R&S process not only introduces efficiency benefits and trade-offs but also raises significant ethical questions, particularly regarding justice in its various dimensions (Colquitt, 2001). In this regard, automated systems, though effective and efficient, may struggle to engender levels of trust or mistrust comparable to human decision-making, especially in ethical (Langer et al., 2023) and moral (Feldkamp et al., 2023) considerations, due to the apparent absence of evaluative ability or transparency within automated systems.
Moreover, machine learning models are designed to make decisions and predictions based on patterns identified in large data sets, resulting in potential selection bias (Pessach and Shmueli, 2021) and unfair treatment. As a result, procedural justice is crucial, as AI algorithms have the potential to discriminate and be biased in the candidate evaluation process (Bankins, 2021). Interpersonal justice involving the role of humans in the selection process (Noble et al., 2021) and informational justice regarding the clear communication of the evaluation criteria, interview process, and hiring decisions (Langer et al., 2021) are emerging aspects related to candidates’ justice perceptions.
Consistent with the tendency in organisational justice research, the studies in our review used the terms justice and fairness interchangeably, treating one as a synonym for the other (Mirowska and Mesnet, 2022): fairness perceptions about the application of AI systems in R&S involve the ethical aspect concerned with people’s equal access to and distribution of rights (Varma et al., 2023); in other words, a justice issue. Nevertheless, from a theoretical point of view, considering the multidimensional debate on AI applications, we argue that a more precise distinction between justice and fairness might offer new and different insights for future research. Goldman and Cropanzano (2015) differentiated justice from fairness, proposing the former as referring to “events in the work environment that are morally required and involve normative standards” and the latter as related to “a subjective assessment of these events and whether the events as implemented are morally praiseworthy” (p. 317). This distinction might be fruitful for future research advancements in the exploration of AI in R&S and the overall HRM field.
This theoretical distinction would also have practical implications. First, a specific focus on AI organisational justice in R&S as a construct distinct from fairness perceptions might help practice in structuring appropriate organisational codes of conduct addressing and regulating the critical ethical and moral AI-related issues in HRM.
Second, exploring fairness could serve as a valuable direction for future research into AI perceptions among the diverse actors engaged in hiring processes. This prospective line of enquiry, employing a combination of quantitative and qualitative methods across various organisational settings, could provide further insights into the relevance of organisational transparency. Organisational communication transparency necessitates a clear and detailed description of the AI methodology in R&S. This comprehensive disclosure is essential for making candidates fully cognisant of the criteria, legal prerequisites, and outcomes associated with the use of AI systems in R&S. In this way, as considered above, organisations might highlight the potential benefits that a candidate gains in the selection process through AI rather than only describing what AI will involve in the R&S process (Tursunbayeva et al., 2022), thus breaking the barrier of perceived unfairness of AI techniques.
The rights view: main issues, theoretical and practical implications
Finally, respect for legal and human rights is another important issue in Table 2, as emerged from our review. When adopting AI in R&S processes, this theme is even more critical in light of employers’ emerging use of informal online sources for hiring decisions, known as cybervetting (da Motta Veiga and Figueroa-Armijos, 2022; Demir and Günaydın, 2023). This practice occurs without workers’ knowledge or consent. As a result, the greatest criticism is applicants’ perception of the invasiveness and/or unfairness of this practice, leading to decreased acceptance rates and potential legal claims. In this regard, the absence of specific regulations allowing the collective protection of employees’ interests has inspired scholars to call for dedicated regulation protecting workers’ data and rights, such as the international human rights law proposed as a consistent and universal standard (Todolí-Signes, 2019). Ensuring legal and human rights compliance is crucial when using AI in R&S processes, as it is the foundation of any HR data policy (Tursunbayeva et al., 2022). According to our review, research suggests that algorithms might not only harm candidates’ fundamental human rights but also result in discrimination and disrespect of moral rights (Varma et al., 2023), which laws need to protect. This is even more critical regarding cybervetting (da Motta Veiga and Figueroa-Armijos, 2022; Demir and Günaydın, 2023), which presents organisations with dual challenges. Leveraging digital platforms such as LinkedIn, organisations must not only communicate transparently about decisions involving cybervetting but also navigate the balance between the ethical imperative of transparency and the equally important principles of privacy and confidentiality. This underscores the complex landscape organisations encounter while capitalising on the flexibility of digital tools.
Despite the current regulatory vacuum, these scholarly proposals show that, even in the face of such challenges, ways can be found to protect the interests of workers and ensure their rights are safeguarded (Todolí-Signes, 2019).
Beyond these relevant propositions, from a theoretical perspective, further empirical research is needed to identify, update, strengthen, and adapt policies that effectively manage AI’s processes, effects, and potential outcomes in recruiting and selecting candidates (Kim and Heo, 2022). In doing so, future studies might enrich the current knowledge base by adopting a cross-fertilisation approach involving different research lenses, such as the sociology of work, HRM, systems engineering, and law, which could contribute to offering a more overarching perspective on the adoption of AI in the R&S process and, more generally, in the field of HRM.
Furthermore, the rights view of ethics would help in comprehending the challenges faced by the workforce on digital platforms, commonly called “gig workers” (Duggan et al., 2020). Given the prevalent involvement of gig workers in AI-driven recruitment processes, it becomes essential for future research to delve into the strategies through which gig workers can enhance their employability.
In this regard, from a practical point of view, organisations might benefit from improved instruments able to ensure respect for job applicants’ rights in the context of R&S through AI techniques (such as Algorithmic Impact Assessments; Yam and Skorburg, 2021). Policymakers might better identify and define the conditions determining the legal boundaries of the latitude of decisions made by AI systems in the R&S of workers, in addition to generic provisions for all citizens.
Widening perspectives: AI in HRM through a framework for responsible and ethical decision-making
This study has navigated the complex landscape of AI implementation in R&S. Acknowledging the prevalence of the utilitarian perspective in both research and practice, we advocate for a more comprehensive approach that considers the broader ethical framework encompassing justice and rights. This shift is imperative for effectively managing the tensions inherent in, for example, the potential benefits of reducing human recruiter bias versus the drawbacks of algorithmic bias, as well as the trade-offs between time-saving advantages and the risk of excluding qualified candidates based on pre-established criteria. These tensions necessitate a more balanced exploration to ensure a holistic understanding of the implications of AI not only in R&S but also within the broader HRM.
An integrative framework, as shown in Figure 4, not only aligns with the multifaceted nature of the challenges posed by AI in R&S but also serves as a foundation for responsible and ethical decision-making in the broader HRM. As we move forward in integrating AI into HRM practices, it is crucial to recognise the interconnectedness of the three ethical perspectives investigated in this review and to navigate them judiciously to foster sustainable and equitable outcomes for organisations, candidates, and society at large. Indeed, the discourse in the preceding pages on the theoretical implications within each prevailing theme prompts us to suggest theoretical connections for forthcoming research on AI in HRM. In doing so, we reinforce the theoretical starting point for building a solid, responsible AI theory and for better supporting and guiding organisations, policymakers, and societies in general in applying this revolutionary technology.
As depicted in Figure 4, theoretical connections could potentially intertwine the three dominant perspectives for responsible and ethical AI decision-making in the broader HRM: stakeholder theory, the sustainable framework of AI in HRM, and the management of information asymmetry in HRM.
Stakeholder theory (Parmar et al., 2010) offers a valuable perspective that helps link the different ethical approaches while illuminating how increased reliance on AI affects the interests of various parties and the relationships companies share with them (Wright and Schultz, 2018). By adopting a stakeholder-centric approach within HRM, future research could play a role in mitigating instances where shareholder interests supersede those of employees, thereby enhancing perceptions of procedural and distributive justice (Greenwood, 2002; Guerci et al., 2014).
Furthermore, stakeholder theory could potentially enrich the literature by interconnecting with research on sustainable HRM (Lopez-Cabrales and Valle-Cabrera, 2020). In the context of our research topic, sustainable HRM pertains to the ethical and conscientious incorporation of AI into HRM systems, practices, and policies. Future research within this framework might ensure the presence of a resilient workforce that enhances the organisation’s sustainable competitive advantage, all while considering the economic, social, and environmental ramifications of these initiatives, as well as the adherence to legal requirements and respect for human rights.
Our review results on the rights and justice perspectives reveal an emerging need for more transparency in the adoption of AI in R&S. In this regard, involving information asymmetry management in future HRM research would contribute to increased transparency (Bergh et al., 2019), thus improving AI’s responsible and ethical HRM decision-making framework. Indeed, the concept of information asymmetry can be considered an additional linchpin for building bridges between the different perspectives investigated in this review. Based on Bergh et al. (2019), within the domain of HRM, future investigations might contribute to mitigating information asymmetry concerning AI by promoting increased transparency between organisations and individuals while ensuring the protection of sensitive data. Furthermore, this line of research has the potential to yield improved outcomes of AI in HRM on both individual and organisational fronts. At the individual level, this could manifest in heightened perceptions of fairness, greater respect for individual rights, and the optimisation of interests for all parties involved in the HRM process. Meanwhile, at the organisational level, benefits may include optimised organisational outcomes, enhanced perceptions of justice, and adherence to legal requirements, thereby facilitating the implementation of responsible and ethical decision-making practices.
Taking into account all the aforementioned promising avenues and themes emerging in this review, it is essential to underline that the thematic lines of enquiry proposed represent a valuable integrative research framework for other HRM practices in general, always keeping in mind that the application of AI in HRM is a matter of ethics, and ethics is a matter of humans.
Figures
Figure 1
Research protocol based on SPAR-4-SLR and Simsek et al. (2023)
Table 1
SLR elaboration scheme

| Authors | Year | Dominant theme | Line of ethical enquiries |
| --- | --- | --- | --- |
| Bohmer and Schinnenburg | 2023 | Benefit of AI in R&S | Utilitarianism |
| Chen | 2023 | | |
| da Costa et al. | 2023 | | |
| Gelinas et al. | 2022 | | |
| Giermindl et al. | 2022 | | |
| Gonzalez et al. | 2022 | | |
| Hooper et al. | 1998 | | |
| Indarapu et al. | 2023 | | |
| Jatoba et al. | 2023 | | |
| Kaushal et al. | 2021 | | |
| Kaushal et al. | 2023 | | |
| Kilic et al. | 2020 | | |
| Langer et al. | 2021 | | |
| Malik et al. | 2023 | | |
| Malik et al. | 2022 | | |
| Malik et al. | 2023 | | |
| Marks | 2022 | | |
| Nguyen and Park | 2022 | | |
| Niehueser and Boak | 2020 | | |
| Ore and Sposato | 2022 | | |
| Pan and Froese | 2023 | | |
| Potocnik et al. | 2021 | | |
| Prikshat et al. | 2023 | | |
| Qamar et al. | 2021 | | |
| Vrontis et al. | 2022 | | |
| Wang et al. | 2021 | | |
| Zhang et al. | 2021 | | |
| Kot et al. | 2021 | | |
| Islam et al. | 2022 | Importance of contextual factors | |
| Kim | 2021 | | |
| Kshetri | 2021 | | |
| Pan et al. | 2022 | | |
| Allal-Cherif et al. | 2021 | Optimising Recruitment process | |
| Barducci et al. | 2022 | | |
| Black and van Esch | 2020 | | |
| Black and van Esch | 2021 | | |
| Bondielli and Marcelloni | 2021 | | |
| Brandt and Herzberg | 2020 | | |
| De Mauro et al. | 2018 | | |
| Eckhardt et al. | 2014 | | |
| Fritts and Cabrera | 2021 | | |
| Fumagalli et al. | 2022 | | |
| Gethe | 2022 | | |
| Gupta et al. | 2018 | | |
| Holm | 2014 | | |
| Koivunen et al. | 2022 | | |
| Malinowski et al. | 2008 | | |
| Martinez-Gil et al. | 2020 | | |
| Oberst et al. | 2021 | | |
| Pessach et al. | 2020 | | |
| Posthumus | 2019 | | |
| Sharif and Ghodoosi | 2022 | | |
| van Esch and Black | 2019 | | |
| van Esch et al. | 2019 | | |
| Vardarlier and Ozsahin | 2021 | | |
| Wesche and Sonderegger | 2021 | | |
| Balli and Korukoǧlu | 2014 | Optimising Selection process | |
| Basch et al. | 2022 | | |
| Bhargava and Assadi | 2023 | | |
| Celik et al. | 2009 | | |
| Collis et al. | 1995 | | |
| Dulebohn and Johnson | 2013 | | |
| Dursun and Karsak | 2010 | | |
| Hickman et al. | 2021 | | |
| Kim and Heo | 2022 | | |
| Koch-Bayram and Kaibel | 2023 | | |
| Kochling et al. | 2023 | | |
| Koenig et al. | 2023 | | |
| Langer et al. | 2019 | | |
| Langer et al. | 2020 | | |
| Langer et al. | 2017 | | |
| Lee et al. | 2022 | | |
| Leutner et al. | 2021 | | |
| Liu et al. | 2023 | | |
| Lukacik et al. | 2022 | | |
| Michelotti et al. | 2021 | | |
| Mirowska | 2020 | | |
| Pampouktsi et al. | 2021 | | |
| Polychroniou and Giannikos | 2009 | | |
| Shet and Nair | 2022 | | |
| Suen et al. | 2019 | | |
| Thompson et al. | 2023 | | |
| Woods et al. | 2020 | | |
| Mirowska and Mesnet | 2022 | | |
| Budhwar et al. | 2023 | AI bias | Justice |
| Kelan | 2023 | | |
| Lavanchy et al. | 2023 | | |
| Pethig and Kroenung | 2023 | | |
| Rodgers et al. | 2023 | | |
| Simon et al. | 2023 | | |
| Zhang et al. | 2023 | | |
| Soleimani et al. | 2022 | | |
| Tilmes | 2022 | | |
| Kochling et al. | 2021 | | |
| Pessach and Shmueli | 2021 | | |
| Yarger et al. | 2020 | | |
| Suen and Hung | 2023 | Trust perceptions | |
| Feldkamp et al. | 2023 | | |
| Figueroa-Armijos et al. | 2023 | | |
| Langer et al. | 2023 | | |
| Kares et al. | 2023 | | |
| da Motta Veiga et al. | 2023 | | |
| Lee and Cha | 2023 | | |
| Bankins | 2021 | Justice perceptions | |
| Koch-Bayram et al. | 2023 | | |
| Folger et al. | 2022 | | |
| Langer et al. | 2021 | | |
| Noble et al. | 2021 | | |
| Acikgoz et al. | 2020 | | |
| Tambe et al. | 2019 | | |
| Renier et al. | 2021 | | |
| Kochling and Wehner | 2023 | | |
| Demir and Gunaydin | 2023 | Cybervetting | Rights |
| da Motta Veiga and Figueroa-Armijos | 2022 | | |
| Berkelaar and Buzzanell | 2015 | | |
| Berkelaar | 2014 | | |
| Todoli-Signes | 2019 | Data protection | |
| Koivunen et al. | 2023 | | |
| Hunkenschroer and Luetge | 2022 | | |
| Yam and Skorburg | 2021 | Rights violation | |

Source(s): Authors' own creation
Table 2
Ethical decision-making using AI in recruitment and selection: main issues and implications for research and practice

| Ethical theories | AI in recruiting and selection: main line of ethical enquiries | Main issues | Theoretical avenues for future development | Practical implications |
| --- | --- | --- | --- | --- |
| Utilitarian theories | The utilitarian view: the efficient optimisation of R&S through AI | | | |
| Theories of justice | The justice view: the perceptions of justice and fairness related to AI techniques | | | |
| Theories of rights | The rights view: the respect for legal and human rights requirements when AI is applied | | | |

Source(s): Authors' own creation
References
Acikgoz, Y., Davison, K.H., Compagnone, M. and Laske, M. (2020), “Justice perceptions of artificial intelligence in selection”, International Journal of Selection and Assessment, Vol. 28 No. 4, pp. 399-416, doi: 10.1111/ijsa.12306.
Allal-Chérif, O., Yela Aránega, A. and Castaño Sánchez, R. (2021), “Intelligent recruitment: how to identify, select, and retain talents from around the world using artificial intelligence”, Technological Forecasting and Social Change, Vol. 169, 120822, doi: 10.1016/j.techfore.2021.120822.
Bankins, S. (2021), “The ethical use of artificial intelligence in human resource management: a decision-making framework”, Ethics and Information Technology, Vol. 23 No. 4, pp. 841-854, doi: 10.1007/s10676-021-09619-6.
Barber, A.E. (1998), Recruiting Employees: Individual and Organizational Perspectives, Sage Publications, Thousand Oaks.
Barducci, A., Iannaccone, S., La Gatta, V., Moscato, V., Sperlì, G. and Zavota, S. (2022), “An end-to-end framework for information extraction from Italian resumes”, Expert Systems With Applications, Vol. 210, 118487, doi: 10.1016/j.eswa.2022.118487.
Basch, J.M., Melchers, K.G. and Büttner, J.C. (2022), “Preselection in the digital age: a comparison of perceptions of asynchronous video interviews with online tests and online application documents in a simulation context”, International Journal of Selection and Assessment, Vol. 30 No. 4, pp. 639-652, doi: 10.1111/ijsa.12403.
Bergh, D.D., Ketchen, D.J., Orlandi, I., Heugens, P.P.M.A.R. and Boyd, B.K. (2019), “Information asymmetry in management research: past accomplishments and future opportunities”, Journal of Management, Vol. 45 No. 1, pp. 122-158, doi: 10.1177/0149206318798026.
Berkelaar, B.L. (2014), “Cybervetting, online information, and personnel selection: new transparency expectations and the emergence of a digital social contract”, Management Communication Quarterly, Vol. 28 No. 4, pp. 479-506, doi: 10.1177/0893318914541966.
Berkelaar, B.L. and Buzzanell, P.M. (2015), “Online employment screening and digital career capital: exploring employers' use of online information for personnel selection”, Management Communication Quarterly, Vol. 29 No. 1, pp. 84-113, doi: 10.1177/0893318914554657.
Black, J.S. and van Esch, P. (2020), “AI-enabled recruiting: what is it and how should a manager use it?”, Business Horizons, Vol. 63 No. 2, pp. 215-226, doi: 10.1016/j.bushor.2019.12.001.
CABS (2021), “Academic journal guide 2021”, Chartered Association of Business Schools.
Cavanagh, G.F., Moberg, D.J. and Velasquez, M. (1981), “The ethics of organizational politics”, Academy of Management Review, Vol. 6 No. 3, pp. 363-374, doi: 10.2307/257372.
Colquitt, J.A. (2001), “On the dimensionality of organizational justice: a construct validation of a measure”, Journal of Applied Psychology, Vol. 86 No. 3, pp. 386-400, doi: 10.1037//0021-9010.86.3.386.
da Motta Veiga, S.P. and Figueroa-Armijos, M. (2022), “Considering artificial intelligence in hiring for cybervetting purposes”, Industrial and Organizational Psychology, Vol. 15 No. 3, pp. 354-356, doi: 10.1017/iop.2022.54.
da Motta Veiga, S.P., Figueroa-Armijos, M. and Clark, B.B. (2023), “Seeming ethical makes you attractive: unraveling how ethical perceptions of AI in hiring impacts organizational innovativeness and attractiveness”, Journal of Business Ethics, Vol. 186 No. 1, pp. 199-216, doi: 10.1007/s10551-023-05380-6.
Demir, M. and Günaydın, Y. (2023), “A digital job application reference: how do social media posts affect the recruitment process?”, Employee Relations: The International Journal, Vol. 45 No. 2, pp. 457-477, doi: 10.1108/er-05-2022-0232.
Duggan, J., Sherman, U., Carbery, R. and McDonnell, A. (2020), “Algorithmic management and app-work in the gig economy: a research agenda for employment relations and HRM”, Human Resource Management Journal, Vol. 30 No. 1, pp. 114-132, doi: 10.1111/1748-8583.12258.
Feldkamp, T., Langer, M., Wies, L. and König, C.J. (2023), “Justice, trust, and moral judgements when personnel selection is supported by algorithms”, European Journal of Work and Organizational Psychology, Vol. 33 No. 2, pp. 1-16, doi: 10.1080/1359432x.2023.2169140.
Friedman, G.D. (2023), “Artificial intelligence is increasingly being used to make workplace decisions–but human intelligence remains vital”, Fortune, Online, 13 March.
Gélinas, D., Sadreddin, A. and Vahidov, R. (2022), “Artificial intelligence in human resources management: a review and research agenda”, Pacific Asia Journal of the Association for Information Systems, Vol. 14 No. 6, pp. 1-42, doi: 10.17705/1pais.14601.
Goldman, B. and Cropanzano, R. (2015), “‘Justice’ and ‘fairness’ are not the same thing”, Journal of Organizational Behavior, Vol. 36 No. 2, pp. 313-318, doi: 10.1002/job.1956.
Greenwood, M. (2002), “Ethics and HRM: a review and conceptual analysis”, Journal of Business Ethics, Vol. 36 No. 3, pp. 261-278, doi: 10.1023/a:1014090411946.
Greenwood, M. (2013), “Ethical analyses of HRM: a review and research agenda”, Journal of Business Ethics, Vol. 114 No. 2, pp. 355-366, doi: 10.1007/s10551-012-1354-y.
Guerci, M., Shani, A.B. and Solari, L. (2014), “A stakeholder perspective for sustainable HRM”, in Ehnert, I., Harry, W. and Zink, K.J. (Eds), Sustainability and Human Resource Management: Developing Sustainable Business Organizations, Springer, Berlin, Heidelberg, pp. 205-223.
Hooper, R.S., Galvin, T.P., Kilmer, R.A. and Liebowitz, J. (1998), “Use of an expert system in a personnel selection process”, Expert Systems with Applications, Vol. 14 No. 4, pp. 425-432, doi: 10.1016/s0957-4174(98)00002-5.
Hunkenschroer, A.L. and Luetge, C. (2022), “Ethics of AI-enabled recruiting and selection: a review and research agenda”, Journal of Business Ethics, Vol. 178 No. 4, pp. 977-1007, doi: 10.1007/s10551-022-05049-6.
Islam, M., Mamun, A.A., Afrin, S., Ali Quaosar, G.M.A. and Uddin, M.A. (2022), “Technology adoption and human resource management practices: the use of artificial intelligence for recruitment in Bangladesh”, South Asian Journal of Human Resources Management, Vol. 9 No. 2, pp. 324-349, doi: 10.1177/23220937221122329.
Kares, F., König, C.J., Bergs, R., Protzel, C. and Langer, M. (2023), “Trust in hybrid human-automated decision-support”, International Journal of Selection and Assessment, Vol. 31 No. 3, pp. 388-402, doi: 10.1111/ijsa.12423.
Kaushal, N., Kaurav, R.P.S., Sivathanu, B. and Kaushik, N. (2021), “Artificial intelligence and HRM: identifying future research agenda using systematic literature review and bibliometric analysis”, Management Review Quarterly, Vol. 73 No. 2, pp. 455-493, doi: 10.1007/s11301-021-00249-2.
Kim, J.Y. and Heo, W. (2022), “Artificial intelligence video interviewing for employment: perspectives from applicants, companies, developer and academicians”, Information Technology and People, Vol. 35 No. 3, pp. 861-878, doi: 10.1108/itp-04-2019-0173.
Köchling, A., Riazy, S., Wehner, M.C. and Simbeck, K. (2021), “Highly accurate, but still discriminatory”, Business and Information Systems Engineering, Vol. 63 No. 1, pp. 39-54, doi: 10.1007/s12599-020-00673-w.
Köchling, A., Wehner, M.C. and Warkocz, J. (2023), “Can I show my skills? Affective responses to artificial intelligence in the recruitment process”, Review of Managerial Science, Vol. 17 No. 6, pp. 2109-2138, doi: 10.1007/s11846-021-00514-4.
Koch-Bayram, I.F. and Kaibel, C. (2023), “Algorithms in personnel selection, applicants’ attributions about organizations’ intents and organizational attractiveness: an experimental study”, Human Resource Management Journal, Published online 2023, pp. 1-20, doi: 10.1111/1748-8583.12528.
Koenig, N., Tonidandel, S., Thompson, I., Albritton, B., Koohifar, F., Yankov, G., Speer, A., Hardy, J.H., Gibson, C., Frost, C., Liu, M., McNeney, D., Capman, J., Lowery, S., Kitching, M., Nimbkar, A., Boyce, A., Sun, T., Guo, F., Min, H., Zhang, B., Lebanoff, L., Phillips, H. and Newton, C. (2023), “Improving measurement and prediction in personnel selection through the application of machine learning”, Personnel Psychology, Vol. 76 No. 4, pp. 1061-1123, doi: 10.1111/peps.12608.
Kot, S., Hussain, H.I., Bilan, S., Haseeb, M. and Mihardjo, L.W. (2021), “The role of artificial intelligence recruitment and quality to explain the phenomenon of employer reputation”, Journal of Business Economics and Management, Vol. 22 No. 4, pp. 867-883, doi: 10.3846/jbem.2021.14606.
Kshetri, N. (2021), “Evolving uses of artificial intelligence in human resource management in emerging economies in the global South: some preliminary evidence”, Management Research Review, Vol. 44 No. 7, pp. 970-990, doi: 10.1108/mrr-03-2020-0168.
Kunisch, S., Denyer, D., Bartunek, J.M., Menz, M. and Cardinal, L.B. (2023), “Review research as scientific inquiry”, Organizational Research Methods, Vol. 26 No. 1, pp. 3-45, doi: 10.1177/10944281221127292.
Langer, M., König, C.J. and Krause, K. (2017), “Examining digital interviews for personnel selection: applicant reactions and interviewer ratings”, International Journal of Selection and Assessment, Vol. 25 No. 4, pp. 371-382, doi: 10.1111/ijsa.12191.
Langer, M., König, C.J. and Papathanasiou, M. (2019), “Highly automated job interviews: acceptance under the influence of stakes”, International Journal of Selection and Assessment, Vol. 27 No. 3, pp. 217-234, doi: 10.1111/ijsa.12246.
Langer, M., Baum, K., König, C.J., Hähne, V., Oster, D. and Speith, T. (2021), “Spare me the details: how the type of information about automated interviews influences applicant reactions”, International Journal of Selection and Assessment, Vol. 29 No. 2, pp. 154-169, doi: 10.1111/ijsa.12325.
Langer, M., König, C.J., Back, C. and Hemsing, V. (2023), “Trust in artificial intelligence: comparing trust processes between human and automated trustees in light of unfair bias”, Journal of Business and Psychology, Vol. 38 No. 3, pp. 493-508, doi: 10.1007/s10869-022-09829-9.
Le Brocq, S., Hughes, E. and Donnelly, R. (2023), “Sharing in the gig economy: from equitable work relations to exploitative HRM”, Personnel Review, Vol. 52 No. 3, pp. 454-469, doi: 10.1108/pr-04-2019-0219.
Legge, K. (1998), “The morality of HRM”, in Mabey, C., Clark, T.A.R. and Skinner, D. (Eds), Experiencing Human Resource Management, Sage, London, pp. 14-32.
Lopez-Cabrales, A. and Valle-Cabrera, R. (2020), “Sustainable HRM strategies and employment relationships as drivers of the triple bottom line”, Human Resource Management Review, Vol. 30 No. 3, 100689, doi: 10.1016/j.hrmr.2019.100689.
Makarius, E.E., Mukherjee, D., Fox, J.D. and Fox, A.K. (2020), “Rising with the machines: a sociotechnical framework for bringing artificial intelligence into the organization”, Journal of Business Research, Vol. 120, pp. 262-273, doi: 10.1016/j.jbusres.2020.07.045.
Malik, A., Budhwar, P. and Kazmi, B.A. (2023), “Artificial intelligence (AI)-assisted HRM: towards an extended strategic framework”, Human Resource Management Review, Vol. 33 No. 1, 100940, doi: 10.1016/j.hrmr.2022.100940.
Malinowski, J., Weitzel, T. and Keim, T. (2008), “Decision support for team staffing: an automated relational recommendation approach”, Decision Support Systems, Vol. 45 No. 3, pp. 429-447, doi: 10.1016/j.dss.2007.05.005.
Martinez-Gil, J., Paoletti, A.L. and Pichler, M. (2020), “A novel approach for learning how to automatically match job offers and candidate profiles”, Information Systems Frontiers, Vol. 22 No. 6, pp. 1265-1274, doi: 10.1007/s10796-019-09929-7.
Meijerink, J., Boons, M., Keegan, A. and Marler, J. (2021), “Algorithmic human resource management: synthesizing developments and cross-disciplinary insights on digital HRM”, The International Journal of Human Resource Management, Vol. 32 No. 12, pp. 2545-2562, doi: 10.1080/09585192.2021.1925326.
Meister, J. (2019), “Ten HR trends in the age of artificial intelligence”, Forbes, Online, 8 January.
Mirowska, A. and Mesnet, L. (2022), “Preferring the devil you know: potential applicant reactions to artificial intelligence evaluation of interviews”, Human Resource Management Journal, Vol. 32 No. 2, pp. 364-383, doi: 10.1111/1748-8583.12393.
Nguyen, L.A. and Park, M. (2022), “Artificial intelligence in staffing”, Vision: The Journal of Business Perspective, doi: 10.1177/09722629221096803.
Noble, S.M., Foster, L.L. and Craig, S.B. (2021), “The procedural and interpersonal justice of automated application and resume screening”, International Journal of Selection and Assessment, Vol. 29 No. 2, pp. 139-153, doi: 10.1111/ijsa.12320.
Ore, O. and Sposato, M. (2022), “Opportunities and risks of artificial intelligence in recruitment and selection”, International Journal of Organizational Analysis, Vol. 30 No. 6, pp. 1771-1782, doi: 10.1108/ijoa-07-2020-2291.
Palumbo, R., Hinna, A. and Manesh, M.F. (2023), “Aiming at inclusive workplaces: a bibliometric and interpretive review at the crossroads of disability management and human resource management”, Journal of Management and Organization, Published online 2023, pp. 1-24, doi: 10.1017/jmo.2023.4.
Pan, Y., Froese, F., Liu, N., Hu, Y. and Ye, M. (2022), “The adoption of artificial intelligence in employee recruitment: the influence of contextual factors”, The International Journal of Human Resource Management, Vol. 33 No. 6, pp. 1125-1147, doi: 10.1080/09585192.2021.1879206.
Parmar, B.L., Freeman, R.E., Harrison, J.S., Wicks, A.C., Purnell, L. and De Colle, S. (2010), “Stakeholder theory: the state of the art”, Academy of Management Annals, Vol. 4 No. 1, pp. 403-445, doi: 10.1080/19416520.2010.495581.
Paul, J., Lim, W.M., O'Cass, A., Hao, A.W. and Bresciani, S. (2021), “Scientific procedures and rationales for systematic literature reviews (SPAR-4-SLR)”, International Journal of Consumer Studies, Vol. 45 No. 4, pp. O1-O16, doi: 10.1111/ijcs.12695.
Pessach, D. and Shmueli, E. (2021), “Improving fairness of artificial intelligence algorithms in Privileged-Group Selection Bias data settings”, Expert Systems with Applications, Vol. 185, 115667, doi: 10.1016/j.eswa.2021.115667.
Pethig, F. and Kroenung, J. (2023), “Biased humans, (Un)Biased algorithms?”, Journal of Business Ethics, Vol. 183 No. 3, pp. 637-652, doi: 10.1007/s10551-022-05071-8.
Prikshat, V., Islam, M., Patel, P., Malik, A., Budhwar, P. and Gupta, S. (2023), “AI-Augmented HRM: literature review and a proposed multilevel framework for future research”, Technological Forecasting and Social Change, Vol. 193, 122645, doi: 10.1016/j.techfore.2023.122645.
Qamar, Y. and Samad, T.A. (2022), “Human resource analytics: a review and bibliometric analysis”, Personnel Review, Vol. 51 No. 1, pp. 251-283, doi: 10.1108/pr-04-2020-0247.
Rawls, J. (1971), A Theory of Justice, Harvard University Press, Cambridge, MA.
Rodgers, W., Murray, J.M., Stefanidis, A., Degbey, W.Y. and Tarba, S.Y. (2023), “An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes”, Human Resource Management Review, Vol. 33 No. 1, 100925, doi: 10.1016/j.hrmr.2022.100925.
Schumann, P.L. (2001), “A moral principles framework for human resource management ethics”, Human Resource Management Review, Vol. 11 No. 1, pp. 93-111, doi: 10.1016/s1053-4822(00)00042-5.
Sharma, N. and Chillakuri, B.K. (2022), “Positive deviance at work: a systematic review and directions for future research”, Personnel Review, Vol. 52 No. 4, pp. 933-954, doi: 10.1108/pr-05-2020-0360.
Shrestha, Y.R., Ben-Menahem, S.M. and von Krogh, G. (2019), “Organizational decision-making structures in the age of artificial intelligence”, California Management Review, Vol. 61 No. 4, pp. 66-83, doi: 10.1177/0008125619862257.
Simsek, Z., Fox, B. and Heavey, C. (2023), “Systematicity in organizational research literature reviews: a framework and assessment”, Organizational Research Methods, Vol. 26 No. 2, pp. 292-321, doi: 10.1177/10944281211008652.
Sokolov, D. and Zavyalova, E. (2022), “Trendsetters of HRM: a systematic review of how professional service firms manage people”, Personnel Review, Vol. 51 No. 2, pp. 564-583, doi: 10.1108/pr-08-2018-0314.
Soleimani, M., Intezari, A. and Pauleen, D.J. (2022), “Mitigating cognitive biases in developing AI-assisted recruitment systems: a knowledge-sharing approach”, International Journal of Knowledge Management (IJKM), Vol. 18 No. 1, pp. 1-18, doi: 10.4018/ijkm.290022.
Suen, H.-Y., Chen, M.Y.-C. and Lu, S.-H. (2019), “Does the use of synchrony and artificial intelligence in video interviews affect interview ratings and applicant attitudes?”, Computers in Human Behavior, Vol. 98, pp. 93-101, doi: 10.1016/j.chb.2019.04.012.
Tambe, P., Cappelli, P. and Yakubovich, V. (2019), “Artificial intelligence in human resources management: challenges and a path forward”, California Management Review, Vol. 61 No. 4, pp. 15-42, doi: 10.1177/0008125619867910.
Thompson, I., Koenig, N., Mracek, D.L. and Tonidandel, S. (2023), “Deep learning in employee selection: evaluation of algorithms to automate the scoring of open-ended assessments”, Journal of Business and Psychology, Vol. 38 No. 3, pp. 509-527, doi: 10.1007/s10869-023-09874-y.
Tilmes, N. (2022), “Disability, fairness, and algorithmic bias in AI recruitment”, Ethics and Information Technology, Vol. 24 No. 2, p. 21, doi: 10.1007/s10676-022-09633-2.
Todolí-Signes, A. (2019), “Algorithms, artificial intelligence and automated decisions concerning workers and the risks of discrimination: the necessary collective governance of data protection”, Transfer: European Review of Labour and Research, Vol. 25 No. 4, pp. 465-481, doi: 10.1177/1024258919876416.
Tursunbayeva, A., Pagliari, C., Di Lauro, S. and Antonelli, G. (2022), “The ethics of people analytics: risks, opportunities and recommendations”, Personnel Review, Vol. 51 No. 3, pp. 900-921, doi: 10.1108/pr-12-2019-0680.
van Esch, P. and Black, J.S. (2019), “Factors that influence new generation candidates to engage with and complete digital, AI-enabled recruiting”, Business Horizons, Vol. 62 No. 6, pp. 729-739, doi: 10.1016/j.bushor.2019.07.004.
Vardarlier, P. and Ozsahin, M. (2021), “Digital transformation of human resource management: social media's performance effect”, International Journal of Innovation and Technology Management, Vol. 18 No. 3, 2150005, doi: 10.1142/s021987702150005x.
Varma, A., Dawkins, C. and Chaudhuri, K. (2023), “Artificial intelligence and people management: a critical assessment through the ethical lens”, Human Resource Management Review, Vol. 33 No. 1, 100923, doi: 10.1016/j.hrmr.2022.100923.
Vrontis, D., Christofi, M., Pereira, V., Tarba, S., Makrides, A. and Trichina, E. (2022), “Artificial intelligence, robotics, advanced technologies and human resource management: a systematic review”, The International Journal of Human Resource Management, Vol. 33 No. 6, pp. 1237-1266, doi: 10.1080/09585192.2020.1871398.
Winstanley, D., Woodall, J. and Heery, E. (1996), “Business ethics and human resource management”, Personnel Review, Vol. 25 No. 6, pp. 5-12, doi: 10.1108/00483489610148491.
Wright, S.A. and Schultz, A.E. (2018), “The rising tide of artificial intelligence and business automation: developing an ethical framework”, Business Horizons, Vol. 61 No. 6, pp. 823-832, doi: 10.1016/j.bushor.2018.07.001.
Yam, J. and Skorburg, J.A. (2021), “From human resources to human rights: impact assessments for hiring algorithms”, Ethics and Information Technology, Vol. 23 No. 4, pp. 611-623, doi: 10.1007/s10676-021-09599-7.