Artificial intelligence legal personality and accountability: auditors’ accounts of capabilities and challenges for instrument boundary

Piotr Staszkiewicz (Department of Corporate Finance and Investments, SGH Warsaw School of Economics, Warsaw, Poland)
Jarosław Horobiowski (Department of Taxes, Voivodship Administrative Court in Wroclaw, Wroclaw, Poland)
Anna Szelągowska (Department of Innovative City, SGH Warsaw School of Economics, Warsaw, Poland)
Agnieszka Maryla Strzelecka (Department of Finance, Faculty of Economic Sciences, Koszalin University of Technology, Koszalin, Poland)

Meditari Accountancy Research

ISSN: 2049-372X

Article publication date: 25 June 2024


Abstract

Purpose

The study aims to identify the practical borders of AI legal personality and accountability in human-centric services.

Design/methodology/approach

Using a framework tailored for AI studies, this research analyses structured interview data collected from auditors based in Poland.

Findings

The study identified new constructs to complement the taxonomy of arguments for AI legal personality: cognitive strain, consciousness, cyborg paradox, reasoning replicability, relativism, AI misuse, excessive human effort and substitution.

Research limitations/implications

The insights presented herein are primarily derived from the perspectives of Polish auditors. There is a need for further exploration into the viewpoints of other key stakeholders, such as lawyers, judges and policymakers, across various global contexts.

Practical implications

The findings of this study hold significant potential to guide the formulation of regulatory frameworks tailored to AI applications in human-centric services. The proposed sui generis AI personality institution offers a dynamic and adaptable alternative to conventional legal personality models.

Social implications

The outcomes of this research contribute to the ongoing public discourse on AI’s societal impact. It encourages a balanced assessment of the potential advantages and challenges associated with granting legal personality to AI systems.

Originality/value

This paper advocates for establishing a sui generis AI personality institution alongside a joint accountability model. This dual framework addresses the current uncertainties surrounding human, general AI and super AI characteristics and facilitates the joint accountability of responsible AI entities and their ultimate beneficiaries.

Citation

Staszkiewicz, P., Horobiowski, J., Szelągowska, A. and Strzelecka, A.M. (2024), "Artificial intelligence legal personality and accountability: auditors’ accounts of capabilities and challenges for instrument boundary", Meditari Accountancy Research, Vol. 32 No. 7, pp. 120-146. https://doi.org/10.1108/MEDAR-10-2023-2204

Publisher: Emerald Publishing Limited

Copyright © 2024, Piotr Staszkiewicz, Jarosław Horobiowski, Anna Szelągowska and Agnieszka Maryla Strzelecka.

License

Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial & non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

This study asks whether the arguments raised in the debate on artificial intelligence (AI) legal personality (also legal personhood) apply to human-centric services.

Given auditors’ core function of applying judgement and solving complex problems during audits, we adapt Wang’s (2019) capability-AI perspective of intelligence. We define intelligence as “the capacity of an information-processing system to adapt to its environment while operating with insufficient knowledge and resources” (p. 17). In this paper, we define an “information-processing system” as one that includes tangible and intangible AI agents capable of autonomous decisions, self-learning and adaptation. The AI literature (Bostrom, 2014; Kaplan and Haenlein, 2019) describes AI development in three stages: artificial narrow intelligence (ANI), artificial general intelligence (AGI) and artificial super intelligence (ASI). These AI forms, found in bots, robots, cyborgs and systems with spontaneous intelligence, present challenges for establishing legal status.

Legal personality gives a person legal subjectivity regarding rights and obligations. The theoretical boundaries of AI legal personality delineate the feasible and desirable extent of AI’s legal rights and obligations (Doomen, 2023), while on an empirical level, auditors’ comprehension of AI personality marks its practical boundaries. Accountability and responsibility generalise and extend the concept of legal personality: accountability is being answerable for the results of one’s actions or decisions, while responsibility is the obligation to perform the assigned tasks or duties.

Challenges in AI encompass identifying and mitigating the potential societal, organisational and ethical impacts of AI solutions. AI is based on four key capabilities: perception, comprehension, acting and learning. These capabilities allow AI systems to perceive their surroundings, understand human intentions and context, take appropriate actions and learn from experience (Bawack et al., 2021).

Human-centric services necessitate human decision-makers (Tiron-Tudor and Deliu, 2021), a path entailing complex judgements, ethical considerations and uncertainties, often without clear, programmable decision paths. However, as AI develops its capacity for perception, comprehension, acting and learning, it will increasingly be able to take over human-centric services.

This study focuses on auditors, a subgroup of accountants who objectively assess the reliability and fairness of information addressed to stakeholders and society. It studies auditors instead of lawyers because lawyers’ education and practice may introduce confirmation bias. Auditors examine different business models in a rapidly changing economic environment, so they may account for a broader range of perspectives.

Accounting researchers analyse the consequences of new technologies (Tiron-Tudor and Deliu, 2021; De Villiers et al., 2023), the impact of AI (Noor et al., 2021) on human decisions (Arnaboldi et al., 2017; Liu, 2022), complexity, uncertainty and ambiguity (Bakarich and O’Brien, 2021), organisations’ culture (Arnaboldi et al., 2017; Janssen et al., 2020; van der Voort et al., 2019), the responsibility gap (Agostino et al., 2022a, 2022b; Lehner et al., 2022; Lombardi et al., 2021) and automation bias (Bahner et al., 2008; Goddard et al., 2012). However, research at the intersection of law and auditing is rare, especially empirical research (Garanina et al., 2022; Jans et al., 2022; Tyson and Adams, 2019).

The theoretical debate on AI legal personality remains inconclusive (Butterworth, 2018; Čerka et al., 2017; Chen and Burgess, 2019; Chesterman, 2020; Karnouskos, 2022). However, accounting research can offer valuable insights by incorporating the perspectives of practitioners. Understanding their views and experiences is crucial for grasping the practical implications of AI legal personality. This approach can illuminate areas needing further theoretical exploration, thereby circumventing the pitfalls of obsolete regulatory frameworks (Di Tullio et al., 2023) and positioning accounting research as an “interdisciplinary contributor” (Tucker and Alewine, 2023).

This paper aims to identify the practical borders of AI legal personality, contributing to the ongoing debate on AI legal personality and the accountability gap (Bracci, 2022; Conradie et al., 2022; Liu and Zawieska, 2020; Vesa and Tienari, 2022) and its relevance for the accounting domain.

To answer the research question, this study applies a framework for AI studies developed by Bawack et al. (2021) on structured interview data of Poland-based auditors. The study compares theoretical arguments for and against AI legal personality with practitioner arguments to identify potential new boundaries.

Polish accounting and auditing research is increasingly shaping the global debate (Białek-Jaworska and Kopańska, 2023; Dobija et al., 2019; Fusco et al., 2022; Krasodomska et al., 2020; Łada et al., 2022; Maruszewska et al., 2023; Matuszak and Różańska, 2021). This research centres on Poland, allowing a deeper exploration of the cultural context (Staszkiewicz and Morawska, 2019) influencing interview responses. This controlled environment facilitates the identification of formal and informal respondent codes within a shared educational background and language. Such nuances can be more challenging to isolate and interpret in cross-cultural and cross-language settings due to potential variations in educational systems and linguistic expressions. Although single-country studies raise generalisability concerns, our focus on auditors’ behaviour, consistent across borders (Nicholson-Crotty and Meier, 2002), mitigates this. Additionally, Poland’s adoption of the International Standards on Auditing makes it a suitable laboratory for studying typical auditor behaviour close to the global average (Simunic et al., 2017).

This paper contributes to the discussion on legal personality and accountability for AI in four aspects.

Firstly, moving beyond anecdotal evidence, this study establishes an empirical foundation for examining human perceptions of AI personality, thereby expanding upon the theoretical framework proposed by Chen and Burgess (2019). It extends previously presented perspectives on space (Soroka et al., 2022), medicine (Laptev et al., 2022), crowd working (Nowik, 2021), social media (Jabri, 2020; Krönke, 2020; Molnár-Gábor, 2020), investments and financial markets (Kunitskaya, 2022; Schemmel, 2020) and enforcement (Buchholtz, 2020; Rademacher, 2020) with the perspective of auditors.

Secondly, leveraging existing literature (e.g. Andrade et al., 2007; Bertolini and Riccaboni, 2021; Massaro and Norton, 2016; Millar, 2014; Soroka et al., 2022), this study proposes new constructs to augment Karnouskos’s (2022) taxonomy. The study presents an extended taxonomy of arguments professionals use for AI implementation in audit risk assessment. The study identifies incremental challenges and capabilities for AI personality: cognitive strain, consciousness, cyborg paradox, reasoning replicability, relativism, AI misuse, excessive human effort and substitution. Subsequently, the study reconciles these incremental constructs with the prior discussion (Bryson et al., 2017; Butterworth, 2018; Čerka et al., 2017; Chen and Burgess, 2019; Nowik, 2021; Simnart, 2021; Solaiman, 2017), narrowing its inconclusiveness.

Thirdly, this study suggests a sui generis AI personality institution as an alternative to the contentious concept of legal personality (Bryson et al., 2017; Čerka et al., 2017; Karnouskos, 2022; Nyholm, 2018; Solaiman, 2017; Soroka et al., 2022), defining the boundaries of the institution with the incremental arguments. It introduces the joint responsibility of AI and the ultimate beneficiary and bridges the gap in the uncertainty of human, AGI and ASI characteristics. It argues that the AGI non-existence argument (Nowik, 2021) fails the cyborg paradox test. Thus, this study argues that the current legal framework’s boundaries slow the development of professional services.

Fourthly, this research generalises the sui generis boundaries of AI personality to the joint accountability of humans and AI, emphasising the shared responsibility of the ultimate human beneficiary and the AI system. AI accountability refers to the obligation to implement human-understandable AI decision-reasoning capabilities where the decision rests with the AI. This proposal reclaims the original meaning of joint accountability, as offered by Rizzo et al. (1970), showing how the discussion has evolved from organisational convergence through the deficiency of intellectual capital in financial reporting to the boundaries of responsibility and accountability in big data and AI (DiMaggio and Powell, 1983; Dumay et al., 2016; Lugli and Bertacchini, 2023; Secundo et al., 2017). Furthermore, it provides complementary empirical evidence to prior research (Agostino et al., 2022b; Bracci, 2022; Karunakaran et al., 2022) that AI extends the boundaries of accountability, blurring the lines of accountability between humans and machines.

The paper proceeds by reviewing relevant literature, outlining theoretical frameworks, detailing the methodology, presenting findings and discussing the results and their broader implications.

2. Literature review

In this section, we review the literature on the impact of new technology on auditor practice, focusing on accountability and responsibility constructs. The broader concept of accountability, encompassing legal personality, raises the question: can AI be granted legal personality? We examine arguments for and against this attribution (including rights, obligations and enforceability) and conclude by compiling arguments to support empirical mapping.

2.1 Responsibility and accountability of artificial intelligence

Although the discussion on the growing importance of intellectual capital and new technologies in corporate disclosure and reporting flourishes (DiMaggio and Powell, 1983; Dumay et al., 2016; Lugli and Bertacchini, 2023; Secundo et al., 2017), audit practice remains relatively unexplored and challenging.

New technologies, including blockchain (Secinaro et al., 2021), big data (Ahn and Wickramasinghe, 2021; Al-Htaybat and von Alberti-Alhtaybat, 2017; Arnaboldi et al., 2017), robotic process automation (Alderman, 2019; Arnaboldi et al., 2017) and human–machine teams (Damacharla et al., 2018; Rajnai and Kocsis, 2017), are changing the auditing and accounting profession. These technologies affect high-volume routine data processes (Tiron-Tudor and Deliu, 2021) and human decisions (Arnaboldi et al., 2017; Liu, 2022).

New technologies lead to algocracy, in which an algorithm-based system constrains human participation (Danaher, 2016; Wieringa, 2020), and generate a responsibility and accountability gap (Agostino et al., 2022a; Lehner et al., 2022; Lombardi et al., 2021).

In essence, audit responsibility is about doing the audit work with due care and professionalism, while audit accountability is about owning the results of that work and being answerable for its quality and integrity. In particular, the accountability of an audit firm addresses how an organisation sets up and monitors policies and practices to satisfy stakeholders’ demand (Karunakaran et al., 2022).

The fundamental concept of accountability revolves around identifying the responsible actors (who), the recipients of accountability (to whom), the timing of accountability (when) and the methods of accountability (how), invariably positioning humans as the central subjects. However, AI systems’ increasing autonomy and decision-making capabilities challenge this traditional framework as AI transitions from an object to a subject position, thereby blurring the lines of accountability between humans and machines (Agostino et al., 2022b; Bracci, 2022).

Stakeholders’ accountability concerns amplify as AI adoption in audits increases, emphasising distributed responsibility. The issue of distributed responsibility, also referred to as the “problem of many hands”, arises from the inherent lack of legal subjectivity attributed to AI. Consequently, the burden of accountability for actions or decisions made by AI falls solely upon human actors. The accountability for AI may be assigned to the creator, operator or ultimate beneficiary of the AI. Various scholars argue that humans should be held accountable for the outcomes of AI-driven actions (Bracci, 2022; Vesa and Tienari, 2022), but this is challenging for ASI. This, in turn, calls for making AI explainable (xAI) (Barredo Arrieta et al., 2020), rethinking the concept of joint accountability introduced by Rizzo et al. (1970), and it underscores the need for innovative enforcement and research frameworks (Busco et al., 2017; Loi and Spielkamp, 2021; de Villiers et al., 2023). Before we outline our conceptual framework, we shall examine the arguments for and against granting legal personhood to AI. By mapping these arguments onto the capabilities and challenges of AI systems (to be discussed later), we can establish the key labels that will guide our empirical research. This analysis is crucial for understanding the complexities of accountability in AI-driven systems.

2.2 Rights and obligations

Proponents of AI personality argue that AI can make autonomous decisions and impact humans independently and should therefore be granted legal personality, as corporations and other non-human legal persons have: ships, idols, rivers or entire ecosystems (Simnart, 2021). Opponents maintain that non-human legal persons require human representation (human agency) and are not conscious, and that AI expressing itself is effectively human action. Corporations, idols, rivers and the like are aggregated wills of humans connected to the entity through a set of human transactions.

AI’s legal personality would imply a non-human ability to claim rights and obligations designed and assigned to humans, causing institutionalised conflicts between humans and the tools they create. Opponents (e.g. Solaiman, 2017) argue that such a privilege would release the ultimate legal beneficiary of AI from liability for their products and decisions. However, such an approach forces AI into a subordinate (slave) position to humans (Karnouskos, 2022), which would be unsustainable in the case of AGI and ASI.

Čerka et al. (2017) observe that AI makes independent decisions. Thus, it is challenging to determine the causal link between a wrong AI decision and the actions of the ultimate legal beneficiary. Therefore, making the ultimate legal beneficiary of AI liable for the results of independent AI decisions could be impractical. Čerka et al. further argue that an excessive burden of legal liability could lead to programmers’ unwillingness to reveal their identities publicly, moving programming work underground. Soroka et al. (2022) support Čerka et al. and add that forthcoming AI will likely detach itself from the creator and operator because of its existing ability to modify its own code. Thus, the legal link between the creator, owner and AI will likely break.

Chen and Burgess (2019) support AI personality, observing that AI that has not been created and made deliberately is not property (a tool). Solaiman (2017) contends that AI does not possess attributes of legal personality such as intentionality, desires and interests and, therefore, lacks the prerequisites for attributing criminal liability. Nowik (2021) adds arguments against AI personality: moral hazard, asset acquisition and AI decision-making. Moral hazards can appear in the development and use of AI, creating space for abuse. Granting AI legal personality would imply the ability to acquire and own assets. AI decision-making is complex, challenging to explain and based on rational concepts, while human decision-making encompasses “practical cognition”.

Chen and Burgess (2019) argue that even if Bryson et al. (2017) and Solaiman (2017) deny AI personality on these grounds, they fail to explain why AI should not take such rights and responsibilities. However, Solaiman (2017) provides some reasons for this. Non-human legal persons require humans as agents to claim their rights or obligations. AI per se is not human and thus (without human agency) cannot inherit the same rights as humans. A person subject to legal personality must be capable of being a subject of law, meaning they can exercise rights, perform duties and be aware of their choices:

Chimpanzees had all of these attributes which are argued to be present in robots; nonetheless, the repeated appeals for the animals’ personality have failed mainly due to their inability to perform duties (Solaiman, 2017).

Nowik (2021) contests the autonomous aspects of AI, stating that AI is autonomous only to a certain extent and is deprived of self-awareness. Thus, the efficiency of programming limits AI’s perception of rights and obligations. Chen and Burgess (2019) conclude that legal personality cannot be extrapolated or analogised from existing categories of legal personality to AI and requires a new, separate legal-person construct. Bryson et al. (2017) add that there is no moral obligation to recognise the legal personality of AI. Nowik (2021) reinforces Solaiman’s claim that AI personality is a legal concept that does not exist within any country and raises problems of interpretation, which, in turn, steers the discussion towards the enforcement of liabilities.

2.3 Enforcements

Bryson et al. (2017) shift the focus to enforcement, highlighting the ambiguity surrounding the enforcement of legal obligations should AI be granted rights vis-à-vis human entities. Chen and Burgess (2019) contend that termination is the practical way to hold AI accountable, albeit limited to instances where AI is defined as a physical entity, such as robots. The feasibility of terminating AI, particularly AI developed for space applications (Soroka et al., 2022) or existing within the internet, is questionable due to its intangible nature. Such AI lacks a physical location affiliating it with a specific legal system and is likely unresponsive to conventional enforcement measures like imprisonment, fines or rights revocation. As Chen and Burgess (2019) summarise, “there would be no way to impose meaningfully impactful financially based sanctions or restrictions”.

Given the mismatch between legal expectations and AI’s attributes, granting AI legal personality may prove ineffective. Human law systems are designed to protect human rights, positioning AI, in its tool form, as subordinate to humans. If AI develops into AGI or ASI, it may acquire characteristics that allow effective enforcement of duties, but this is unknown. Equalising the legal positions of humans and AI without reconciling their values would lead to continuous disputes.

Granting AI human-like legal personality is unlikely, but there are potential solutions. The most extreme solution is to ban large-scale AI (AGI and ASI), which is impractical due to military use. Nyholm (2018) suggests maintaining a “collaborative agency” where humans are vital partners. Asaro (2007) proposes creating “quasi-agents” for AI with diminished responsibilities. No solution has been implemented. How to settle this dispute is unknown. These uncertainties are time-constrained, and humankind must answer them when AGI appears. Table 1 summarises the literature arguments using acronyms for argument labels and categorising them according to AI’s allocation to capabilities and challenges.

The discussion on AI legal personality is primarily theoretical (Butterworth, 2018; Čerka et al., 2017; Chen and Burgess, 2019; Chesterman, 2020; Karnouskos, 2022), with few empirical examples like Wolf et al. (2017) or Čerka et al. (2017). Therefore, this study expands the empirical foundation of the discussion by exploring the perspectives of professions beyond lawyers on AI and legal personhood. Our methodological approach is detailed in the following sections.

3. Conceptual framework

This qualitative research analyses a complex issue with no established theoretical view (De Villiers et al., 2019). This study applies Bawack et al.’s (2021) framework for AI studies to map the arguments in Table 1, labelled by capabilities and challenges, against auditor accounts gathered from interviews for a project on translating AI into auditors’ risk assessments.

Distinct from prior studies (Andrade et al., 2007; Bryson et al., 2017; Čerka et al., 2017; Davies, 2011; Solaiman, 2017), this study embeds Bawack et al.’s framework within the economic analysis of law (Kornhauser, 2022) and critical analysis (de Villiers et al., 2023). The economic analysis of law applies microeconomic theory to the study of law. It involves two sides: predicting behaviour in response to legal rules, assuming the rationality of the decision-maker, and evaluating outcomes concerning social welfare. We examine the process between lawmakers’ policies on AI legal personality and free-profession representatives’ responses.

The premise of this study is that lawmakers make a binary decision to grant AI legal personality and supply line decision-makers (auditors) with a portfolio of arguments considered at the law level. Auditors then determine the applicability of AI in assessing audit risks, shaping their stance based on this determination and conveying their insights back to lawmakers.

The focal point of this research is the decision-making process. The rationale behind auditors’ adoption or rejection of AI may draw from the entire array or a subset of the arguments provided by lawmakers or may rely on premises not previously identified in legal discourses. If the latter, the practical response could either contribute to or prompt further discussion among lawmakers (see Figure 1 Panel C).

Bawack et al. (2021) proposed a five-element framework:

  1. AI enablers (big data, computing power and algorithms);

  2. AI capabilities (perceive, comprehend and act and learn);

  3. AI perspective (field of study, concept, ability and system);

  4. AI challenges (societal, organisational and ethical); and

  5. Problem space (application domain).

This study modifies the Bawack et al. (2021) perspective (as depicted in Table 1 Panel A) to consider the intersection between humans and AI for capabilities and challenges.

This study further refines AI enablers by focusing on algorithms capable of making independent decisions, aligns the AI perspective specifically with auditors and narrows the problem space to the interaction between law and economics (Table 1 Panel B). This identifies the area of practical interest not specified in the conceptual research. Figure 1 presents the framework elements (Panel A), the study focus (Panel B) and the conceptual link derived from the economic analysis of law. Bawack et al.’s (2021) conceptual framework lacks dynamic elements. We propose a feedback mechanism between auditors (economy) and the judicial system (law) to address this. This mechanism situates the auditor risk assessment process within a dynamic context. Our research question investigates whether this practice-driven context generates new forms of argumentation. On the judicial system side, the question of whether to assign a legal personality to AI is under consideration; on the audit (economics) side, the question of whether to apply AI to audit risk assessment is under consideration. This brings together the debate argumentation mapped through challenges and capabilities (Panels A and B intersection) on AI’s legal personality and the economic discussion around the application of AI in audit risk assessment (Panel C).

Bawack et al.’s (2021) framework is a specific instance of the general framework for critical analysis presented by Alvesson and Deetz (2000) and extended by de Villiers et al. (2023). This framework provides a comprehensive approach to understanding complex issues. The three stages of the framework (insight, critique and transformative redefinition) enable researchers to critically analyse the current state of affairs and develop transformative solutions for the future.

De Villiers et al. (2023) build on the work of Busco et al. (2017), Carungu et al. (2021) and Farooq and de Villiers (2019) to examine the implications of AI in a two-dimensional perspective: human-AI input versus efficiency-judgement. This study applies this framework to the audit perspective, enhancing AI input in risk assessment to identify the accountability limitations of the efficiency-judgement caused by legal uncertainty.

We identify insights by juxtaposing theoretical and practical argumentation, and we critique legal personality and human accountability institutions to call for sui generis AI legal personality and joint accountability. For simplicity of presentation, we follow the notation of Bawack et al. (2021). The details of the research design are presented in the next section.

4. Research design

4.1 Interviewee data

This study uses data from a larger project on the perception of frictions in applying AI to auditors’ risk assessment processes. Data was gathered from 41 professionals (auditors, supervisors and academics) through structured interviews (SI). The instrument was developed based on a pilot, prior research, professional literature and discussions with practitioners. The SI focused on applying AI to the audit risk assessment process and the question of AI’s legal personality recognition. The study used snowball sampling to reach a subpopulation of respondents with AI audit experience.

More specifically, a two-stage sampling strategy was used. Initially, random selection was chosen, but self-selection bias emerged due to auditors’ lack of experience with AI audits. Therefore, a snowball strategy was implemented, starting with an initial sample in which respondents recommended auditors with relevant experience. A combination of random and purposive sampling reduced self-selection bias. We tested consistency between the samples using an ANOVA test on the argument portfolios and found no significant differences (details omitted for brevity). The snowballing ended when the responses became saturated (only a marginal increase in the argument base and auditors’ roles). The data was collected between August 2021 and August 2022 and reached sample saturation (Russano et al., 2014).
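One possible way to operationalise such a consistency check is sketched below: a one-way ANOVA comparing the number of arguments each respondent raised in the randomly selected versus snowball subsamples. The counts and the 0.05 threshold are hypothetical placeholders, not the study’s data.

```python
# Minimal sketch (hypothetical counts): one-way ANOVA comparing the number of
# arguments raised per respondent in the random-selection and snowball subsamples.
from scipy.stats import f_oneway

random_sample_counts = [2, 3, 1, 4, 2, 3]        # arguments per respondent (illustrative)
snowball_sample_counts = [3, 2, 4, 2, 3, 1, 3]   # arguments per respondent (illustrative)

f_stat, p_value = f_oneway(random_sample_counts, snowball_sample_counts)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")

# A p-value above the chosen significance level (here 0.05, an assumption) would be
# read as no significant difference between the two subsamples' argument portfolios.
if p_value > 0.05:
    print("No significant difference detected between subsamples.")
```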

The SI was designed to avoid direct questioning of the study subject to prevent contamination of responses due to confirmation bias. This allowed for the dynamic cross-testing of unconscious and conscious argumentation. Appendix 1 presents the final SI extract. Interviews were conducted individually in Polish. To ensure comprehensive data collection, we used guided SI. These interviews were both video and voice recorded, then transcribed. Respondents were informed about the study’s purpose and structure before the interview. This approach enhanced response validity by allowing for clarification of ambiguities during the interview.

Additionally, the authors independently assessed responses to address potential interpretation issues. Respondents were also asked to elucidate their reasoning. Due to the COVID-19 restrictions, we held interviews on MS Teams. The source data in the original language is deposited at Harvard Dataverse. Appendix 2 shows the respondents’ essential characteristics.

4.2 Mapping method

The analytical process involved mapping the free-text responses from the SIs (see Appendix 1) onto the existing theoretical framework, using the acronyms delineated in Table 1. This involved tallying and aggregating instances where interviewees endorsed arguments aligned with AI capabilities [perceive (P), comprehend (C) and act and learn (L)] and AI challenges [social (S), ethical (E) and organisational (O)]. Arguments posited by auditors that did not correspond with the predefined categories in Table 1 were classified under “Others”. Table 2 presents the initial allocation of the questions to capabilities and challenges.
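The sketch below illustrates, under simplifying assumptions, how a response block could be matched to the closest Table 1 label (or to “Others”) and tallied by challenge and capability; the keyword sets, label subset and example response are hypothetical and do not reproduce the study’s coding scheme.

```python
# Minimal sketch (illustrative labels, keywords and response): assign a response block
# to the closest Table 1 label by keyword overlap, or to "Others" if nothing matches,
# and record the label's capability (P/C/L) and challenge (S/E/O) codes in a tally.

# (label, capability, challenge, keywords) - a small illustrative subset of Table 1
LABELS = [
    ("Legal subjectivity", "C", "S", {"rights", "obligations", "society"}),
    ("Excessive liability", "L", "O", {"liability", "programmer", "burden"}),
    ("Human job losses", "L", "O", {"job", "workforce", "substitute"}),
]

def map_response(block: str):
    """Return (label, capability, challenge) for the best-matching label, or 'Others'."""
    words = set(block.lower().split())
    best = max(LABELS, key=lambda lab: len(words & lab[3]))
    if not words & best[3]:
        return ("Others", None, None)
    return best[:3]

tally = {}  # (challenge, capability) -> number of endorsements
response = "An excessive burden of liability would scare every programmer away."
label, cap, chall = map_response(response)
tally[(chall, cap)] = tally.get((chall, cap), 0) + 1
print(label, cap, chall, tally)  # Excessive liability L O {('O', 'L'): 1}
```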

The initial allocation of the questions was preliminary. The study used direct questions (Q14–15) referring to the perception of AI legal personality and indirect questions. Respondents first answered the indirect risk assessment questions and provided reasoning for including or excluding the AI risk in the audit risk assessment process. Subsequently, respondents discussed the direct questions about AI’s legal personality, while the interview concluded with an open question.

We mapped arguments from the general meaning of consistent blocks of responses (typically paragraphs or sentences). Arguments were allocated to the Table 1 labels by the closest meaning to the label keywords. This flexible approach allows for respondents’ inconsistencies with argument labelling. It also enables us to capture unconscious and conscious motives for AI legal personality allowance.

We cross-tested the argumentation for consistency in time and merit to limit confirmation bias and support the study’s generalisability. One author conducted the mapping procedure to safeguard consistency, and others checked for miscoding until a mutual agreement was reached.

5. Results

5.1 Overall results

Respondents’ professional experience ranges from 4 to 30 years, averaging 18. Men account for 58% of respondents.

Arguments were mentioned 107 times by respondents, with 85% aligning with previously identified arguments in Table 1 and 15% classified as “Others”. Thus, this study identifies additional arguments compared to those used in Table 1 in the auditors’ justification for applying AI in risk assessment.

Table 3 presents the average number of referred arguments at different levels of capabilities and challenges among the sample.
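For illustration, the cell statistics reported in Table 3 (mean, standard deviation and n per challenge and capability cell) can be derived from such tallies with a simple group-by; the coded counts below are hypothetical, not the study’s data.

```python
# Minimal sketch (hypothetical coded counts): aggregate per-respondent argument counts
# into the challenge-by-capability grid of Table 3 (mean, standard deviation, n).
import pandas as pd

coded = pd.DataFrame({
    "challenge":  ["S", "S", "E", "E", "O", "O"],  # S = social, E = ethical, O = organisational
    "capability": ["L", "L", "C", "C", "L", "L"],  # P = perceive, C = comprehend, L = act and learn
    "count":      [1, 7, 0, 2, 1, 0],              # arguments referred to by each respondent
})

table3 = (coded.groupby(["challenge", "capability"])["count"]
               .agg(mean="mean", sd="std", n="size"))
print(table3)
```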

The mix of challenges and capabilities of known arguments does not indicate any preference for a particular combination of factors when auditors formulate their judgement on AI legal personality.

5.2 Incremental challenges and capabilities

A total of 16 arguments presented by respondents did not readily confirm the classification in Table 1. These were subsequently organised into eight categories reflecting distinct challenges and capabilities dimensions (Table 4).

5.3 Social challenges

5.3.1 S1 cognitive strain.

Respondents articulate the cognitive strain when reconciling AI and human logic in human-centric risk assessment. R04:

So that he [the auditor] can explain to the client the procedures he uses, that is, the procedures of this artificial intelligence he used in the study. He understood them and knew how to explain by selling, so to speak, your service and these tools. Because as I realised, anyway, just like artificial intelligence in general, it shortens time, improves work, so the most important thing is understanding how this works.

Respondents point to the risk of depriving humans of control and abilities. R04:

Everything still – and fortunately – because providing an algorithm in such systems is human work, their verification, their understanding, so the whole happiness in this area the role of man is not diminished, and may it remain so.

Meanwhile, they underline the complexity and multidimensionality of the risk assessment process: “Auditing, above all, has always been a very complex process”. They combine it with the uncertainty of AI’s physical form. The respondents express unease about the physical existence of AI: whether the algorithm has physical aspects (e.g. an autonomous car or robot), exists as a nebula of objects, such as a swarm of bees, as a self-replicating algorithm similar to a computer virus or as a coupled entity with no precise physical location, such as a cloud computing facility. Respondents also point out the conceptual differences between human and AI mortality. While humans are mortal, AI may not be subject to the same biological-temporal constraints.

Respondents’ cognitive strain is exacerbated by the lack of reconciliation between human and AI logic, human impairments, the tangibility of AI in place and time and the complexity of AI.

5.3.2 S2 consciousness.

Consciousness is a concern for both humans and AI. Auditors, especially senior auditors, may suffer from technological exclusion, which can manifest as a generation skills gap. R26: “The reason is really that auditors, especially senior auditors, do not use modern IT tools fluently”.

Another aspect of consciousness is related to the maturity of AI. Respondents argue that AI must learn to get the pattern from humans to communicate with humans and explain its reasoning to humans. R23:

It is known that there must be a man there. In fact, intelligence is learning all the time. Someone has to control it. They must control how these data algorithms learn because they are constantly learning. They feed on this data.

Thus, the consciousness of AI is time-conditioned.

5.4 Ethical challenges

5.4.1 E1 cyborg paradox.

Respondents question the border between humans and AI in terms of legal personality. They ask what the switch point is regarding cyborg legal personality if a human enjoys legal personality while the machine does not.

They argue that enhancing human capabilities by introducing implants, augmented reality, exoskeletons, artificial body parts and biofeedback causes a gradual loss of legal personality. To what extent does a human with implanted AI have a legal personality? Through the lens of the existing legal framework, a human transforming into a cyborg loses their legal personality and becomes a machine dependent on humans. What is unknown is the point of the switch.

Respondents tend to bypass the problem by creating ad hoc local definitions of AI. Thus, their accounts are conditioned on locally generated definitions. R23:

There are a lot of different risks associated with it [AI]: legal, operational. And I think that is probably why that is really. Maybe not quite as if it is just understandable.

5.4.2 E2 reasoning replicability.

Unlike humans, AI cannot explain its reasoning, contributing to cognitive strain. Additionally, the replicability of AI reasoning within the ethical challenge questions the foundations of the ethical and logical system developed by humans for humans, which is different from the system designed by algorithms for algorithms. Respondents refer to data processing and replication paths for humans while assessing AI learning. R6:

So the more complex this model would be - complex from the data side and the estimation method using, for example, a deep neural network, the more difficult it would be for me to replicate it to check it.

Humans are trapped by truthiness and unable to reconcile human and AI truth. R6: “We will have to agree on what we understand, what is the truth here. So to what extent do we already recognise something true is ok” […] “But the discussion about what is true will come to us because this is already the truth generated by artificial intelligence.” […] “we do not have this certainty; it is not such a ‘yes’ expressed based on a chain of evidence, completely deterministic, over which we would control”. The lack of reconciliation between human and AI logical systems makes it difficult for humans to justify the fairness of results. R04: “So that it does not turn out that the obtained results may be distorted, harmful”. These contribute to the lack of a clear path for understanding AI reasoning from a human perspective, which is central to the debate on reasoning replicability.

5.4.3 E3 relativism.

Relativism is the extension of reasoning replicability to different logical paths. Human interaction allows for a recursive exchange of arguments and modification of assumptions and hypotheses. Our respondents doubt AI’s ability to engage in such interaction. R20:

If we are talking about some “personality” with whom we can talk, that is, we have an auditor, that is, this program, this artificial intelligence, it is this auditor, it makes some research, presents conclusions with which, for example, we disagree, and we think that something has been misread, we prove, certainly it could have an impact on the risk assessment of the study because then we have a chance, to prove this artificial intelligence, means opportunity, I emphasise the chance, that a different look, that is, such a view from the client’s side at their algorithms, that they are built in this way, and not another, causes that the study could look different.

At its essence, the relativism argument relates to the human inability to accept alternative reasoning logic.

5.5 Organisational challenges

5.5.1 O1 artificial intelligence misuse.

Respondents point to the risk of AI misuse because AI can outperform humans. R06:

IBM Watson is being trained; it is the first computer program – the program is maybe a strong word, this is artificial intelligence, this is the system that first won in Jeopardy.

AI could be inherently biased because of false learning input, R13: “that is, she could be taught to, for example, falsify information, embellish it”, or because of limitations in learning. R34:

Because I assume that such machine learning requires large sets of data, somehow structured there? The question is whether this matter in which we work allows us to extract such data.

The deadly combination of excessive capabilities, intentional bias learning, insufficient learning material and loss of control is the “death pill” feared by respondents in practice.

5.5.2 O2 excessive human effort.

Excessive human effort is required to control and understand AI, which adds to the cost of AI implementation and maintenance. Respondents point out that this requires highly qualified staff. R08: “if one of the parties already uses AI, it means that the qualification must be higher”, and R10: “that for this audit [AI], you need to take a specialist”. They also point to legal compliance costs. R36:

It seems this is probably more material for some scientific work than an interview. But I think it would complicate the study in terms of legal and legislative risks related to the models used, the auditor’s responsibility for this type of study, and the function of control over this type of audit.

5.5.3 O3 substitution.

Excessive human effort motivates professionals to search for alternatives, such as substantive procedures, to bypass AI risk. R34:

let’s say if I were researching an investment firm that uses artificial intelligence algorithms to trade stocks, then let’s say I can check the portfolio’s composition, regardless of whether the investment decisions were made by the algorithm or the investment advisor in our conditions.

In addition to the arguments in Table 1, our respondents raised eight new arguments across different challenges and capabilities. These arguments relate to cognitive strain, consciousness, cyborg paradox, reasoning replicability, relativism, AI misuse, excessive human effort and substitution. Collectively, they represent business argument feedback paths, as shown in Figure 1 Panel C.

6. Discussion

This study delineates the divergence between the practical perspectives of professionals and theoretical discourses concerning AI legal personality. The research question of whether the arguments in the debate on AI legal personality apply to human-centric services is answered negatively. This underscores that the relationship between practice, policy and theory is recursive. This paper presents the intersection of the identified incremental arguments and discusses AI personality in the following subsection. It outlines the essence of AI rights and dynamics to formulate the core discussion contributions, namely, AI’s sui generis personality boundaries and joint accountability.

6.1 Linkage of the identified incremental arguments to legal personality discussion

The cognitive strain argument supports Chen and Burgess’ (2019) discussion. Physical presence, geographical identification and the ability to discontinue existence are presumptions for legal personality. The issue of eliminating AI from society (right for bankruptcy, insolvency, etc.) is not trivial. As Bryson et al. (2017) noted, “when insolvent human legal persons violate others’ legal rights, other tools are available to hold legal persons to account—anything from apology to jail time”. In the case of AI, these options are unavailable, unsatisfying or ineffective. Eventually, humans will die, but this does not necessarily apply to AI. AI and humans are subject to limited resources; a basic one is energy. If so, a conflict could rationally be expected to arise between humans, AI and AIs themselves. Thus, a framework for resolution and enforcement should be available in advance.

A human represents the corporate legal personality, thus claiming rights and obligations in the name of such a person. In the case of AI legal personality, a non-human claims the rights and obligations. Therefore, consciousness, reasoning replicability and relativism apply simultaneously. The entire legal framework is human-centric, but AI may have unknown future characteristics and capabilities incompatible with the current human-centric legal framework, making reconciliation difficult. Therefore, equalising humans and AI in all social and legal aspects is impossible, as they are different social entities. The difference between human-represented legal persons and AI is similar to that between humans and animals. Solaiman (2017, p. 171) reports, “chimpanzees are not legal persons based precisely on the lack of being capable of rights and duties, an essential requirement of personhood”, implying the inability to reconcile between humans and animals. The failure originates from the human inability to impose and enforce animals’ duties. Humans are stronger than other living creatures, the dominant species on Earth. However, this strength may not apply to ASI, where the ability to control and enforce may not ultimately be possible for humans. This reasoning is consistent with the respondents’ arguments about the importance of consciousness and reasoning replicability.

We already face authority and science miscommunication regarding AI (robot) civil rights. For example, there are contradictory statements on Sophia’s citizenship status (Butterworth, 2018; Simnart, 2021). Čerka et al. (2017) report a case where a computer, discovering corporate plans to terminate it, sued for the right to exist. This anecdotal evidence supports this research’s incremental organisational challenges: AI misuse, excessive human effort and potential substitution. All of the evidence suggests a technical impasse, where the human-centric legal personality framework is inadequate to hold AI accountable, requiring innovative forms of regulation (Nowik, 2021). Therefore, combining the cyborg paradox with the arguments about consciousness and reasoning replicability supports the view that the current “slave”-oriented system is unsustainable.

6.2 Limited legal rights to artificial intelligence

The urgency and significance for global society underscore the imperative of addressing AI subjectivity. Nowik (2021) argues that holding the ultimate beneficiary of AI liable is reasonable but may slow innovation. Butterworth (2018) outlines the spectrum of issues surrounding AI legal personality, from commercial motivations to ethical and systemic concerns. Bryson et al. (2017) proposed banning or limiting the growth of human-like AI. This view assumes the ability to enforce such constraints, which is likely to fail in the case of military or illegal applications. Nyholm (2018) allowed for some AI agency, such as an automated weapon system, but limited it to “a type of collaborative agency, where the other key […] is certain humans”. Čerka et al. (2017) and Nowik (2021) opt for limited legal autonomy, allowing AI recognition of limited rights and obligations. This may conflict with the incremental organisational challenges and the consciousness and reasoning replicability arguments developed in this study.

6.3 Dynamics

Respondents reflected on the speed and acceleration of change, using the argument of time to defer the attribution of legal personality to AI. They acknowledged that the environment is changing rapidly and argued that the turning point is difficult to identify. They argue that the dynamics of technological progress outweigh the cyborg paradox, which undermines Nowik’s (2021) argument that the current industry cannot program AI in a non-algorithmic way.

6.4 Essential new boundary identified for a sui generis artificial intelligence personality

Like Buiten et al. (2023), this study does not propose a specific legal instrument for AI personality but identifies its boundaries based on incremental challenges and capabilities. The instrument should:

  • link AI and a compulsory human representative to safeguard reasoning replicability, relativism and consciousness;

  • implement the friction of physical and time constraints for AI to control cognitive strains;

  • be based on pre-set lists of rights and obligations specific to AI to control the cyborg paradox;

  • prohibit or frame the free form of AI to reconcile with the current industry law framework;

  • make AI, human operators, and legal beneficiaries jointly liable for AI-caused damages to control for misuse and consciousness; and

  • mitigate the risk of human impunity for violating the rights of other legal entities to control for reasoning replicability and excessive human effort.

6.5 Joint accountability of human and artificial intelligence

The establishment of sui generis AI personality boundaries paves the way for joint accountability, necessitating AI systems to feature decision-reasoning capabilities comprehensible to humans (Barredo Arrieta et al., 2020). AI accountability refers to the obligation to implement human-understandable AI decision-reasoning capabilities where the decision rests with the AI. This proposal reclaims the original meaning of joint accountability, as offered by Rizzo et al. (1970), allowing discontinuation of the xAI. To limit the distributed accountability gap, we centre the ultimate beneficiary of the AI system as the primary accountable person.

6.6 Limitations

While the divergent viewpoints of practitioners and researchers highlight the complexity of adapting legal norms to AI advancements, the synthesis of incremental arguments remains subjective. Additionally, the focus on auditors within Poland may not fully encapsulate the broader stakeholder perspectives (such as lawyers, judges and policymakers in different countries), necessitating further inquiry across diverse legal and cultural contexts to validate the findings.

7. Conclusion

This study delineates the practical boundaries of AI legal personality, enriching the discourse on AI’s legal status and accountability, particularly within the context of accounting. It identifies the practical limits of AI legal personality, asking whether existing theoretical constraints encompass the practical spectrum for human-centric services. It finds eight arguments across challenges and capabilities: cognitive strain, consciousness, the cyborg paradox, reasoning replicability, relativism, AI misuse, excessive human effort and substitution.

The study proposes a framework for AI: quasi-persons with joint final-beneficiary risk liability and human legal representation obligations (a sui generis personality). It generalises the concept of the joint accountability of the xAI and the ultimate human beneficiary, which is suitable for implementation by auditors through ethical guidelines. This framework aims to bridge the gap between the current slave- and tool-oriented legislation and AI’s full social recognition.

The study’s policy implications are that deferring AI legal recognition slows audit practice development and that the cyborg paradox necessitates a rapid reconstruction of legal personality institutions. By exploring the conceptual boundaries of a sui generis legal personality for AI, this study helps policymakers develop concrete legislative solutions regarding AI accountability.

Acknowledging the limitations of a single, universal legal system in addressing AI is essential. Nonetheless, the study intimates that adopting a “joint accountability” model built on existing professional ethics codes, such as those for auditors, might be the most practical approach in the current landscape.

A novel research trajectory posited by this investigation pertains to the mechanism for imposing obligations upon AI legal entities in the audit context, especially given their intangible and potentially perpetual nature. This domain presents a compelling frontier for future scholarly exploration.

Figures

Figure 1. Panel A: Framework for AI research; Panel B: Study focus; Panel C: Conceptual link

Table 1. AI discussion arguments synthesis

Label Cap. Chall. Contextual meaning example References
Legal subjectivity C S Ability to possess rights and obligations in society (Andrade et al., 2007)
Natural slaves L S Machines are considered as proxies for humans (Massaro and Norton, 2016; Millar, 2014)
Human biased system L E The existing law system applies to humans and protects their rights (Karnouskos, 2022)
Rights for AI P E Humans have an economic incentive to deny robots rights, and humans pose a danger to robots (Karnouskos, 2022)
AI Justice P S AI justice rules for robots pose a risk to social development and stability (Soroka et al., 2022)
Product or objects L O The European Union legal framework considers AI-based applications as objects or products (Bertolini, 2020; Bertolini and Riccaboni, 2021; Soroka et al., 2022)
Casual relation C O When AI makes autonomous decisions, traditional rules will fail to create legal liability for the damage caused since autonomous AI causes it. The proof of a causal relationship between defect and damage in the case of AI may be challenging. In the current legal systems, liability applies only to circumstances where the reason for the actions or being idle of AI can be associated with a specific ultimate legal beneficiary. The creation of AI requires many people to take part, and it is unfeasible to prove the guilt of a particular person for causing damage (“the problem of many hands”) (Kingston, 2018; Soroka et al., 2022, p. 128)
Complex liability C O A user updates the software of a robot with code downloaded from potentially unknown authors on the internet (e.g. open-source); the robot proactively searches for extensions of its functionality over the internet or even uses “transfer learning”, where it learns from the experiences of other robots, or cooperative learning, where AGI robots learn continuously, as a community, from each other (Buiten et al., 2023; De Chiara et al., 2021; Karnouskos, 2022; Soroka et al., 2022, p. 128)
Humans unaccountability L O Some commentators have proposed that robots be granted legal personality, aiming to exonerate these artefacts’ respective creators and users from liability (“releasing authors from its product liability”) (Solaiman, 2017)
Excessive liability L O “An excessive burden of legal liability could lead to the programmer’s fear and unwillingness to reveal their identity in public, or it could otherwise impede the progress of technology development in the official markets, moving all the programming work to unofficial markets. Even if we assume that this is not possible, it is obvious that applying a product liability model in the case of AI is more difficult than a regular product” (Čerka et al., 2017)
Human AI rights conflicts P E AI may attempt to prevent people from engaging in careless behaviour, e.g. preventing an alcoholic from drinking. This may clash with the fundamental right of human liberty (Palmerini et al., 2014)
Privacy abuse P E AI acts as a monitoring device exploiting private aspects of humans or enforcing compliance (law enforcement personnel) in totalitarian regimes (Karnouskos, 2022)
Automated journalism L S Media use AI to collect news sources, select articles, analyse them and automatically generate news (Jung et al., 2017)
DeepFakes L S AI can potentially misrepresent reality (e.g. via realistic fake videos) (Karnouskos, 2020; de Ruiter, 2021)
Robot emotions driver P E AI emotionally manipulates humans to help itself to pursue its agenda (Danaher, 2020)
Human job losses L O AI could substitute humans in the workforce, translating into job losses and a point of social friction (Frey and Osborne, 2017; Haddadin, 2014; Nevejans, 2017)
AI taxation L O Taxing AI on capital expenditure or the equivalent of the human work value. Tax may be an effective measure to balance job loss and benefit the impacted ones (Abbott and Bogenschneider, 2018; Kovacev, 2020)
Knowledge created from AI C O Currently intellectual property rights do not recognise AI as an author (Kop, 2019)
Gender issues C S What is the AI gender? Humans show marginally greater acceptance toward female-gendered health-care AI (Sutko, 2020; Tay et al., 2014)
Intimate roles L E AI as a sexual partner affects other areas, such as sex-stemming violence against women, human trafficking, health, finances, crime, etc. (Szczuka et al., 2019; Yeoman and Mars, 2012)
Ethics C E Autonomous systems are entrusted to make life-and-death decisions (Riesen, 2022; Sparrow, 2007)
Life dilemma C E Self-driving cars in unavoidable accidents will have to make life-and-death decisions (e.g. an AI car is about to be involved in a fatal accident with human casualties involving the car’s passengers and pedestrians) (Jenkins et al., 2022)
Discrimination against machines C E People judge humans and robots differently (Karnouskos, 2022)
Ethical framework C E As ethics come in different varieties, e.g. utilitarianism, deontology, relativism, absolutism, pluralism, feminist ethics, etc., there is no consensus on what guidelines AI should follow (Karnouskos, 2022)
Demise of civilisation L E AI may maintain and expand the survivability of the human race; at some point, it may tackle very complex problems, such as climate change or understanding the universe, that humans cannot (Karnouskos, 2022)
Robot-to-robot interaction C E To what degree do societal conditions impact the behaviour of robots and vice versa in a society where humans and robots are symbiotic? (Karnouskos, 2022)
• AI collectively learns from each other’s experiences – this is not available to humans
• Disputes between robots
Global rules L O How can a legal framework, without a worldwide regulator, be established if AI technologies are geographically unlimited? (Soroka et al., 2022)
Existence threat L E Research priorities for robust and beneficial AI discloses the risk of human extinction due to AI (Čerka et al., 2017)
Time P O The time horizon for AGI is unknown. The current technology is not developed enough to construct a machine programmed in a non-algorithmic way capable of reasoning without a given pattern (Nowik, 2021)
Notes:

Cap.: AI capabilities: perceive (P), comprehend (C), act and learn (L). Chall.: AI challenges: social (S), ethical (E) and organisational and economic (O). Own mapping in line with Bawack et al. (2021). See Section 3 for concept details

Source: Own compilation
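For readers who wish to trace how the taxonomy above pairs each argument with one capability code (P, C, L) and one challenge code (S, E, O), the following is a minimal illustrative sketch only, not the authors' research instrument. The `Argument` dataclass and the example rows are assumptions chosen for illustration, using codes taken from rows of the table above.

```python
# Illustrative sketch (hypothetical encoding): each argument carries one capability
# code and one challenge code, following the Bawack et al. (2021) style of mapping.
from dataclasses import dataclass

CAPABILITIES = {"P": "perceive", "C": "comprehend", "L": "act and learn"}
CHALLENGES = {"S": "social", "E": "ethical", "O": "organisational and economic"}


@dataclass(frozen=True)
class Argument:
    name: str
    capability: str  # "P", "C" or "L"
    challenge: str   # "S", "E" or "O"

    def __post_init__(self) -> None:
        # Guard against codes outside the taxonomy.
        assert self.capability in CAPABILITIES and self.challenge in CHALLENGES


# Example rows copied from the table above.
taxonomy = [
    Argument("Causal relation", "C", "O"),
    Argument("DeepFakes", "L", "S"),
    Argument("Life dilemma", "C", "E"),
]

for arg in taxonomy:
    print(f"{arg.name}: {CAPABILITIES[arg.capability]} / {CHALLENGES[arg.challenge]}")
```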

Identified incremental arguments in professional accounts

Question no. Challenges Capabilities
Q4, Q7 S, E, O P, C
Q9 O C, L
Q10–Q11 O P, C
Q12 a–c S, O P, C, L
Q13 O L
Q14–16 S, E, O P, C, L
Notes:

AI capabilities: perceive (P), comprehend (C), act and learn (L). AI challenges: social (S), ethics (E) and organisational (O)

Source: Own compilation

Average count (standard deviation) of known arguments from Table 1 in auditors’ accounts

Challenges \ Capabilities    Perceive (P)      Comprehend (C)    Act and learn (L)
Societal (S)                 0.500 (0.70)      0.250 (0.50)      5.500 (8.98)
                             n = 2             n = 4             n = 6
Ethical (E)                  0.125 (0.35)      0.625 (1.06)      0.750 (1.03)
                             n = 8             n = 8             n = 8
Organisational (O)           1.500 (0.70)      1.667 (2.34)      0.917 (1.24)
                             n = 2             n = 6             n = 12
Notes:

AI capabilities: perceive (P), comprehend (C), act and learn (L). AI challenges: social (S), ethics (E) and organisational (O)

Source: Own compilation
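To make the structure of the table above easier to follow, the sketch below shows one way such cell statistics (mean, standard deviation and n per challenge-capability cell) could be aggregated from coded accounts. It is an assumption-laden illustration only: the `coded_counts` values are made up and do not reproduce the study's data or coding procedure.

```python
# Hypothetical aggregation sketch: each record is (challenge code, capability code,
# number of known arguments counted in one coded account/question). Values are invented.
from statistics import mean, stdev

coded_counts = [
    ("S", "L", 1), ("S", "L", 14),
    ("E", "C", 0), ("E", "C", 2),
    ("O", "P", 1), ("O", "P", 2),
    ("O", "L", 0), ("O", "L", 3),
]

# Group counts by (challenge, capability) cell.
cells: dict[tuple[str, str], list[int]] = {}
for challenge, capability, count in coded_counts:
    cells.setdefault((challenge, capability), []).append(count)

# Report mean, standard deviation and n for each cell, as in the table above.
for (challenge, capability), counts in sorted(cells.items()):
    sd = stdev(counts) if len(counts) > 1 else 0.0
    print(f"{challenge} x {capability}: mean={mean(counts):.3f} sd={sd:.2f} n={len(counts)}")
```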

Identified incremental arguments in professional accounts

No. Argument acronym Challenges Capabilities Ref
1 Cognitive strain S C S1
2 Consciousness S C S2
3 Cyborg paradox E P E1
4 Reasoning replicability E C E2
5 Relativism E C E3
6 AI misuse O L O1
7 Excessive human effort O L O2
8 Substitution O C O3
Notes:

AI capabilities: perceive (P), comprehend (C), act and learn (L). AI challenges: social (S), ethics (E), and organisational (O)

Source: Own compilation

The participants’ characteristics across positions, sectors and experience

No. Participant  reference Current main role Sector Experience (years)
1 R01 Auditor/researcher Public/private 17
2 R02 Researcher Public 25
3 R03 Researcher Public 10
4 R04 Researcher/auditor Public 23/21
5 R05 Researcher Public 30
6 R06 Researcher/expert Private/public 20/6
7 R07 Researcher Public 22
8 R08 Expert/auditor/manager Public 1.5/10/25
9 R09 Auditor Private 20
10 R10 Expert/auditor Public 1/14
11 R11 Expert/auditor Public 1/23
12 R12 Expert/researcher Private/public 20
13 R13 Auditor Private 11
14 R14 Auditor Private 20
15 R15 Auditor Private/public 20
16 R16 Researcher Public 20
17 R17 Business Private 4
18 R18 Expert/business Public 10
19 R19 Auditor Private 10
20 R20 Business Private 25
21 R21 Auditor Private 22
22 R22 Auditor Private 10
23 R23 Auditor Private 15
24 R24 Business Private 15
25 R25 Auditor Private 29
26 R26 Auditor Private 20
27 R27 Auditor Private 30
28 R28 Auditor Public 6
29 R29 Auditor Private 20
30 R30 Auditor Private 20
31 R31 Auditor Private 20
32 R32 Auditor Private 17
33 R33 Auditor Private 10
34 R34 Auditor Private 14
35 R35 Researcher/auditor Public/private 18
36 R36 Business Private 15
37 R37 Auditor Private 15
38 R38 Auditor Private 17
39 R39 Auditor Private 8
40 R40 Auditor Private 30
41 R41 Auditor Private 29

Source: Own compilation

Appendix 1. Structured interview questionnaire extract

Appendix 2

References

Abbott, R. and Bogenschneider, B.N. (2018), “Should robots pay taxes?”, SSRN Electronic Journal, pp. 1-36.

Agostino, D., Bracci, E. and Steccolini, I. (2022a), “Accounting and accountability for the digital transformation of public services”, Financial Accountability and Management, Vol. 38 No. 2, pp. 145-151, doi: 10.1111/faam.12314.

Agostino, D., Saliterer, I. and Steccolini, I. (2022b), “Digitalisation, accounting and accountability: a literature review and reflections on future research in public services”, Financial Accountability and Management, Vol. 38 No. 2, pp. 152-176, doi: 10.1111/faam.12301.

Ahn, P.D. and Wickramasinghe, D. (2021), “Pushing the limits of accountability: big data analytics containing and controlling covid-19 in South Korea”, Accounting, Auditing and Accountability Journal, Vol. 34 No. 6, pp. 1320-1331, doi: 10.1108/AAAJ-08-2020-4829.

Alderman, J. (2019), “Auditing in the smart machine age: moving beyond the hype to explore the evolution of the auditing profession in the smart machine age”, Graziadio Business Review, Vol. 22 No. 1, pp. 1-10.

Al-Htaybat, K. and von Alberti-Alhtaybat, L. (2017), “Big data and corporate reporting: impacts and paradoxes”, Accounting, Auditing and Accountability Journal, Vol. 30 No. 4, pp. 850-873, doi: 10.1108/AAAJ-07-2015-2139.

Alvesson, M. and Deetz, S. (2000), Doing Critical Management Research, SAGE Publications, London, doi: 10.4135/9781849208918.

Andrade, F., Novais, P., Machado, J. and Neves, J. (2007), “Contracting agents: legal personality and representation”, Artificial Intelligence and Law, Vol. 15 No. 4, pp. 357-373, doi: 10.1007/s10506-007-9046-0.

Arnaboldi, M., Busco, C. and Cuganesan, S. (2017), “Accounting, accountability, social media and big data: revolution or hype?”, Accounting, Auditing and Accountability Journal, Vol. 30 No. 4, pp. 762-776, doi: 10.1108/AAAJ-03-2017-2880.

Asaro, P.M. (2007), “Robots and responsibility from a legal perspective”, Proceedings of the IEEE, Vol. 4 No. 14, pp. 20-24.

Bahner, J.E., Elepfandt, M.F. and Manzey, D. (2008), “Misuse of diagnostic aids in process control: the effects of automation misses on complacency and automation bias”, Proceedings of the Human Factors and Ergonomics Society, Vol. 52 No. 19, pp. 1330-1334, doi: 10.1177/154193120805201906.

Bakarich, K.M. and O’Brien, P.E. (2021), “The robots are coming … but aren’t here yet: the use of artificial intelligence technologies in the public accounting profession”, Journal of Emerging Technologies in Accounting, Vol. 18 No. 1, pp. 27-43, doi: 10.2308/JETA-19-11-20-47.

Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., Garcia, S., et al. (2020), “Explainable artificial intelligence (XAI): concepts, taxonomies, opportunities and challenges toward responsible AI”, Information Fusion, Vol. 58, pp. 82-115, doi: 10.1016/j.inffus.2019.12.012.

Bawack, R.E., Fosso Wamba, S. and Carillo, K.D.A. (2021), “A framework for understanding artificial intelligence research: insights from practice”, Journal of Enterprise Information Management, Vol. 34 No. 2, pp. 645-678, doi: 10.1108/JEIM-07-2020-0284.

Bertolini, A. (2020), Artificial Intelligence and Civil Liability, European Parliament, Committee on Legal Affairs, Brussels, pp. 1-132.

Bertolini, A. and Riccaboni, M. (2021), “Grounding the case for a European approach to the regulation of automated driving: the technology-selection effect of liability rules”, European Journal of Law and Economics, Vol. 51 No. 2, pp. 243-284, doi: 10.1007/s10657-020-09671-5.

Białek-Jaworska, A. and Kopańska, A.K. (2023), “Do fiscal rules of local debt affect municipal off-budget activities? Analysis of various types of municipalities”, Meditari Accountancy Research, Vol. 31 No. 7, pp. 156-184, doi: 10.1108/MEDAR-11-2021-1491.

Bostrom, N. (2014), Superintelligence: Paths, Dangers, Strategies, 1st ed., Oxford University Press, Oxford.

Bracci, E. (2022), “The loopholes of algorithmic public services: an ‘intelligent’ accountability research agenda”, Accounting, Auditing and Accountability Journal, Vol. 36 No. 2, doi: 10.1108/AAAJ-06-2022-5856.

Bryson, J.J., Diamantis, M.E. and Grant, T.D. (2017), “Of, for, and by the people: the legal lacuna of synthetic persons”, Artificial Intelligence and Law, Vol. 25 No. 3, pp. 273-291, doi: 10.1007/s10506-017-9214-9.

Buchholtz, G. (2020), “Artificial intelligence and legal tech: challenges to the rule of law”, Regulating Artificial Intelligence, Springer International Publishing, pp. 175-198, doi: 10.1007/978-3-030-32361-5_8.

Buiten, M., de Streel, A. and Peitz, M. (2023), “The law and economics of AI liability”, Computer Law and Security Review, Vol. 48, doi: 10.1016/j.clsr.2023.105794.

Busco, C., Giovannoni, E. and Riccaboni, A. (2017), “Sustaining multiple logics within hybrid organisations”, Accounting, Auditing and Accountability Journal, Vol. 30 No. 1, pp. 191-216, doi: 10.1108/AAAJ-11-2013-1520.

Butterworth, M. (2018), “The ICO and artificial intelligence: the role of fairness in the GDPR framework”, Computer Law and Security Review, Vol. 34 No. 2, pp. 257-268, doi: 10.1016/j.clsr.2018.01.004.

Carungu, J., Di Pietra, R. and Molinari, M. (2021), “The impact of a humanitarian disaster on the working approach of accountants: a study of contingent effects”, Accounting, Auditing and Accountability Journal, Vol. 34 No. 6, pp. 1388-1403, doi: 10.1108/AAAJ-08-2020-4789.

Čerka, P., Grigienė, J. and Sirbikytė, G. (2017), “Is it possible to grant legal personality to artificial intelligence software systems?”, Computer Law and Security Review, Vol. 33 No. 5, pp. 685-699, doi: 10.1016/j.clsr.2017.03.022.

Chen, J. and Burgess, P. (2019), “The boundaries of legal personhood: how spontaneous intelligence can problematise differences between humans, artificial intelligence, companies and animals”, Artificial Intelligence and Law, Vol. 27 No. 1, pp. 73-92, doi: 10.1007/s10506-018-9229-x.

Chesterman, S. (2020), “Artificial intelligence and the limits of legal personality”, International and Comparative Law Quarterly, Vol. 69 No. 4, pp. 819-844, doi: 10.1017/S0020589320000366.

Conradie, N., Kempt, H. and Königs, P. (2022), “Introduction to the topical collection on AI and responsibility”, Philosophy and Technology, Vol. 35 No. 4, p. 97, doi: 10.1007/s13347-022-00583-7.

Damacharla, P., Javaid, A.Y., Gallimore, J.J. and Devabhaktuni, V.K. (2018), “Common metrics to benchmark human-machine teams (HMT): a review”, IEEE Access, Vol. 6, pp. 38637-38655, doi: 10.1109/ACCESS.2018.2853560.

Danaher, J. (2016), “The threat of algocracy: reality, resistance and accommodation”, Philosophy and Technology, Vol. 29 No. 3, pp. 245-268, doi: 10.1007/s13347-015-0211-1.

Danaher, J. (2020), “Robot betrayal: a guide to the ethics of robotic deception”, Ethics and Information Technology, Vol. 22 No. 2, pp. 117-128, doi: 10.1007/s10676-019-09520-3.

Davies, C.R. (2011), “An evolutionary step in intellectual property rights – artificial intelligence and intellectual property”, Computer Law and Security Review, Vol. 27 No. 6, pp. 601-619, doi: 10.1016/j.clsr.2011.09.006.

De Chiara, A., Elizalde, I., Manna, E. and Segura-Moreiras, A. (2021), “Car accidents in the age of robots”, International Review of Law and Economics, Vol. 68, p. 106022, doi: 10.1016/j.irle.2021.106022.

de Ruiter, A. (2021), “The distinct wrong of deepfakes”, Philosophy and Technology, Vol. 34 No. 4, pp. 1311-1332, doi: 10.1007/s13347-021-00459-2.

de Villiers, C., Dumay, J. and Maroun, W. (2019), “Qualitative accounting research: dispelling myths and developing a new research agenda”, Accounting and Finance, Vol. 59 No. 3, pp. 1459-1487, doi: 10.1111/acfi.12487.

de Villiers, C., Dimes, R. and Molinari, M. (2023), “How will AI text generation and processing impact sustainability reporting? Critical analysis, a conceptual framework and avenues for future research”, Sustainability Accounting, Management and Policy Journal, Vol. 15 No. 1, doi: 10.1108/SAMPJ-02-2023-0097.

di Tullio, P., La Torre, M., Rea, M.A., Guthrie, J. and Dumay, J. (2023), “Beyond the planetary boundaries: exploring pluralistic accountability in the new space age”, Accounting, Auditing and Accountability Journal, doi: 10.1108/AAAJ-08-2022-6003.

DiMaggio, P.J. and Powell, W.W. (1983), “The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields”, American Sociological Review, Vol. 48 No. 2, p. 147, doi: 10.2307/2095101.

Dobija, D., Górska, A.M., Grossi, G. and Strzelczyk, W. (2019), “Rational and symbolic uses of performance measurement”, Accounting, Auditing and Accountability Journal, Vol. 32 No. 3, pp. 750-781, doi: 10.1108/AAAJ-08-2017-3106.

Doomen, J. (2023), “The artificial intelligence entity as a legal person”, Information and Communications Technology Law, Vol. 32 No. 3, pp. 277-287, doi: 10.1080/13600834.2023.2196827.

Dumay, J., Bernardi, C., Guthrie, J. and Demartini, P. (2016), “Integrated reporting: a structured literature review”, Accounting Forum, Vol. 40 No. 3, pp. 166-185, doi: 10.1016/j.accfor.2016.06.001.

Farooq, M.B. and de Villiers, C. (2019), “The shaping of sustainability assurance through the competition between accounting and non-accounting providers”, Accounting, Auditing and Accountability Journal, Vol. 32 No. 1, pp. 307-336, doi: 10.1108/AAAJ-10-2016-2756.

Frey, C.B. and Osborne, M.A. (2017), “The future of employment: how susceptible are jobs to computerisation?”, Technological Forecasting and Social Change, Vol. 114, pp. 254-280, doi: 10.1016/j.techfore.2016.08.019.

Fusco, F., Civitillo, R., Ricci, P., Morawska, S., Pustułka, K. and Banasik, P. (2022), “Sustainability reporting in justice systems: a comparative research in two European countries”, Meditari Accountancy Research, Vol. 30 No. 6, pp. 1629-1657, doi: 10.1108/MEDAR-11-2020-1091.

Garanina, T., Ranta, M. and Dumay, J. (2022), “Blockchain in accounting research: current trends and emerging topics”, Accounting, Auditing and Accountability Journal, Vol. 35 No. 7, pp. 1507-1533, doi: 10.1108/AAAJ-10-2020-4991.

Goddard, K., Roudsari, A. and Wyatt, J.C. (2012), “Automation bias: a systematic review of frequency, effect mediators, and mitigators”, Journal of the American Medical Informatics Association, Vol. 19 No. 1, pp. 121-127, doi: 10.1136/amiajnl-2011-000089.

Haddadin, S. (2014), Towards Safe Robots: Approaching Asimov’s 1st Law, Springer, Berlin Heidelberg.

Jabri, S. (2020), “Artificial intelligence and healthcare: products and procedures”, Regulating Artificial Intelligence, Springer International Publishing, pp. 307-335, doi: 10.1007/978-3-030-32361-5_14.

Jans, M., Aysolmaz, B., Corten, M., Joshi, A. and van Peteghem, M. (2022), “Digitalisation in accounting – warmly embraced or coldly ignored?”, Accounting, Auditing and Accountability Journal, Vol. 36 No. 9, pp. 61-85, doi: 10.1108/AAAJ-11-2020-4998.

Janssen, M., Hartog, M., Matheus, R., Yi Ding, A. and Kuk, G. (2020), “Will algorithms blind people? The effect of explainable AI and decision-makers’ experience on AI-supported decision-making in government”, Social Science Computer Review, Vol. 40 No. 2, doi: 10.1177/0894439320980118.

Jenkins, R., Cerny, D. and Hribek, T. (2022), Autonomous Vehicle Ethics: The Trolley Problem and Beyond, Oxford University Press, Oxford.

Jung, J., Song, H., Kim, Y., Im, H. and Oh, S. (2017), “Intrusion of software robots into journalism: the public’s and journalists’ perceptions of news written by algorithms and human journalists”, Computers in Human Behavior, Vol. 71, pp. 291-298, doi: 10.1016/j.chb.2017.02.022.

Kaplan, A. and Haenlein, M. (2019), “Siri, Siri, in my hand: who’s the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence”, Business Horizons, Vol. 62 No. 1, pp. 15-25, doi: 10.1016/j.bushor.2018.08.004.

Karnouskos, S. (2020), “Artificial intelligence in digital media: the era of deepfakes”, IEEE Transactions on Technology and Society, Vol. 1 No. 3, pp. 138-147, doi: 10.1109/tts.2020.3001312.

Karnouskos, S. (2022), “Symbiosis with artificial intelligence via the prism of law, robots, and society”, Artificial Intelligence and Law, Vol. 30 No. 1, pp. 93-115, doi: 10.1007/s10506-021-09289-1.

Karunakaran, A., Orlikowski, W.J. and Scott, S.V. (2022), “Crowd-based accountability: examining how social media commentary reconfigures organizational accountability”, Organization Science, Vol. 33 No. 1, pp. 170-193, doi: 10.1287/orsc.2021.1546.

Kingston, J. (2018), “Artificial intelligence and legal liability”, ArXiv.

Kop, M. (2019), “AI and intellectual property: towards an articulated public domain”, Texas Intellectual Property Law Journal, pp. 297-341, doi: 10.2139/ssrn.3409715.

Kornhauser, L. (2022), “The economic analysis of law”, in Zalta, E.N. (Ed.), The Stanford Encyclopedia of Philosophy, Spring 2022 Edition, Metaphysics Research Lab, Stanford University.

Kovacev, R.J. (2020), “A taxing dilemma: Robot taxes and the challenges of effective taxation of AI, automation and robotics in the fourth industrial revolution”, The Contemporary Tax Journal, Vol. 9 No. 2, doi: 10.31979/2381-3679.2020.090204.

Krasodomska, J., Michalak, J. and Świetla, K. (2020), “Directive 2014/95/EU”, Meditari Accountancy Research, Vol. 28 No. 5, pp. 751-779, doi: 10.1108/MEDAR-06-2019-0504.

Krönke, C. (2020), “Artificial intelligence and social media”, Regulating Artificial Intelligence, Springer International Publishing, pp. 145-173, doi: 10.1007/978-3-030-32361-5_7.

Kunitskaya, O.M. (2022), “Legal regulation of artificial intelligence in the area of investment in the economy”, in Popkova, E.G. (Ed.), Immigration Market Modeling in Digital Economy: Game Theoretic Approaches, Vol. 368, Springer Nature, pp. 315-332, doi: 10.1007/978-3-030-93244-2_36.

Łada, M., Kozarkiewicz, A., Bartnik, B. and Haslam, J. (2022), “Market liberalisation’s impact on management accounting: a case study focused on a regional trade unit of the Polish gas company”, Journal of Accounting in Emerging Economies, Vol. 12 No. 5, pp. 790-811, doi: 10.1108/JAEE-04-2021-0125.

Laptev, V.A., Ershova, I.V. and Feyzrakhmanova, D.R. (2022), “Medical applications of artificial intelligence (legal aspects and future prospects)”, Laws, Vol. 11 No. 1, p. 3, doi: 10.3390/laws11010003.

Lehner, O.M., Ittonen, K., Silvola, H., Ström, E. and Wührleitner, A. (2022), “Artificial intelligence based decision-making in accounting and auditing: ethical challenges and normative thinking”, Accounting, Auditing and Accountability Journal, Vol. 35 No. 9, pp. 109-135, doi: 10.1108/AAAJ-09-2020-4934.

Liu, H.-Y. and Zawieska, K. (2020), “From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence”, Ethics and Information Technology, Vol. 22 No. 4, pp. 321-333, doi: 10.1007/s10676-017-9443-3.

Liu, M. (2022), “Assessing human information processing in lending decisions: a machine learning approach”, Journal of Accounting Research, Vol. 60 No. 2, pp. 607-651, doi: 10.1111/1475-679X.12427.

Loi, M. and Spielkamp, M. (2021), “Towards accountability in the use of artificial intelligence for public administrations”, AIES 2021 - Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, Vol. 1, Association for Computing Machinery, doi: 10.1145/3461702.3462631.

Lombardi, R., de Villiers, C., Moscariello, N. and Pizzo, M. (2021), “The disruption of blockchain in auditing – a systematic literature review and an agenda for future research”, Accounting, Auditing and Accountability Journal, Vol. 35 No. 7, doi: 10.1108/AAAJ-10-2020-4992.

Lugli, E. and Bertacchini, F. (2023), “Audit quality and digitalisation: some insights from the Italian context”, Meditari Accountancy Research, Vol. 31 No. 4, pp. 841-860, doi: 10.1108/MEDAR-08-2021-1399.

Maruszewska, E.W., Niesiobędzka, M. and Kołodziej, S. (2023), “Why is it so hard to provide a faithful representation? The impact of indirectly evoked incentives on the accounting policy decision and the accountant’s subsequent post-decision distortion”, Meditari Accountancy Research, Vol. 32 No. 3, doi: 10.1108/MEDAR-11-2021-1505.

Massaro, T.M. and Norton, H. (2016), “Siri-ously? Free speech rights and artificial intelligence”, Northwestern University Law Review, Vol. 110 No. 5, pp. 1169-1194.

Matuszak, Ł. and Różańska, E. (2021), “Towards 2014/95/EU directive compliance: the case of Poland”, Sustainability Accounting, Management and Policy Journal, Vol. 12 No. 5, pp. 1052-1076, doi: 10.1108/SAMPJ-02-2020-0042.

Millar, J. (2014), “Technology as moral proxy: Autonomy and paternalism by design”, 2014 IEEE International Symposium on Ethics in Science, Technology and Engineering, IEEE, pp. 1-7, doi: 10.1109/ETHICS.2014.6893388.

Molnár-Gábor, F. (2020), “Artificial intelligence in healthcare: doctors, patients and liabilities”, Regulating Artificial Intelligence, Springer International Publishing, pp. 337-360, doi: 10.1007/978-3-030-32361-5_15.

Nevejans, N. (2017), “European civil law rules in robotics”, Directorate-General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, European Parliament.

Nicholson-Crotty, S. and Meier, K.J. (2002), “Size doesn’t matter: in defense of Single-State studies”, State Politics and Policy Quarterly, Vol. 2 No. 4, pp. 411-422, doi: 10.1177/153244000200200405.

Noor, N., Hill, S.R. and Troshani, I. (2021), “Artificial intelligence service agents: Role of Parasocial relationship”, Journal of Computer Information Systems, Vol. 62 No. 5, pp. 1-15, doi: 10.1080/08874417.2021.1962213.

Nowik, P. (2021), “Electronic personhood for artificial intelligence in the workplace”, Computer Law and Security Review, Vol. 42, p. 105584, doi: 10.1016/j.clsr.2021.105584.

Nyholm, S. (2018), “Attributing agency to automated systems: Reflections on human–robot collaborations and Responsibility-Loci”, Science and Engineering Ethics, Vol. 24 No. 4, pp. 1201-1219, doi: 10.1007/s11948-017-9943-x.

Palmerini, E., Azzarri, F., Battaglia, F., Bertolini, A., Carnevale, Carpaneto, J., Cavallo, F., et al. (2014), “Regulating emerging robotic technologies in Europe: robotics facing law and ethics”.

Rademacher, T. (2020), “Artificial intelligence and law enforcement”, Regulating Artificial Intelligence, Springer International Publishing, pp. 225-254, doi: 10.1007/978-3-030-32361-5_10.

Rajnai, Z. and Kocsis, I. (2017), “Labor market risks of industry 4.0, digitisation, robots and AI”, 2017 IEEE 15th International Symposium on Intelligent Systems and Informatics (SISY), IEEE, pp. 343-346, doi: 10.1109/SISY.2017.8080580.

Riesen, E. (2022), “The moral case for the development and use of autonomous weapon systems”, Journal of Military Ethics, Vol. 21 No. 2, pp. 132-150, doi: 10.1080/15027570.2022.2124022.

Rizzo, J.R., House, R.J. and Lirtzman, S.I. (1970), “Role conflict and ambiguity in complex organisations”, Administrative Science Quarterly, Vol. 15 No. 2, p. 150, doi: 10.2307/2391486.

Russano, M.B., Narchet, F.M., Kleinman, S.M. and Meissner, C.A. (2014), “Structured interviews of experienced HUMINT interrogators”, Applied Cognitive Psychology, Vol. 28 No. 6, pp. 847-859.

Schemmel, J. (2020), “Artificial intelligence and the financial markets: business as usual?”, Regulating Artificial Intelligence, Springer International Publishing, pp. 255-276, doi: 10.1007/978-3-030-32361-5_11.

Secinaro, S., Dal Mas, F., Brescia, V. and Calandra, D. (2021), “Blockchain in the accounting, auditing and accountability fields: a bibliometric and coding analysis”, Accounting, Auditing and Accountability Journal, Vol. 35 No. 9, doi: 10.1108/AAAJ-10-2020-4987.

Secundo, G., Del Vecchio, P., Dumay, J. and Passiante, G. (2017), “Intellectual capital in the age of big data: establishing a research agenda”, Journal of Intellectual Capital, Vol. 18 No. 2, pp. 242-261, doi: 10.1108/JIC-10-2016-0097.

Simnart, V. (2021), “Artificial intelligence and legal personality”, Entre Tradition et Pragmatisme, pp. 1359-1370.

Simunic, D.A., Ye, M. and Zhang, P. (2017), “The joint effects of multiple legal system characteristics on auditing standards and auditor behavior”, Contemporary Accounting Research, Vol. 34 No. 1, pp. 7-38, doi: 10.1111/1911-3846.12242.

Solaiman, S.M. (2017), “Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy”, Artificial Intelligence and Law, Vol. 25 No. 2, pp. 155-179, doi: 10.1007/s10506-016-9192-3.

Soroka, L., Danylenko, A. and Sokiran, M. (2022), “Legal issues and risks of the artificial intelligence use in space activity”, Philosophy and Cosmology, Vol. 28, pp. 118-135, doi: 10.29202/phil-cosm/28/10.

Sparrow, R. (2007), “Killer robots”, Journal of Applied Philosophy, Vol. 24 No. 1, pp. 62-77, doi: 10.1111/j.1468-5930.2007.00346.x.

Staszkiewicz, P. and Morawska, S. (2019), “The efficiency of bankruptcy law: evidence of creditor protection in Poland”, European Journal of Law and Economics, Vol. 48 No. 3, pp. 365-383, doi: 10.1007/s10657-019-09629-2.

Sutko, D.M. (2020), “Theorising femininity in artificial intelligence: a framework for undoing technology’s gender troubles”, Cultural Studies, Vol. 34 No. 4, pp. 567-592, doi: 10.1080/09502386.2019.1671469.

Szczuka, J. and Hartmann, T. (2019), “AI love you”, in AI Love You, Springer, Cham, doi: 10.1007/978-3-030-19734-6.

Tay, B., Jung, Y. and Park, T. (2014), “When stereotypes meet robots: the double-edge sword of robot gender and personality in human-robot interaction”, Computers in Human Behavior, Vol. 38, pp. 75-84, doi: 10.1016/j.chb.2014.05.014.

Tiron-Tudor, A. and Deliu, D. (2021), “Reflections on the human-algorithm complex duality perspectives in the auditing process”, Qualitative Research in Accounting and Management, ahead-of-print, doi: 10.1108/QRAM-04-2021-0059.

Tucker, B.P. and Alewine, H.C. (2023), “The roles of management control: lessons from the Apollo program”, Contemporary Accounting Research, Vol. 40 No. 2, pp. 1046-1081, doi: 10.1111/1911-3846.12833.

Tyson, T. and Adams, C.A. (2019), “Increasing the scope of assurance research: new lines of inquiry and novel theoretical perspectives”, Sustainability Accounting, Management and Policy Journal, Vol. 11 No. 2, pp. 291-316, doi: 10.1108/SAMPJ-03-2018-0067.

Vesa, M. and Tienari, J. (2022), “Artificial intelligence and rationalised unaccountability: ideology of the elites?”, Organization, Vol. 29 No. 6, pp. 1133-1145, doi: 10.1177/1350508420963872.

Voort, H.G.V.D., Klievink, A.J., Arnaboldi, M. and Meijer, A.J. (2019), “Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?”, Government Information Quarterly, Vol. 36 No. 1, pp. 27-38, doi: 10.1016/j.giq.2018.10.011.

Wang, P. (2019), “On defining artificial intelligence”, Journal of Artificial General Intelligence, Vol. 10 No. 2, pp. 1-37, doi: 10.2478/jagi-2019-0002.

Wieringa, M. (2020), “What to account for when accounting for algorithms: a systematic literature review on algorithmic accountability”, FAT* 2020 – Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, pp. 1-18, doi: 10.1145/3351095.3372833.

Wolf, M.J., Miller, K.W. and Grodzinsky, F.S. (2017), “Why we should have seen that coming”, The ORBIT Journal, Vol. 1 No. 2, pp. 1-12, doi: 10.29297/orbit.v1i2.49.

Yeoman, I. and Mars, M. (2012), “Robots, men and sex tourism”, Futures, Vol. 44 No. 4, pp. 365-371, doi: 10.1016/j.futures.2011.11.004.

Acknowledgements

The authors thank Warren Maroun and anonymous reviewers for their insightful comments. They are grateful to all study participants for their time and commitment, and especially to Magdalena Polz, Justyna Adamczyk, and Agnieszka Baklarz for their invaluable assistance. They also extend their appreciation to Karen McBride, Jia Liu, and the participants of the Accounting for Planet, People and Profit Conference; the Joint Conference of the BAFA Corporate Finance and Asset Pricing SIG and the Northern Area Group; EAA Congress 2024 for their stimulating discussions. This publication is open access thanks to SGH Warsaw School of Economics and the support of Maciej Bednarczyk.

Corresponding author

Piotr Staszkiewicz can be contacted at: piotr.staszkiewicz@mail.com
