Teemu Birkstedt, Matti Minkkinen, Anushree Tandon and Matti Mäntymäki
Abstract
Purpose
Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.
Design/methodology/approach
The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).
Findings
The results of the authors’ review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. Moreover, the authors’ review identified four themes in the AIG literature: technology, stakeholders and context, regulation and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, lack of attention to the AIG context, uncertain effectiveness of ethical principles and regulation, and insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical, stakeholder and contextual, regulatory, and process.
Research limitations/implications
To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure an organization's use of AI technologies aligns with its strategies, objectives, and values, complete with legal requirements, ethical principles and the requirements set by stakeholders. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.
Practical implications
For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.
Social implications
For society, the authors’ review elucidates the multitude of stakeholders involved in AI governance activities and the complexities related to balancing the needs of different stakeholders.
Originality/value
By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the authors’ review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms.
Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam and Matti Mäntymäki
Abstract
Purpose
Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge for developers, let alone non-technical end users.
Design/methodology/approach
The authors investigate how AI systems and their decisions ought to be explained for end users through a systematic literature review.
Findings
The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and there is no single best solution that fits all cases.
Research limitations/implications
Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.
Originality/value
This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and future research agenda.
Albandari Alshahrani, Anastasia Griva, Denis Dennehy and Matti Mäntymäki
Abstract
Purpose
Artificial intelligence (AI) has received much attention due to its Promethean-like powers to transform the management and delivery of public sector services. Due to the proliferation of research articles in this context, research to date is fragmented into research streams based on different types of AI technologies or a specific government function of the public sector (e.g. health, education). The purpose of this study is to synthesize this literature, identify challenges and opportunities, and offer a research agenda that guides future inquiry.
Design/methodology/approach
This paper aggregates this fragmented body of knowledge by conducting a systematic literature review of AI research in public sector organisations in the Chartered Association of Business Schools (CABS)-ranked journals between 2012 and 2023.
Findings
The search strategy resulted in the retrieval of 2,870 papers, of which 61 were identified as primary papers relevant to this research. These primary papers are mapped to the ten classifications of the functions of government defined by the Organisation for Economic Co-operation and Development (OECD), and the reported challenges and benefits are aggregated.
Originality/value
This study advances knowledge by providing a state-of-the-art review of AI research based on the OECD classifications of government functions, reporting the claimed benefits and challenges, and providing a research agenda for future research.