Anton Saveliev and Denis Zhurenkov
Abstract
Purpose
The purpose of this paper is to review and analyze how the development and utilization of artificial intelligence (AI) technologies for social responsibility are defined in the national AI strategies of the USA, Russia and China.
Design/methodology/approach
The notion of responsibility concerning AI is currently not legally defined by any country in the world. The authors of this research use a methodology based on Luciano Floridi's Unified framework of five principles for AI in society to determine how social responsibility is implemented in the AI strategies of the USA, Russia and China.
Findings
All three strategies for the development of AI in the USA, Russia and China, as evaluated in the paper, contain components aimed at achieving public responsibility and responsible use of AI. The Unified framework of five principles for AI in society, developed by L. Floridi, can be used as a viable assessment tool to determine, at least in general terms, how social responsibility is implied and implemented in national strategic documents in the field of AI. However, the authors call for further development of mutually recognizable ethical models for socially beneficial AI.
Practical implications
This study allows us to better understand the linkages, overlaps and differences between modern philosophy of information, AI ethics, social responsibility and government regulation. The analysis provided in this paper can serve as a basic blueprint for future attempts to define how social responsibility is understood and applied by government decision-makers.
Originality/value
The analysis provided in the paper, however general and empirical it may be, is a first-time example of how the Unified framework of five principles for AI in society can be applied as an assessment tool to determine social responsibility in AI-related official documents.
John M. LaVelle, Trupti Sarode and Satlaj Dighe
Abstract
Educators strive to develop and implement high impact educational experiences, which are critical to ensuring university courses and curricula serve as memorable and transferable learning experiences for students. It is not clear, however, which experiences are exceptional from a student perspective, or what kinds of illustrative examples exist in applied disciplines. In this chapter, we ground our discussion of high impact educational experiences in the field of program evaluation, contextualize it as organized at the University of Minnesota, describe three experiences that have been repeatedly described as impactful by students, and engage in a collective dialogue as teachers and learners.
Abstract
Purpose
The purpose of this paper is to consider the question of equipping fully autonomous robotic weapons with the capacity to kill. Current ideas concerning the feasibility and advisability of developing and deploying such weapons, including the proposal that they be equipped with a so-called “ethical governor”, are reviewed and critiqued. The perspective adopted for this study includes software engineering practice as well as ethical and legal aspects of the use of lethal autonomous robotic weapons.
Design/methodology/approach
In the paper, the author surveys and critiques the applicable literature.
Findings
In the current paper, the author argues that fully autonomous robotic weapons with the capacity to kill should be neither developed nor deployed; that research directed toward equipping such weapons with a so-called "ethical governor" is immoral and serves as an "ethical smoke-screen" to legitimize research and development of these weapons; and that, as an ethical duty, engineers and scientists should condemn and refuse to participate in their development.
Originality/value
This is a new approach to the argument for banning lethal autonomous robotic weapons, based on the classic work of Joseph Weizenbaum, Helen Nissenbaum and others.
Auxane Boch and Bethany Rhea Thomas
Abstract
Purpose
Social robotics is a rapidly growing application of artificial intelligence (AI) in society, encompassing an expanding range of applications. This paper aims to contribute to the ongoing integration of psychology into social robotics ethics by reviewing current theories and empirical findings related to human–robot interaction (HRI) and addressing critical points of contention within the ethics discourse.
Design/methodology/approach
The authors explore the factors influencing the acceptance of social robots, examine the development of relationships between humans and robots and discuss three prominent controversies: deception, dehumanisation and violence.
Findings
The authors first propose design factors that allow for positive interaction with the robot, and then discuss specific dimensions to evaluate when designing a social robot to ensure ethically designed technology, building on the four ethical principles for trustworthy AI. The final section of the paper outlines and offers explicit recommendations for future research endeavours.
Originality/value
This paper provides originality and value to the field of social robotics ethics by integrating psychology into the ethical discourse and offering a comprehensive understanding of HRI. It introduces three ethical dimensions and provides recommendations for implementing them, contributing to the development of ethical design in social robots and trustworthy AI.
Yong Tang, Jason Xiong, Rafael Becerril-Arreola and Lakshmi Iyer
Abstract
Purpose
The purpose of this paper is fourfold: first, to provide the first systematic study on the ethics of blockchain, mapping its main socio-technical challenges in technology and applications; second, to identify ethical issues of blockchain; third, to propose a conceptual framework of blockchain ethics study; fourth, to discuss ethical issues for stakeholders.
Design/methodology/approach
The paper employs literature review, research agenda formulation and framework development.
Findings
The ethics of blockchain and its applications is essential for technology adoption, yet there is a void of research on blockchain ethics. The authors propose a first theoretical framework of blockchain ethics, and a research agenda is proposed for future research. Finally, the authors recommend measures for stakeholders to facilitate the ethical adequacy of blockchain implementations and suggest future Information Systems (IS) research directions. This research raises timely awareness and stimulates further debate on the ethics of blockchain in the IS community.
Originality/value
First, this work provides timely systematic research on blockchain ethics. Second, the authors propose the first research framework of blockchain ethics. Third, the authors identify key research questions of blockchain ethics. Fourth, this study contributes to the understanding of blockchain technology and its societal impacts.