Abstract
Purpose
The purpose of this paper is to consider the question of equipping fully autonomous robotic weapons with the capacity to kill. Current ideas concerning the feasibility and advisability of developing and deploying such weapons, including the proposal that they be equipped with a so-called “ethical governor”, are reviewed and critiqued. The perspective adopted for this study includes software engineering practice as well as ethical and legal aspects of the use of lethal autonomous robotic weapons.
Design/methodology/approach
In the paper, the author surveys and critiques the applicable literature.
Findings
In the current paper, the author argues that fully autonomous robotic weapons with the capacity to kill should neither be developed nor deployed; that research directed toward equipping such weapons with a so-called “ethical governor” is immoral and serves as an “ethical smoke-screen” to legitimize research and development of these weapons; and that, as an ethical duty, engineers and scientists should condemn and refuse to participate in their development.
Originality/value
This is a new approach to the argument for banning lethal autonomous robotic weapons, based on the classical work of Joseph Weizenbaum, Helen Nissenbaum and others.
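For readers unfamiliar with the “ethical governor” proposal the paper critiques, it is, in essence, a constraint filter interposed between a weapon system's targeting logic and its actuators. The Python sketch below is a minimal illustration of that idea only; the predicates, thresholds and rules are hypothetical and are not drawn from the paper or the original proposal.

    # Illustrative only: a constraint filter of the kind an "ethical governor"
    # is proposed to be. All predicates, thresholds and rules are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class CandidateAction:
        target_is_combatant: bool
        expected_collateral: float       # hypothetical harm estimate, arbitrary units
        proportionality_limit: float = 1.0

    def governor_permits(action: CandidateAction) -> bool:
        """Veto any candidate action that violates an encoded constraint."""
        if not action.target_is_combatant:                               # distinction
            return False
        if action.expected_collateral > action.proportionality_limit:    # proportionality
            return False
        return True

    print(governor_permits(CandidateAction(True, 0.2)))    # True
    print(governor_permits(CandidateAction(False, 0.0)))   # False

A filter of this shape can only be as sound as its sensing inputs and encoded rules, which is where the paper's critique takes hold.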
Abstract
This chapter presents reflections and considerations regarding artificial intelligence (AI) and contemporary and future warfare. As “an evolving collection of computational techniques for solving problems,” AI holds great potential for national defense endeavors (Rubin, Stafford, Mertoguno, & Lukos, 2018). Though decades old, AI is becoming an integral instrument of war for contemporary warfighters. But there are also challenges and uncertainties. Johannsen, Solka, and Rigsby (2018), scientists who work with AI and national defense, ask, “are we moving too quickly with a technology we still don't fully understand?” Their concern is not whether AI should be used, but whether its research and development, and the pursuit of its usage, are following a course that will reap the desired rewards. Although they have long-term optimism, they ask: “Until theory can catch up with practice, is a system whose outputs we can neither predict nor explain really all that desirable?” Time (the speed of development) is a factor, but so too are research and development priorities, guidelines, and strong accountability mechanisms.
Sahil Sholla and Iraq Ahmad Reshi
Abstract
Purpose
This paper is not concerned with the “why” of ethics. Such questions are typically of interest to philosophers and are outside the scope of this work. In the next section, the authors offer a look into the “what” of ethics, i.e. the various types and subtypes of ethics. Subsequently, the authors explore the “how” of ethics by summarising the various computational approaches to ethical reasoning offered by researchers in the field.
Design/methodology/approach
The approaches are classified based on the application domain, ethical theory, agent type and design paradigm adopted. Moreover, promising research directions towards ethical reasoning are presented.
Findings
Since the field is essentially interdisciplinary in nature, collaborative research from such areas as neuroscience, psychology, artificial intelligence, law and social sciences is necessary. It is hoped that this paper offers much-needed insight into computational approaches for ethical reasoning, paving the way for researchers to further engage with the question.
Originality/value
In this paper, the authors discuss various computational approaches proposed by researchers to implement ethics. Although none of the approaches adequately answers the question, it is necessary to engage with the research effort to make a substantial contribution to this emerging research area. Though some effort has been made in the design of logic-based systems, they are largely in their infancy and merit considerable research.
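As a rough illustration of what a “logic-based” computational approach to ethical reasoning can look like, the sketch below encodes a handful of prioritised rules over simple facts. The predicates and rule ordering are hypothetical and serve only to make the survey's subject matter concrete; they do not reproduce any system discussed by the authors.

    # Minimal sketch of a logic/rule-based ethical reasoner of the kind surveyed.
    # The predicates and rule ordering are hypothetical illustrations.
    from typing import Callable, Dict, List, Tuple

    Facts = Dict[str, bool]
    Rule = Tuple[str, Callable[[Facts], bool]]   # (verdict, condition over facts)

    RULES: List[Rule] = [
        ("forbidden",  lambda f: f.get("causes_harm", False)),
        ("obligatory", lambda f: f.get("prevents_serious_harm", False)),
        ("permitted",  lambda f: True),          # default when nothing stronger fires
    ]

    def evaluate(facts: Facts) -> str:
        """Return the first verdict whose condition holds; rule order encodes priority."""
        for verdict, condition in RULES:
            if condition(facts):
                return verdict
        return "permitted"

    print(evaluate({"causes_harm": True}))             # -> forbidden
    print(evaluate({"prevents_serious_harm": True}))   # -> obligatory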
Abstract
Purpose
This first part of a two-part paper aims to provide an insight into the ethical and legal issues associated with certain classes of robot. This part is concerned with ethics.
Design/methodology/approach
Following an introduction, this paper first considers the ethical deliberations surrounding robots used in warfare and healthcare. It then addresses the issue of robot truth and deception and subsequently discusses some on-going deliberations and possible ways forward. Finally, brief conclusions are drawn.
Findings
Robot ethics is the topic of wide-ranging debate and encompasses such diverse applications as military drones and robotic carers. Many ethical considerations have been raised, including philosophical issues such as moral behaviour and truth and deception. Preliminary research suggests that some of these concerns may be ameliorated through the use of software which encompasses ethical principles. It is widely recognised that a multidisciplinary approach is required, and there is growing evidence of this.
Originality/value
This paper provides an insight into the highly topical and complex issue of robot ethics.
Abstract
Harnessing the power and potential of Artificial Intelligence (AI) continues a centuries-old trajectory of the application of science and knowledge for the benefit of humanity. Such an endeavor has great promise, but also the possibility of creating conflict and disorder. This chapter draws upon the strengths of the previous chapters to provide readers with a purposeful assessment of the current AI security landscape, concluding with four key considerations for a globally secure future.
Kenneth D. Lawrence, Ronald Klimberg and Sheila M. Lawrence
Abstract
This paper details the development of a multi-objective mathematical programming model for audit sampling of balances for accounts receivable. The nonlinear nature of the model structure requires the use of a nonlinear solution algorithm, such as the generalized reduced gradient (GRG) method or the genetic algorithm embedded in a Solver spreadsheet modeling system, to obtain appropriate results.
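The model itself is developed in the paper; purely as a hedged illustration of the general shape of such a formulation, the sketch below poses a stylized stratified audit-sampling problem that trades off audit cost against sampling variance. All stratum data, weights and the precision target are invented for illustration, and SciPy's SLSQP routine merely stands in for the GRG or genetic-algorithm Solver options the abstract names.

    # Hedged sketch: a stylized multi-objective nonlinear audit-sampling model.
    # All numbers, weights and the precision constraint are hypothetical.
    import numpy as np
    from scipy.optimize import minimize

    N = np.array([400.0, 250.0, 100.0])     # hypothetical stratum sizes (accounts)
    S = np.array([120.0, 300.0, 900.0])     # hypothetical stratum std devs of balances ($)
    cost = np.array([5.0, 8.0, 12.0])       # hypothetical per-item audit cost ($)
    w_cost, w_var = 0.5, 0.5                # objective weights (a modelling choice)

    def variance(n):
        # Stratified-estimator variance; nonlinear in the sample sizes n.
        return np.sum((N ** 2) * (S ** 2) * (1.0 / n - 1.0 / N))

    def objective(n):
        # Weighted sum of total audit cost and (scaled) sampling variance.
        return w_cost * np.dot(cost, n) + w_var * variance(n) / 1e6

    cons = [{"type": "ineq", "fun": lambda n: 2.5e8 - variance(n)}]  # precision target
    bounds = [(2.0, Ni) for Ni in N]        # at least 2 items, at most the stratum size
    x0 = np.array([50.0, 50.0, 50.0])       # feasible starting point

    res = minimize(objective, x0, method="SLSQP", bounds=bounds, constraints=cons)
    print(np.round(res.x, 1), round(res.fun, 2))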