Search results

1 – 10 of over 2000
Article
Publication date: 31 August 2005

Keith Miller and David Larson

Abstract

Traditionally, philosophers have ascribed moral agency almost exclusively to humans (Eshleman, 2004). Early writing about moral agency can be traced to Aristotle (Louden, 1989) and Aquinas (1997). In addition to human moral agents, Aristotle discussed the possibility of moral agency of the Greek gods and Aquinas discussed the possibility of moral agency of angels. In the case of angels, one difficulty in ascribing moral agency was the suspicion that angels lacked sufficient independence from God to be credited with genuine moral choices. Recently, new candidates have been suggested for non-human moral agency. Floridi and Sanders (2004) suggest that artificial intelligence (AI) programs that meet certain criteria may attain the status of moral agents; they suggest a redefinition of moral agency to clarify the relationship between artificial and human agents. Other philosophers, as well as scholars in Science and Technology Studies, are studying the possibility that artifacts not designed to mimic human intelligence still embody a kind of moral agency. For example, there has been a lively discussion about the moral intent and the consequential effects of speed bumps (Latour, 1994; Keulartz et al., 2004). The connections and distributed intelligence of a network are another candidate being considered for moral agency (Allen, Varner & Zinser, 2000). These philosophical arguments may have practical consequences for software developers, and for the people affected by computing. In this paper, we examine ideas about artificial moral agency from the perspective of a software developer.

Details

Journal of Information, Communication and Ethics in Society, vol. 3 no. 3
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 13 March 2007

Kenneth Einar Himma

Abstract

Purpose

Information ethics, as is well known, has emerged as an independent area of ethical and philosophical inquiry. There are a number of academic journals devoted entirely to the numerous ethical issues that arise in connection with the new information communication technologies; these issues include a host of intellectual property, information privacy, and security issues of concern to librarians and other information professionals. In addition, there are a number of major international conferences devoted to information ethics every year. It would hardly be overstating the matter to say that information ethics is as “hot” an area of theoretical inquiry as medical ethics. The purpose of this paper is to provide an overview of these and related issues.

Design/methodology/approach

The paper presents a review of relevant information ethics literature together with the author's assessment of the arguments.

Findings

There are issues that are more abstract and basic than the substantive issues with which most information ethics theorizing is concerned. These issues are thought to be “foundational” in the sense that we cannot fully succeed in giving an analysis of the concrete problems of information ethics (e.g. are legal intellectual property rights justifiably protected?) until these issues are adequately addressed.

Originality/value

The paper offers a needed survey of foundational issues in information ethics.

Details

Library Hi Tech, vol. 25 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 11 May 2015

Anne Gerdes and Peter Øhrstrøm

Abstract

Purpose

The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test (MTT) and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent (AMA). Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out to address fundamental challenges within the field of robot ethics regarding the formal representation of moral theories and standards. Here, three design approaches to AMAs are typically available: top-down, theory-driven models; bottom-up approaches, which set out to model moral behaviour by means of adaptive-learning models such as neural networks; and, finally, hybrid models, which involve components from both top-down and bottom-up approaches to the modelling of moral agency. With inspiration from Allen and Wallach (2009, 2000) as well as Prior (1949, 2003), we elaborate on theoretically driven approaches to machine ethics by introducing deontic tense logic. Finally, within this framework, we explore the character of human interaction with a robot which has successfully passed an MTT.

Design/methodology/approach

The ideas in this paper reflect preliminary theoretical considerations regarding the possibility of establishing an MTT based on the evaluation of moral behaviour, which focusses on moral reasoning regarding possible actions. The thoughts reflected fall within the field of normative ethics and apply deontic tense logic to discuss the possibilities and limitations of artificial moral agency.

Findings

The authors stipulate a formalisation of a logic of obligation, time and modality, which may serve as a candidate for implementing a system corresponding to an MTT in a restricted sense. Hence, the authors argue that to establish a present moral obligation, we need to be able to make a description of the actual situation and the relevant general moral rules. Such a description can never be complete, as the combination of exhaustive knowledge about both situations and rules would involve a God's-eye view, enabling one to know all there is to know and take everything relevant into consideration before making a perfect moral decision to act upon. Consequently, due to this frame problem, from an engineering point of view, we can only strive to design a robot that is supposed to operate within a restricted domain and within a limited space-time region. Given such a setup, the robot has to be able to perform moral reasoning based on a formal description of the situation and any possible future developments. Although a system of this kind may be useful, it is clearly also limited to a particular context. It seems that it will always be possible to find special cases (outside the context for which it was designed) in which a given system does not pass the MTT. This calls for a new design of moral systems with trust-related components which will make it possible for the system to learn from experience.
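
As a minimal, hedged illustration of the kind of formalisation described here (a sketch only, not the authors' actual system), a present moral obligation can be derived from a formal description of the situation together with a conditional moral rule, writing O for "it is obligatory that" and F for "at some future time":

```latex
% Hedged sketch: one possible derivation pattern in a deontic tense logic.
% D(s) -- formal description of the actual situation s (assumed predicate)
% O    -- deontic operator: "it is obligatory that"
% F    -- tense operator: "at some future time"
D(s) \;\land\; \bigl(D(s) \rightarrow O\,F\,p\bigr) \;\vdash\; O\,F\,p
```

The frame problem mentioned above enters through the assumption that D(s) can be written down completely, which is only plausible within a restricted domain and a limited space-time region.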

Originality/value

It is without doubt that in the near future we are going to be faced with advanced social robots with increasing autonomy, and our growing engagement with these robots calls for the exploration of ethical issues and stresses the importance of informing the process of engineering ethical robots. Our contribution can be seen as an early step in this direction.

Details

Journal of Information, Communication and Ethics in Society, vol. 13 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 8 August 2016

Rollin M. Omari and Masoud Mohammadian

Abstract

Purpose

The developing academic field of machine ethics seeks to make artificial agents safer as they become more pervasive throughout society. In contrast to computer ethics, machine ethics is concerned with the behavior of machines toward human users and other machines. This study aims to use an action-based ethical theory founded on the combinational aspects of deontological and teleological theories of ethics in the construction of an artificial moral agent (AMA).

Design/methodology/approach

The decision results derived by the AMA are acquired via fuzzy logic interpretation of the relative values of the steady-state simulations of the corresponding rule-based fuzzy cognitive map (RBFCM).
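
As a rough, hedged sketch of the kind of computation this describes (not the authors' actual RBFCM; the concepts, weights and thresholds below are illustrative assumptions), a generic fuzzy cognitive map can be iterated to a steady state whose concept activations are then given a fuzzy reading:

```python
# Hedged sketch: generic fuzzy cognitive map (FCM) steady-state simulation.
# Concept names, weights and the 0.5 threshold are illustrative assumptions,
# not the paper's actual rule-based FCM.
import numpy as np

def sigmoid(x, lam=1.0):
    return 1.0 / (1.0 + np.exp(-lam * x))

def fcm_steady_state(weights, state, max_iter=100, tol=1e-5):
    """Iterate the standard FCM update until concept activations stabilise."""
    for _ in range(max_iter):
        # A_i(t+1) = f( A_i(t) + sum_j A_j(t) * W_ji )
        new_state = sigmoid(state + weights.T @ state)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state
        state = new_state
    return state

# Illustrative concepts: two competing prima facie duties and one candidate action.
#              duty_A  duty_B  action
W = np.array([[ 0.0,   -0.4,    0.7],   # duty_A inhibits duty_B, supports the action
              [-0.4,    0.0,   -0.6],   # duty_B argues against the action
              [ 0.0,    0.0,    0.0]])  # the action node has no outgoing influence

initial = np.array([0.8, 0.3, 0.0])     # the situation activates duty_A strongly
steady = fcm_steady_state(W, initial)

# Crude fuzzy-style reading of the steady-state activation of the action node.
verdict = "lean towards acting" if steady[2] > 0.5 else "lean against acting"
print(steady, verdict)
```

The paper's fuzzy logic interpretation would map such relative steady-state values through linguistic rules rather than the single threshold used in this sketch.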

Findings

Through the use of RBFCMs, this paper illustrates the possibility of incorporating ethical components into machines, where latent semantic analysis (LSA) and RBFCMs can be used to model dynamic and complex situations, and to provide abilities in acquiring causal knowledge.

Research limitations/implications

This approach is especially appropriate for data-poor and uncertain situations common in ethics. Nonetheless, to ensure that a machine with an ethical component can function autonomously in the world, research in artificial intelligence will need to further investigate the representation and determination of ethical principles, the incorporation of these ethical principles into a system’s decision procedure, ethical decision-making with incomplete and uncertain knowledge, the explanation for decisions made using ethical principles and the evaluation of systems that act based upon ethical principles.

Practical implications

To date, the conducted research has contributed to a theoretical foundation for machine ethics through exploration of the rationale and the feasibility of adding an ethical dimension to machines. Further, the constructed AMA illustrates the possibility of utilizing an action-based ethical theory that provides guidance in ethical decision-making according to the precepts of its respective duties. The use of LSA illustrates its powerful capabilities in understanding text and its potential application as an information retrieval system in AMAs. The use of cognitive maps provides an approach and a decision procedure for resolving conflicts between different duties.

Originality/value

This paper suggests that cognitive maps could be used in AMAs as tools for meta-analysis, where comparisons regarding multiple ethical principles and duties can be examined and considered. With cognitive mapping, complex and abstract variables that cannot easily be measured but are important to decision-making can be modeled. This approach is especially appropriate for data-poor and uncertain situations common in ethics.

Details

Journal of Information, Communication and Ethics in Society, vol. 14 no. 3
Type: Research Article
ISSN: 1477-996X

Book part
Publication date: 7 April 2023

Amir Rafiee, Yong Wu and Abdul Sattar

Abstract

Autonomous Vehicles (AVs) promise great benefits, including improving safety, reducing congestion, and providing mobility for the elderly and the disabled; however, there are discussions on how they should be programmed to respond in an ethical dilemma where a choice has to be made between two or more courses of action resulting in loss of life. To explore this question, the authors examine the current academic literature where the application of existing philosophical theories to ethical settings in AVs has been discussed, specifically utilitarianism and deontological ethics. These two theories are widely regarded as rivals, and are useful in demonstrating the complex ethical issues that must be addressed when programming AVs. The authors also look at the legal framework, specifically normative principles in criminal law used to regulate difficult choices in an emergency, which some have suggested as a plausible defence for manufacturers who seek to program AVs using a utilitarian framework. These include the doctrine of necessity, the sudden emergency doctrine, and the duty of care. The authors critique each theory, highlighting their benefits and limitations. The authors then make a case for programming AVs using a randomized decision system (RDS) and propose that it could be a viable solution in dealing with certain moral dilemmas. Finally, using this assessment, the authors suggest certain objectives for manufacturers and regulators in designing and programming AVs that are technically viable, and would make them morally acceptable and fair.
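
As a hedged illustration of the RDS idea (the option names and the choice of a uniform lottery are assumptions made for this sketch, not the chapter's specification), such a system could pick among equally tragic courses of action by a fair lottery instead of ranking the people involved:

```python
# Hedged sketch of a randomised decision system (RDS) for an AV dilemma.
# The option names and the uniform lottery are illustrative assumptions.
import secrets

def rds_choose(options):
    """Select one course of action uniformly at random, using a cryptographic
    source so the outcome cannot be predicted or covertly biased."""
    if not options:
        raise ValueError("an ethical dilemma needs at least one available option")
    return options[secrets.randbelow(len(options))]

# Illustrative dilemma: every manoeuvre causes harm and none is ranked above another.
dilemma = ["swerve_left", "stay_in_lane"]
print(rds_choose(dilemma))
```

Drawing on an unpredictable source is one way to support the fairness claim: no party can be systematically favoured by the manufacturer's programming.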

Content available
Article
Publication date: 19 October 2010

Details

Industrial Robot: An International Journal, vol. 37 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 27 September 2011

Colin Allen and Wendell Wallach

Abstract

Purpose

In spite of highly publicized competitions where computers have prevailed over humans, the intelligence of computer systems still remains quite limited in comparison to that of humans. Present day computers provide plenty of information but lack wisdom. The purpose of this paper is to investigate whether reliance on computers with limited intelligence might undermine the quality of the education students receive.

Design/methodology/approach

Using a conceptual approach, the authors take the performance of IBM's Watson computer against human quiz competitors as a starting point to explore how society, and especially education, might change in the future when everyone has access to desktop technology for retrieving information. They explore the issue of placing excessive trust in such machines without the capacity to evaluate the quality and reliability of the information provided.

Findings

The authors find that the day when computing machines surpass human intelligence is much further in the future than predicted by some forecasters. Addressing the problem of dependency on information technology, they envisage a technical solution (wiser machines which not only return the search results, but also help make them comprehensible) but find that although it is relatively simple to engineer knowledge distribution and access, it is more difficult to engineer wisdom.

Practical implications

Creating computers that are wise will be difficult, but educating students to be wise in the age of computers may also be quite difficult. For the future, one might explore the development of computer tools that demonstrate sensitivity to alternative answers to difficult questions, different courses of action, and their own limitations. For the present, one will need to train students to appreciate the limitations inherent in the technologies on which they have become dependent.

Originality/value

Critical thinking, innovation, and wisdom require skills beyond the kinds of answers computers give now or are likely to provide in the coming decade.

Details

On the Horizon, vol. 19 no. 4
Type: Research Article
ISSN: 1074-8121

Article
Publication date: 10 September 2019

Yong Tang, Jason Xiong, Rafael Becerril-Arreola and Lakshmi Iyer

Abstract

Purpose

The purpose of this paper is fourfold: first, to provide the first systematic study on the ethics of blockchain, mapping its main socio-technical challenges in technology and applications; second, to identify ethical issues of blockchain; third, to propose a conceptual framework of blockchain ethics study; fourth, to discuss ethical issues for stakeholders.

Design/methodology/approach

The paper employs a literature review, the development of a research agenda, and framework development.

Findings

The ethics of blockchain and its applications is essential for technology adoption. There is a void of research on blockchain ethics. The authors propose a first theoretical framework of blockchain ethics. A research agenda is proposed for future research. Finally, the authors recommend measures for stakeholders to facilitate the ethical adequacy of blockchain implementations and future Information Systems (IS) research directions. This research raises timely awareness and stimulates further debate on the ethics of blockchain in the IS community.

Originality/value

First, this work provides timely systematic research on blockchain ethics. Second, the authors propose the first research framework of blockchain ethics. Third, the authors identify key research questions of blockchain ethics. Fourth, this study contributes to the understanding of blockchain technology and its societal impacts.

Article
Publication date: 7 April 2020

Vinh Nhat Lu, Jochen Wirtz, Werner H. Kunz, Stefanie Paluch, Thorsten Gruber, Antje Martins and Paul G. Patterson

Abstract

Purpose

Robots are predicted to have a profound impact on the service sector. The emergence of robots has attracted increasing interest from business scholars and practitioners alike. In this article, we undertake a systematic review of the business literature about the impact of service robots on customers and employees with the objective of guiding future research.

Design/methodology/approach

We analyzed the literature on service robots as they relate to customers and employees in business journals listed in the Financial Times top 50 journals plus all journals covered in the cross-disciplinary SERVSIG literature alerts.

Findings

The analysis of the identified studies yielded multiple observations about the impact of service robots on customers (e.g. overarching frameworks on acceptance and usage of service robots; characteristics of service robots and anthropomorphism; and potential for enhanced and deteriorated service experiences) and service employees (e.g. employee benefits such as reduced routine work, enhanced productivity and job satisfaction; potential negative consequences such as loss of autonomy and a range of negative psychological outcomes; opportunities for human–robot collaboration; job insecurity; and robot-related up-skilling and development requirements). We also conclude that current research on service robots is fragmented, is largely conceptual in nature and focused on the initial adoption stage. We feel that more research is needed to build an overarching theory. In addition, more empirical research is needed, especially on the long(er)-term usage of service robots, their effects on actual behaviors, well-being, and the potential downsides and (ethical) risks for customers and service employees.

Research limitations/implications

Our review focused on the business and service literature. Future work may want to include additional literature streams, including those in computer science, engineering and information systems.

Originality/value

This article is the first to synthesize the business and service literature on the impact of service robots on customers and employees.

Details

Journal of Service Theory and Practice, vol. 30 no. 3
Type: Research Article
ISSN: 2055-6225

Open Access
Article
Publication date: 26 September 2018

Jochen Wirtz, Paul G. Patterson, Werner H. Kunz, Thorsten Gruber, Vinh Nhat Lu, Stefanie Paluch and Antje Martins

Abstract

Purpose

The service sector is at an inflection point with regard to productivity gains and service industrialization similar to the industrial revolution in manufacturing that started in the eighteenth century. Robotics in combination with rapidly improving technologies like artificial intelligence (AI), mobile, cloud, big data and biometrics will bring opportunities for a wide range of innovations that have the potential to dramatically change service industries. The purpose of this paper is to explore the potential role service robots will play in the future and to advance a research agenda for service researchers.

Design/methodology/approach

This paper uses a conceptual approach that is rooted in the service, robotics and AI literature.

Findings

The contribution of this paper is threefold. First, it provides a definition of service robots, describes their key attributes, contrasts their features and capabilities with those of frontline employees, and provides an understanding for which types of service tasks robots will dominate and where humans will dominate. Second, this paper examines consumer perceptions, beliefs and behaviors as related to service robots, and advances the service robot acceptance model. Third, it provides an overview of the ethical questions surrounding robot-delivered services at the individual, market and societal level.

Practical implications

This paper helps service organizations and their management, service robot innovators, programmers and developers, and policymakers better understand the implications of a ubiquitous deployment of service robots.

Originality/value

This is the first conceptual paper that systematically examines key dimensions of robot-delivered frontline service and explores how these will differ in the future.

Details

Journal of Service Management, vol. 29 no. 5
Type: Research Article
ISSN: 1757-5818
