Search results

1 – 10 of over 2000
Open Access
Article
Publication date: 31 October 2018

Barbara Fedock, Armando Paladino, Liston Bailey and Belinda Moses


Abstract

Purpose

The purpose of this paper is to examine how robotics program developers perceived the role of emulating human ethics when programming robots for use in educational settings. The study drew on a purposive sample of online professional sites of robotics program developers that focused on this role. Content related to the developers’ perceptions of educational uses of robots and ethics was analyzed.

Design/methodology/approach

The design for this study was a qualitative summative content analysis. The researchers analyzed keywords related to a single phenomenon: the emulation of human ethics programmed into robots. The articles selected for analysis were published by robotics program developers who focused on robots and ethics in education. All articles analyzed in this study were posted online, and the public has complete access to them.

Findings

Robotics program developers recognized the importance of situational interpretation and implementation of human ethics. To facilitate flexibility, they programmed robots to search computer-based ethics-related research, frameworks and case studies. The developers acknowledged the importance of human ethics but felt more flexibility was needed in how classroom models of human ethics were created, developed and used. Some expressed questions and concerns about implementing flexible levels of robot ethical accountability and behavior in the educational setting. Robotics program developers argued that educational robots were not designed or programmed to emulate human ethics.

Research limitations/implications

One limitation of the study was that only 32 publicly available online articles written by robotics program designers were analyzed through qualitative content analysis to find themes and patterns; findings from qualitative content analysis may not be as generalizable as those from quantitative studies. Another limitation was that only a limited number of articles by robotics program developers addressed robotics and the emulation of human ethics in the educational setting.

Practical implications

The significance of this study is the need for a renewed global initiative in education to promote debates, research and ongoing collaboration with scientific leaders on ethics and the programming of robots. The implication for education leaders is to provide ongoing professional development on the role of ethics in education and to create best practices for using robots in education to promote increased student learning and enhance the teaching process.

Social implications

The implications of this study are global. All cultures will be affected by the shift robotics brings to how students are taught ethical decision making in the educational setting. Robotics program developers will create computational educational moral models that will replace archetypal educational ethics frameworks. Because robotics program developers do not classify robots as human, educators, parents and communities will continue to question the use of robots in educational settings, and they will challenge robotics ethical dilemmas, moral standards and computational findings. Examining robotics program developers’ perspectives through different lenses may help close the gap and establish a new understanding among all stakeholders.

Originality/value

Four university doctoral faculty members conducted this content analysis study. After discussions on robotics and educational ethics, the researchers discovered a gap in the literature on the use of robots in the educational setting and the emulation of human ethics in robots; they therefore formed a group to research the topic and explore the implications for educators. No personal gains resulted from the study, and all research was original.

Details

Journal of Research in Innovative Teaching & Learning, vol. 11 no. 2
Type: Research Article
ISSN: 2397-7604


Article
Publication date: 16 August 2013

Aimee van Wynsberghe


Abstract

Purpose

With the rapid and pervasive introduction of robots into human environments, ethics scholars along with roboticists are asking how ethics can be applied to the discipline of robotics. The purpose of this paper is to provide a concrete example of incorporating ethics into the design process of a robot in healthcare.

Design/methodology/approach

The approach for including ethics in the design process of care robots used in this paper is called the Care‐Centered Value Sensitive Design (CCVSD) approach. The CCVSD approach presented here provides both an outline of the components demanding ethical attention as well as a step‐by‐step manner in which such considerations may proceed in a prospective manner throughout the design process of a robot. This begins from the moment of idea generation and continues throughout the design of various prototypes. In this paper, this approach's utility and prospective methodology are illustrated by proposing a novel care robot, the “wee‐bot”, for the collection and testing of urine samples in a hospital context.

Findings

The results of applying the CCVSD approach inspired the design of a novel robot for the testing of urine in pediatric oncology patients – the “wee‐bot” robot – and showed that it is possible to successfully incorporate ethics into the design of a care robot by exploring and prescribing design requirements. In other words, the use of the CCVSD approach allowed for the translation of ethical values into technical design requirements as was shown in this paper.

Practical implications

This paper provides a practical solution to the question of how to incorporate ethics into the design of robots and bridges the gap between the work of roboticists and robot ethicists so that they may work together in the design of a novel care robot.

Social implications

In providing a solution to the issue of how to address ethical issues in the design of robots, the aim is to mitigate issues of societal concern regarding the design, development and implementation of robots in healthcare.

Originality/value

This paper is the first and only presentation of a concrete prospective methodology for including ethics into the design of robots. While the example given here is tailored to the healthcare context, the approach can be adjusted to fit another context and/or robot design.

Details

Industrial Robot: An International Journal, vol. 40 no. 5
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 11 May 2015

Anne Gerdes and Peter Øhrstrøm


Abstract

Purpose

The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test (MTT) and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent (AMA). Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out to address fundamental challenges within the field of robot ethics regarding the formal representation of moral theories and standards. Typically, three design approaches to AMAs are available: top-down theory-driven models; bottom-up approaches, which set out to model moral behaviour by means of models for adaptive learning, such as neural networks; and hybrid models, which involve components from both top-down and bottom-up approaches to the modelling of moral agency. With inspiration from Wallach and Allen (2009, 2000) as well as Prior (1949, 2003), we elaborate on theoretically driven approaches to machine ethics by introducing deontic tense logic. Finally, within this framework, we explore the character of human interaction with a robot which has successfully passed an MTT.

Design/methodology/approach

The ideas in this paper reflect preliminary theoretical considerations regarding the possibility of establishing an MTT based on the evaluation of moral behaviour, focussing on moral reasoning about possible actions. These considerations fall within the field of normative ethics and apply deontic tense logic to discuss the possibilities and limitations of artificial moral agency.

Findings

The authors stipulate a formalisation of the logic of obligation, time and modality, which may serve as a candidate for implementing a system corresponding to an MTT in a restricted sense. Hence, the authors argue that to establish a present moral obligation, we need to be able to make a description of the actual situation and the relevant general moral rules. Such a description can never be complete, as the combination of exhaustive knowledge about both situations and rules would involve a God’s-eye view, enabling one to know all there is to know and take everything relevant into consideration before making a perfect moral decision to act upon. Consequently, due to this frame problem, from an engineering point of view, we can only strive to design a robot intended to operate within a restricted domain and a limited space-time region. Given such a setup, the robot has to be able to perform moral reasoning based on a formal description of the situation and any possible future developments. Although a system of this kind may be useful, it is clearly also limited to a particular context. It seems that it will always be possible to find special cases (outside the context for which it was designed) in which a given system does not pass the MTT. This calls for a new design of moral systems with trust-related components which will make it possible for the system to learn from experience.
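The abstract does not reproduce the authors’ formalisation; purely as an illustrative sketch in standard deontic tense-logic notation (the symbols below are conventional and not taken from the paper), a present moral obligation derived from a situation description and a general rule might be written:

```latex
% Illustrative only: conventional deontic/tense-logic notation, not the paper's system.
% s : formal description of the current situation (within the restricted domain)
% r : a relevant general moral rule
% O : deontic operator, "it is obligatory that"
% F : Priorean tense operator, "at some future time"
% p : the action or state the agent ought to bring about
(s \land r) \rightarrow O(F\,p)
```

Read: if the situation description and the rule both hold now, then it is obligatory that p be realised at some future time. On this reading, the frame problem discussed above corresponds to the impossibility of ever making s and r complete.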

Originality/value

It is without doubt that in the near future we are going to be faced with advanced social robots with increasing autonomy, and our growing engagement with these robots calls for the exploration of ethical issues and stresses the importance of informing the process of engineering ethical robots. Our contribution can be seen as an early step in this direction.

Details

Journal of Information, Communication and Ethics in Society, vol. 13 no. 2
Type: Research Article
ISSN: 1477-996X


Article
Publication date: 10 June 2014

Robert Bogue


Abstract

Purpose

This first part of a two-part paper aims to provide an insight into the ethical and legal issues associated with certain classes of robot. This part is concerned with ethics.

Design/methodology/approach

Following an introduction, this paper first considers the ethical deliberations surrounding robots used in warfare and healthcare. It then addresses the issue of robot truth and deception and subsequently discusses some on-going deliberations and possible ways forward. Finally, brief conclusions are drawn.

Findings

Robot ethics are the topic of wide-ranging debate and encompass such diverse applications as military drones and robotic carers. Many ethical considerations have been raised including philosophical issues such as moral behaviour and truth and deception. Preliminary research suggests that some of these concerns may be ameliorated through the use of software which encompasses ethical principles. It is widely recognised that a multidisciplinary approach is required and there is growing evidence of this.

Originality/value

This paper provides an insight into the highly topical and complex issue of robot ethics.

Details

Industrial Robot: An International Journal, vol. 41 no. 4
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 12 December 2018

Kristijan Krkač


Abstract

Purpose

The supposedly radical development of artificial intelligence (AI) has raised questions regarding its moral responsibility. In the sphere of business, these are translated into questions about AI and business ethics (BE) and corporate social responsibility (CSR). The purpose of this study is to conceptually reformulate these questions from the point of view of two possible aspect-changes, namely, starting from corporate social irresponsibility (CSI) and starting not from AI’s incapability for responsibility but from its ability to imitate human CSR without performing typical human CSI.

Design/methodology/approach

The author draws upon the literature and his previous works on the relationship between AI and human CSI. This comparison aims to remodel the understanding of human CSI and AI’s inability to be CSI. The conceptual remodelling is offered by taking a negative view of the relation: if AI can be made not to perform human-like CSI, then AI is at least less CSI than humans. For this task, it is necessary to remodel human and AI CSR, but AI does not have to be CSR; it is sufficient that it can be less CSI than humans to be more CSR.

Findings

The previously suggested remodelling of the basic concepts in question leads to the conclusion that it is not impossible for AI to act or operate less CSI than humans, simply by not committing typical human CSIs. Strictly speaking, AI is not CSR because it cannot be responsible as humans can. But if it can perform actions with a significantly lesser amount of CSI than humans, it is certainly less CSI.

Research limitations/implications

This paper is only a conceptual remodelling and a suggestion of a research hypothesis. As such, it implies particular morality, ethics and the concepts of CSI and AI.

Practical implications

How this remodelling could be done in practice is an issue of future research.

Originality/value

The author delivers a comparison between human and AI CSI, a topic that is not much discussed in the literature.

Details

Social Responsibility Journal, vol. 15 no. 6
Type: Research Article
ISSN: 1747-1117


Article
Publication date: 10 June 2021

Nesibe Kantar and Terrell Ward Bynum


Abstract

Purpose

The purpose of this paper is to explore an emerging ethical theory for the Digital Age – Flourishing Ethics – which will likely be applicable in many different cultures worldwide, addressing not only human concerns but also activities, decisions and consequences of robots, cyborgs, artificially intelligent agents and other new digital technologies.

Design/methodology/approach

In the past, a number of influential ethical theories in Western philosophy have focused upon choice and autonomy, or pleasure and pain or fairness and justice. These are important ethical concepts, but we consider “flourishing” to be a broader “umbrella concept” under which all of the above ideas can be included, plus additional ethical ideas from cultures in other regions of the world (for example, Buddhist, Muslim, Confucianist cultures and others). Before explaining the applied approach, this study discusses relevant ideas of four example thinkers who emphasize flourishing in their ethics writings: Aristotle, Norbert Wiener, James Moor and Simon Rogerson.

Findings

Flourishing Ethics is not a single ethical theory. It is “an approach,” a “family” of similar ethical theories which can be successfully applied to humans in many different cultures, as well as to non-human agents arising from new digital technologies.

Originality/value

This appears to be the first extended analysis of the emerging flourishing ethics “family” of theories.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 3
Type: Research Article
ISSN: 1477-996X


Book part
Publication date: 14 December 2023

Esra Sipahi Döngül and Shajara Ul-Durar


Abstract

The relationship between robots and spirituality in the workplace is an interesting and evolving area of research that could provide important insights into the role of technology in promoting human well-being and personal growth. Robots are becoming increasingly common in the workplace, and their functions in the business world are expanding. The use of robots in the workplace can affect people’s spiritual values, such as being successful in one’s work, having a sense of purpose and satisfaction, and feeling valued and important. Robots may take over many of the tasks that people’s jobs once involved; in that case, employees may feel that their work no longer makes sense and may experience a loss of motivation. The fact that robots do not need the skills and experience of humans can also make people feel inadequate in their jobs. However, the use of robots in the workplace can also support people’s spiritual values. When robots work with humans, they carry responsibilities such as interacting with them, showing empathy, respecting coworkers and treating humans appropriately. This is important for people’s mental and emotional health in the workplace, and such an approach will help people work successfully and happily with robots. The use of robots in the workplace also raises moral and ethical questions. In this section, research on the production of robots and other intelligent technological machines equipped with artificial intelligence, and their use in organizations, is evaluated within the framework of spirituality.

Details

Spirituality Management in the Workplace
Type: Book
ISBN: 978-1-83753-450-0


Content available
Article
Publication date: 27 April 2012

Gurvinder S. Virk


Abstract

Details

Industrial Robot: An International Journal, vol. 39 no. 3
Type: Research Article
ISSN: 0143-991X

Abstract

Details

Consciousness and Creativity in Artificial Intelligence
Type: Book
ISBN: 978-1-80455-161-5

Open Access
Article
Publication date: 26 September 2018

Jochen Wirtz, Paul G. Patterson, Werner H. Kunz, Thorsten Gruber, Vinh Nhat Lu, Stefanie Paluch and Antje Martins


Abstract

Purpose

The service sector is at an inflection point with regard to productivity gains and service industrialization similar to the industrial revolution in manufacturing that started in the eighteenth century. Robotics in combination with rapidly improving technologies like artificial intelligence (AI), mobile, cloud, big data and biometrics will bring opportunities for a wide range of innovations that have the potential to dramatically change service industries. The purpose of this paper is to explore the potential role service robots will play in the future and to advance a research agenda for service researchers.

Design/methodology/approach

This paper uses a conceptual approach that is rooted in the service, robotics and AI literature.

Findings

The contribution of this paper is threefold. First, it provides a definition of service robots, describes their key attributes, contrasts their features and capabilities with those of frontline employees, and provides an understanding of which types of service tasks robots will dominate and where humans will dominate. Second, this paper examines consumer perceptions, beliefs and behaviors as related to service robots, and advances the service robot acceptance model. Third, it provides an overview of the ethical questions surrounding robot-delivered services at the individual, market and societal level.

Practical implications

This paper helps service organizations and their management, service robot innovators, programmers and developers, and policymakers better understand the implications of a ubiquitous deployment of service robots.

Originality/value

This is the first conceptual paper that systematically examines key dimensions of robot-delivered frontline service and explores how these will differ in the future.

Details

Journal of Service Management, vol. 29 no. 5
Type: Research Article
ISSN: 1757-5818

