The purpose of this paper is to explore artificial moral agency by reflecting upon the possibility of a Moral Turing Test (MTT) and whether its lack of focus on interiority, i.e. its behaviouristic foundation, counts as an obstacle to establishing such a test to judge the performance of an Artificial Moral Agent (AMA). Subsequently, to investigate whether an MTT could serve as a useful framework for the understanding, designing and engineering of AMAs, we set out to address fundamental challenges within the field of robot ethics regarding the formal representation of moral theories and standards. Here, three design approaches to AMAs are typically available: top-down theory-driven models; bottom-up approaches, which model moral behaviour by means of adaptive learning, such as neural networks; and hybrid models, which combine components from both top-down and bottom-up approaches to the modelling of moral agency. With inspiration from Allen and Wallach (2009, 2000) as well as Prior (1949, 2003), we elaborate on theoretically driven approaches to machine ethics by introducing deontic tense logic. Finally, within this framework, we explore the character of human interaction with a robot which has successfully passed an MTT.
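As a minimal, purely illustrative sketch (not drawn from the paper; the rule names and action attributes below are our own hypothetical choices), a top-down approach encodes moral rules explicitly and checks candidate actions against them, whereas bottom-up and hybrid approaches would instead learn or refine such rules from experience:

```python
# Illustrative top-down AMA fragment: moral rules are stated explicitly
# as predicates over a description of a candidate action. All names here
# are hypothetical placeholders, not the paper's formalism.

RULES = {
    "do_not_harm": lambda action: not action.get("causes_harm", False),
    "keep_promises": lambda action: not action.get("breaks_promise", False),
}

def permissible(action):
    """Return (verdict, violated_rules) for a candidate action."""
    violated = [name for name, rule in RULES.items() if not rule(action)]
    return (len(violated) == 0, violated)

# An action that breaks a promise is flagged by the relevant rule.
print(permissible({"causes_harm": False, "breaks_promise": True}))
# -> (False, ['keep_promises'])
```

A bottom-up variant would replace the fixed `RULES` table with a learned classifier; a hybrid model would keep the explicit rules but let learning adjust how they are weighted or applied.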
The ideas in this paper reflect preliminary theoretical considerations regarding the possibility of establishing an MTT based on the evaluation of moral behaviour, which focusses on moral reasoning regarding possible actions. The thoughts reflected fall within the field of normative ethics and apply deontic tense logic to discuss the possibilities and limitations of artificial moral agency.
The authors stipulate a formalisation of the logic of obligation, time and modality, which may serve as a candidate for implementing a system corresponding to an MTT in a restricted sense. Hence, the authors argue that to establish a present moral obligation, we need to be able to describe the actual situation and the relevant general moral rules. Such a description can never be complete, as exhaustive knowledge of both situations and rules would require a God's-eye view, enabling one to know all there is to know and to take everything relevant into consideration before making a perfect moral decision to act upon. Consequently, due to this frame problem, from an engineering point of view we can only strive to design a robot intended to operate within a restricted domain and a limited space-time region. Given such a setup, the robot has to be able to perform moral reasoning based on a formal description of the situation and its possible future developments. Although a system of this kind may be useful, it is clearly also limited to a particular context. It seems that it will always be possible to find special cases (outside the context for which it was designed) in which a given system does not pass the MTT. This calls for a new design of moral systems with trust-related components which will make it possible for the system to learn from experience.
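The combination of obligation, time and modality can be illustrated with Prior-style tense operators; the notation below is a schematic rendering of the general idea, not necessarily the authors' exact formalism, and the symbols Sit, Rule and Act are hypothetical:

```latex
% O: deontic obligation operator; F: Prior's future tense operator
% ("it will at some time be the case that").
% Schematic form of a present obligation derived from a description of
% the actual situation and a general moral rule:
\[
  \mathit{Sit} \wedge \mathit{Rule} \;\rightarrow\; O\,F\,\mathit{Act}
\]
% Read: given the actual situation (Sit) and the relevant moral rule
% (Rule), it is obligatory now that the action (Act) be realised at
% some future time.
```

The frame problem discussed above then amounts to the impossibility of ever making the antecedent (the situation description and rule set) complete.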
There is little doubt that in the near future we will be faced with advanced social robots of increasing autonomy, and our growing engagement with these robots calls for the exploration of ethical issues and underlines the importance of informing the process of engineering ethical robots. Our contribution can be seen as an early step in this direction.
Helping Autism‐diagnosed teenagers navigate and develop socially (HANDS) is an EU research project in progress. The aim of HANDS is to investigate the potential of persuasive technology as a tool to help young people diagnosed, to whatever degree, as autistic. The HANDS project set out to develop mobile ICT solutions to help young people with autism become more fully integrated into society, and the purpose of this paper is to present an overview of the design behind the HANDS toolset.
The topic of credibility is approached from an analytical, as well as an ethical, angle in order to address issues of credibility in relation to designing assistive technological tools. In addition, the authors set out to explore possible ways in which credibility can be evaluated. The paper presents a preliminary method for the evaluation of credibility, which requires further refinement, as well as empirical support, in order to inform us about issues of system credibility. Therefore, the suggested method reflects a working hypothesis which may serve as a springboard for further investigation.
The authors propose a preliminary method which reveals certain preconditions necessary for evaluating the credibility of a system and, in this way, seek to establish an ethically sound evaluation procedure for analysing credibility by combining quantitative (i.e. electronic footprints) and qualitative (i.e. dialogue between teacher and learner) assessments of system credibility.
Further investigation of the evaluation process is needed to develop a standard for resolving the credibility of a system. Naturally, such a standard would serve not only as a tool for measuring credibility but also as a didactic tool for scaffolding a pedagogic dialogue between teacher and learner. It becomes important, therefore, to undertake the task of developing this standard in collaboration with the teachers in the HANDS project.
The paper discusses credibility issues and ethical concerns with a view to designing mobile solutions for autism‐diagnosed teenagers. The ideas expressed and developed herein are applicable to many assistive, technological tools available to persons with special needs.
School districts across the United States have adopted web-based student information systems (SIS) that offer parents, students, teachers and administrators immediate access to a variety of data points on each individual. In this chapter, I offer findings from in-depth interviews with school stakeholders that demonstrate how some students, typically ‘high performers’, are drawn into ‘pushed self-tracking’ (Lupton, 2016) of their academic achievement metrics, obsessively monitoring their grades and other quantified measures through digital devices, comparing their performance to other students and often generating a variety of affective states for themselves. I suggest that an SIS functions as a neoliberal technology of childhood government, with these students internalising and displaying the self-governing capacities of ‘enterprise’ and ‘autonomy’ (Rose, 1996). These capacities are a product of, and reinforce, the metric culture of the school.
On the basis of three examples of intellectual capital statements that make the individual their central figure, this article discusses the role of individuals in knowledge creation. After all, it is often claimed that the individual is the “container” of knowledge, and the question of what it means to account for the individual therefore arises. However, analysing these individual competency statements (intellectual capital statements), it is clear that the individual is never alone: the individual is always related to organisational purposes. The individual competency statement makes the individual an organisational entity, because individual competency is related either to organisational bonus systems, to corporate revenues or to the organisational configuration of its knowledge resources.