Search results

1 – 10 of 292
Open Access
Article
Publication date: 9 June 2020

Mark Ryan and Bernd Carsten Stahl

Abstract

Purpose

There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is reflected by many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail.

Design/methodology/approach

In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.

Findings

In this paper, the authors therefore provide a detailed explanation of the normative implications of existing AI ethics guidelines but directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.

Originality/value

The authors believe that they have managed to compile the most comprehensive document collecting existing guidance which can guide practical action but will hopefully also support the consolidation of the guidelines landscape. The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 14 July 2022

Alejandra Rojas and Aarni Tuomi

Abstract

Purpose

The emergence of artificial intelligence (AI) is leading to a job transformation within the service ecosystem in which issues related to AI governance principles may hinder the social sustainability of the sector. The relevance of AI startups in driving innovation has been recognized; thus, this paper aims to investigate whether and how AI startups may influence the sustainable social development (SSD) of the service sector.

Design/methodology/approach

An empirical study based on 24 in-depth interviews was conducted to qualitatively explore the perceptions of service sector-facing AI policymakers, AI consultants and academics (n = 12), as well as AI startup founders and AI developers (n = 12). An inductive coding approach was used to analyze the data.

Findings

As part of a complex system, AI startups influence the SSD of the service sector in relation to other stakeholders’ contributions for the ethical deployment of AI. Four key factors influencing AI startups’ ability to contribute to the SSD of the service sector were identified: awareness of socioeconomic issues; fostering decent work; systematically applying ethics; and business model innovation.

Practical implications

This study proposes measures for service sector AI startups to promote collaborative efforts and implement managerial practices that adapt to their available resources.

Originality/value

This study develops original guidelines for startups that seek the ethical development of beneficial AI in the service sector, building upon the Ethics as a Service approach.

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 2 no. 1
Type: Research Article
ISSN: 2633-7436

Open Access
Article
Publication date: 27 June 2023

Teemu Birkstedt, Matti Minkkinen, Anushree Tandon and Matti Mäntymäki

Abstract

Purpose

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.

Design/methodology/approach

The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Findings

The results of the authors’ review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. Moreover, the review identified four themes in the AIG literature: technology; stakeholders and context; regulation; and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, the lack of attention to the AIG context, the uncertain effectiveness of ethical principles and regulation, and the insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical; stakeholder and contextual; regulatory; and process. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.

Research limitations/implications

To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure that an organization's use of AI technologies aligns with its strategies, objectives and values, together with legal requirements, ethical principles and the requirements set by stakeholders.

Practical implications

For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.

Social implications

For society, the authors’ review elucidates the multitude of stakeholders involved in AI governance activities and the complexities related to balancing the needs of different stakeholders.

Originality/value

By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the authors’ review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms.

Details

Internet Research, vol. 33 no. 7
Type: Research Article
ISSN: 1066-2243

Open Access
Article
Publication date: 7 June 2023

Zohreh Pourzolfaghar, Marco Alfano and Markus Helfert

Abstract

Purpose

This paper aims to describe the results of applying ethical AI requirements to a healthcare use case. The purpose of this study is to investigate the effectiveness of using open educational resources for Trustworthy AI to provide recommendations to an AI solution within the healthcare domain.

Design/methodology/approach

This study utilizes the Hackathon method as its research methodology. Hackathons are short events in which participants work towards a shared goal. The purpose of this was to determine the efficacy of the educational resources provided to the students. To achieve this objective, eight teams of students and faculty members participated in the Hackathon. The teams made suggestions for the healthcare use case based on the knowledge acquired from the educational resources. A research team based at the university hosting the Hackathon devised the use case. The healthcare research team participated in the Hackathon by presenting the use case and subsequently analysing and evaluating the utility of the outcomes.

Findings

The Hackathon produced a framework of proposed recommendations for the introduced healthcare use case, in accordance with the EU's requirements for Trustworthy AI.

Research limitations/implications

The educational resources have been applied to only one use case.

Originality/value

This is the first time that open educational resources for Trustworthy AI have been utilized in higher education, making this a novel study. The university hosting the Hackathon has been the coordinator of the Trustworthy AI Hackathon (as a partner in the Trustworthy AI project).

Details

American Journal of Business, vol. 38 no. 3
Type: Research Article
ISSN: 1935-5181

Open Access
Article
Publication date: 9 August 2022

Sari Knaapi-Junnila, Minna M. Rantanen and Jani Koskinen

Abstract

Purpose

The data economy is pervasively present in our everyday lives. Still, ordinary laypersons' chances for genuine communication with other stakeholders are scarce. This paper aims to raise awareness about communication patterns in the context of the data economy and to initiate a dialogue about laypersons' position in data economy ecosystems.

Design/methodology/approach

This conceptual paper combines theory-based critical reflection with ethical and empirical remarks. It provides novel perspectives both for research and for stakeholder collaboration.

Findings

The authors suggest invitational rhetoric and Habermasian discourse as instruments for building an understanding partnership between all stakeholders of the data economy, enabling laypersons to move from subjectivity to agency.

Originality/value

The authors provide (1) theory-based critical reflection concerning communication patterns in the data economy; (2) both ethical and empirical-based remarks about laypersons' position in data economy and (3) ideas for interdisciplinary research and stakeholder collaboration practices by using invitational rhetoric and rational discourse. By that, this paper suggests taking a closer look at communication practices and ethics alike in the data economy. Moreover, it encourages clear, rational and justified arguments between stakeholders in a respectful and equal environment in the data economy ecosystems.

Details

Information Technology & People, vol. 35 no. 8
Type: Research Article
ISSN: 0959-3845

Open Access
Article
Publication date: 1 November 2023

Dan Jin

Abstract

Purpose

The purpose of this study is to provide insights and guidance for practitioners in terms of ensuring rigorous ethical and moral conduct in artificial intelligence (AI) hiring and implementation.

Design/methodology/approach

The research employed two experimental designs and one pilot study to investigate the ethical and moral implications of different levels of AI implementation in the hospitality industry, the intersection of self-congruency and ethical considerations when AI replaces human service providers and the impact of psychological distance associated with AI on individuals' ethical and moral considerations. These research methods included surveys and experimental manipulations to gather and analyze relevant data.

Findings

Findings provide valuable insights into the ethical and moral dimensions of AI implementation, the influence of self-congruency on ethical considerations and the role of psychological distance in individuals’ ethical evaluations. They contribute to the development of guidelines and practices for the responsible and ethical implementation of AI in various industries, including the hospitality sector.

Practical implications

The study highlights the importance of rigorous ethical and moral practices in AI hiring and implementation to uphold AI principles and their enforcement in the restaurant industry. It provides practitioners with useful insights into how AI-robotization can improve ethical and moral standards.

Originality/value

The study contributes to the literature by providing insights into the ethical and moral implications of AI service robots in the hospitality industry. Additionally, the study explores the relationship between psychological distance and acceptance of AI-intervened service, which has not been extensively studied in the literature.

Details

International Hospitality Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2516-8142

Open Access
Article
Publication date: 29 March 2022

Stefano Calzati

Abstract

Purpose

This study advances a reconceptualization of data and information which overcomes normative understandings often contained in data policies at national and international levels. This study aims to propose a conceptual framework that moves beyond subject- and collective-centric normative understandings.

Design/methodology/approach

To do so, this study discusses the European Union (EU) and China’s approaches to data-driven technologies highlighting their similarities and differences when it comes to the vision underpinning how tech innovation is shaped.

Findings

Regardless of their different attention to the subject (the EU) and the collective (China), the normative understandings of technology held by both actors remain trapped in a positivist approach that overlooks all that is not, and cannot be, turned into data, thus hindering the elaboration of a more holistic, ecological way of thinking that merges humans and technologies.

Originality/value

Reviewing the philosophical and political debate on data and data-driven technologies, a third way is elaborated, i.e. federated data as commons. This third way places the subject, by default part of a collective, at the centre of the discussion. This framing can serve as the basis for elaborating sociotechnical alternatives when it comes to defining and regulating the mash-up of humans and technology.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 1
Type: Research Article
ISSN: 1477-996X

Open Access
Article
Publication date: 4 April 2024

Bassem T. ElHassan and Alya A. Arabi

Abstract

Purpose

The purpose of this paper is to illuminate the ethical concerns associated with the use of artificial intelligence (AI) in the medical sector and to provide solutions that allow deriving maximum benefits from this technology without compromising ethical principles.

Design/methodology/approach

This paper provides a comprehensive overview of AI in medicine, exploring its technical capabilities, practical applications, and ethical implications. Based on our expertise, we offer insights from both technical and practical perspectives.

Findings

The study identifies several advantages of AI in medicine, including its ability to improve diagnostic accuracy, enhance surgical outcomes and optimize healthcare delivery. However, ethical issues remain pending, such as algorithmic bias, lack of transparency, data privacy concerns and the potential for AI to deskill healthcare professionals and erode humanistic values in patient care. It is therefore important to address these issues as promptly as possible, so that AI's implementation delivers its benefits without causing serious drawbacks.

Originality/value

This paper gains its value from the combined practical experience of Professor ElHassan, gained through his practice at top hospitals worldwide, and the theoretical expertise of Dr. Arabi, acquired at international institutes. The shared experiences of the authors provide valuable insights that are beneficial for raising awareness and guiding action in addressing the ethical concerns associated with the integration of artificial intelligence in medicine.

Details

International Journal of Ethics and Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9369

Open Access
Article
Publication date: 23 March 2021

Aizhan Tursunbayeva, Claudia Pagliari, Stefano Di Lauro and Gilda Antonelli

Abstract

Purpose

This research analyzed the existing academic and grey literature concerning the technologies and practices of people analytics (PA), to understand how ethical considerations are being discussed by researchers, industry experts and practitioners, and to identify gaps, priorities and recommendations for ethical practice.

Design/methodology/approach

An iterative “scoping review” method was used to capture and synthesize relevant academic and grey literature. This is suited to emerging areas of innovation where formal research lags behind evidence from professional or technical sources.

Findings

Although the grey literature contains a growing stream of publications aimed at helping PA practitioners to “be ethical,” overall, research on ethical issues in PA is still at an early stage. Optimistic and technocentric perspectives dominate the PA discourse, although key themes seen in the wider literature on digital/data ethics are also evident. Risks and recommendations for PA projects concerned transparency and diverse stakeholder inclusion, respecting privacy rights, fair and proportionate use of data, fostering a systemic culture of ethical practice, delivering benefits for employees, including ethical outcomes in business models, ensuring legal compliance and using ethical charters.

Research limitations/implications

This research adds to current debates over the future of work and employment in a digitized, algorithm-driven society.

Practical implications

The research provides an accessible summary of the risks, opportunities, trade-offs and regulatory issues for PA, as well as a framework for integrating ethical strategies and practices.

Originality/value

By using a scoping methodology to surface and analyze diverse literatures, this study fills a gap in existing knowledge on ethical aspects of PA. The findings can inform future academic research, organizations using or considering PA products, professional associations developing relevant guidelines and policymakers adapting regulations. It is also timely, given the increase in digital monitoring of employees working from home during the Covid-19 pandemic.

Open Access
Article
Publication date: 5 July 2021

Babak Abedin

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space: it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how to manage these tensions to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.
