Search results

1 – 10 of over 2000
Article
Publication date: 11 March 2022

Aline Shakti Franzke


Abstract

Purpose

As Big Data and Artificial Intelligence (AI) proliferate, calls have emerged for ethical reflection. Ethics guidelines have played a central role in this respect. While quantitative research on the ethics guidelines of AI/Big Data has been undertaken, there has been a dearth of systematic qualitative analyses of these documents.

Design/methodology/approach

Aiming to address this research gap, this paper analyses 70 international ethics guidelines documents from academia, NGOs and the corporate realm, published between 2017 and 2020.

Findings

The article presents four key findings: existing ethics guidelines (1) promote a broad spectrum of values; (2) focus principally on AI, followed by (Big) Data and algorithms; (3) do not adequately define the term “ethics” and related terms; and (4) have most frequent recourse to the values of “transparency,” “privacy,” and “security.” Based on these findings, the article argues that the guidelines corpus exhibits discernible utilitarian tendencies; guidelines would benefit from greater reflexivity with respect to their ethical framework; and virtue ethical approaches have a valuable contribution to make to the process of guidelines development.

Originality/value

The paper provides qualitative insights into the ethical discourse surrounding AI guidelines, as well as a concise overview of different types of operative translations of theoretical ethical concepts vis-à-vis the sphere of AI. These may prove beneficial for (applied) ethicists, developers and regulators who understand these guidelines as policy.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 4
Type: Research Article
ISSN: 1477-996X


Article
Publication date: 22 July 2021

Soraj Hongladarom


Abstract

Purpose

The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation have made it a good case for seeing how a policy document such as an AI ethics guideline becomes part of those transformations. Looking at how the two are interrelated will help illuminate the political and cultural dynamics of Thailand, as well as how the governance of ethics itself is conceptualized.

Design/methodology/approach

The author looks at the history of how the National AI Ethics Guidelines came to be and interprets its content, situating the Guideline within the contemporary history of the country as well as comparing the Guideline with some of the leading existing guidelines.

Findings

The Guideline is found to reflect the ambivalent and paradoxical nature of Thailand’s attempt at modernization. On the one hand, there is a desire to join the ranks of the more advanced economies; on the other hand, there is also a strong desire to maintain the country’s traditional values. Thailand has not yet succeeded in resolving this tension, and this lack of success shows in the way the content of the AI Ethics Guideline is presented.

Practical implications

The findings of the paper could inform future attempts to draft and revise AI ethics guidelines.

Originality/value

The paper represents the first attempt, so far as the author is aware, to analyze the content of the Thai AI Ethics Guideline critically.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 4
Type: Research Article
ISSN: 1477-996X


Open Access
Article
Publication date: 9 June 2020

Mark Ryan and Bernd Carsten Stahl



Abstract

Purpose

The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail. There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is reflected by many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice.

Design/methodology/approach

In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.

Findings

In this paper, the authors therefore provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed specifically towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.

Originality/value

The authors believe that they have compiled the most comprehensive collection of existing guidance, one that can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X


Article
Publication date: 22 December 2022

Dorine Eva van Norren


Abstract

Purpose

This paper aims to demonstrate the relevance of worldviews of the global south to debates on artificial intelligence (AI), enhancing the human rights debate on AI and critically reviewing the paper of the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST) that preceded the drafting of the UNESCO guidelines on AI. Different value systems may lead to different choices in the programming and application of AI. Programming languages may exacerbate existing biases, as a people’s worldview is captured in its language. What are the implications for AI when seen from a collective ontology? Ubuntu (I am a person through other persons) starts from collective morals rather than individual ethics.

Design/methodology/approach

A literature overview of the African philosophy of Ubuntu as applied to artificial intelligence, followed by its application to the United Nations Educational, Scientific and Cultural Organisation (UNESCO) debates on establishing guidelines for the ethics of artificial intelligence.

Findings

Metaphysically, Ubuntu and its conception of social personhood (attained during one’s life) largely reject transhumanism. When confronted with economic choices, Ubuntu favors sharing above competition and thus an anticapitalist logic of equitable distribution of AI benefits, humaneness and nonexploitation. When confronted with issues of privacy, Ubuntu emphasizes transparency to group members rather than individual privacy, yet it calls for stronger (group privacy) protection. In democratic terms, it promotes consensus decision-making over representative democracy. Certain applications of AI may be more controversial in Africa than in other parts of the world, such as care for the elderly, who deserve the utmost respect and attention and whose care builds moral personhood. At the same time, AI may be helpful, as care from the home and community is encouraged from an Ubuntu perspective. The report on AI and ethics of the UNESCO World COMEST formulated principles as input, which are analyzed here from the African ontological point of view. COMEST departs from “universal” concepts of individual human rights, sustainability and good governance, which are not necessarily fully compatible with relatedness, including future and past generations. Alongside rules-based approaches, which may hamper diversity, bottom-up approaches with intercultural deep learning algorithms are needed.

Research limitations/implications

There is very little existing literature on AI and Ubuntu; this paper is therefore explorative in nature.

Practical implications

The ethics of Ubuntu offers unique vantage points for looking at the organization of society and economics today, which are also relevant for the development of AI, especially in its tenet of relatedness rather than individuality (and the practical use of AI for individuals), taking responsibility for society as a whole (such as analyzing the benefit of AI for all strata of society) and embodying true inclusiveness. Whether looking at top-down guidelines for the development and implementation of AI or the bottom-up ethical learning process of AI (deep learning), the ethics of the Global South can play an important role in combating global individualist tendencies and inequity, which AI is likely to reinforce. This warrants far more research.

Social implications

Applications of AI in Africa are not contextualized, do not address the most pressing needs of the African continent, lead to cybersecurity issues and do not incorporate African ethics. UNESCO’s work in this regard is important, but expert inputs are largely centered on Western “universal” principles and Organisation for Economic Co-operation and Development (OECD) and EU precedents. African ethics have, so far, played only a small role in global ethics and philosophy and therefore risk being overlooked in the discussion on AI and ethics. This is why the consultation process of UNESCO on the ethics of AI was of paramount importance. However, it does not automatically lead to consultation of African philosophers or sages, as many are educated in Western(ized) education systems. See further details under practical implications.

Originality/value

This is a new area of research in which little work has been done so far. This paper offers the opportunity to widen the debate on AI and ethics beyond the conventional discourse, involving multiple worldviews, of which Ubuntu is just one.

Details

Journal of Information, Communication and Ethics in Society, vol. 21 no. 1
Type: Research Article
ISSN: 1477-996X


Open Access
Article
Publication date: 14 July 2022

Alejandra Rojas and Aarni Tuomi



Abstract

Purpose

The emergence of artificial intelligence (AI) is leading to a job transformation within the service ecosystem in which issues related to AI governance principles may hinder the social sustainability of the sector. The relevance of AI startups in driving innovation has been recognized; thus, this paper aims to investigate whether and how AI startups may influence the sustainable social development (SSD) of the service sector.

Design/methodology/approach

An empirical study based on 24 in-depth interviews was conducted to qualitatively explore the perceptions of service-sector-facing AI policymakers, AI consultants and academics (n = 12), as well as AI startups (founders and AI developers; n = 12). An inductive coding approach was used to identify and analyze the data.

Findings

As part of a complex system, AI startups influence the SSD of the service sector in relation to other stakeholders’ contributions to the ethical deployment of AI. Four key factors influencing AI startups’ ability to contribute to the SSD of the service sector were identified: awareness of socioeconomic issues; fostering decent work; systematically applying ethics; and business model innovation.

Practical implications

This study proposes measures for service sector AI startups to promote collaborative efforts and implement managerial practices that adapt to their available resources.

Originality/value

This study develops original guidelines for startups that seek the ethical development of beneficial AI in the service sector, building upon the Ethics as a Service approach.

Details

Journal of Ethics in Entrepreneurship and Technology, vol. 2 no. 1
Type: Research Article
ISSN: 2633-7436


Article
Publication date: 7 December 2021

Kumar Saurabh, Ridhi Arora, Neelam Rani, Debasisha Mishra and M. Ramkumar



Abstract

Purpose

Digital transformation (DT) leverages digital technologies to change current processes and introduce new processes in any organisation’s business model, customer/user experience and operational processes (DT pillars). Artificial intelligence (AI) plays a significant role in achieving DT. As DT touches every sphere of humanity, AI-led DT raises many fundamental questions. These questions concern the systems deployed, how they should behave, what risks they carry and what monitoring and evaluation controls are in place. These issues call for the integration of ethics into AI-led DT. The purpose of this study is to develop an “AI-led ethical digital transformation framework”.

Design/methodology/approach

Based on a literature survey, various existing business ethics decision-making models were synthesised. The authors mapped essential characteristics, such as intensity and the individual, organisational and opportunity factors of ethics models, onto the proposed AI-led ethical DT. The DT framework was evaluated using a thematic analysis of 23 expert interviews with relevant AI ethics personas from industry and society. The qualitative interview and opinion data were analysed using MAXQDA software.

Findings

The authors explore how AI can drive the ethical DT framework and identify the core constituents of an AI-led ethical DT framework. Backed by established ethical theories, the paper presents how the DT pillars relate and are sequenced to ethical factors. This research makes it possible to examine theoretically sequenced ethical factors alongside practical DT pillars.

Originality/value

The study establishes deduced and induced ethical value codes based on thematic analysis to develop guidelines for the pursuit of ethical DT. The authors identify four unique induced themes, namely, corporate social responsibility, perceived value, standard benchmarking and learning willingness. The comprehensive findings of this research, supported by a robust theoretical background, have substantial implications for academic research and corporate application. The proposed AI-led ethical DT framework is unique and can be used for integrated social, technological and economic ethical research.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 2
Type: Research Article
ISSN: 1477-996X


Open Access
Article
Publication date: 27 June 2023

Teemu Birkstedt, Matti Minkkinen, Anushree Tandon and Matti Mäntymäki



Abstract

Purpose

Following the surge of documents laying out organizations' ethical principles for their use of artificial intelligence (AI), there is a growing demand for translating ethical principles to practice through AI governance (AIG). AIG has emerged as a rapidly growing, yet fragmented, research area. This paper synthesizes the organizational AIG literature by outlining research themes and knowledge gaps as well as putting forward future agendas.

Design/methodology/approach

The authors undertake a systematic literature review on AIG, addressing the current state of its conceptualization and suggesting future directions for AIG scholarship and practice. The review protocol was developed following recommended guidelines for systematic reviews and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA).

Findings

The results of the authors’ review confirmed the assumption that AIG is an emerging research topic with few explicit definitions. Moreover, the authors’ review identified four themes in the AIG literature: technology, stakeholders and context, regulation and processes. The central knowledge gaps revealed were the limited understanding of AIG implementation, lack of attention to the AIG context, uncertain effectiveness of ethical principles and regulation, and insufficient operationalization of AIG processes. To address these gaps, the authors present four future AIG agendas: technical, stakeholder and contextual, regulatory, and process. Going forward, the authors propose focused empirical research on organizational AIG processes, the establishment of an AI oversight unit and collaborative governance as a research approach.

Research limitations/implications

To address the identified knowledge gaps, the authors present the following working definition of AIG: AI governance is a system of rules, practices and processes employed to ensure that an organization's use of AI technologies aligns with its strategies, objectives and values, complete with legal requirements, ethical principles and the requirements set by stakeholders.

Practical implications

For practitioners, the authors highlight training and awareness, stakeholder management and the crucial role of organizational culture, including senior management commitment.

Social implications

For society, the authors’ review elucidates the multitude of stakeholders involved in AI governance activities and the complexities of balancing the needs of different stakeholders.

Originality/value

By delineating the AIG concept and the associated research themes, knowledge gaps and future agendas, the authors’ review builds a foundation for organizational AIG research, calling for broad contextual investigations and a deep understanding of AIG mechanisms.

Details

Internet Research, vol. 33 no. 7
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 20 January 2022

Verma Prikshat, Parth Patel, Arup Varma and Alessio Ishizaka



Abstract

Purpose

This narrative review presents a multi-stakeholder ethical framework for AI-augmented HRM, based on extant research in the domains of ethical HRM and ethical AI. More specifically, the authors identify critical ethical issues pertaining to AI-augmented HRM functions and suggest ethical principles to address these issues by identifying the relevant stakeholders based on the responsibility ethics approach.

Design/methodology/approach

This paper follows a narrative review approach by first identifying the various ethical codes, issues and dilemmas discussed in HRM and AI. The authors next discuss ethical issues concerning AI-augmented HRM, drawing from recent literature. Finally, the authors propose ethical principles for AI-augmented HRM and identify the stakeholders responsible for managing those issues.

Findings

The paper summarises key findings of extant research in the ethical HRM and AI domain and provides a multi-stakeholder ethical framework for AI-augmented HRM functions.

Originality/value

The value of this research lies in conceptualising a multi-stakeholder ethical framework for AI-augmented HRM functions comprising 11 ethical principles. The research also identifies the class of stakeholders responsible for the identified ethical principles and presents future research directions based on the proposed model.

Details

International Journal of Manpower, vol. 43 no. 1
Type: Research Article
ISSN: 0143-7720


Content available
Article
Publication date: 10 February 2022

Junaid Qadir, Mohammad Qamar Islam and Ala Al-Fuqaha



Abstract

Purpose

Along with the various beneficial uses of artificial intelligence (AI), there are various unsavory concomitants including the inscrutability of AI tools (and the opaqueness of their mechanisms), the fragility of AI models under adversarial settings, the vulnerability of AI models to bias throughout their pipeline, the high planetary cost of running large AI models and the emergence of exploitative surveillance capitalism-based economic logic built on AI technology. This study aims to document these harms of AI technology and study how these technologies and their developers and users can be made more accountable.

Design/methodology/approach

Due to the nature of the problem, a holistic, multi-pronged approach is required to understand and counter these potential harms. This paper identifies the rationale for urgently focusing on human-centered AI and provides an outlook on promising directions, including technical proposals.

Findings

AI has the potential to benefit society as a whole, but there remains an increased risk for its vulnerable segments. This paper provides a general survey of the various approaches proposed in the literature to make AI technology more accountable. This paper reports that the development of ethical, accountable AI design requires the confluence and collaboration of many fields (ethical, philosophical, legal, political and technical) and that a lack of diversity is a problem plaguing the state of the art in AI.

Originality/value

This paper provides a timely synthesis of the various technosocial proposals in the literature, spanning technical areas such as interpretable and explainable AI and algorithmic auditability, as well as policy-making challenges and efforts that can operationalize ethical AI and help make AI accountable. This paper also identifies and shares promising future directions of research.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 2
Type: Research Article
ISSN: 1477-996X


Open Access
Article
Publication date: 7 June 2023

Zohreh Pourzolfaghar, Marco Alfano and Markus Helfert



Abstract

Purpose

This paper aims to describe the results of applying ethical AI requirements to a healthcare use case. The purpose of this study is to investigate the effectiveness of using open educational resources for Trustworthy AI to provide recommendations for an AI solution within the healthcare domain.

Design/methodology/approach

This study utilizes the Hackathon method as its research methodology. Hackathons are short events in which participants share a common goal. The purpose here was to determine the efficacy of the educational resources provided to the students. To achieve this objective, eight teams of students and faculty members participated in the Hackathon. The teams made suggestions for the healthcare use case based on the knowledge acquired from the educational resources. A research team based at the university hosting the Hackathon devised the use case. The healthcare research team participated in the Hackathon by presenting the use case and subsequently analysing and evaluating the utility of the outcomes.

Findings

The Hackathon produced a framework of proposed recommendations for the introduced healthcare use case, in accordance with the EU's requirements for Trustworthy AI.

Research limitations/implications

The educational resources have been applied to only one use case.

Originality/value

This is the first time that open educational resources for Trustworthy AI have been utilized in higher education, making this a novel study. The university hosting the Hackathon has been the coordinator for the Trustworthy AI Hackathon (as a partner in the Trustworthy AI project).

Details

American Journal of Business, vol. 38 no. 3
Type: Research Article
ISSN: 1935-5181

