Search results

1 – 10 of over 1000
Article
Publication date: 22 July 2021

Soraj Hongladarom

Abstract

Purpose

The paper aims to analyze the content of the newly published National AI Ethics Guideline in Thailand. Thailand’s ongoing political struggles and transformation have made it a good case for seeing how a policy document such as an AI ethics guideline becomes part of those transformations. Looking at how the two are interrelated helps illuminate the political and cultural dynamics of Thailand, as well as how the governance of ethics itself is conceptualized.

Design/methodology/approach

The author looks at the history of how the National AI Ethics Guidelines came to be and interprets its content, situating the Guideline within the contemporary history of the country as well as comparing the Guideline with some of the leading existing guidelines.

Findings

It is found that the Guideline reflects the ambivalent and paradoxical nature of Thailand’s attempt at modernization. On the one hand, there is a desire to join the ranks of the more advanced economies; on the other, there is a strong desire to maintain the country’s own traditional values. Thailand has not yet succeeded in resolving this tension, and this lack of success shows in the way the content of the AI Ethics Guideline is presented.

Practical implications

The findings of the paper could be useful for future attempts at drafting and revising AI ethics guidelines.

Originality/value

The paper represents the first attempt, so far as the author is aware, to analyze the content of the Thai AI Ethics Guideline critically.

Details

Journal of Information, Communication and Ethics in Society, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1477-996X

Content available
Article
Publication date: 9 June 2020

Mark Ryan and Bernd Carsten Stahl

Downloads: 7630

Abstract

Purpose

There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is reflected in many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail.

Design/methodology/approach

In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI.

Findings

In this paper, the authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems.

Originality/value

The authors believe that they have compiled the most comprehensive document collecting existing guidance, one that can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.

Details

Journal of Information, Communication and Ethics in Society, vol. 19 no. 1
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 15 June 2020

Bernice Ibiricu and Marja Leena van der Made

Downloads: 1167

Abstract

Purpose

This paper aims to provide a framework for a code of ethics related to digital and leading edge technologies.

Design/methodology/approach

The proposed ethical framework is anchored in data protection legislation, and results from a combination of case studies, observed user behaviour and decision-making processes.

Findings

A concise and user-friendly ethical framework ensures the embedded code of conduct is respected and observed by all employees concerned.

Originality/value

An ethical framework aligned with EU data protection legislation is required.

Details

Records Management Journal, vol. 30 no. 3
Type: Research Article
ISSN: 0956-5698

Article
Publication date: 4 December 2020

Anton Saveliev and Denis Zhurenkov

Abstract

Purpose

The purpose of this paper is to review and analyze how the development and utilization of artificial intelligence (AI) technologies for social responsibility are defined in the national AI strategies of the USA, Russia and China.

Design/methodology/approach

The notion of responsibility concerning AI is not currently defined in law by any country in the world. The authors use a methodology based on Luciano Floridi’s unified framework of five principles for AI in society to determine how social responsibility is implemented in the AI strategies of the USA, Russia and China.

Findings

All three AI development strategies of the USA, Russia and China, as evaluated in the paper, contain components in one form or another aimed at achieving public responsibility and the responsible use of AI. The unified framework of five principles for AI in society, developed by L. Floridi, can be used as a viable assessment tool to determine, at least in general terms, how social responsibility is implied and implemented in national strategic documents in the field of AI. However, the authors call for further development of mutually recognizable ethical models for socially beneficial AI.

Practical implications

This study allows us to better understand the linkages, overlaps and differences between modern philosophy of information, AI-ethics, social responsibility and government regulation. The analysis provided in this paper can serve as a basic blueprint for future attempts to define how social responsibility is understood and implied by government decision-makers.

Originality/value

The analysis provided in the paper, however general and empirical it may be, is a first-time example of how the Unified framework of five principles for AI in society can be applied as an assessment tool to determine social responsibility in AI-related official documents.

Book part
Publication date: 15 July 2020

Keith A. Abney

Abstract

New technologies, including artificial intelligence (AI), have helped us begin to take our first steps off Earth and into outer space. But conflicts inevitably will arise and, in the absence of settled governance, may be resolved by force, as is typical for new frontiers. But the terrestrial assumptions behind the ethics of war will need to be rethought when the context radically changes, and both the environment of space and the advent of robotic warfighters with superhuman capabilities will constitute such a radical change. This essay examines how new autonomous technologies, especially dual-use technologies, and the challenges to human existence in space will force us to rethink the ethics of war, both from space to Earth, and in space itself.

Article
Publication date: 25 October 2021

Florian Königstorfer and Stefan Thalmann

Abstract

Purpose

Artificial intelligence (AI) is currently one of the most disruptive technologies and can be applied in many different use cases. However, applying AI in regulated environments is challenging, as it is currently not clear how to achieve and assess the fairness, accountability and transparency (FAT) of AI. Documentation is one promising governance mechanism to ensure that AI is FAT when it is applied in practice. However, due to the nature of AI, documentation standards from software engineering are not suitable for collecting the required evidence. Even though FAT AI is called for by lawmakers, academics and practitioners, suitable guidelines on how to document AI are not available. This interview study aims to investigate the requirements for AI documentation.

Design/methodology/approach

A total of 16 interviews were conducted with senior employees from companies in the banking and IT industry as well as with consultants. The interviews were then analyzed using an informed-inductive coding approach.

Findings

The authors found five requirements for AI documentation that take the specific nature of AI into account. The interviews show that documenting AI is not a purely technical task but also requires engineers to present, in an understandable way, how the AI is integrated into the business process.

Originality/value

This paper benefits from the unique insights of senior employees into the documentation of AI.

Details

Digital Policy, Regulation and Governance, vol. 23 no. 5
Type: Research Article
ISSN: 2398-5038

Book part
Publication date: 15 July 2020

Yvonne R. Masakowski

Abstract

Advances in Artificial Intelligence (AI) technologies and Autonomous Unmanned Vehicles are shaping our daily lives and society, and will continue to transform how we fight future wars. Advances in AI technologies have fueled an explosion of interest in the military and political domains. As AI technologies evolve, there will be increased reliance on these systems to maintain global security. For the individual and society, AI presents challenges related to surveillance, personal freedom and privacy. For the military, we will need to exploit advances in AI technologies to support the warfighter and ensure global security. The integration of AI technologies presents advantages, costs and risks in the future battlespace. This chapter examines the benefits, costs and risks associated with integrating AI and autonomous systems in society and in the future battlespace.

Details

Artificial Intelligence and Global Security
Type: Book
ISBN: 978-1-78973-812-4

Article
Publication date: 20 May 2019

Anastassia Lauterbach

Downloads: 2604

Abstract

Purpose

This paper aims to inform policymakers about key artificial intelligence (AI) technologies, risks and trends in national AI strategies. It suggests a framework of social governance to ensure emergence of safe and beneficial AI.

Design/methodology/approach

The paper is based on approximately 100 interviews with researchers, executives of traditional companies and startups, and policymakers in seven countries. The interviews were carried out between January and August 2017.

Findings

Policymakers still need to develop an informed, scientifically grounded and forward-looking view of what societies and businesses might expect from AI. There is a lack of transparency about the key AI risks and the regulatory approaches that might handle them. No collaborative framework is in place involving all important actors to decide on AI technology design principles and governance. Today’s technology decisions will have long-term consequences for the lives of billions of people and the competitiveness of millions of businesses.

Research limitations/implications

The research included few insights from emerging markets.

Practical implications

Policymakers will understand the scope of most important AI concepts, risks and national strategies.

Social implications

AI is progressing at a very fast rate, changing industries, businesses and the ways companies learn, generate business insights, design products and communicate with their employees and customers. It has a large societal impact, as, if not designed with care, it can scale human bias, increase cybersecurity risk and lead to negative shifts in employment. Like no other invention, it can tighten control by the few over the many, spread false information and propaganda, and thereby shape the perceptions of people, communities and enterprises.

Originality/value

This paper is a compendium on the most important concepts of AI, bringing clarity into discussions around AI risks and the ways to mitigate them. The breadth of topics is valuable to policymakers, students, practitioners, general executives and board directors alike.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 3
Type: Research Article
ISSN: 2398-5038

Content available
Article
Publication date: 23 March 2021

Aizhan Tursunbayeva, Claudia Pagliari, Stefano Di Lauro and Gilda Antonelli

Downloads: 4086

Abstract

Purpose

This research analyzed the existing academic and grey literature concerning the technologies and practices of people analytics (PA), to understand how ethical considerations are being discussed by researchers, industry experts and practitioners, and to identify gaps, priorities and recommendations for ethical practice.

Design/methodology/approach

An iterative “scoping review” method was used to capture and synthesize relevant academic and grey literature. This is suited to emerging areas of innovation where formal research lags behind evidence from professional or technical sources.

Findings

Although the grey literature contains a growing stream of publications aimed at helping PA practitioners to “be ethical,” overall, research on ethical issues in PA is still at an early stage. Optimistic and technocentric perspectives dominate the PA discourse, although key themes seen in the wider literature on digital/data ethics are also evident. Risks and recommendations for PA projects concerned transparency and diverse stakeholder inclusion, respecting privacy rights, fair and proportionate use of data, fostering a systemic culture of ethical practice, delivering benefits for employees, including ethical outcomes in business models, ensuring legal compliance and using ethical charters.

Research limitations/implications

This research adds to current debates over the future of work and employment in a digitized, algorithm-driven society.

Practical implications

The research provides an accessible summary of the risks, opportunities, trade-offs and regulatory issues for PA, as well as a framework for integrating ethical strategies and practices.

Originality/value

By using a scoping methodology to surface and analyze diverse literatures, this study fills a gap in existing knowledge on ethical aspects of PA. The findings can inform future academic research, organizations using or considering PA products, professional associations developing relevant guidelines and policymakers adapting regulations. It is also timely, given the increase in digital monitoring of employees working from home during the Covid-19 pandemic.

Details

Personnel Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0048-3486

Content available
Article
Publication date: 5 July 2021

Babak Abedin

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it due to its counterproductive effects. This study addresses this polarized space and aims to identify opposing effects of the explainability of AI and the tensions between them and propose how to manage this tension to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability; instead, the co-existence of enabling and constraining effects must be managed.

Details

Internet Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1066-2243
