Search results

1 – 10 of 419
Open Access
Article
Publication date: 5 July 2021

Babak Abedin

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space: it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how these tensions can be managed to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability; instead, the co-existence of enabling and constraining effects must be managed.

Expert briefing
Publication date: 5 February 2019

Prospects for artificial intelligence applications.

Details

DOI: 10.1108/OXAN-DB241630

ISSN: 2633-304X

Expert briefing
Publication date: 21 September 2020

It required arguably the single largest computational effort for a machine learning model to date, and it is capable of producing text at times indistinguishable from the work of…

Details

DOI: 10.1108/OXAN-DB256373

ISSN: 2633-304X

Article
Publication date: 7 December 2021

Yue Wang and Sai Ho Chung

Abstract

Purpose

This study is a systematic literature review of the application of artificial intelligence (AI) in safety-critical systems. The authors aim to present the current application status according to different AI techniques and propose some research directions and insights to promote its wider application.

Design/methodology/approach

A total of 92 articles were selected for this review through a systematic literature review along with a thematic analysis.

Findings

The literature is divided into three themes: interpretable methods, explaining model behavior and reinforcing safe learning. Among AI techniques, the most widely used are Bayesian networks (BNs) and deep neural networks. In addition, given the huge potential in this field, four future research directions are proposed.

Practical implications

This study is of vital interest to industry practitioners and regulators in the safety-critical domain, as it provides a clear picture of the current application status and points out that some AI techniques have great application potential. For techniques that are inherently appropriate for safety-critical systems, regulators can conduct in-depth studies to validate and encourage their use in industry.

Originality/value

This is the first review of the application of AI in safety-critical systems in the literature. It marks a first step toward advancing AI in the safety-critical domain. The paper has potential value in promoting the use of the term “safety-critical” and in reducing the fragmentation of the literature.

Details

Industrial Management & Data Systems, vol. 122 no. 2
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 20 May 2019

Anastassia Lauterbach

Abstract

Purpose

This paper aims to inform policymakers about key artificial intelligence (AI) technologies, risks and trends in national AI strategies. It suggests a framework of social governance to ensure emergence of safe and beneficial AI.

Design/methodology/approach

The paper is based on approximately 100 interviews with researchers, executives of traditional companies and startups, and policymakers in seven countries. The interviews were carried out between January and August 2017.

Findings

Policymakers still need to develop an informed, scientifically grounded and forward-looking view on what societies and businesses might expect from AI. There is a lack of transparency about what the key AI risks are and what regulatory approaches might handle them. No collaborative framework is in place that involves all important actors in deciding on AI technology design principles and governance. Today's technology decisions will have long-term consequences for the lives of billions of people and the competitiveness of millions of businesses.

Research limitations/implications

The research did not include many insights from emerging markets.

Practical implications

Policymakers will understand the scope of most important AI concepts, risks and national strategies.

Social implications

AI is progressing at a very fast rate, changing industries, businesses and the ways companies learn, generate business insights, design products and communicate with their employees and customers. It has a big societal impact: if not designed with care, it can scale human bias, increase cybersecurity risk and lead to negative shifts in employment. Like no other invention, it can tighten control by the few over the many, spread false information and propaganda, and thereby shape the perceptions of people, communities and enterprises.

Originality/value

This paper is a compendium on the most important concepts of AI, bringing clarity into discussions around AI risks and the ways to mitigate them. The breadth of topics is valuable to policymakers, students, practitioners, general executives and board directors alike.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 3
Type: Research Article
ISSN: 2398-5038

Book part
Publication date: 7 June 2019

John Hooker and Tae Wan Kim

Abstract

Businesses are rapidly automating workplaces with new technologies (e.g., driverless cargo trucks, artificially intelligent mortgage approvals, machine-learning-based paralegals, algorithmic managers). Such technological advancement raises a host of questions for business and society. As Thomas Donaldson recently remarked, “It's an instance of a problem that more sophisticated engineering cannot solve, and that requires a more sophisticated approach to values” (Ufberg, 2017). In this chapter, we explore the value questions as follows: What is the purpose of business in the machine age? What model for business will best serve society in coming decades: profit maximization, stakeholder theory, or another conception entirely? Is it time for a new social contract between business and society? Do firms have a natural duty to offer employment? Are existing concepts of responsibility/liability adequate for an age in which companies use autonomous robots as scapegoats? How can we protect our humanity and dignity in an algorithm-based society? Do we need to teach ethics to robots?

Article
Publication date: 9 December 2022

Na Jiang, Xiaohui Liu, Hefu Liu, Eric Tze Kuan Lim, Chee-Wee Tan and Jibao Gu

Abstract

Purpose

Artificial intelligence (AI) has gained significant momentum in recent years. Among AI-infused systems, one prominent application is context-aware systems. Although the fusion of AI and context awareness has given birth to personalized and timely AI-powered context-aware systems, several challenges still remain. Given the “black box” nature of AI, the authors propose that human–AI collaboration is essential for AI-powered context-aware services to eliminate uncertainty and evolve. To this end, this study aims to advance a research agenda for facilitators and outcomes of human–AI collaboration in AI-powered context-aware services.

Design/methodology/approach

Synthesizing the extant literature on AI and context awareness, the authors advance a theoretical framework that not only differentiates among the three phases of AI-powered context-aware services (i.e. context acquisition, context interpretation and context application) but also outlines plausible research directions for each stage.

Findings

The authors delve into the role of human–AI collaboration and derive future research questions from two directions, namely, the effects of AI-powered context-aware services design on human–AI collaboration and the impact of human–AI collaboration.

Originality/value

This study contributes to the extant literature by identifying knowledge gaps in human–AI collaboration for AI-powered context-aware services and putting forth research directions accordingly. In turn, the proposed framework yields actionable guidance for AI-powered context-aware service designers and practitioners.

Details

Industrial Management & Data Systems, vol. 123 no. 11
Type: Research Article
ISSN: 0263-5577

Content available
Book part
Publication date: 7 June 2019

Details

Business Ethics
Type: Book
ISBN: 978-1-78973-684-7

Details

The Future of Recruitment
Type: Book
ISBN: 978-1-83867-562-2

Article
Publication date: 12 February 2019

Jenifer Sunrise Winter and Elizabeth Davidson

Abstract

Purpose

This paper aims to assess the increasing challenges to governing the personal health information (PHI) essential for advancing artificial intelligence (AI) machine learning innovations in health care. Risks to privacy and justice/equity are discussed, along with potential solutions.

Design/methodology/approach

This conceptual paper highlights the scale and scope of PHI data consumed by deep learning algorithms and their opacity as novel challenges to health data governance.

Findings

This paper argues that these characteristics of machine learning will overwhelm existing data governance approaches such as privacy regulation and informed consent. Enhanced governance techniques and tools will be required to help preserve the autonomy and rights of individuals to control their PHI. Debate among all stakeholders and informed critique of how, and for whom, PHI-fueled health AI is developed and deployed are needed to channel these innovations in societally beneficial directions.

Social implications

Health data may be used to address pressing societal concerns, such as operational and system-level improvement, and innovations such as personalized medicine. This paper informs work seeking to harness these resources for societal good amidst many competing value claims and substantial risks for privacy and security.

Originality/value

This is the first paper focusing on health data governance in relation to AI/machine learning.

Details

Digital Policy, Regulation and Governance, vol. 21 no. 3
Type: Research Article
ISSN: 2398-5038
