Search results

1 – 10 of 507
Article
Publication date: 7 December 2021

Yue Wang and Sai Ho Chung

Abstract

Purpose

This study is a systematic literature review of the application of artificial intelligence (AI) in safety-critical systems. The authors aim to present the current application status according to different AI techniques and propose some research directions and insights to promote its wider application.

Design/methodology/approach

A total of 92 articles were selected for this review through a systematic literature review along with a thematic analysis.

Findings

The literature is divided into three themes: interpretable methods, explaining model behavior and safe reinforcement learning. Among AI techniques, the most widely used are Bayesian networks (BNs) and deep neural networks. In addition, given the huge potential in this field, four future research directions are also proposed.

Practical implications

This study is of vital interest to industry practitioners and regulators in safety-critical domains, as it provides a clear picture of the current status and points out that some AI techniques have great application potential. For those that are inherently appropriate for use in safety-critical systems, regulators can conduct in-depth studies to validate and encourage their use in the industry.

Originality/value

This is the first review of the application of AI in safety-critical systems in the literature. It marks the first step toward advancing AI in the safety-critical domain. The paper has potential value in promoting the use of the term “safety-critical” and in reducing literature fragmentation.

Details

Industrial Management & Data Systems, vol. 122 no. 2
Type: Research Article
ISSN: 0263-5577

Open Access
Article
Publication date: 5 July 2021

Babak Abedin

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space: it aims to identify the opposing effects of AI explainability and the tensions between them, and to propose how these tensions can be managed to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage the tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintenance of the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.

Open Access
Article
Publication date: 2 May 2022

Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam and Matti Mäntymäki

Abstract

Purpose

Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge for developers, let alone non-technical end users.

Design/methodology/approach

Through a systematic literature review, the authors investigate how AI systems and their decisions ought to be explained to end users.

Findings

The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and there is no single best solution that fits all cases.

Research limitations/implications

Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.

Originality/value

This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.

Article
Publication date: 26 January 2021

Ting-Peng Liang, Lionel Robert, Suprateek Sarker, Christy M.K. Cheung, Christian Matt, Manuel Trenz and Ofir Turel

Abstract

Purpose

This paper reports the panel discussion on the topic of artificial intelligence (AI) and robots in our lives. This discussion was held at the Digitization of the Individual (DOTI) workshop at the International Conference on Information Systems in 2019. Three scholars (in alphabetical order: Ting-Peng Liang, Lionel Robert and Suprateek Sarker) who have done AI- and robot-related research (to varying degrees) were invited to participate in the panel discussion. The panel was moderated by Manuel Trenz.

Design/methodology/approach

This paper introduces the topic, chronicles the three panelists' responses to the questions posed by the workshop chairs and summarizes them, such that readers can gain an overview of research on AI and robots in individuals' lives and insights into future research directions.

Findings

The panelists discussed four questions with regard to their research experiences on AI- and robot-related topics. They expressed their viewpoints on the underlying nature, potential and effects of AI in work and personal life domains. They also commented on the ethical dilemmas for research and practice and provided their outlook for future research in these emerging fields.

Originality/value

This paper aggregates the panelists' viewpoints, as expressed at the DOTI workshop. Crucial ethical and theoretical issues related to AI and robots in both work and personal life domains are addressed. Promising research directions in these cutting-edge fields are also proposed.

Details

Internet Research, vol. 31 no. 1
Type: Research Article
ISSN: 1066-2243

Content available
Article
Publication date: 15 March 2022

Wei Xu, Jianshan Sun and Mengxiang Li

Abstract

Details

Internet Research, vol. 32 no. 2
Type: Research Article
ISSN: 1066-2243

Content available
Article
Publication date: 10 February 2022

Junaid Qadir, Mohammad Qamar Islam and Ala Al-Fuqaha

Abstract

Purpose

Along with the various beneficial uses of artificial intelligence (AI), there are various unsavory concomitants, including the inscrutability of AI tools (and the opaqueness of their mechanisms), the fragility of AI models under adversarial settings, the vulnerability of AI models to bias throughout their pipeline, the high planetary cost of running large AI models and the emergence of an exploitative, surveillance-capitalism-based economic logic built on AI technology. This study aims to document these harms of AI technology and to examine how these technologies and their developers and users can be made more accountable.

Design/methodology/approach

Due to the nature of the problem, a holistic, multi-pronged approach is required to understand and counter these potential harms. This paper identifies the rationale for urgently focusing on human-centered AI and provides an outlook on promising directions, including technical proposals.

Findings

AI has the potential to benefit society as a whole, but vulnerable segments of society remain at heightened risk. This paper provides a general survey of the various approaches proposed in the literature to make AI technology more accountable. This paper reports that the development of ethical, accountable AI design requires the confluence and collaboration of many fields (ethical, philosophical, legal, political and technical) and that a lack of diversity is a problem plaguing the state of the art in AI.

Originality/value

This paper provides a timely synthesis of the various technosocial proposals in the literature, spanning technical areas such as interpretable and explainable AI and algorithmic auditability, as well as policy-making challenges and efforts that can operationalize ethical AI and help make AI accountable. This paper also identifies and shares promising future directions of research.

Details

Journal of Information, Communication and Ethics in Society, vol. 20 no. 2
Type: Research Article
ISSN: 1477-996X

Article
Publication date: 26 July 2021

Zekun Yang and Zhijie Lin

Abstract

Purpose

Tags help promote customer engagement on video-sharing platforms. Video tag recommender systems are artificial intelligence-enabled frameworks that strive to recommend precise tags for videos. Extant video tag recommender systems are uninterpretable, which leads to distrust of the recommendation outcome, hesitation in tag adoption and difficulty in the system debugging process. This study aims to construct an interpretable and novel video tag recommender system to assist video-sharing platform users in tagging their newly uploaded videos.

Design/methodology/approach

The proposed interpretable video tag recommender system is a multimedia deep learning framework composed of convolutional neural networks (CNNs), which receives texts and images as inputs. The interpretability of the proposed system is realized through layer-wise relevance propagation.
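The abstract does not disclose implementation details, so the following is a minimal sketch of the epsilon-rule variant of layer-wise relevance propagation (LRP) on a small ReLU network in NumPy. The layer sizes, the feature vector and the epsilon value are illustrative assumptions, not the authors' code.

```python
# Illustrative LRP-epsilon sketch (assumed setup, not the paper's model):
# redistribute a recommended tag's score back onto the input features.
import numpy as np

def lrp_dense(a, W, b, R_out, eps=1e-6):
    """Propagate relevance R_out from a dense layer's output to its input a."""
    z = W @ a + b                        # pre-activations of the layer
    s = R_out / (z + eps * np.sign(z))   # stabilized ratio (epsilon rule)
    return a * (W.T @ s)                 # relevance assigned to each input unit

rng = np.random.default_rng(0)
a0 = rng.random(8)                       # hypothetical pooled text/image features
W1, b1 = rng.normal(size=(4, 8)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

a1 = np.maximum(0.0, W1 @ a0 + b1)       # hidden ReLU layer (forward pass)
score = W2 @ a1 + b2                     # logit of the recommended tag

R1 = lrp_dense(a1, W2, b2, score)        # relevance of hidden units
R0 = lrp_dense(a0, W1, b1, R1)           # relevance of the input features
print(R0)  # large entries mark the inputs that drove the recommendation
```

Run backward through convolutional layers in the same fashion, such an attribution would single out the keywords and image patches that the Findings below describe as highlighted.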

Findings

The case study and user study demonstrate that the proposed interpretable multimedia CNN model could effectively explain its recommended tag to users by highlighting keywords and key patches that contribute the most to the recommended tag. Moreover, the proposed model achieves an improved recommendation performance by outperforming state-of-the-art models.

Practical implications

The interpretability of the proposed recommender system makes its decision process more transparent, builds users’ trust in the recommender system and prompts users to adopt the recommended tags. By labeling videos with accurate, human-understandable tags, videos gain greater exposure to their target audiences, which enhances information technology (IT) adoption, customer engagement, value co-creation and precision marketing on the video-sharing platform.

Originality/value

To the best of the authors’ knowledge, the proposed model is not only the first explainable video tag recommender system but also the first explainable multimedia tag recommender system.

Details

Internet Research, vol. 32 no. 2
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 11 June 2021

Wei Du, Qiang Yan, Wenping Zhang and Jian Ma

Abstract

Purpose

Patent trade recommendations necessitate recommendation interpretability in addition to recommendation accuracy because of patent transaction risks and the technological complexity of patents. This study designs an interpretable knowledge-aware patent recommendation model (IKPRM) for patent trading. IKPRM first creates a patent knowledge graph (PKG) for patent trade recommendations and then leverages paths in the PKG to achieve recommendation interpretability.

Design/methodology/approach

First, we construct a PKG to integrate online company behaviors and patent information using natural language processing techniques. Second, a bidirectional long short-term memory network (BiLSTM) with an attention mechanism is utilized to establish the connecting paths of a company–patent pair in the PKG. Finally, the prediction score of a company–patent pair is calculated by assigning different weights to its connecting paths. The semantic relationships in the connecting paths help explain why a candidate patent is recommended.
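The abstract specifies the components only at a high level, so the following PyTorch sketch shows one plausible reading: each connecting path, given as a sequence of entity/relation embeddings, is encoded by a BiLSTM, and attention weights over the encoded paths produce the pair score. All dimensions, layer names and the sigmoid scoring head are assumptions rather than the authors' published architecture.

```python
# Hypothetical attention-over-paths scorer for one company–patent pair
# (an assumed reading of the abstract, not the published IKPRM code).
import torch
import torch.nn as nn

class PathScorer(nn.Module):
    def __init__(self, emb_dim=32, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)      # BiLSTM path encoder
        self.attn = nn.Linear(2 * hidden, 1)            # one logit per path
        self.out = nn.Linear(2 * hidden, 1)             # pooled-path score head

    def forward(self, paths):
        # paths: (n_paths, path_len, emb_dim) embeddings of connecting paths
        h, _ = self.encoder(paths)                      # (n_paths, len, 2*hidden)
        path_vecs = h[:, -1, :]                         # summary of each path
        alpha = torch.softmax(self.attn(path_vecs), dim=0)  # path weights
        pooled = (alpha * path_vecs).sum(dim=0)         # attention-weighted pooling
        return torch.sigmoid(self.out(pooled)), alpha.squeeze(-1)

paths = torch.randn(5, 4, 32)             # 5 candidate paths of length 4
score, weights = PathScorer()(paths)
print(float(score), weights.tolist())     # top-weighted path = the explanation
```

The attention weights double as the explanation: the semantic relationships along the highest-weighted path are the ones cited when justifying a recommendation.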

Findings

Experiments on a real dataset from a patent trading platform verify that IKPRM significantly outperforms baseline methods in terms of hit ratio and normalized discounted cumulative gain (nDCG). The analysis of an online user study verified the interpretability of our recommendations.
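For reference, hit ratio and nDCG at a cutoff k are standard ranking metrics; a minimal sketch follows, assuming binary relevance and a single relevant (traded) patent per test case, with hypothetical item IDs.

```python
# Standard ranking metrics at cutoff k (binary relevance, one relevant item).
import math

def hit_ratio_at_k(ranked, relevant, k=10):
    """1 if the relevant item appears in the top-k recommendations, else 0."""
    return int(relevant in ranked[:k])

def ndcg_at_k(ranked, relevant, k=10):
    """With one relevant item, DCG = 1/log2(rank + 1) and the ideal DCG is 1."""
    for rank, item in enumerate(ranked[:k], start=1):
        if item == relevant:
            return 1.0 / math.log2(rank + 1)
    return 0.0

ranked = ["p7", "p3", "p9", "p1"]     # hypothetical recommendation list
print(hit_ratio_at_k(ranked, "p3"))   # 1: the traded patent is in the top-k
print(ndcg_at_k(ranked, "p3"))        # ~0.63: discounted for sitting at rank 2
```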

Originality/value

A meta-path-based recommendation can achieve a certain degree of explainability but suffers from low flexibility when reasoning over heterogeneous information. To bridge this gap, we propose the IKPRM to explain the full paths in the knowledge graph. IKPRM demonstrates good performance and transparency and provides a solid foundation for integrating interpretable artificial intelligence into complex tasks such as intelligent recommendations.

Details

Internet Research, vol. 32 no. 2
Type: Research Article
ISSN: 1066-2243

Article
Publication date: 9 September 2022

Enrico Bracci

Abstract

Purpose

Governments are increasingly turning to artificial intelligence (AI) algorithmic systems to increase the efficiency and effectiveness of public service delivery. While the diffusion of AI offers several desirable benefits, caution and attention should be paid to the accountability of AI algorithmic decision-making systems in the public sector. The purpose of this paper is to establish the main challenges that an AI algorithm might bring about for public service accountability. In doing so, the paper also delineates future avenues of investigation for scholars.

Design/methodology/approach

This paper builds on previous literature and anecdotal cases of AI applications in public services, drawing on streams of literature from accounting, public administration and information technology ethics.

Findings

Based on previous literature, the paper highlights the accountability gaps that AI can bring about and the possible countermeasures. The introduction of AI algorithms in public services modifies the chain of responsibility. This distributed responsibility requires accountability governance, together with technical solutions, to meet multiple accountabilities and close the accountability gaps. The paper also delineates a research agenda for accounting scholars to make accountability more “intelligent”.

Originality/value

The findings of the paper shed new light on how public service accountability in AI should be considered and addressed. The results developed in this paper will stimulate scholars to explore, including from an interdisciplinary perspective, the issues public service organizations face in making AI algorithms accountable.

Details

Accounting, Auditing & Accountability Journal, vol. 36 no. 2
Type: Research Article
ISSN: 0951-3574

Article
Publication date: 17 October 2022

Kirill Krinkin, Yulia Shichkina and Andrey Ignatyev

Abstract

Purpose

This study aims to show the inconsistency of the approach to the development of artificial intelligence as an independent tool (just one more tool that humans have developed); to describe the logic and concept of intelligence development regardless of its substrate, whether a human or a machine; and to prove that the co-evolutionary hybridization of machine and human intelligence will make it possible to solve problems so far inaccessible to humanity (global climate monitoring and control, pandemics, etc.).

Design/methodology/approach

The global trend for artificial intelligence development was set at the Dartmouth seminar in 1956. The main goal was to define characteristics and research directions for an artificial intelligence comparable to, or even outperforming, human intelligence: one able to acquire and create new knowledge in a highly uncertain, dynamic environment (the real-world environment is an example) and to apply that knowledge to solving practical problems. Nowadays artificial intelligence outperforms human abilities in playing games, speech recognition, search, art generation, extracting patterns from data and so on, but all these examples show that developers have reached a dead end. Narrow artificial intelligence has no connection to real human intelligence and in many cases cannot be used successfully because of its lack of transparency and explainability, its computational inefficiency and many other limits.

A model of strong artificial intelligence development can be discussed independently of the substrate of intelligence, in terms of the general properties inherent in that development; only then can it be clarified which cognitive functions can be transferred to an artificial medium. The process of intelligence development (the mutual development, or co-development, of human and artificial intelligence) should exhibit the property of increasing cognitive interoperability. The degree of cognitive interoperability is gauged in the same way as the strength of intelligence: it is greater if knowledge can be transferred between different domains at a higher level of abstraction (Chollet, 2018).

Findings

The key factors behind the development of hybrid intelligence are interoperability (the ability to create a common ontology in the context of the problem being solved and to plan and carry out joint activities) and co-evolution (ensuring the growth of aggregate intellectual ability without the loss of subjectness by either substrate, human or machine). The rate of co-evolution depends on the rate of knowledge interchange and the manufacturability of this process.

Research limitations/implications

Resistance to the idea of developing co-evolutionary hybrid intelligence can be expected from agents and developers who have bet on and invested in data-driven artificial intelligence and machine learning.

Practical implications

Revision of the approach to intellectualization through the development of hybrid intelligence methods will help bridge the gap between the developers of specific solutions and those who apply them. Co-evolution of machine intelligence and human intelligence will ensure seamless integration of smart new solutions into the global division of labor and social institutions.

Originality/value

The novelty of the research lies in a new view of the principles of the development of machine and human intelligence in a co-evolutionary style. Also new is the claim that the development of intelligence should take place within the framework of the integration of four domains: global challenges and tasks, concepts (general hybrid intelligence), technologies and products (specific applications that satisfy the needs of the market).
