Search results

1 – 10 of 273
Open Access
Article
Publication date: 5 July 2021

Babak Abedin


Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it for its counterproductive effects. This study addresses this polarized space, aims to identify the opposing effects of AI explainability and the tensions between them, and proposes how to manage these tensions to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and the maintenance of the "social goodness" of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability; instead, the co-existence of enabling and constraining effects must be managed.

Article
Publication date: 7 November 2023

Jun Yu, Zhengcong Ma and Lin Zhu


Abstract

Purpose

This study aims to investigate the configurational effects of five rules – artificial intelligence (AI)-based hiring decision transparency, consistency, voice, explainability and human involvement – on applicants' procedural justice perception (APJP) and applicants' interactional justice perception (AIJP). In addition, this study examines whether the identified configurations could further enhance applicants' organisational commitment (OC).

Design/methodology/approach

Drawing on the justice model of applicants' reactions, the authors conducted a longitudinal survey of 254 newly recruited employees from 36 Chinese companies that utilise AI in their hiring. The authors employed fuzzy-set qualitative comparative analysis (fsQCA) to determine which configurations could improve APJP and AIJP, and the authors used propensity score matching (PSM) to analyse the effects of these configurations on OC.
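
To make the analysis step concrete, here is a rough Python sketch of propensity score matching (PSM), assuming hypothetical covariates and outcomes; the study's actual data, configurations and matching settings are not public, and the fsQCA step (typically run in dedicated software) is not sketched.

```python
# A rough sketch of the propensity score matching (PSM) step, assuming
# hypothetical covariates and outcomes (the study's data, configurations and
# matching settings are not public; the fsQCA step is not sketched here).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 254                                    # mirrors the study's sample size
X = rng.normal(size=(n, 4))                # hypothetical applicant covariates
treated = rng.integers(0, 2, size=n)       # 1 = exposed to a configuration
oc = X @ np.array([0.4, -0.2, 0.1, 0.3]) + 0.5 * treated + rng.normal(size=n)

# Step 1: estimate propensity scores P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: match each treated unit to its nearest-propensity control unit.
t_idx = np.where(treated == 1)[0]
c_idx = np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treatment effect on the treated for the outcome (OC here).
att = (oc[t_idx] - oc[matches]).mean()
print(f"Estimated effect on organisational commitment: {att:.3f}")
```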

Findings

The fsQCA generates three patterns involving five configurations that could improve APJP and AIJP. For pattern 1, when AI-based recruitment with high interpersonal rule (AI human involvement) aims for applicants' justice perception (AJP) through the combination of high informational rule (AI explainability) and high procedural rule (AI voice), there must be high levels of AI consistency and AI voice to complement AI explainability, and only this pattern of configurations can further enhance OC. In pattern 2, for the combination of high informational rule (AI explainability) and low procedural rule (absent AI voice), AI recruitment with high interpersonal rule (AI human involvement) should focus on AI transparency and AI explainability rather than the implementation of AI voice. In pattern 3, a mere combination of procedural rules could sufficiently improve AIJP.

Originality/value

This study, which involved real applicants, is one of the few empirical studies to explore the mechanisms behind the impact of AI hiring decisions on AJP and OC, and the findings may inform researchers and managers on how to best utilise AI to make hiring decisions.

Details

Information Technology & People, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-3845


Open Access
Article
Publication date: 25 October 2022

Heitor Hoffman Nakashima, Daielly Mantovani and Celso Machado Junior


Abstract

Purpose

This paper aims to investigate whether explainability artifacts increase professional data analysts' trust in black-box systems.

Design/methodology/approach

The study was developed in two phases. In the first phase, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were generated using the local interpretable model-agnostic explanations (LIME) algorithm. In the second phase, the model and explainability outcomes were presented to a sample of data analysts from the financial market, and their trust in the models was measured. Finally, interviews were conducted to understand their perceptions of black-box models.
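
A minimal sketch of this first phase, assuming synthetic data and the open-source lime package; the study's actual model architecture and financial dataset are not public, so every setting below is illustrative only.

```python
# A minimal sketch of the study's first phase, assuming synthetic data and the
# open-source "lime" package: train a neural-network black box, then generate
# a local LIME explanation for one prediction. Settings are illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

# Hypothetical stand-in for the study's (non-public) financial dataset.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                      random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=[f"f{i}" for i in range(8)],
    class_names=["negative", "positive"],
    mode="classification",
)

# LIME fits a sparse linear surrogate around perturbed copies of one instance
# and reports local feature weights for that single prediction.
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(exp.as_list())
```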

Findings

The data suggest that users' trust in black-box systems is high and that explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influence users' perceptions, with trust being reduced in situations that represent a threat (e.g. autonomous cars). The interviewees also raised concerns about the models' ethics.

Research limitations/implications

The study considered a small sample of professional analysts from the financial market, which traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions.

Originality/value

Other studies regarding trust in black-box models and explainability artifacts have focused on ordinary users, with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective and shows that, for them, trust is related to the quality of data and the nature of the problem being solved, as well as the practical consequences. Explanation of the algorithm mechanics itself is not significantly relevant.

Details

Revista de Gestão, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1809-2276


Article
Publication date: 17 October 2023

Helmi Issa, Rachid Jabbouri and Rock-Antoine Mehanna


Abstract

Purpose

The exponential growth of artificial intelligence (AI) technologies, coupled with advanced algorithms and increased computational capacity, has facilitated their widespread adoption in various industries. Among these, the financial technology (FinTech) sector has been significantly impacted by AI-based decision-making systems. Nevertheless, a knowledge gap remains regarding the intricate mechanisms behind the micro-decision-making processes employed by AI algorithms. This paper aims to address this gap.

Design/methodology/approach

This research used a sequential mixed-methods approach, drawing on data from 18 interviews conducted at a single FinTech firm in France and 148 e-surveys administered to employees of different FinTechs across Europe.

Findings

Three main themes (ambidexterity, data sovereignty and model explainability) emerge as underpinnings for effective AI micro decision-making in FinTechs.

Practical implications

This research aims to minimize ambiguity by proposing a model that functions as an "infrastructural" layer, illuminating more comprehensively the micro-decisions made by AI.

Originality/value

This research pioneers the empirical exploration of the essential factors that underpin effective AI micro-decisions in FinTechs.

Details

Management Decision, vol. 61 no. 11
Type: Research Article
ISSN: 0025-1747


Expert briefing
Publication date: 6 August 2021

This presents a safety concern in some areas, and creates difficulties in guaranteeing that AI decisions are unbiased, for example in their treatment of different demographic…

Details

DOI: 10.1108/OXAN-DB263309

ISSN: 2633-304X

Article
Publication date: 24 June 2021

Quang-Vinh Dang


Abstract

Purpose

This study aims to make the state-of-the-art machine learning models used for intrusion detection understandable to humans, and to study the relationship between the explainability and the performance of the models.

Design/methodology/approach

The authors study a recent intrusion dataset collected from real-world scenarios and use state-of-the-art machine learning algorithms to detect intrusions. They apply several novel techniques to explain the models and then manually evaluate the explanations. Finally, they compare the performance of the model before and after explainability-based feature selection.
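
The abstract does not include the authors' code; the sketch below only illustrates the general workflow of explanation-driven feature selection, using impurity-based feature importances as a simple stand-in for the explanation techniques the authors apply, on synthetic data.

```python
# A hedged sketch of the workflow described above: train a model, rank
# features with an explanation signal (impurity-based importances here, a
# simple stand-in for the authors' explanation techniques), select the top
# features, retrain and compare performance. Synthetic data only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

full = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
print("All 40 features:", accuracy_score(yte, full.predict(Xte)))

# Keep only the features the explanation ranks as most influential.
top = full.feature_importances_.argsort()[::-1][:10]
slim = RandomForestClassifier(random_state=0).fit(Xtr[:, top], ytr)
print("Top 10 features:", accuracy_score(yte, slim.predict(Xte[:, top])))
```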

Findings

The authors confirm their hypothesis and find that enforcing explainability makes the model more robust and less computationally demanding while achieving better predictive performance.

Originality/value

The authors draw their conclusions from their own research and experimental work.

Details

International Journal of Web Information Systems, vol. 17 no. 5
Type: Research Article
ISSN: 1744-0084



Details

Ethical AI Surveillance in the Workplace
Type: Book
ISBN: 978-1-83753-772-3

Article
Publication date: 9 August 2022

Vinay Singh, Iuliia Konovalova and Arpan Kumar Kar


Abstract

Purpose

Explainable artificial intelligence (XAI) is important in several industrial applications. The study aims to compare two important methods used to explain AI algorithms.

Design/methodology/approach

In this study, multiple criteria were used to compare the explainable Ranked Area Integrals (xRAI) and integrated gradients (IG) methods for the explainability of AI algorithms, based on a multimethod, phase-wise analysis research design.
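
To make the comparison concrete, here is a toy sketch of what integrated gradients computes, using the standard path-integral definition approximated with a Riemann sum; the model, weights and inputs are made up, xRAI is not reproduced, and nothing here comes from the paper itself.

```python
# A toy sketch of what integrated gradients (IG) computes, using the standard
# definition IG_i(x) = (x_i - b_i) * integral_0^1 dF/dx_i(b + a*(x - b)) da,
# approximated with a Riemann sum. The model, weights and inputs are made up;
# xRAI is not reproduced here.
import numpy as np

w = np.array([0.8, -0.5, 0.3])             # weights of a toy logistic model

def model(x):
    return 1.0 / (1.0 + np.exp(-x @ w))    # F(x), the prediction to explain

def grad(x):
    p = model(x)
    return p * (1.0 - p) * w               # analytic dF/dx for this model

def integrated_gradients(x, baseline, steps=50):
    alphas = np.linspace(0.0, 1.0, steps)
    path_grads = np.array([grad(baseline + a * (x - baseline)) for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

x = np.array([1.0, 2.0, -1.0])
baseline = np.zeros(3)
ig = integrated_gradients(x, baseline)

# Sanity check: attributions should roughly sum to F(x) - F(baseline).
print("IG attributions:", ig)
print("sum:", ig.sum(), "vs F(x) - F(baseline):", model(x) - model(baseline))
```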

Findings

The theoretical part compares the frameworks of the two methods, while from a practical point of view the methods are compared across five dimensions: functional, operational, usability, safety and validation.

Research limitations/implications

Combining criteria from theoretical and practical points of view, the comparison demonstrates the trade-offs users face when choosing between the methods.

Originality/value

The results show that the xRAI method performs better from a theoretical point of view, whereas the IG method shows good results in both model accuracy and prediction quality.

Details

Benchmarking: An International Journal, vol. 30 no. 9
Type: Research Article
ISSN: 1463-5771


Article
Publication date: 22 January 2024

Dinesh Kumar and Nidhi Suthar


Abstract

Purpose

Artificial intelligence (AI) has sparked interest in various areas, including marketing. However, this enthusiasm is being tempered by growing concerns about the moral and legal implications of using AI in marketing. Although previous research has revealed various ethical and legal issues, such as algorithmic discrimination and data privacy, there are no definitive answers. This paper aims to fill this gap by investigating AI's ethical and legal concerns in marketing and suggesting feasible solutions.

Design/methodology/approach

The paper synthesises information from academic articles, industry reports, case studies and legal documents through a thematic literature review. A qualitative analysis approach categorises and interprets ethical and legal challenges and proposes potential solutions.

Findings

The findings raise ethical and legal concerns about the use of AI in marketing. The paper discusses ethical concerns related to discrimination, bias, manipulation, job displacement, absence of social interaction, cybersecurity, unintended consequences, environmental impact and privacy, as well as legal issues such as consumer security, responsibility, liability, brand protection, competition law, agreements, data protection, consumer protection and intellectual property rights, together with their potential solutions.

Research limitations/implications

Notwithstanding the insights gathered from this investigation of the ethical and legal consequences of AI in marketing, it is important to recognise the limits of this research. First, the study is confined to a review of the most important ethical and legal issues pertaining to AI in marketing; additional repercussions, such as those associated with intellectual property, contracts and licencing, should be investigated more deeply in future studies. Although this study offers various answers and best practices for tackling the stated ethical and legal concerns, the viability and efficacy of these solutions may differ depending on the context and industry, so more research and case studies are required to evaluate their applicability in other circumstances. The research is based mostly on a literature review and may not represent the experiences or opinions of all stakeholders engaged in AI-powered marketing; further study might involve interviews or surveys with marketing professionals, customers and other key stakeholders to develop a full understanding of the practical difficulties and solutions. Finally, because of the rapid pace of technical progress, the ethical and regulatory ramifications of AI in marketing are continually evolving, so this work should be a springboard for further research and continuing conversations on the subject.

Practical implications

This study’s findings have several practical implications for marketing professionals.

Emphasising openness and explainability: marketing professionals should prioritise transparency in their use of AI, ensuring that customers are fully informed about data collection and utilisation for targeted advertising. By promoting openness and explainability, marketers can foster customer trust and avoid the negative consequences of a lack of transparency.

Establishing ethical guidelines: marketing professionals need to develop ethical rules for the creation and implementation of AI-powered marketing strategies. Adhering to ethical principles ensures compliance with legal norms and aligns with the organisation’s values and ideals.

Investing in bias detection tools and privacy-enhancing technology: to mitigate risks associated with AI in marketing, marketers should allocate resources to developing and implementing bias detection tools and privacy-enhancing technology. These tools can identify and address biases in AI algorithms, safeguard consumer privacy and extract valuable insights from consumer data.

Social implications

This study’s social implications emphasise the need for a comprehensive approach to address the ethical and legal challenges of AI in marketing. This includes adopting a responsible innovation framework, promoting ethical leadership, using ethical decision-making frameworks and conducting multidisciplinary research. By incorporating these approaches, marketers can navigate the complexities of AI in marketing responsibly, foster an ethical organisational culture, make informed ethical decisions and develop effective solutions. Such practices promote public trust, ensure equitable distribution of benefits and risk, and mitigate potential negative social consequences associated with AI in marketing.

Originality/value

To the best of the authors’ knowledge, this paper is among the first to explore potential solutions comprehensively. This paper provides a nuanced understanding of the challenges by using a multidisciplinary framework and synthesising various sources. It contributes valuable insights for academia and industry.

Details

Journal of Information, Communication and Ethics in Society, vol. 22 no. 1
Type: Research Article
ISSN: 1477-996X


Article
Publication date: 14 April 2021

Ranjit Tiwari


Abstract

Purpose

This study seeks to understand the nexus between intellectual capital and the profitability of healthcare firms in India, incorporating interaction effects.

Design/methodology/approach

Relevant data were extracted from the Centre for Monitoring Indian Economy (CMIE)'s Prowess database for the ten-year period 2009–2018 for a sample of 84 firms from the healthcare industry. The study uses the value added intellectual coefficient (VAIC) and the modified value added intellectual coefficient (MVAIC) as measures of intellectual capital. Further, it employs panel regression techniques to explore the relationship between intellectual capital and profitability.
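
As a worked illustration of the measure, the sketch below computes VAIC and MVAIC under the common operationalisation of Pulic's model (an assumption; the paper may define the components differently), with made-up figures rather than CMIE Prowess data.

```python
# A worked illustration of the VAIC/MVAIC measures under the common
# operationalisation of Pulic's model (an assumption; the paper may define the
# components differently). Figures are made up, not CMIE Prowess data.

def vaic(va, hc, ce, rc=None):
    """va: value added; hc: human capital (employee costs);
    ce: capital employed; rc: relational capital (for MVAIC)."""
    hce = va / hc                # human capital efficiency
    sce = (va - hc) / va         # structural capital efficiency, SC = VA - HC
    cee = va / ce                # capital employed efficiency
    base = hce + sce + cee       # VAIC = HCE + SCE + CEE
    if rc is None:
        return base
    return base + rc / va        # MVAIC adds relational capital efficiency

# Illustrative firm with VA = 100, HC = 55, CE = 400, RC = 8 (currency units).
print(f"VAIC:  {vaic(100, 55, 400):.4f}")   # 1.8182 + 0.45 + 0.25 = 2.5182
print(f"MVAIC: {vaic(100, 55, 400, rc=8):.4f}")
```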

Findings

The empirical findings reveal that the intellectual capital coefficient of healthcare firms in India averages 2.7757, and that the intellectual capital coefficients of a majority of healthcare firms fall below the industry average. The regression analysis shows that the intellectual capital coefficient is positively related to the profitability of healthcare firms in India. As far as its components are concerned, the capital employed coefficient (CEC) is the only component driving the profitability of healthcare firms in India. Introducing interaction terms improves model explainability and moderates the impact of the predictor variable on the response variable. Furthermore, the intellectual capital coefficient of the healthcare industry is immune to changes in political regimes in India.

Practical implications

The findings reveal that intellectual capital is an important driver of corporate performance; healthcare firms in developing economies like India therefore need to enhance their intellectual potential. Corporations and governments in developing economies should stimulate investments in developing intellectual capital for enhanced corporate performance and economic growth. This study might thus serve as a reference for policymakers when drafting future policy for the development of intellectual capital in general and in the healthcare sector specifically.

Originality/value

This is among the first few studies to explore such an empirical relationship for healthcare firms in India and among the few studies of this kind across the globe. It also makes novel contributions in considering interaction variables and seeking the consistency of results across different political regimes. However, the study examines one nation and one industry; thus, the generalisation of findings requires caution.

Details

Journal of Intellectual Capital, vol. 23 no. 3
Type: Research Article
ISSN: 1469-1930

