Search results

1 – 10 of 148
Article
Publication date: 24 July 2024

Marta Sofia Marques da Encarnacao, Maria Anastasiadou and Vitor Santos

This paper aims to explore explainable artificial intelligence (XAI) in democracy, proposing an applicable framework. With artificial intelligence’s (AI) increasing use in…

Abstract

Purpose

This paper aims to explore explainable artificial intelligence (XAI) in democracy, proposing an applicable framework. With artificial intelligence’s (AI) increasing use in democracies, the demand for transparency and accountability in AI decision-making is recognized. XAI addresses AI “black boxes” by enhancing model transparency.

Design/methodology/approach

This study includes a thorough literature review of XAI. Design science research was chosen as the methodology to support design theory and problem identification around the state of the art of XAI, thereby gathering the information needed to build a framework that addresses the issues and gaps where XAI can be of major influence in the service of democracy.

Findings

The framework has four main steps for applying, in the service of democracy, the XAI techniques that may help mitigate existing challenges and risks to the democratic system. The proposed artifact includes all the steps needed to select the most suitable XAI technology, and examples are given for every step of the artifact to make the proposal clear.

Originality/value

The proposed framework was evaluated through interviews with specialists from different areas related to the topics of the study. The interviews were important for assessing the framework’s validity and originality.

Details

Transforming Government: People, Process and Policy, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1750-6166

Open Access
Article
Publication date: 2 May 2022

Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam and Matti Mäntymäki

Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a…

Abstract

Purpose

Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge for developers, let alone non-technical end users.

Design/methodology/approach

Through a systematic literature review, the authors investigate how AI systems and their decisions ought to be explained to end users.

Findings

The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and there is no single best solution that fits all cases.

Research limitations/implications

Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.

Originality/value

This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.

Article
Publication date: 24 April 2020

Jenny Bunn

This paper aims to introduce the topic of explainable artificial intelligence (XAI) and reports on the outcomes of an interdisciplinary workshop exploring it. It reflects on XAI…

Abstract

Purpose

This paper aims to introduce the topic of explainable artificial intelligence (XAI) and reports on the outcomes of an interdisciplinary workshop exploring it. It reflects on XAI through the frame and concerns of the recordkeeping profession.

Design/methodology/approach

This paper takes a reflective approach. The origins of XAI are outlined as a way of exploring how it can be viewed and how it is currently taking shape. The workshop and its outcomes are briefly described and reflections on the process of investigating and taking part in conversations about XAI are offered.

Findings

The article reinforces the value of undertaking interdisciplinary and exploratory conversations with others. It offers new perspectives on XAI and suggests ways in which recordkeeping can productively engage with it, as both a disruptive force on its thinking and a set of newly emerging record forms to be created and managed.

Originality/value

The value of this paper lies in the introduction it provides, which allows recordkeepers to gain a sense of what XAI is and of the ways in which they are already engaging with it and can continue to do so.

Details

Records Management Journal, vol. 30 no. 2
Type: Research Article
ISSN: 0956-5698

Article
Publication date: 27 August 2024

Paritosh Pramanik, Rabin K. Jana and Indranil Ghosh

New business density (NBD) is the ratio of the number of newly registered limited liability corporations to the working-age population per year. NBD is critical to assessing a country's…

Abstract

Purpose

New business density (NBD) is the ratio of the number of newly registered limited liability corporations to the working-age population per year. NBD is critical to assessing a country's business environment. The present work endeavors to discover and gauge the contribution of 28 potential socio-economic enablers of NBD for 2006–2021, separately for developed and developing economies, and to make a comparative assessment between those two regions.

Design/methodology/approach

Using World Bank data, the study first performs exploratory data analysis (EDA). Then, it deploys a deep learning (DL)-based regression framework by utilizing a deep neural network (DNN) to perform predictive modeling of NBD for developed and developing nations. Subsequently, we use two explainable artificial intelligence (XAI) techniques, Shapley values and a partial dependence plot, to unveil the influence patterns of chosen enablers. Finally, the results from the DL method are validated with the explainable boosting machine (EBM) method.
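
A minimal sketch of this kind of pipeline, for orientation only: a neural-network regressor standing in for the DNN described above, Shapley values computed with a model-agnostic kernel explainer, and a partial dependence plot. The column names and synthetic data are illustrative assumptions, not the authors' World Bank dataset, architecture or results, and the EBM validation step is omitted.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.inspection import PartialDependenceDisplay

# Hypothetical enablers of new business density (NBD); the real work uses World Bank indicators.
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["ease_of_doing_business", "rnd_expenditure_pct_gdp",
                          "manufacturing_pct_gdp", "services_pct_gdp"])
y = (0.6 * X["ease_of_doing_business"] + 0.3 * X["rnd_expenditure_pct_gdp"]
     + rng.normal(scale=0.1, size=500))          # synthetic NBD target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in for the deep neural network regressor.
dnn = MLPRegressor(hidden_layer_sizes=(64, 64, 32), max_iter=2000, random_state=0)
dnn.fit(X_train, y_train)

# Shapley values via a model-agnostic kernel explainer on a small background sample.
explainer = shap.KernelExplainer(dnn.predict, shap.sample(X_train, 50))
shap_values = explainer.shap_values(X_test.iloc[:20])
shap.summary_plot(shap_values, X_test.iloc[:20])

# Partial dependence plot for one candidate enabler.
PartialDependenceDisplay.from_estimator(dnn, X_test, ["ease_of_doing_business"])
```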

Findings

This research analyzes the role of 28 potential socio-economic enablers of NBD in developed and developing countries. It finds that the NBD in developed countries is predominantly governed by the contribution of the manufacturing and service sectors to GDP. In contrast, the propensity for research and development and the ease of doing business control the NBD of developing nations. The findings also indicate four enablers common to developed and developing countries: business disclosure, ease of doing business, employment in industry and start-up procedures.

Practical implications

NBD is directly linked to any nation's economic affairs. Assessing the NBD enablers is therefore of paramount significance for channeling capital toward new business formation. It will guide investment firms and entrepreneurs in discovering the factors that significantly impact NBD dynamics across different regions of the globe. Entrepreneurs facing inevitable market uncertainties while developing a new idea into a successful business can benefit considerably from awareness of the crucial NBD enablers, which can serve as a basis for business risk assessment.

Originality/value

The DL-based regression framework simultaneously supports successful predictive modeling and model explanation, yielding practical insights about NBD at the global level. It overcomes the limitation in the present literature of assuming that NBD is country- and industry-specific and that its factors cannot be generalized globally. With DL-based regression and XAI methods, we demonstrate the research hypothesis that NBD can be effectively assessed and compared with the help of global macro-level indicators. The robustness of the findings is supported by using socio-economic data from the World Bank's renowned data repository and by validating the DL model against the EBM method.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Book part
Publication date: 13 March 2023

Xiaohang (Flora) Feng, Shunyuan Zhang and Kannan Srinivasan

The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured…

Abstract

The growth of social media and the sharing economy is generating abundant unstructured image and video data. Computer vision techniques can derive rich insights from unstructured data and can inform recommendations for increasing profits and consumer utility – if only the model outputs are interpretable enough to earn the trust of consumers and buy-in from companies. To build a foundation for understanding the importance of model interpretation in image analytics, the first section of this article reviews the existing work along three dimensions: the data type (image data vs. video data), model structure (feature-level vs. pixel-level), and primary application (to increase company profits vs. to maximize consumer utility). The second section discusses how the “black box” of pixel-level models leads to legal and ethical problems, but interpretability can be improved with eXplainable Artificial Intelligence (XAI) methods. We classify and review XAI methods based on transparency, the scope of interpretability (global vs. local), and model specificity (model-specific vs. model-agnostic); in marketing research, transparent, local, and model-agnostic methods are most common. The third section proposes three promising future research directions related to model interpretability: the economic value of augmented reality in 3D product tracking and visualization, field experiments to compare human judgments with the outputs of machine vision systems, and XAI methods to test strategies for mitigating algorithmic bias.

Article
Publication date: 28 June 2024

Ka Shing Cheung

This real estate insight scrutinises the emerging role of Artificial Intelligence (AI), particularly Large Language Models (LLMs) like ChatGPT, in property valuation, advocating…

Abstract

Purpose

This real estate insight scrutinises the emerging role of Artificial Intelligence (AI), particularly Large Language Models (LLMs) like ChatGPT, in property valuation, advocating for establishing standardised reporting guidelines in AI-enabled property valuation.

Design/methodology/approach

Through a conceptual exploration, this piece examines the shift towards AI integration in property valuation and the critical role of Explainable Artificial Intelligence (XAI) in this transition. It discusses the CANGARU framework for developing inclusive and universally applicable reporting guidelines and the importance of human oversight in validating AI-enabled valuations.

Findings

Integrating LLMs into property valuation promises efficiency gains and task automation but also introduces risks related to accuracy, bias and ethical dilemmas. Standardised reporting guidelines are identified as essential for harnessing AI’s benefits responsibly.

Practical implications

The article underscores the need for the real estate industry to adopt transparent reporting practices, with valuers acting as expert interpreters of AI outputs. Emphasising error reporting in XAI not only aids in understanding AI-generated insights but also builds trust among stakeholders, ensuring AI’s ethical and effective application in property valuation.

Originality/value

This commentary contributes to the discourse on AI’s role in property valuation by focusing on the need for standard reporting guidelines that align with professional standards and legal frameworks. It advocates for a balanced approach to AI integration, where technological advancements complement traditional valuation expertise, ensuring accurate, fair, and transparent property valuations.

Details

Journal of Property Investment & Finance, vol. 42 no. 4
Type: Research Article
ISSN: 1463-578X

Article
Publication date: 9 August 2022

Vinay Singh, Iuliia Konovalova and Arpan Kumar Kar

Explainable artificial intelligence (XAI) is important in several industrial applications. The study aims to provide a comparison of two important methods used for explainable…

Abstract

Purpose

Explainable artificial intelligence (XAI) is important in several industrial applications. The study aims to compare two important methods used to explain AI algorithms.

Design/methodology/approach

In this study, multiple criteria are used to compare the explainable Ranked Area Integrals (xRAI) and integrated gradients (IG) methods for the explainability of AI algorithms, based on a multimethod, phase-wise analysis research design.
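
For readers unfamiliar with one of the two methods, the sketch below implements integrated gradients in its standard form: gradients averaged along a straight path from a baseline to the input, scaled by the input-baseline difference. The toy model, baseline and feature names are illustrative assumptions; the xRAI method and the models actually compared in the study are not reproduced here.

```python
import numpy as np

def numerical_gradient(f, x, eps=1e-5):
    """Central-difference gradient of a scalar function f at point x."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def integrated_gradients(f, x, baseline=None, steps=64):
    """IG_i(x) ~ (x_i - baseline_i) * mean over alpha of dF/dx_i along the path."""
    baseline = np.zeros_like(x) if baseline is None else baseline
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([numerical_gradient(f, baseline + a * (x - baseline))
                      for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

# Toy black-box model: a smooth nonlinear score over three features.
def model(x):
    return float(np.tanh(2.0 * x[0]) + 0.5 * x[1] ** 2 - 0.1 * x[2])

x = np.array([0.8, -1.2, 0.3])
attributions = integrated_gradients(model, x)
print(dict(zip(["f1", "f2", "f3"], attributions.round(3))))
# Completeness check: attributions should roughly sum to model(x) - model(baseline).
print(attributions.sum(), model(x) - model(np.zeros_like(x)))
```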

Findings

The theoretical part compares the frameworks of the two methods. From a practical point of view, the methods are compared across five dimensions: functional, operational, usability, safety and validation.

Research limitations/implications

The comparison combines criteria from theoretical and practical points of view, which demonstrates the trade-offs users face when choosing between the methods.

Originality/value

The results show that the xRAI method performs better from a theoretical point of view. However, the IG method performs well in terms of both model accuracy and prediction quality.

Details

Benchmarking: An International Journal, vol. 30 no. 9
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 5 December 2023

Valeriia Baklanova, Aleksei Kurkin and Tamara Teplova

The primary objective of this research is to provide a precise interpretation of the constructed machine learning model and produce definitive summaries that can evaluate the…

Abstract

Purpose

The primary objective of this research is to provide a precise interpretation of the constructed machine learning model and to produce definitive summaries that evaluate the influence of investor sentiment on the overall sales of non-fungible token (NFT) assets. To achieve this objective, an NFT hype index was constructed and several XAI approaches were employed to interpret the black-box models and assess the magnitude and direction of the impact of the features used.

Design/methodology/approach

The paper constructs a sentiment index, termed the NFT hype index, which aims to measure the influence of market actors within the NFT industry. The index was created by analyzing written content posted on the social media platform Twitter by 62 high-profile individuals and opinion leaders. The collected posts were classified by tonality with the help of the natural language processing model VADER. Machine learning methods and XAI approaches (feature importance, permutation importance and SHAP) were then applied to explain the obtained results.
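
A minimal sketch of such a pipeline, under stated assumptions: posts are scored with NLTK's VADER, aggregated into a hypothetical daily hype feature, and a gradient boosting regressor of NFT sales is then inspected with permutation importance and SHAP. The example texts, companion features and sales target are placeholders, not the authors' data or index construction.

```python
import numpy as np
import pandas as pd
import shap
from nltk.sentiment.vader import SentimentIntensityAnalyzer  # needs nltk.download("vader_lexicon")
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance

# 1. Tonality of influencer posts via VADER (placeholder texts).
analyzer = SentimentIntensityAnalyzer()
posts = pd.DataFrame({
    "day": ["2022-01-01", "2022-01-01", "2022-01-02"],
    "text": ["NFTs are the future!", "Another rug pull, be careful.", "Huge NFT drop today."],
})
posts["compound"] = posts["text"].map(lambda t: analyzer.polarity_scores(t)["compound"])
hype_index = posts.groupby("day")["compound"].mean()  # would be joined to sales data by date

# 2. Explainable model of NFT sales (synthetic features and target for illustration).
rng = np.random.default_rng(0)
X = pd.DataFrame({"nft_hype": rng.uniform(-1, 1, 300),
                  "eth_price": rng.normal(2500, 300, 300),
                  "google_trends": rng.uniform(0, 100, 300)})
y = 1000 + 800 * X["nft_hype"] + 0.2 * X["eth_price"] + rng.normal(0, 50, 300)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# 3. XAI: permutation importance and SHAP values for the fitted model.
perm = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(dict(zip(X.columns, perm.importances_mean.round(3))))
shap_values = shap.TreeExplainer(model).shap_values(X)
shap.summary_plot(shap_values, X)
```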

Findings

The constructed index was subjected to rigorous analysis using a gradient boosting regressor and explainable AI techniques, which confirmed its significant explanatory power. Notably, the NFT hype index exhibited higher predictive accuracy than the well-known sentiment indices.

Practical implications

The NFT hype index, constructed from Twitter textual data, functions as an innovative, sentiment-based indicator for investment decision-making in the NFT market. It offers investors unique insights into the market sentiment that can be used alongside conventional financial analysis techniques to enhance risk management, portfolio optimization and overall investment outcomes within the rapidly evolving NFT ecosystem. Thus, the index plays a crucial role in facilitating well-informed, data-driven investment decisions and ensuring a competitive edge in the digital assets market.

Originality/value

The authors developed a novel index of investor interest in NFT assets (the NFT hype index) based on text messages posted by market influencers and compared its explanatory power with that of conventional sentiment indices. Applying explainable AI showed that sentiment indices can serve as significant predictors of NFT sales and that the NFT hype index performs best among all the sentiment indices considered.

Details

China Finance Review International, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2044-1398

Article
Publication date: 5 July 2022

Jiho Kim, Hanjun Lee and Hongchul Lee

This paper aims to find determinants that can predict the helpfulness of online customer reviews (OCRs) with a novel approach.

Abstract

Purpose

This paper aims to find determinants that can predict the helpfulness of online customer reviews (OCRs) with a novel approach.

Design/methodology/approach

The approach consists of feature engineering using various text mining techniques, including BERT, and machine learning models that classify OCRs according to their potential helpfulness. Explainable artificial intelligence methodologies are then used to identify the determinants of helpfulness.
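
As a rough illustration of this kind of pipeline (not the authors' implementation), the sketch below encodes review text with a BERT-family sentence encoder, appends a simple reviewer-reputation feature, fits a boosting classifier and inspects it with SHAP. The encoder name, extra feature and labels are assumptions made for the example.

```python
import numpy as np
import shap
from sentence_transformers import SentenceTransformer
from sklearn.ensemble import GradientBoostingClassifier

# Toy data: a real study would use thousands of labeled reviews.
reviews = ["Battery life is excellent and setup took five minutes.",
           "Bad.",
           "The camera struggles in low light; detailed comparison inside."]
helpful = np.array([1, 0, 1])                       # placeholder helpfulness labels

# 1. Feature engineering: BERT-style sentence embeddings plus simple reviewer metadata.
encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed lightweight encoder
text_emb = encoder.encode(reviews)                  # shape (n_reviews, 384)
reviewer_reputation = np.array([[120], [3], [57]])  # e.g. reviewer's prior helpful votes
X = np.hstack([text_emb, reviewer_reputation])

# 2. Boosting-based classifier of potential helpfulness.
clf = GradientBoostingClassifier(random_state=0).fit(X, helpful)

# 3. XAI: SHAP values indicate which engineered features drive the prediction.
shap_values = shap.TreeExplainer(clf).shap_values(X)
print(np.abs(shap_values).mean(axis=0)[-1])         # mean attribution of the reputation feature
```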

Findings

The key result is that the boosting-based ensemble model showed the highest prediction performance. In addition, the sentiment features of OCRs and the reputation of reviewers were confirmed to be important determinants that augment review helpfulness.

Research limitations/implications

Each online community has different purposes, fields and characteristics. Thus, the results of this study cannot be generalized. However, it is expected that this novel approach can be integrated with any platform where online reviews are used.

Originality/value

This paper incorporates feature engineering methodologies for online reviews, including the latest methodology. It also includes novel techniques to contribute to ongoing research on mining the determinants of review helpfulness.

Details

Data Technologies and Applications, vol. 57 no. 1
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 25 October 2022

Heitor Hoffman Nakashima, Daielly Mantovani and Celso Machado Junior

This paper aims to investigate whether professional data analysts’ trust in black-box systems is increased by explainability artifacts.

Abstract

Purpose

This paper aims to investigate whether professional data analysts’ trust in black-box systems is increased by explainability artifacts.

Design/methodology/approach

The study was developed in two phases. First, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were generated using the local interpretable model-agnostic explanations (LIME) algorithm. In the second phase, the model and the explainability outcomes were presented to a sample of data analysts from the financial market, and their trust in the models was measured. Finally, interviews were conducted to understand their perceptions of black-box models.
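
For concreteness, a minimal sketch of pairing a neural-network black box with LIME explanations is given below. The synthetic credit-style features, labels and class names are illustrative assumptions and do not reflect the study's actual model or financial-market data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic credit-style data standing in for the study's financial-market setting.
rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "late_payments"]
X = rng.normal(size=(400, 3))
y = (X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.3, size=400) > 0).astype(int)

# Black-box model: a small artificial neural network.
black_box = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0)
black_box.fit(X, y)

# Local explainability artifact: a LIME explanation for one individual prediction.
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["reject", "approve"], mode="classification")
explanation = explainer.explain_instance(X[0], black_box.predict_proba, num_features=3)
print(explanation.as_list())  # feature contributions for this single prediction
```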

Findings

The data suggest that users’ trust in black-box systems is high and that explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influence the users’ perceptions, with trust being reduced in situations that represent a threat (e.g. autonomous cars). The interviewees also mentioned concerns about the models’ ethics.

Research limitations/implications

The study considered a small sample of professional analysts from the financial market, which traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions.

Originality/value

Other studies regarding trust in black-box models and explainability artifacts have focused on ordinary users, with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective and shows that, for them, trust is related to the quality of data and the nature of the problem being solved, as well as the practical consequences. Explanation of the algorithm mechanics itself is not significantly relevant.

Details

Revista de Gestão, vol. 31 no. 2
Type: Research Article
ISSN: 1809-2276
