Search results

1–10 of over 94,000
Article
Publication date: 1 August 1998

Brendan McSweeney and Sheila Duncan

Abstract

Considers why different explanations of the same event can be produced and discusses the characteristics of a good explanation. It identifies and analyses a wide range of different published explanations of a seminal public administration policy‐change. It separates those accounts of that event into families of explanations and describes their common underlying presuppositions. These shared presuppositions are used to construct four models of public policy‐making: sovereign policy‐makers; policy‐makers as relays; policy‐making as the personal; and the discursive construction of policy. Each explanation (and its conceptual model) is challenged by historically grounded counter‐evidence. Based on this analysis, the paper suggests ways in which analysis of public management changes might be more fruitfully orientated.

Details

Accounting, Auditing & Accountability Journal, vol. 11 no. 3
Type: Research Article
ISSN: 0951-3574

Article
Publication date: 14 August 2017

Marko Bohanec, Marko Robnik-Šikonja and Mirjana Kljajić Borštnar

Abstract

Purpose

The purpose of this paper is to address the problem of weak acceptance of machine learning (ML) models in business. The proposed framework of top-performing ML models coupled with general explanation methods provides additional information to the decision-making process. This builds a foundation for sustainable organizational learning.

Design/methodology/approach

To address user acceptance, a participatory action design research (ADR) approach was chosen. The proposed framework is demonstrated on a B2B sales forecasting process in an organizational setting, following the cross-industry standard process for data mining (CRISP-DM) methodology.
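
As an illustration only, the kind of general, model-agnostic explanation such a framework couples with a top-performing model can be sketched with scikit-learn's permutation importance; the data, features and labels below are hypothetical placeholders, and the paper's own explanation methods are not reproduced here.

```python
# Hedged sketch: model-level feature importance for a black-box model,
# standing in for the framework's "general explanation" step.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(42)
X = rng.normal(size=(400, 5))            # toy sales-opportunity features
y = (X[:, 0] - X[:, 3] > 0).astype(int)  # toy won/lost outcome

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```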

Findings

The provided ML model explanations efficiently support business decision makers, reduce forecasting error for new sales opportunities, and facilitate discussion about the context of opportunities in the sales team.

Research limitations/implications

The quality and quantity of available data affect the performance of models and explanations.

Practical implications

The application in the real-world company demonstrates the utility of the approach and provides evidence that transparent explanations of ML models contribute to individual and organizational learning.

Social implications

All methods used are available as open-source software and can improve the acceptance of ML in data-driven decision making.

Originality/value

The proposed framework incorporates existing ML models and general explanation methodology into a decision-making process. To the authors’ knowledge, this is the first attempt to support organizational learning with a framework combining ML explanations, ADR, and data mining methodology based on the CRISP-DM industry standard.

Details

Industrial Management & Data Systems, vol. 117 no. 7
Type: Research Article
ISSN: 0263-5577

Book part
Publication date: 14 June 2018

D. Wade Hands

Abstract

During the last decade or so, philosophers of science have shown increasing interest in scientific models and modeling. The primary impetus seems to have come from the philosophy of biology, but increasingly the philosophy of economics has been drawn into the discussion. This paper will focus on the particular subset of this literature that emphasizes the difference between a scientific model being explanatory and one that provides explanations of specific events. The main differences are in the structure of the models and the characteristics of the explanatory target. Traditionally, scientific explanations have been framed in terms of explaining particular events, but many scientific models have targets that are hypothetical patterns: “patterns of macroscopic behavior across systems that are heterogeneous at smaller scales” (Batterman & Rice, 2014, p. 349). The models with this characteristic are often highly idealized, and have complex and heterogeneous targets; such models are “central to a kind of modeling that is widely used in biology and economics” (Rohwer & Rice, 2013, p. 335). This paper has three main goals: (i) to discuss the literature on such models in the philosophy of biology, (ii) to show that certain economic phenomena possess the same degree of heterogeneity and complexity often encountered in biology (and thus, that hypothetical pattern explanations may be appropriate in certain areas of economics), and (iii) to demonstrate that Hayek’s arguments about “pattern predictions” and “explanations of the principle” are essentially arguments for the importance of this type of modeling in economics.

Details

Including a Symposium on Bruce Caldwell’s Beyond Positivism After 35 Years
Type: Book
ISBN: 978-1-78756-126-7

Article
Publication date: 15 February 2016

Yiming Hu, Xinmin Tian and Zhiyong Zhu

Abstract

Purpose

In the capital market, share prices of listed companies generally respond to accounting information. In 1995, Ohlson proposed a share valuation model based on two accounting indicators: company residual income and the book value of net assets. In 2000, Zhang introduced the idea of option pricing and developed a new accounting valuation model. The purpose of this paper is to investigate the valuation deviation and the influence of some market transaction characteristics on pricing models.
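
For reference, a standard statement of Ohlson's residual income model (not necessarily the paper's exact notation) values equity as book value plus the present value of expected residual income:

```latex
P_t = b_t + \sum_{\tau=1}^{\infty} R^{-\tau}\, E_t\!\left[x^{a}_{t+\tau}\right],
\qquad x^{a}_{t} = x_{t} - (R - 1)\, b_{t-1},
```

where P_t is equity value, b_t the book value of net assets, x_t earnings, x^a_t residual (abnormal) income and R one plus the cost of equity capital.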

Design/methodology/approach

The authors use listed companies from 1999 to 2013 as samples and conduct a comparative analysis using multiple regression.
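
A minimal sketch of an Ohlson-style pricing regression of the kind underlying such a comparison, assuming statsmodels and simulated placeholder data (the authors' actual specification and sample are not reproduced):

```python
# Hedged sketch: price regressed on book value and residual income;
# the fitted R-squared proxies a model's "pricing effect".
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
book_value = rng.lognormal(mean=1.0, size=n)     # net assets per share (toy)
residual_income = rng.normal(scale=0.5, size=n)  # abnormal earnings (toy)
price = 1.2 * book_value + 3.0 * residual_income + rng.normal(scale=0.8, size=n)

X = sm.add_constant(np.column_stack([book_value, residual_income]))
fit = sm.OLS(price, X).fit()
print(fit.summary())
```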

Findings

The main findings are: first, the accounting valuation model is applicable to the capital market as a whole, and its pricing effect increases as the years go by; second, in the environment of our capital market, the maturity of investors is one of the important factors that cause the information content of residual income to be lower than that of profit per share and the pricing effect of valuation models to be weaker; third, when the price-earnings ratio (PE) of listed companies reaches a certain level, the overall explanatory capacity of accounting valuation models becomes lower as PE gets higher; fourth, for companies with higher turnover rates and more active trading, the pricing effect of the accounting valuation model is markedly lower; fifth, the pricing effect of accounting valuation models is lower in a bull market than in a bear market.

Originality/value

These findings establish a connection between accounting valuation and market transaction characteristics, providing a promising orientation for the future development of accounting valuation theories and models.

Details

China Finance Review International, vol. 6 no. 1
Type: Research Article
ISSN: 2044-1398

Open Access
Article
Publication date: 25 October 2022

Heitor Hoffman Nakashima, Daielly Mantovani and Celso Machado Junior

Abstract

Purpose

This paper aims to investigate whether professional data analysts’ trust in black-box systems is increased by explainability artifacts.

Design/methodology/approach

The study was developed in two phases. First, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were generated using the local interpretable model-agnostic explanations (LIME) algorithm. In the second phase, the model and explainability outcomes were presented to a sample of data analysts from the financial market, and their trust in the models was measured. Finally, interviews were conducted in order to understand their perceptions regarding black-box models.
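
A minimal sketch of the LIME step described above, using the open-source lime package on a toy classifier; the data, feature names, class names and model are placeholders, not the study's artifacts.

```python
# Hedged sketch: a local, model-agnostic explanation of one prediction.
import numpy as np
from sklearn.neural_network import MLPClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 4))                        # toy features
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # toy labels

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["f1", "f2", "f3", "f4"],
    class_names=["reject", "accept"],
    mode="classification",
)
# Fit an interpretable local surrogate around a single instance.
exp = explainer.explain_instance(X_train[0], model.predict_proba, num_features=4)
print(exp.as_list())  # feature -> signed contribution to this prediction
```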

Findings

The data suggest that users’ trust in black-box systems is high and that explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influence users’ perceptions, with trust being reduced in situations that represent a threat (e.g. autonomous cars). Concerns about the models’ ethics were also mentioned by the interviewees.

Research limitations/implications

The study considered a small sample of professional analysts from the financial market, which traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions.

Originality/value

Other studies regarding trust in black-box models and explainability artifacts have focused on ordinary users with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective and shows that, for them, trust is related to the quality of the data and the nature of the problem being solved, as well as to the practical consequences. Explanation of the algorithm’s mechanics itself is not significantly relevant.

Details

Revista de Gestão, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1809-2276

Article
Publication date: 4 May 2010

Jon‐Arild Johannessen

Abstract

Purpose

In order to explain a phenomenon/problem, some of the mechanisms which elicit the phenomenon/problem must be clarified, since: “a goal of scientific research is to uncover reality beneath appearance”. The purpose of this paper is to investigate the following issue: how can social mechanisms be examined from a systemic point of view?

Design/methodology/approach

Part 1 of the paper investigates, at an abstract level, what is meant by social mechanisms in social systems. Part 2 investigates social mechanisms and various explanation models using the systemic approach.

Findings

However well the developed models function, this procedure will not have produced a theory of the phenomenon. For that purpose, explanations at a more basic level than the model is able to disclose will be necessary. The empirical causal model says something about the strength of the relations between the variables and can be used in practice to change certain variables in order to facilitate the desired change in the system.

Originality/value

The paper usefully shows that explanations at a more basic level would, where possible, be desirable, but are not necessary for the application of insights in practical contexts. In other words, a theory can be desirable, but is not necessary, in order to develop, for example, innovative organisations. Models and social mechanisms, on the other hand, are necessary to organise knowledge for use in practical contexts.

Article
Publication date: 9 August 2022

Vinay Singh, Iuliia Konovalova and Arpan Kumar Kar

Abstract

Purpose

Explainable artificial intelligence (XAI) is important in several industrial applications. The study aims to provide a comparison of two important methods used to explain AI algorithms.

Design/methodology/approach

In this study, multiple criteria have been used to compare the explainable Ranked Area Integrals (xRAI) and integrated gradients (IG) methods for the explainability of AI algorithms, based on a multimethod, phase-wise analysis research design.
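
For orientation, the integrated gradients attribution referenced in the comparison can be sketched in a few lines of PyTorch (following the standard Riemann-sum formulation); the model and input are toy placeholders, and xRAI is not reproduced here.

```python
# Hedged sketch: integrated gradients for a toy differentiable model.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
)

def integrated_gradients(model, x, baseline, steps=50):
    # IG_i = (x_i - x'_i) * integral of dF/dx_i along the straight path
    # from baseline x' to input x, approximated by a Riemann sum.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, 1)
    path = baseline + alphas * (x - baseline)    # (steps, n_features)
    path.requires_grad_(True)
    grads = torch.autograd.grad(model(path).sum(), path)[0]
    return (x - baseline) * grads.mean(dim=0)    # average grad x input delta

x = torch.tensor([[1.0, -0.5, 2.0, 0.3]])
attributions = integrated_gradients(model, x, baseline=torch.zeros_like(x))
print(attributions)  # one attribution per input feature
```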

Findings

The theoretical part includes a comparison of the frameworks of the two methods, while from a practical point of view the methods have been compared across five dimensions: functional, operational, usability, safety and validation.

Research limitations/implications

A comparison has been made by combining criteria from the theoretical and practical points of view, which demonstrates the trade-offs the user faces in choosing between the methods.

Originality/value

The results show that the xRAI method performs better from a theoretical point of view. However, the IG method shows good results for both model accuracy and prediction quality.

Details

Benchmarking: An International Journal, vol. 30 no. 9
Type: Research Article
ISSN: 1463-5771

Open Access
Article
Publication date: 13 July 2022

Jiqian Dong, Sikai Chen, Mohammad Miralinaghi, Tiantian Chen and Samuel Labi

Abstract

Purpose

Perception has been identified as the main cause underlying most autonomous vehicle-related accidents. As the key technology in perception, deep learning (DL)-based computer vision models are generally considered to be black boxes due to poor interpretability. This has exacerbated user distrust and further forestalled the models’ widespread deployment in practical usage. This paper aims to develop explainable DL models for autonomous driving by jointly predicting potential driving actions with corresponding explanations. The explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic approach to identify any model deficiencies or limitations during the system development phase.

Design/methodology/approach

This paper proposes an explainable end-to-end autonomous driving system based on the “Transformer,” a state-of-the-art self-attention (SA) based model. The model maps visual features from images collected by onboard cameras to potential driving actions with corresponding explanations, and aims to achieve soft attention over the image’s global features.
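
A minimal sketch of the scaled dot-product self-attention (SA) operation that underlies such Transformer-style soft attention over global image features; the shapes, weights and inputs are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: each visual region attends softly to every other region.
import torch
import torch.nn.functional as F

def self_attention(features, w_q, w_k, w_v):
    # features: (num_regions, d_model) global image features
    q, k, v = features @ w_q, features @ w_k, features @ w_v
    scores = q @ k.transpose(-2, -1) / (k.shape[-1] ** 0.5)
    weights = F.softmax(scores, dim=-1)  # soft attention over all regions
    return weights @ v

d = 64
features = torch.randn(49, d)            # e.g. a 7x7 grid of visual features
w_q, w_k, w_v = (torch.randn(d, d) for _ in range(3))
attended = self_attention(features, w_q, w_k, w_v)
print(attended.shape)                    # torch.Size([49, 64])
```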

Findings

The results demonstrate the efficacy of the proposed model as it exhibits superior performance (in terms of correct prediction of actions and explanations) compared to the benchmark model by a significant margin with much lower computational cost on a public data set (BDD-OIA). From the ablation studies, the proposed SA module also outperforms other attention mechanisms in feature fusion and can generate meaningful representations for downstream prediction.

Originality/value

In the contexts of situational awareness and driver assistance, the proposed model can perform as a driving alarm system for both human-driven vehicles and autonomous vehicles because it is capable of quickly understanding/characterizing the environment and identifying any infeasible driving actions. In addition, the extra explanation head of the proposed model provides an extra channel for sanity checks to guarantee that the model learns the ideal causal relationships. This provision is critical in the development of autonomous systems.

Details

Journal of Intelligent and Connected Vehicles, vol. 5 no. 3
Type: Research Article
ISSN: 2399-9802

Book part
Publication date: 14 June 2018

Luis Mireles-Flores

Abstract

This essay is a review of the recent literature on the methodology of economics, with a focus on three broad trends that have defined the core lines of research within the discipline during the last two decades. These trends are: (a) the philosophical analysis of economic modelling and economic explanation; (b) the epistemology of causal inference, evidence diversity and evidence-based policy and (c) the investigation of the methodological underpinnings and public policy implications of behavioural economics. The final output is inevitably not exhaustive, yet it aims at offering a fair taste of some of the most representative questions in the field on which many philosophers, methodologists and social scientists have recently been placing a great deal of intellectual effort. The topics and references compiled in this review should serve at least as safe introductions to some of the central research questions in the philosophy and methodology of economics.

Details

Including a Symposium on Bruce Caldwell’s Beyond Positivism After 35 Years
Type: Book
ISBN: 978-1-78756-126-7

Open Access
Article
Publication date: 5 July 2021

Babak Abedin

Abstract

Purpose

Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it due to its counterproductive effects. This study addresses this polarized space, aiming to identify the opposing effects of AI explainability and the tensions between them, and to propose how to manage these tensions to optimize AI system performance and trustworthiness.

Design/methodology/approach

The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.

Findings

The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).

Research limitations/implications

As in other systematic literature review studies, the results are limited by the content of the selected papers.

Practical implications

The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.

Originality/value

This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.
