Search results
1 – 10 of over 8000
Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam and Matti Mäntymäki
Abstract
Purpose
Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge for developers, let alone non-technical end users.
Design/methodology/approach
The authors investigate how AI systems and their decisions ought to be explained for end users through a systematic literature review.
Findings
The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and no single best solution fits all cases.
Research limitations/implications
Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.
Originality/value
This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.
Abstract
Purpose
Research into the interpretability and explainability of data analytics and artificial intelligence (AI) systems is on the rise. However, most recent studies either solely promote the benefits of explainability or criticize it due to its counterproductive effects. This study addresses this polarized space and aims to identify opposing effects of the explainability of AI and the tensions between them and propose how to manage this tension to optimize AI system performance and trustworthiness.
Design/methodology/approach
The author systematically reviews the literature and synthesizes it using a contingency theory lens to develop a framework for managing the opposing effects of AI explainability.
Findings
The author finds five opposing effects of explainability: comprehensibility, conduct, confidentiality, completeness and confidence in AI (5Cs). The author also proposes six perspectives on managing the tensions between the 5Cs: pragmatism in explanation, contextualization of the explanation, cohabitation of human agency and AI agency, metrics and standardization, regulatory and ethical principles, and other emerging solutions (i.e. AI enveloping, blockchain and AI fuzzy systems).
Research limitations/implications
As in other systematic literature review studies, the results are limited by the content of the selected papers.
Practical implications
The findings show how AI owners and developers can manage tensions between profitability, prediction accuracy and system performance via visibility, accountability and maintaining the “social goodness” of AI. The results guide practitioners in developing metrics and standards for AI explainability, with the context of AI operation as the focus.
Originality/value
This study addresses polarized beliefs amongst scholars and practitioners about the benefits of AI explainability versus its counterproductive effects. It posits that there is no single best way to maximize AI explainability. Instead, the co-existence of enabling and constraining effects must be managed.
Anna Salonen, Marcus Zimmer and Joona Keränen
Abstract
Purpose
The purpose of this study is to explain how the application of fuzzy-set qualitative comparative analysis (fsQCA) and experiments can advance theory development in the field of servitization by generating better causal explanations.
Design/methodology/approach
FsQCA and experiments are established research methods that are suited for developing causal explanations but are rarely utilized by servitization scholars. To support their application, we explain how fsQCA and experiments represent distinct ways of developing causal explanations, provide guidelines for their practical application and highlight potential application areas for a future research agenda in the servitization domain.
Findings
FsQCA enables the specification of cause–effect relationships that result in equifinal paths to an intended outcome. Experiments have the highest explanatory power and enable the drawing of direct causal conclusions through reliance on an interventionist logic. Together, these methods provide complementary ways of developing and testing theory when the research objective is to understand the causal pathways that lead to observed outcomes.
Practical implications
Applications of fsQCA help to explain to managers why there are numerous causal routes to attaining an intended outcome from servitization. Experiments support managerial decision-making by providing definitive “yes/no” answers to key managerial questions that address clearly specified cause–effect relationships.
Originality/value
The main contribution of this study is to help advance theory development in servitization by encouraging greater methodological plurality in a field that relies primarily on the qualitative case study methodology.
Jiqian Dong, Sikai Chen, Mohammad Miralinaghi, Tiantian Chen and Samuel Labi
Abstract
Purpose
Perception has been identified as the main cause underlying most autonomous vehicle-related accidents. As the key technology in perception, deep learning (DL)-based computer vision models are generally considered to be black boxes owing to their poor interpretability. These issues have exacerbated user distrust and further forestalled the widespread deployment of such models in practice. This paper aims to develop explainable DL models for autonomous driving by jointly predicting potential driving actions and the corresponding explanations. The explainable DL models can not only boost user trust in autonomy but also serve as a diagnostic approach for identifying any model deficiencies or limitations during the system development phase.
Design/methodology/approach
This paper proposes an explainable end-to-end autonomous driving system based on “Transformer,” a state-of-the-art self-attention (SA) based model. The model maps visual features from images collected by onboard cameras to guide potential driving actions with corresponding explanations, and aims to achieve soft attention over the image’s global features.
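The self-attention building block the paper relies on can be sketched generically. The following is a minimal single-head scaled dot-product attention in NumPy, not the paper's actual Transformer architecture; the patch count, feature dimension and random weights are illustrative assumptions.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention: every position attends to all others,
    producing a soft, global weighting over the input features."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Numerically stable softmax over positions
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
n_patches, d = 16, 8          # e.g. 16 image-feature "patches" of dimension 8
X = rng.normal(size=(n_patches, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out, attn = scaled_dot_product_attention(X, Wq, Wk, Wv)
print(out.shape, attn.shape)
```

Each row of `attn` sums to 1, so it can be read as a soft distribution of attention over the global image features, which is the property the proposed model exploits.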
Findings
The results demonstrate the efficacy of the proposed model as it exhibits superior performance (in terms of correct prediction of actions and explanations) compared to the benchmark model by a significant margin with much lower computational cost on a public data set (BDD-OIA). From the ablation studies, the proposed SA module also outperforms other attention mechanisms in feature fusion and can generate meaningful representations for downstream prediction.
Originality/value
In the contexts of situational awareness and driver assistance, the proposed model can perform as a driving alarm system for both human-driven vehicles and autonomous vehicles because it is capable of quickly understanding/characterizing the environment and identifying any infeasible driving actions. In addition, the extra explanation head of the proposed model provides an extra channel for sanity checks to guarantee that the model learns the ideal causal relationships. This provision is critical in the development of autonomous systems.
Heitor Hoffman Nakashima, Daielly Mantovani and Celso Machado Junior
Abstract
Purpose
This paper aims to investigate whether professional data analysts’ trust of black-box systems is increased by explainability artifacts.
Design/methodology/approach
The study was developed in two phases. First, a black-box prediction model was estimated using artificial neural networks, and local explainability artifacts were generated using the local interpretable model-agnostic explanations (LIME) algorithm. In the second phase, the model and explainability outcomes were presented to a sample of data analysts from the financial market, and their trust in the models was measured. Finally, interviews were conducted in order to understand their perceptions regarding black-box models.
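The core idea of LIME-style explanation, perturbing inputs around one instance and fitting a locally weighted linear surrogate to the black-box output, can be sketched as follows. This is a simplified illustration of the technique, not the study's implementation; the synthetic data, the small neural network and the kernel settings are all assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import Ridge

# Black-box model: a small neural network standing in for the study's ANN.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                      random_state=0).fit(X, y)

def lime_style_explanation(x, predict_proba, n_samples=2000,
                           kernel_width=1.5, seed=0):
    """Fit a locally weighted linear surrogate around instance x
    (the core mechanism of LIME)."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturb near x
    probs = predict_proba(Z)[:, 1]                           # black-box output
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / kernel_width ** 2)      # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    return surrogate.coef_                                   # local attributions

coefs = lime_style_explanation(X[0], model.predict_proba)
print({f"f{i}": round(c, 3) for i, c in enumerate(coefs)})
```

The surrogate's coefficients are the kind of per-feature "explainability artifact" that was shown to the analysts in the second phase.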
Findings
The data suggest that users’ trust in black-box systems is high and that explainability artifacts do not influence this behavior. The interviews reveal that the nature and complexity of the problem a black-box model addresses influence users’ perceptions, with trust being reduced in situations that represent a threat (e.g. autonomous cars). The interviewees also raised concerns about the models’ ethics.
Research limitations/implications
The study considered a small sample of professional analysts from the financial market, which traditionally employs data analysis techniques for credit and risk analysis. Research with personnel in other sectors might reveal different perceptions.
Originality/value
Other studies regarding trust in black-box models and explainability artifacts have focused on ordinary users, with little or no knowledge of data analysis. The present research focuses on expert users, which provides a different perspective and shows that, for them, trust is related to the quality of data and the nature of the problem being solved, as well as the practical consequences. Explanation of the algorithm mechanics itself is not significantly relevant.
Ryszard Kłeczek and Monika Hajdas
Abstract
Purpose
This study aims to investigate how art events can enrich novice visitors by transforming their practices.
Design/methodology/approach
This research uses an interpretive case study of the art exhibition “1/1/1/1/1” in the Oppenheim gallery in Wroclaw. It draws on multiple sources of evidence, namely, novice visitors’ interviews, observation including photo studies and content analysis of art-makers’ mediation sources. This study is an example of contextual theorizing from case studies and participatory action research with researchers as change agents.
Findings
The evidence highlights that aesthetic values and experiences are contextual to practices and are transformable into other values. The findings illustrate the role of practice theory in studying how art-makers inspire the transformation of practices, including values driving the latter.
Research limitations/implications
The findings provide implications for transformations of co-creating contextual values in contemporary visual art consumption and customer experience management.
Practical implications
Practical implications for arts organizations are also provided regarding cultural mediation conducted by art-makers. Exhibition makers should explain the meanings of the most visible artefacts to allow visitors to develop a congruent understanding of those meanings. The explanations should not provide ready answers or solutions to the problems that art-makers invite visitors to rethink.
Social implications
The social implication of the findings is that stakeholders in artistic ventures may undertake adequate, qualified and convergent actions to maintain or transform the defined interactive practices between them in co-creating contextual aesthetic values.
Originality/value
The study provides new insights into co-creating values in practices in the domain of contemporary art exhibitions by bringing the practice theory together with an audience enrichment category, thus illustrating how novice visitors get enriched by transforming their practices led by contextual values of “liking” and “understanding”.
Jan A. Pfister, Peeter Peda and David Otley
Abstract
Purpose
The purpose of this paper is to reflect on how to apply the abductive research process for developing a theoretical explanation in studies on performance management and management control systems. This is important because theoretically ambitious research tends to require explanatory study outcomes, but prior research frameworks provide little guidance in this regard, potentially facilitating ill-defined research designs and a lack of common vocabulary and criteria for evaluating studies.
Design/methodology/approach
The authors introduce a methodological framework that distinguishes three interwoven theoretical abstraction levels: descriptive, analytical and explanatory. They use a recently published qualitative field study to illustrate an application of the framework.
Findings
The framework and its illustrated application make the systematic logic of the abductive research process visible and accessible to researchers. The authors explain how the framework supports moving from empirical description to theoretical explanation during the research process and where the three levels might open spaces for the positioning of novel practices and conceptual and theoretical innovations.
Originality/value
The framework provides guidance for an explanatory research design and theory-building purpose and has been developed in response to recent criticism in the field that highlights the wide gap between leading-edge practice and the lagging state of theory. It offers interdisciplinary vocabulary and evaluation criteria that can be applied by any accounting and management researcher regardless of whether they pursue critical, interpretive or positivist research and whether they primarily use qualitative or quantitative research methods.
Abstract
Purpose
The purpose of this study is to review the reasoning of the judgment of the United Kingdom Supreme Court in Versloot Dredging BV and another (Appellants) v. HDI Gerling Industrie Versichering AG and Others (Respondents) [2016] UKSC 45 in finding that there is no remedy or sanction for the use of fraudulent devices (so-called “collateral lies”) in insurance claims and to consider potential implications for underwriters.
Design/methodology/approach
The methodology is a typical case law analysis, starting from the case facts and the court's reasoning, with short comments on the legal implications.
Findings
Although the law provides no sanction for the use of fraudulent devices, underwriters remain free to stipulate the consequences of their use through an express term in the insurance contract.
Research limitations/implications
The main implication from the judgment is that underwriters are likely to incur more investigating costs for insurance claims.
Originality/value
This work raises awareness within the marine insurance industry (especially among underwriters) of the approach of English law to the use of fraudulent devices.
Abstract
We investigate intraday data for the KOSPI 200 index and KOSPI 200 index futures. Hourly theoretical futures prices are calculated based on the cost of carry model, and we compare hourly index futures prices with their theoretical prices. Consistent with a large body of previous research in this area, we find a persistent deviation of futures prices from their theoretical prices: futures prices are undervalued relative to their theoretical prices. The data indicate that the difference between the futures price and its theoretical price exhibits a U-shaped pattern over the trading hours. The differences are higher at the open and at 15:00 and lower over intraday trading hours, implying that previous studies using daily closing prices overstate this mispricing.
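The mispricing measure described above follows directly from the cost of carry model, under which the fair futures price is the spot level grown at the net carrying cost, F = S·e^((r−q)τ). A minimal sketch of the calculation, with purely hypothetical numbers rather than values from the KOSPI 200 data:

```python
import math

def theoretical_futures_price(spot, r, q, tau):
    """Cost-of-carry fair value: spot grown at the risk-free rate r
    net of the dividend yield q over time-to-maturity tau (in years)."""
    return spot * math.exp((r - q) * tau)

spot = 250.0       # hypothetical KOSPI 200 index level
r, q = 0.03, 0.015 # hypothetical risk-free rate and dividend yield
tau = 60 / 365     # 60 days to maturity

fair = theoretical_futures_price(spot, r, q, tau)
observed = 249.8   # hypothetical traded futures price
mispricing = observed - fair
print(round(fair, 4), round(mispricing, 4))
```

A negative `mispricing`, as in this illustration, corresponds to the undervaluation of futures relative to their theoretical prices reported in the study; tracking this quantity hour by hour is what reveals the U-shaped intraday pattern.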
We also examine the effect of the intraday spot return on the behavior of the difference between the hourly futures price and its theoretical price. The finding indicates that intraday momentum generates the U-shaped pattern of this mispricing. This contrasts with Kim and Park's (2011) finding that the difference also increases as the prior 60-day spot return increases. Our finding invalidates their explanation that the activities of arbitrageurs produce a monotonically increasing pattern in the magnitude of this mispricing in their daily data.
We propose a new explanation for the U-shaped pattern of the difference between the futures price and its theoretical price generated by intraday spot return momentum. We introduce a risk-seeking trader in our new explanation, whose risk-seeking behavior is based on prospect theory (Kahneman and Tversky, 1979). We argue that risk-seeking traders cause the intraday momentum effect that generates the U-shaped pattern of this mispricing. We add speculator variables to Kim and Park's (2011) regression equation and estimate it. The results from the regression analysis lend support to our new explanation as well as theirs, implying that speculators and arbitrageurs are present and active in the spot and futures markets and generate different patterns of the mispricing.