Search results
1 – 2 of 2
Abstract
Purpose
This paper aims to argue that the Global Political Economy (GPE) theory of neomercantilism provides a sound explanation of American military involvement in the Persian Gulf. Accordingly, the paper also proposes the concept of “Neomercantilist War”, which analyses the use of military force to protect a strategically vital economic resource (such as Gulf oil). Neomercantilist War is a point of similarity between the GPE school of neomercantilism and the International Relations (IR) school of realism.
Design/methodology/approach
The 1991 Gulf War and the 2003 American invasion of Iraq are two major instances of American military involvement to protect and/or seize Gulf oil. These two events are tested against neomercantilism, as well as the concept of “Neomercantilist War” as presented in the paper. The first feature, or definitional component, of neomercantilism is the major role of the state; the second is the preponderance of security/geopolitical goals over economic goals; and the third is a zero-sum, relative-gains mentality in dealings between states in IR.
Findings
The GPE school of neomercantilism and the concept of Neomercantilist War do offer a sound explanation of American military involvement in the Gulf.
Originality/value
The American military involvement in the Gulf region has been analysed using the IR schools of realism and liberalism, but never using GPE theory. Even though GPE is mostly concerned with economic activity, the scope of GPE should be expanded to include military policies if they affect economic resources and activity.
Samuli Laato, Miika Tiainen, A.K.M. Najmul Islam and Matti Mäntymäki
Abstract
Purpose
Inscrutable machine learning (ML) models are part of increasingly many information systems. Understanding how these models behave, and what their output is based on, is a challenge for developers, let alone non-technical end users.
Design/methodology/approach
The authors investigate how AI systems and their decisions ought to be explained for end users through a systematic literature review.
Findings
The authors’ synthesis of the literature suggests that AI system communication for end users has five high-level goals: (1) understandability, (2) trustworthiness, (3) transparency, (4) controllability and (5) fairness. The authors identified several design recommendations, such as offering personalized and on-demand explanations and focusing on the explainability of key functionalities instead of aiming to explain the whole system. Multiple trade-offs exist in AI system explanations, and there is no single best solution that fits all cases.
Research limitations/implications
Based on the synthesis, the authors provide a design framework for explaining AI systems to end users. The study contributes to the work on AI governance by suggesting guidelines on how to make AI systems more understandable, fair, trustworthy, controllable and transparent.
Originality/value
This literature review brings together the literature on AI system communication and explainable AI (XAI) for end users. Building on previous academic literature on the topic, it provides synthesized insights, design recommendations and a future research agenda.