Search results

1 – 10 of over 32,000 results
Book part
Publication date: 30 May 2018

Francesco Moscone, Veronica Vinciotti and Elisa Tosetti

Abstract

This chapter reviews graphical modeling techniques for estimating large covariance matrices and their inverse. The chapter provides a selective survey of different models and estimators proposed by the graphical modeling literature and offers some practical examples where these methods could be applied in the area of health economics.
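
As a quick illustration of the kind of estimator this literature covers, the sketch below fits a graphical lasso to simulated data; the use of scikit-learn, the simulated inputs and the penalty value are assumptions chosen for illustration, not details taken from the chapter.

```python
# Sketch: sparse precision-matrix (inverse covariance) estimation with the
# graphical lasso, one estimator from the graphical modeling literature.
# Assumptions: scikit-learn is available; data and penalty are illustrative.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_obs, n_vars = 200, 10
X = rng.standard_normal((n_obs, n_vars))   # placeholder data matrix

model = GraphicalLasso(alpha=0.05)         # L1 penalty controls sparsity
model.fit(X)

cov_hat = model.covariance_                # estimated covariance matrix
prec_hat = model.precision_                # estimated inverse (precision) matrix

# Zeros in the precision matrix correspond to missing edges in the Gaussian
# graphical model, i.e. conditional independencies between variables.
edges = np.argwhere(np.triu(np.abs(prec_hat) > 1e-8, k=1))
print(f"{len(edges)} edges retained out of {n_vars * (n_vars - 1) // 2} possible")
```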

Book part
Publication date: 19 November 2014

Daniel Felix Ahelegbey and Paolo Giudici

Abstract

The latest financial crisis has stressed the need to understand the world financial system as a network of interconnected institutions, where financial linkages play a fundamental role in the spread of systemic risks. In this paper we propose to enrich the topological perspective of network models with a more structured statistical framework, that of Bayesian Gaussian graphical models. From a statistical viewpoint, we propose a new class of hierarchical Bayesian graphical models that can split correlations between institutions into country-specific and idiosyncratic ones, in a way that parallels the decomposition of returns in the well-known Capital Asset Pricing Model. From a financial economics viewpoint, we suggest a way to model systemic risk that can explicitly take into account frictions between different financial markets, which is particularly suited to studying the ongoing banking union process in Europe. From a computational viewpoint, we develop a novel Markov chain Monte Carlo algorithm based on Bayes factor thresholding.
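
The sketch below illustrates only the CAPM-style decomposition the abstract alludes to, splitting simulated bank returns into a country-factor component and an idiosyncratic residual via ordinary least squares; it is not the paper's hierarchical Bayesian graphical model or its MCMC algorithm, and all numbers are invented.

```python
# Sketch: CAPM-style split of bank returns into country-specific and
# idiosyncratic components. Plain least squares on simulated data only.
import numpy as np

rng = np.random.default_rng(1)
n_days, n_banks = 250, 5
country_factor = rng.normal(0.0, 0.01, n_days)        # shared country return
betas = rng.uniform(0.5, 1.5, n_banks)                 # bank exposures (assumed)
idio = rng.normal(0.0, 0.02, (n_days, n_banks))        # idiosyncratic shocks
returns = country_factor[:, None] * betas + idio       # simulated bank returns

# Regress each bank on the country factor to recover the two components.
X = np.column_stack([np.ones(n_days), country_factor])
coef, *_ = np.linalg.lstsq(X, returns, rcond=None)     # intercepts and betas
systematic = country_factor[:, None] * coef[1]         # country-specific part
residual = returns - X @ coef                          # idiosyncratic part

# Correlations among residuals are the "idiosyncratic" linkages to which a
# graphical model would then be fitted.
print(np.round(np.corrcoef(residual, rowvar=False), 2))
```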

Article
Publication date: 11 September 2009

Ryan K.L. Ko, Stephen S.G. Lee and Eng Wah Lee

Abstract

Purpose

In the last two decades, a proliferation of business process management (BPM) modeling languages, standards and software systems has given rise to much confusion and many obstacles to adoption. Because new BPM languages and notation terminologies were not well defined, duplicate features are common. This paper seeks to make sense of the myriad BPM standards, organising them in a classification framework, and to identify key industry trends.

Design/methodology/approach

An extensive literature review is conducted and relevant BPM notations, languages and standards are referenced against the proposed BPM Standards Classification Framework, which lists each standard's distinct features, strengths and weaknesses.

Findings

The authors are unaware of any existing classification of BPM languages. An attempt is made to classify BPM languages, standards and notations into four main groups: execution, interchange, graphical, and diagnosis standards. At present, there is a lack of established diagnosis standards. It is hoped that such a classification facilitates the meaningful adoption of BPM languages, standards and notations.

Practical implications

The paper differentiates BPM standards, thereby resolving common misconceptions; establishes the need for diagnosis standards; identifies the strengths and limitations of current standards; and highlights current knowledge gaps and future trends. Researchers and practitioners may wish to position their work around this review.

Originality/value

To the best of the authors' knowledge, such an overview and analysis of BPM standards has not previously been undertaken.

Details

Business Process Management Journal, vol. 15 no. 5
Type: Research Article
ISSN: 1463-7154

Book part
Publication date: 15 December 1998

R.S. Tunaru and D.F. Jarrett

Abstract

The technique of graphical modelling (Whittaker, 1990) can be used to identify the dependence relationships between variables representing characteristics of recorded road accidents. It allows large multi-dimensional tables to be analysed by looking for conditional independence relationships among the variables. The variables under study can often be divided into groups that are ordered in time or by a hypothesised causal assumption. For these situations graphical chain models (Whittaker, 1990) are used to explore causal relationships between the variables. Some examples are given for a six-dimensional and a ten-dimensional contingency table.
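
As a small illustration of the conditional independence relationships such models search for, the sketch below sums per-stratum chi-square statistics to test whether two binary variables are independent given a third; the contingency table and the choice of test are assumptions for illustration, not taken from the chapter.

```python
# Sketch: testing whether X is independent of Y given Z in a three-way
# contingency table by summing per-stratum chi-square statistics. The counts
# below are invented; this only illustrates the kind of relationship a
# graphical (chain) model looks for.
import numpy as np
from scipy.stats import chi2, chi2_contingency

# table[z, x, y]: counts for binary variables X and Y within strata of Z
table = np.array([
    [[30, 10],
     [12, 28]],
    [[22, 18],
     [19, 21]],
])

stat, dof = 0.0, 0
for z in range(table.shape[0]):
    s, _, d, _ = chi2_contingency(table[z], correction=False)
    stat += s
    dof += d

p_value = chi2.sf(stat, dof)
print(f"X2 = {stat:.2f}, df = {dof}, p = {p_value:.3f}")
# A large p-value is consistent with X being independent of Y given Z,
# i.e. no edge between X and Y in the graphical model.
```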

Details

Mathematics in Transport Planning and Control
Type: Book
ISBN: 978-0-08-043430-8

Article
Publication date: 5 June 2017

Esben Rahbek Gjerdrum Pedersen, Linne Marie Lauesen and Arno Kourula

Abstract

Purpose

The purpose of this paper is to examine to what extent the conventional stakeholder model mirrors managerial perceptions of the stakeholder environment in the Swedish fashion industry. The authors aim to adopt a novel approach to stakeholder measurement, as the traditional stakeholder model is constrained by its static two-dimensional nature, which captures neither the nuances of the stakeholder literature nor the dynamics of the firm’s stakeholder universe.

Design/methodology/approach

Empirically, the paper is based on findings from a survey among 492 Swedish fashion manufacturers and retailers.

Findings

The paper reports a significant discrepancy between the conventional stakeholder model and real-life managers’ perceptions of the stakeholder environment. On the surface, their understanding is more in line with the managerial model of the firm from which the stakeholder literature originally departs. It is argued, however, that the discrepancy may be rooted in technology rather than theory, as the stakeholder model is constrained by its static two-dimensional nature, which captures neither the nuances of the stakeholder literature nor the dynamics of the firm’s stakeholder universe. The paper, therefore, introduces an animated alternative to the conventional stakeholder model that provides a richer graphical representation of a firm’s stakeholder universe.

Research limitations/implications

The paper draws on the open-ended questions in the survey in terms of descriptive statistics, rather than on the full set of quantitative measures in the survey. This is because these questions are crucial to the authors’ approach to the suggested new stakeholder model, which is not tested quantitatively but should be perceived as explorative – as a qualitative outcome of the survey. The survey was conducted online and in the Swedish fashion industry only; thus, the authors’ suggested model needs further quantitative qualification, which the authors call for in future research.

Originality/value

The originality of the paper is its novel approach to stakeholder measurement, based on real-life managers’ perceptions of the stakeholder environment of the Swedish fashion industry. The traditional stakeholder model is constrained by its static two-dimensional nature, whereas the paper’s animated three-dimensional alternative provides a richer graphical representation of a firm’s stakeholder universe.

Details

Social Responsibility Journal, vol. 13 no. 2
Type: Research Article
ISSN: 1747-1117

Article
Publication date: 7 June 2011

Emmanuel M. Tadjouddine

Abstract

Purpose

As agent‐based systems are increasingly used to model real‐life applications such as the internet, electronic markets or disaster management scenarios, it is important to study the computational complexity of such typically combinatorial systems with respect to desirable properties. The purpose of this paper is to consider two computational models: graphical games encoding the interactions between rational and selfish agents; and weighted directed acyclic graphs (DAGs) for evaluating derivatives of numerical functions. The author studies the complexity of a number of search problems in both models.

Design/methodology/approach

The author's approach is essentially theoretical, studying the problem of verifying game‐theoretic properties for graphical games representing interactions between self‐motivated and rational agents, as well as the problem of searching for an optimal elimination ordering in a weighted DAG for evaluating derivatives of functions represented by computer programs.

Findings

A certain class of games has been identified for which Nash or Bayesian Nash equilibria can be verified in polynomial time; it is then shown that verifying a dominant strategy equilibrium is non‐deterministic polynomial (NP)‐complete even for normal form games. Finally, it is shown that finding the optimal vertex elimination ordering for weighted DAGs is NP‐complete.
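
To illustrate why verification can be easy, the sketch below checks in polynomial time whether a given pure-strategy profile is a Nash equilibrium of a two-player normal-form game; the Prisoner's Dilemma payoffs are a textbook example, not data from the paper.

```python
# Sketch: polynomial-time verification of a pure-strategy Nash equilibrium in a
# two-player normal-form game. Verifying a given profile only requires checking
# unilateral deviations, which is cheap even when finding equilibria or checking
# stronger properties is hard.
import numpy as np

# payoffs[player][row_action, col_action]; Prisoner's Dilemma (0=cooperate, 1=defect)
payoffs = (
    np.array([[-1, -3],
              [ 0, -2]]),   # row player
    np.array([[-1,  0],
              [-3, -2]]),   # column player
)

def is_pure_nash(profile, payoffs):
    """Return True if neither player gains by a unilateral deviation."""
    r, c = profile
    row_best = payoffs[0][:, c].max()      # best reply for the row player
    col_best = payoffs[1][r, :].max()      # best reply for the column player
    return payoffs[0][r, c] >= row_best and payoffs[1][r, c] >= col_best

print(is_pure_nash((1, 1), payoffs))   # True: mutual defection is an equilibrium
print(is_pure_nash((0, 0), payoffs))   # False: each player prefers to deviate
```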

Originality/value

This paper presents a general framework for graphical games. The presented results are novel and illustrate how modeling real‐life scenarios involving intelligent agents can lead to computationally hard problems while showing interesting cases that are tractable.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 4 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 October 2015

Katsuhiro Motokawa

Abstract

Purpose

This paper aims to show the associations between the amount of voluntary human capital (HC) disclosures and company profiles, including required HC and accounting information, in order to verify a disclosure theory that consolidates four traditional theories. It also verifies the previously found association between voluntary HC information and share price.

Design/methodology/approach

This research uses regression analysis and graphical modelling of a stratified random sample from the Tokyo Stock Exchange. Text mining software is used for content analysis of annual reports to quantify the amount of qualitative HC information.
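
The sketch below mimics this two-step design in miniature: count human-capital keywords in report text, then regress the counts on firm characteristics. The keyword list, toy reports and firm data are invented; the study's actual text-mining software and variable set may differ.

```python
# Sketch: quantify qualitative HC disclosure by keyword counting, then regress
# the resulting scores on firm characteristics. Illustration only.
import re
import numpy as np

hc_keywords = ["training", "employee", "competence", "qualification", "salary"]

def hc_disclosure_score(text):
    """Number of human-capital keyword occurrences in a report."""
    text = text.lower()
    return sum(len(re.findall(rf"\b{kw}\b", text)) for kw in hc_keywords)

reports = [
    "We invest in employee training and competence development.",
    "Average salary increased; employee qualification programmes expanded.",
    "The company focused on cost reduction this year.",
]
scores = np.array([hc_disclosure_score(r) for r in reports], dtype=float)

# Firm characteristics: [log number of employees, log average salary] (invented)
firm_chars = np.array([[7.2, 10.1],
                       [8.0, 10.4],
                       [6.1,  9.8]])

X = np.column_stack([np.ones(len(scores)), firm_chars])
coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
print("intercept and slopes:", np.round(coef, 3))
```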

Findings

This study finds associations between the amount of voluntary HC information and both the number of employees and the average salary. In particular, information about competence/qualification and about personnel is related to these variables.

Research limitations/implications

The results provide some support for the consolidated theory and are presumably consistent with the signalling theory and stakeholder theory in terms of the labour market rather than the financial market.

Originality/value

By using both regression analysis and graphical modelling, this study shows the difference between the outputs for Germany and Japan and how the HC characteristics of a firm relate to its disclosure behaviour, revealing hidden aspects that traditional prior studies have ignored.

Details

Journal of Financial Reporting and Accounting, vol. 13 no. 2
Type: Research Article
ISSN: 1985-2517

Article
Publication date: 11 February 2019

Purnomo Yustianto, Robin Doss and Suhardi

Abstract

Purpose

The modelling landscape features a rich proliferation of modelling languages, or metamodels. The emergence of cross-disciplinary fields, such as enterprise engineering and service engineering, necessitates a multi-perspective approach that traverses components from the strategic level to the technological level. This paper aims to find a unifying structure for the metamodels introduced by academia and industry.

Design/methodology/approach

A grounded approach is taken to define the structure, collating the collected metamodels so that a unifying structure emerges. Metamodels were collected through a literature survey spanning several interrelated disciplines: software engineering, systems engineering, enterprise architecture, service engineering, business process management and financial accounting.

Findings

The results suggest seven metamodel stereotypes, characterized by their labels: goal, enterprise, business model, service, process, software and system. The “process” aspect holds a central role in connecting all other aspects in the modelling continuum. Service engineering can be viewed as an alternative abstraction of enterprise engineering, containing the concepts of “business model”, “capability”, “value”, “interaction”, “process” and “software”.

Research limitations/implications

Metamodel collection emphasized representativeness rather than comprehensiveness: old and unpopular metamodels were disregarded unless they offered unique characteristics not yet represented in the collection. Owing to its bottom-up approach, the paper does not attempt to identify gaps in the metamodel offering.

Originality/value

This paper produces a structure of the metamodel landscape in a graphical format that illustrates the correlations between metamodels and makes the evolutionary patterns of metamodel proliferation observable. The resulting structure can serve as a map of the metamodel continuum.

Details

Journal of Modelling in Management, vol. 14 no. 1
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 30 May 2018

Details

Health Econometrics
Type: Book
ISBN: 978-1-78714-541-2

Book part
Publication date: 30 August 2019

Percy K. Mistry and Michael D. Lee

Abstract

Jeliazkov and Poirier (2008) analyze the daily incidence of violence during the Second Intifada in a statistical way using an analytical Bayesian implementation of a second-order discrete Markov process. We tackle the same data and modeling problem from our perspective as cognitive scientists. First, we propose a psychological model of violence, based on a latent psychological construct we call “build up” that controls the retaliatory and repetitive violent behavior by both sides in the conflict. Build up is based on a social memory of recent violence and generates the probability and intensity of current violence. Our psychological model is implemented as a generative probabilistic graphical model, which allows for fully Bayesian inference using computational methods. We show that our model is both descriptively adequate, based on posterior predictive checks, and has good predictive performance. We then present a series of results that show how inferences based on the model can provide insight into the nature of the conflict. These inferences consider the base rates of violence in different periods of the Second Intifada, the nature of the social memory for recent violence, and the way repetitive versus retaliatory violent behavior affects each side in the conflict. Finally, we discuss possible extensions of our model and draw conclusions about the potential theoretical and methodological advantages of treating societal conflict as a cognitive modeling problem.
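
As a rough illustration of the “build up” idea, the sketch below simulates a decaying memory of recent violence that drives a Poisson count of daily incidents; the functional form and parameters are hypothetical and are not the authors’ actual graphical model or Bayesian inference procedure.

```python
# Sketch: a latent "build up" variable as a decaying social memory of recent
# violence, driving the intensity of current violence. All choices below
# (decay, gain, base rate, Poisson likelihood) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_days = 365
decay = 0.8        # how quickly the memory of past violence fades (assumed)
gain = 0.1         # how strongly yesterday's violence feeds build-up (assumed)
base_rate = 0.5    # baseline daily intensity of incidents (assumed)

build_up = np.zeros(n_days)
incidents = np.zeros(n_days, dtype=int)
for t in range(1, n_days):
    # Social memory: decayed build-up plus a contribution from yesterday's violence.
    build_up[t] = decay * build_up[t - 1] + gain * incidents[t - 1]
    # Current violence: Poisson count whose rate rises with build-up.
    incidents[t] = rng.poisson(base_rate + build_up[t])

print("total incidents:", incidents.sum())
print("peak build-up:", round(build_up.max(), 2))
```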

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2
