Search results

1 – 10 of 766
Article
Publication date: 8 February 2013

Ofir Ben‐Assuli and Moshe Leshno

Although Monte Carlo models and Markov chains are significant and widely applicable, there has been no formal justification for their use in evaluating hospital admission decisions…

Abstract

Purpose

Although Monte Carlo models and Markov chains are significant and widely applicable, there has been no formal justification for their use in evaluating hospital admission decisions, nor concrete data supporting it. For these reasons, this research was designed to provide a deeper understanding of these models. The purpose of this paper is to examine the usefulness of a computerized Monte Carlo simulation of admission decisions under the constraints of emergency departments.

Design/methodology/approach

The authors construct a simple decision tree using the expected utility method to represent the complex admission decision process in terms of quality-adjusted life years (QALYs), then show the advantages of using a Monte Carlo simulation to evaluate admission decisions in a cohort simulation, using a decision tree and a Markov chain.
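In outline, the cohort approach described here amounts to walking a Markov chain many times under Monte Carlo sampling and accumulating QALYs per cycle. The sketch below illustrates that idea; the states, transition probabilities and utility weights are hypothetical placeholders, not values from the paper:

```python
import random

# Hypothetical 3-state Markov chain for an admitted patient cohort.
# Transition probabilities and QALY weights are illustrative only.
P = {
    "well": [("well", 0.85), ("sick", 0.10), ("dead", 0.05)],
    "sick": [("well", 0.30), ("sick", 0.50), ("dead", 0.20)],
    "dead": [("dead", 1.0)],
}
UTILITY = {"well": 1.0, "sick": 0.6, "dead": 0.0}  # QALY weight per cycle

def simulate_patient(cycles=20, rng=random.random):
    """Walk one patient through the chain, accumulating QALYs."""
    state, qaly = "well", 0.0
    for _ in range(cycles):
        qaly += UTILITY[state]
        r, acc = rng(), 0.0
        for nxt, p in P[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return qaly

def monte_carlo_qaly(n=10_000):
    """Expected QALYs for one decision-tree branch, estimated by simulation."""
    return sum(simulate_patient() for _ in range(n)) / n
```

Running `monte_carlo_qaly()` for each branch of the admission decision tree (admit vs. discharge, each with its own chain) and comparing the estimates is the basic comparison the abstract alludes to.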

Findings

After showing that the Monte Carlo simulation outperforms an expected utility method without a simulation, the authors develop a decision tree with such a model. Real cohort simulation data are used to demonstrate that integrating a Monte Carlo simulation shows which patients should be admitted.

Research limitations/implications

This paper may encourage researchers to use Monte Carlo simulation in evaluating the implications of admission decisions. The authors also propose applying the model in a computer simulation that deals with various CVD symptoms in clinical cohorts.

Originality/value

Aside from demonstrating the value of a Monte Carlo simulation as a powerful analysis tool, the paper's findings may prompt researchers to conduct decision analyses with Monte Carlo simulations in the healthcare environment.

Details

Journal of Enterprise Information Management, vol. 26 no. 1/2
Type: Research Article
ISSN: 1741-0398


Content available
Article
Publication date: 23 October 2023

Adam Biggs and Joseph Hamilton

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle…

Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.
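As a rough illustration of how such a simulation folds speed and accuracy into one outcome, the sketch below draws shot-to-shot times from a normal distribution and treats each shot as an independent Bernoulli hit, so a faster-but-less-accurate shooter and a slower-but-more-accurate one can be compared on the same scale. All parameter values are invented for the example:

```python
import random

def time_to_first_hit(mean_shot_time, sd_shot_time, hit_prob, rng=random):
    """Simulate one string of fire: keep shooting until the first hit lands.

    Speed (shot-to-shot time) and accuracy (per-shot hit probability) are
    combined into a single outcome: total time needed to score a hit.
    """
    t = 0.0
    while True:
        t += max(0.05, rng.gauss(mean_shot_time, sd_shot_time))
        if rng.random() < hit_prob:
            return t

def expected_time(mean_shot_time, sd_shot_time, hit_prob, n=20_000):
    """Monte Carlo estimate of the mean time to first hit."""
    sims = [time_to_first_hit(mean_shot_time, sd_shot_time, hit_prob)
            for _ in range(n)]
    return sum(sims) / n

# Is a 300 ms speed edge worth a 10% accuracy deficit? (illustrative numbers)
fast = expected_time(1.2, 0.2, 0.70)   # faster but less accurate shooter
slow = expected_time(1.5, 0.2, 0.80)   # slower but more accurate shooter
```

Because the full distribution of simulated times is available, the same run also yields the variability of performance, not just the mean, which is the consistency argument made above.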

Findings

The review divides outcomes into theoretical overview and practical implication sections. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small arms combat modeling. This allows new human performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can instead present outcomes in terms of lethality. This method should convey the impact of a marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 2
Type: Research Article
ISSN: 2399-6439


Content available
Article
Publication date: 18 May 2023

Adam Biggs, Greg Huffman, Joseph Hamilton, Ken Javes, Jacob Brookfield, Anthony Viggiani, John Costa and Rachel R. Markwald

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in…

Abstract

Purpose

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in a meaningful way for the end users. The purpose of this study is to demonstrate how simple simulation techniques can improve interpretations of marksmanship data.

Design/methodology/approach

This study uses three simulations to demonstrate the advantages of small arms combat modeling, including (1) the benefits of incorporating a Markov Chain into Monte Carlo shooting simulations; (2) how small arms combat modeling is superior to point-based evaluations; and (3) why continuous-time chains better capture performance than discrete-time chains.
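A minimal version of idea (1), a Monte Carlo duel whose state advances shot by shot until one side scores a hit, might look as follows; the hit probabilities and firing cadences are illustrative only, not the study's data:

```python
import random

def duel(p_hit_a, t_shot_a, p_hit_b, t_shot_b, rng=random):
    """Discrete-event duel: each shooter fires at their own cadence;
    the first hit ends the engagement. Returns True if shooter A wins."""
    t_a, t_b = t_shot_a, t_shot_b  # time of each shooter's next shot
    while True:
        if t_a <= t_b:
            if rng.random() < p_hit_a:
                return True
            t_a += t_shot_a
        else:
            if rng.random() < p_hit_b:
                return False
            t_b += t_shot_b

def win_probability(n=20_000, **kw):
    """Monte Carlo estimate of A's chance of winning the engagement."""
    return sum(duel(**kw) for _ in range(n)) / n

# Illustrative: A is faster but less accurate than B.
p = win_probability(p_hit_a=0.6, t_shot_a=1.0, p_hit_b=0.8, t_shot_b=1.5)
```

This is a discrete-time chain in the sense discussed above: outcomes change only at shot events. A continuous-time variant would instead draw exponential (or otherwise random) inter-shot times, which is the distinction the third simulation addresses.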

Findings

The proposed method reduces ambiguity in low-accuracy scenarios while also incorporating a more holistic view of performance as outcomes simultaneously incorporate speed and accuracy rather than holding one constant.

Practical implications

This process determines the probability of winning an engagement against a given opponent while circumventing arbitrary discussions of speed and accuracy trade-offs. A shooter wins 70% of combat engagements against a given opponent, rather than simply scoring 15 more points. Moreover, risk exposure is quantified by determining the likely casualties suffered to achieve victory. This combination makes the practical consequences of human performance differences tangible to the end users. Taken together, this approach advances the operations research analyses of squad-level combat engagements.

Originality/value

For more than a century, marksmanship evaluations have used point-based systems to classify shooters. However, these scoring methods were developed for competitive integrity rather than lethality as points do not adequately capture combat capabilities. The proposed method thus represents a major shift in the marksmanship scoring paradigm.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 1
Type: Research Article
ISSN: 2399-6439


Article
Publication date: 5 July 2022

Firano Zakaria and Anass Benbachir

One of the crucial issues in contemporary finance is the prediction of the volatility of financial assets. In this paper, the authors are interested in modelling the…

Abstract

Purpose

One of the crucial issues in contemporary finance is the prediction of the volatility of financial assets. In this paper, the authors are interested in modelling the stochastic volatility of the MAD/EURO and MAD/USD exchange rates.

Design/methodology/approach

For this purpose, the authors adopt a Bayesian approach based on the MCMC (Markov chain Monte Carlo) algorithm, which makes it possible to reproduce the main stylized empirical facts of the assets studied. The data used in this study are the daily historical series of MAD/EURO and MAD/USD exchange rates covering the period from February 2, 2000, to March 3, 2017, representing 4,456 observations.
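The abstract gives no implementation detail, but the core MCMC machinery can be sketched with a random-walk Metropolis sampler on a deliberately simplified target: a single constant log-volatility parameter for Gaussian returns, rather than the paper's full latent stochastic volatility model. Everything below is a toy stand-in:

```python
import math
import random

def log_post(h, returns):
    """Log-posterior of log-variance h for returns r_t ~ N(0, exp(h)),
    under a flat prior on h (toy model, not the paper's)."""
    var = math.exp(h)
    return sum(-0.5 * (math.log(2 * math.pi * var) + r * r / var)
               for r in returns)

def metropolis(returns, n_iter=5_000, step=0.1, seed=1):
    """Random-walk Metropolis chain over h."""
    rng = random.Random(seed)
    h, lp = 0.0, log_post(0.0, returns)
    draws = []
    for _ in range(n_iter):
        h_new = h + rng.gauss(0.0, step)
        lp_new = log_post(h_new, returns)
        if math.log(rng.random()) < lp_new - lp:   # MH accept/reject
            h, lp = h_new, lp_new
        draws.append(h)
    return draws

# Synthetic "returns" with true volatility 0.2, i.e. h = ln(0.04) ≈ -3.22
rng = random.Random(0)
data = [rng.gauss(0.0, 0.2) for _ in range(500)]
draws = metropolis(data)
post_mean = sum(draws[1_000:]) / len(draws[1_000:])  # discard burn-in
```

The same accept/reject loop, applied to a vector of latent volatilities and the model's structural parameters, is what produces the posterior histograms and cumulative averages mentioned in the findings.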

Findings

With the aid of this approach, the authors were able to estimate all the random parameters of the stochastic volatility model, which permits the prediction of future exchange rates. The authors also simulated the histograms, the posterior densities and the cumulative averages of the model parameters. The predictive efficiency of the stochastic volatility model for Morocco can facilitate the management of the exchange rate in a more flexible exchange regime, ensuring better targeting of monetary and exchange policies.

Originality/value

To the best of the authors’ knowledge, the novelty of the paper lies in producing a tool for predicting the evolution of the Moroccan exchange rate and in designing a tool for the monetary authorities, who today take a proactive approach to managing the exchange rate. Cyclical policies such as monetary policy and exchange-rate policy can introduce this type of modelling into the decision-making process to achieve better stabilization of the macroeconomic and financial framework.

Details

Journal of Modelling in Management, vol. 18 no. 5
Type: Research Article
ISSN: 1746-5664


Book part
Publication date: 19 November 2014

Martin Burda

The BEKK GARCH class of models presents a popular set of tools for applied analysis of dynamic conditional covariances. Within this class the analyst faces a range of model…

Abstract

The BEKK GARCH class of models presents a popular set of tools for applied analysis of dynamic conditional covariances. Within this class the analyst faces a range of model choices that trade off flexibility with parameter parsimony. In the most flexible unrestricted BEKK the parameter dimensionality increases quickly with the number of variables. Covariance targeting decreases model dimensionality but induces a set of nonlinear constraints on the underlying parameter space that are difficult to implement. Recently, the rotated BEKK (RBEKK) has been proposed whereby a targeted BEKK model is applied after the spectral decomposition of the conditional covariance matrix. An easily estimable RBEKK implies a full albeit constrained BEKK for the unrotated returns. However, the degree of the implied restrictiveness is currently unknown. In this paper, we suggest a Bayesian approach to estimation of the BEKK model with targeting based on Constrained Hamiltonian Monte Carlo (CHMC). We take advantage of suitable parallelization of the problem within CHMC utilizing the newly available computing power of multi-core CPUs and Graphical Processing Units (GPUs) that enables us to deal effectively with the inherent nonlinear constraints posed by covariance targeting in relatively high dimensions. Using parallel CHMC we perform a model comparison in terms of predictive ability of the targeted BEKK with the RBEKK in the context of an application concerning a multivariate dynamic volatility analysis of a Dow Jones Industrial returns portfolio. Although the RBEKK does improve over a diagonal BEKK restriction, it is clearly dominated by the full targeted BEKK model.

Details

Bayesian Model Comparison
Type: Book
ISBN: 978-1-78441-185-5


Article
Publication date: 29 November 2018

Dilip Sembakutti, Aldin Ardian, Mustafa Kumral and Agus Pulung Sasmito

The purpose of this paper is twofold: an approach is proposed to determine the optimum replacement time for shovel teeth; and a risk-quantification approach is developed to…

Abstract

Purpose

The purpose of this paper is twofold: an approach is proposed to determine the optimum replacement time for shovel teeth, and a risk-quantification approach is developed to derive a confidence interval for the replacement time.

Design/methodology/approach

The risk-quantification approach is based on a combination of Monte Carlo simulation and a Markov chain. A Monte Carlo simulation is used whereby the wear of shovel teeth is probabilistically monitored over time.
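A stylized version of this idea: wear grows by a random increment each shift, a tooth fails when cumulative wear crosses a limit, and the replacement time is read off a low percentile of the simulated failure-time distribution, which bounds the risk of an in-service failure. Every number below is hypothetical:

```python
import random

def shifts_to_failure(mean_wear=1.0, sd_wear=0.3, limit=40.0, rng=random):
    """Simulate one tooth: accumulate random wear per shift until failure."""
    wear, shifts = 0.0, 0
    while wear < limit:
        wear += max(0.0, rng.gauss(mean_wear, sd_wear))  # wear never decreases
        shifts += 1
    return shifts

def replacement_time(n=10_000, risk=0.05):
    """Shift count by which only `risk` of simulated teeth have failed."""
    times = sorted(shifts_to_failure() for _ in range(n))
    return times[int(risk * n)]

t_replace = replacement_time()
```

Replacing at `t_replace` shifts (here, the 5th percentile of simulated failure times) trades a small amount of unused tooth life for a quantified, low probability of unplanned failure, which is the confidence-interval framing of the purpose statement.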

Findings

Results show that a proper replacement strategy has potential to increase operation efficiency and the uncertainties associated with this strategy can be managed.

Research limitations/implications

The failure time distribution of a tooth is assumed to remain “identically distributed and independent.” Planned tooth replacements are always done when the shovel is not in operation (e.g. between shifts).

Practical implications

The proposed approach can be effectively used to determine a replacement strategy, along with its confidence level, for preventive maintenance planning.

Originality/value

The originality of the paper rests on developing a novel approach to monitor wear on mining shovels probabilistically. Uncertainty associated with production targets is quantified.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 10
Type: Research Article
ISSN: 0265-671X


Article
Publication date: 13 November 2019

Richard Ohene Asiedu and William Gyadu-Asiedu

This paper aims to focus on developing a baseline model for time overrun.

Abstract

Purpose

This paper aims to focus on developing a baseline model for time overrun.

Design/methodology/approach

Information on 321 completed construction projects was used to assess the predictive performance of two statistical techniques, namely, multiple regression and the Bayesian approach.

Findings

The eventual results from the Bayesian Markov chain Monte Carlo model were observed to improve the predictive ability of the model compared with multiple linear regression. Besides the nuances peculiar to the projects executed, the scope factors initial duration, gross floor area and number of storeys were observed to be stable predictors of time overrun.

Originality/value

This current model contributes to improving the reliability of predicting time overruns.

Details

Journal of Engineering, Design and Technology, vol. 18 no. 3
Type: Research Article
ISSN: 1726-0531


Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult…

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. Existing methods are computationally costly, inefficient, or both. In this paper, we propose computationally simple estimators for the SV model which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) test or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stock prices (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2


Open Access
Article
Publication date: 7 August 2019

Markus Neumayer, Thomas Suppan and Thomas Bretterklieber

The application of statistical inversion theory provides a powerful approach for solving estimation problems including the ability for uncertainty quantification (UQ) by means of…

Abstract

Purpose

The application of statistical inversion theory provides a powerful approach for solving estimation problems including the ability for uncertainty quantification (UQ) by means of Markov chain Monte Carlo (MCMC) methods and Monte Carlo integration. This paper aims to analyze the application of a state reduction technique within different MCMC techniques to improve the computational efficiency and the tuning process of these algorithms.

Design/methodology/approach

A reduced state representation is constructed from a general prior distribution. For sampling, the Metropolis–Hastings (MH) algorithm and the Gibbs sampler are used. Efficient proposal generation techniques and techniques for conditional sampling are proposed and evaluated on an exemplary inverse problem.
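To illustrate the Gibbs side of this design on a toy target (not the paper's reduced-state prior), the sketch below alternates draws from the two closed-form full conditionals of a bivariate normal with correlation rho, which is the defining move of a Gibbs sampler:

```python
import math
import random

RHO = 0.8  # correlation of the toy bivariate normal target

def gibbs(n_iter=20_000, seed=42):
    """Gibbs sampler for a standard bivariate normal with correlation RHO.

    Each coordinate is resampled from its full conditional, which is
    univariate normal: x | y ~ N(RHO*y, 1 - RHO^2), and symmetrically for y.
    """
    rng = random.Random(seed)
    x = y = 0.0
    cond_sd = math.sqrt(1.0 - RHO * RHO)
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(RHO * y, cond_sd)
        y = rng.gauss(RHO * x, cond_sd)
        draws.append((x, y))
    return draws

draws = gibbs()
xs = [d[0] for d in draws[1_000:]]          # discard burn-in
mean_x = sum(xs) / len(xs)
```

In the paper's setting the conditionals come from the reduced-state prior rather than a closed-form normal, which is why an "efficient technique for conditional sampling" is the key ingredient; the alternating structure, however, is the same.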

Findings

For the MH algorithm, high acceptance rates can be obtained with a simple proposal kernel. For the Gibbs sampler, an efficient technique for conditional sampling was found. The state reduction scheme stabilizes the ill-posed inverse problem, allowing a solution without a dedicated prior distribution. The state reduction is suitable to represent general material distributions.

Practical implications

The state reduction scheme and the MCMC techniques can be applied in different imaging problems. The stabilizing nature of the state reduction improves the solution of ill-posed problems. The tuning of the MCMC methods is simplified.

Originality/value

The paper presents a method to improve the solution process of inverse problems within the Bayesian framework. The stabilization of the inverse problem due to the state reduction improves the solution. The approach simplifies the tuning of MCMC methods.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 38 no. 5
Type: Research Article
ISSN: 0332-1649


Article
Publication date: 6 March 2017

Zbigniew Bulinski and Helcio R.B. Orlande

This paper aims to present the development and application of a Bayesian inverse approach for retrieving the parameters of a non-linear diffusion coefficient based on the integral…

Abstract

Purpose

This paper aims to present the development and application of a Bayesian inverse approach for retrieving the parameters of a non-linear diffusion coefficient based on integral information.

Design/methodology/approach

Bayes' formula was used to construct the posterior distribution of the unknown parameters of the non-linear diffusion coefficient. The resulting a posteriori distribution of the sought parameters was integrated using the Markov chain Monte Carlo method to obtain expected values of the estimated diffusivity parameters as well as their confidence intervals. The unsteady non-linear diffusion equation was discretised with the Global Radial Basis Function Collocation method and solved in time using the Crank–Nicolson technique.

Findings

A number of manufactured analytical solutions of the non-linear diffusion problem were used to verify the accuracy of the developed inverse approach. Reasonably good agreement, even for highly correlated parameters, was obtained. The technique was therefore used to compute the concentration-dependent diffusion coefficient of water in paper.

Originality/value

An original inverse technique, which efficiently couples a meshless solution of the diffusion problem with the Bayesian inverse methodology, is presented in the paper. The methodology was extensively verified and applied to a real-life problem.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 27 no. 3
Type: Research Article
ISSN: 0961-5539

