Search results

1 – 10 of 571
Article
Publication date: 8 February 2013

Ofir Ben‐Assuli and Moshe Leshno

Although very significant and applicable, Monte Carlo models and Markov chains have received neither formal justification for their use in evaluating hospital admission decisions…

Abstract

Purpose

Although very significant and applicable, Monte Carlo models and Markov chains have received neither formal justification for their use in evaluating hospital admission decisions nor concrete data supporting that use. For these reasons, this research was designed to provide a deeper understanding of these models. The purpose of this paper is to examine the usefulness of a computerized Monte Carlo simulation of admission decisions under the constraints of emergency departments.

Design/methodology/approach

The authors construct a simple decision tree using the expected utility method to represent the complex admission decision process in terms of quality-adjusted life years (QALY), and then show the advantages of using a Monte Carlo simulation to evaluate admission decisions in a cohort simulation, using a decision tree and a Markov chain.
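
For readers unfamiliar with how such a cohort simulation is organised, the following minimal sketch runs hypothetical patients through a two-state (alive/dead) Markov chain under an "admit" versus "discharge" decision and averages the resulting QALYs. The transition probabilities, cycle count and utility weights are illustrative assumptions, not the authors' model or data.

```python
# Minimal sketch of a Monte Carlo cohort simulation over a Markov chain,
# comparing expected QALYs for "admit" vs. "discharge" decisions.
# All transition probabilities and utility weights are hypothetical.
import random

P_DEATH = {"admit": 0.05, "discharge": 0.12}          # per-cycle death probability
QALY_PER_CYCLE = {"admit": 0.85, "discharge": 0.90}   # utility while alive

def simulate_patient(decision, cycles=10):
    """Run one patient through the Markov chain and accumulate QALYs."""
    qalys, alive = 0.0, True
    for _ in range(cycles):
        if not alive:
            break
        qalys += QALY_PER_CYCLE[decision]
        alive = random.random() >= P_DEATH[decision]
    return qalys

def monte_carlo(decision, n_patients=100_000):
    """Average QALYs over a simulated cohort for one decision."""
    return sum(simulate_patient(decision) for _ in range(n_patients)) / n_patients

if __name__ == "__main__":
    for d in ("admit", "discharge"):
        print(d, round(monte_carlo(d), 3), "expected QALYs")
```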

Findings

After showing that the Monte Carlo simulation outperforms an expected utility method without a simulation, the authors develop a decision tree with such a model. Real cohort simulation data are used to demonstrate that integrating a Monte Carlo simulation shows which patients should be admitted.

Research limitations/implications

This paper may encourage researchers to use Monte Carlo simulation in evaluating admission decision implications. The authors also propose applying the model when using a computer simulation that deals with various CVD symptoms in clinical cohorts.

Originality/value

Aside from demonstrating the value of a Monte Carlo simulation as a powerful analysis tool, the paper's findings may prompt researchers to conduct a decision analysis with a Monte Carlo simulation in the healthcare environment.

Details

Journal of Enterprise Information Management, vol. 26 no. 1/2
Type: Research Article
ISSN: 1741-0398

Article
Publication date: 23 October 2023

Adam Biggs and Joseph Hamilton

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle…

Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.
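
As a rough illustration of the idea (not the article's own model), the sketch below samples both shot time and hit probability for two hypothetical shooters and estimates how often the faster but less accurate shooter achieves the first hit. All distributions and parameter values are assumptions chosen for demonstration.

```python
# Illustrative Monte Carlo comparison of two hypothetical shooters whose
# speed (seconds per shot) and accuracy (hit probability) both vary from
# trial to trial. The numbers below are assumptions, not the article's data.
import random

def time_to_first_hit(mean_shot_time, sd_shot_time, hit_prob):
    """Fire until the first hit; return total elapsed time in seconds."""
    elapsed = 0.0
    while True:
        elapsed += max(0.05, random.gauss(mean_shot_time, sd_shot_time))
        if random.random() < hit_prob:
            return elapsed

def win_rate(shooter_a, shooter_b, trials=50_000):
    """Fraction of simulated engagements in which shooter A hits first."""
    wins = sum(time_to_first_hit(*shooter_a) < time_to_first_hit(*shooter_b)
               for _ in range(trials))
    return wins / trials

# Shooter A is 300 ms faster per shot; shooter B is 10% more accurate.
a = (1.2, 0.2, 0.70)   # (mean shot time in s, sd, hit probability)
b = (1.5, 0.2, 0.80)
print(f"A wins {win_rate(a, b):.1%} of simulated engagements")
```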

Findings

The review divides outcomes into theoretical overview and practical implication sections. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small arms combat modeling. This application allows new human performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can present outcomes in terms of lethality. This method should convey the impact of a marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 2
Type: Research Article
ISSN: 2399-6439

Article
Publication date: 18 May 2023

Adam Biggs, Greg Huffman, Joseph Hamilton, Ken Javes, Jacob Brookfield, Anthony Viggiani, John Costa and Rachel R. Markwald

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in…

Abstract

Purpose

Marksmanship data is a staple of military and law enforcement evaluations. This ubiquitous nature creates a critical need to use all relevant information and to convey outcomes in a meaningful way for the end users. The purpose of this study is to demonstrate how simple simulation techniques can improve interpretations of marksmanship data.

Design/methodology/approach

This study uses three simulations to demonstrate the advantages of small arms combat modeling, including (1) the benefits of incorporating a Markov Chain into Monte Carlo shooting simulations; (2) how small arms combat modeling is superior to point-based evaluations; and (3) why continuous-time chains better capture performance than discrete-time chains.
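
A compressed, hypothetical sketch of points (1) and (3) follows: treating each side's incapacitating hits as a Poisson process whose rate combines speed and accuracy gives a continuous-time comparison in which the engagement resolves to whichever absorbing state is reached first. The rates used are illustrative, not the study's parameters.

```python
# Hypothetical continuous-time duel: each side's decisive hits arrive as a
# Poisson process with rate = hit probability / seconds per shot, and the
# Markov state jumps to whichever absorbing state (A wins / B wins) occurs
# first. Rates below are assumptions for illustration only.
import random

def duel(rate_a, rate_b, trials=100_000):
    a_wins = 0
    for _ in range(trials):
        # Exponential waiting time until each side lands a decisive hit.
        t_a = random.expovariate(rate_a)
        t_b = random.expovariate(rate_b)
        a_wins += t_a < t_b
    return a_wins / trials

rate_a = 0.70 / 1.2   # hit probability / mean seconds per shot
rate_b = 0.80 / 1.5
print(f"Side A wins {duel(rate_a, rate_b):.1%} of simulated engagements")
```

A discrete-time version would advance in fixed steps instead, which can obscure which hit actually landed first within a step.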

Findings

The proposed method reduces ambiguity in low-accuracy scenarios while also incorporating a more holistic view of performance as outcomes simultaneously incorporate speed and accuracy rather than holding one constant.

Practical implications

This process determines the probability of winning an engagement against a given opponent while circumventing arbitrary discussions of speed and accuracy trade-offs. For example, someone wins 70% of combat engagements against a given opponent rather than scoring 15 more points. Moreover, risk exposure is quantified by determining the likely casualties suffered to achieve victory. This combination makes the practical consequences of human performance differences tangible to the end users. Taken together, this approach advances the operations research analyses of squad-level combat engagements.

Originality/value

For more than a century, marksmanship evaluations have used point-based systems to classify shooters. However, these scoring methods were developed for competitive integrity rather than lethality as points do not adequately capture combat capabilities. The proposed method thus represents a major shift in the marksmanship scoring paradigm.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 1
Type: Research Article
ISSN: 2399-6439

Article
Publication date: 23 March 2012

Boris Mitavskiy, Jonathan Rowe and Chris Cannings

The purpose of this paper is to establish a version of a theorem that originated from population genetics and was later adopted in evolutionary computation theory, one that will…

Abstract

Purpose

The purpose of this paper is to establish a version of a theorem that originated from population genetics and was later adopted in evolutionary computation theory, one that will lead to novel Monte Carlo sampling algorithms that provably increase AI potential.

Design/methodology/approach

In the current paper the authors set up a mathematical framework, then state and prove a version of a Geiringer-like theorem that is very well suited to the development of Monte Carlo sampling algorithms for making decisions under randomness and incomplete information.

Findings

This work establishes an important theoretical link between classical population genetics, evolutionary computation theory and model-free reinforcement learning methodology. Not only may the theory explain the success of the currently existing Monte Carlo tree sampling methodology, but it also leads to the development of novel Monte Carlo sampling techniques guided by rigorous mathematical foundations.

Practical implications

The theoretical foundations established in the current work provide guidance for the design of powerful Monte Carlo sampling algorithms in model-free reinforcement learning, to tackle numerous problems in computational intelligence.
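
To make the connection to model-free reinforcement learning concrete, here is a generic Monte Carlo rollout sketch for action-value estimation in a toy environment. It illustrates the kind of sampling such theory informs and is not the specific algorithm derived in the paper; the environment and all parameters are invented for demonstration.

```python
# Plain Monte Carlo rollouts for action-value estimation in a toy chain
# environment. Entirely illustrative; not the paper's algorithm.
import random

N_STATES = 5          # states 0..4; entering state 4 yields reward 1

def step(state, action):
    """Toy dynamics: the intended direction (action 1 = right, 0 = left)
    is taken with probability 0.8, the opposite direction otherwise."""
    intended = 1 if action == 1 else -1
    move = intended if random.random() < 0.8 else -intended
    nxt = min(max(state + move, 0), N_STATES - 1)
    return nxt, 1.0 if nxt == N_STATES - 1 else 0.0

def rollout_value(state, action, depth=20, n=5_000):
    """Average return of taking `action` once, then acting uniformly at random."""
    total = 0.0
    for _ in range(n):
        s, r = step(state, action)
        ret, d = r, 1
        while s != N_STATES - 1 and d < depth:
            s, r = step(s, random.choice((0, 1)))
            ret += r
            d += 1
        total += ret
    return total / n

# Estimated action values from state 2 for "left" (0) and "right" (1).
print({a: round(rollout_value(2, a), 3) for a in (0, 1)})
```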

Originality/value

Establishing a Geiringer-like theorem with non-homologous recombination was a long-standing open problem in evolutionary computation theory. Apart from overcoming this challenge in a mathematically elegant fashion and establishing a rather general and powerful version of the theorem, this work leads directly to the development of novel, provably powerful algorithms for decision making in environments involving randomness and hidden or incomplete information.

Article
Publication date: 29 November 2018

Dilip Sembakutti, Aldin Ardian, Mustafa Kumral and Agus Pulung Sasmito

The purpose of this paper is twofold: an approach is proposed to determine the optimum replacement time for shovel teeth; and a risk-quantification approach is developed to…

Abstract

Purpose

The purpose of this paper is twofold: an approach is proposed to determine the optimum replacement time for shovel teeth; and a risk-quantification approach is developed to derive a confidence interval for the replacement time.

Design/methodology/approach

The risk-quantification approach is based on a combination of Monte Carlo simulation and a Markov chain, whereby the wear of shovel teeth is probabilistically monitored over time.
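
A minimal sketch of that combination, under assumed parameters, might look as follows: tooth wear advances along a discrete-time Markov chain, and Monte Carlo replication of the wear path yields a distribution (and hence an interval estimate) for the time until the wear limit is reached. The advance probability and wear limit are placeholders, not the paper's estimates.

```python
# Monte Carlo over a discrete-time Markov wear chain for a single tooth.
# P_ADVANCE and WEAR_LIMIT are hypothetical placeholders.
import random

P_ADVANCE = 0.15    # per-shift probability of moving to the next wear state
WEAR_LIMIT = 6      # wear state at which the tooth must be replaced

def shifts_until_limit():
    state, shifts = 0, 0
    while state < WEAR_LIMIT:
        shifts += 1
        if random.random() < P_ADVANCE:
            state += 1
    return shifts

samples = sorted(shifts_until_limit() for _ in range(20_000))
mean_life = sum(samples) / len(samples)
lo, hi = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"mean life {mean_life:.1f} shifts; 90% interval [{lo}, {hi}] shifts")
```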

Findings

Results show that a proper replacement strategy has the potential to increase operational efficiency and that the uncertainties associated with this strategy can be managed.

Research limitations/implications

The failure time distribution of a tooth is assumed to remain “identically distributed and independent.” Planned tooth replacements are always done when the shovel is not in operation (e.g. between a shift change).

Practical implications

The proposed approach can be effectively used to determine a replacement strategy, along with its confidence level, for preventive maintenance planning.

Originality/value

The originality of the paper rests on developing a novel approach to monitor wear on mining shovels probabilistically. Uncertainty associated with production targets is quantified.

Details

International Journal of Quality & Reliability Management, vol. 35 no. 10
Type: Research Article
ISSN: 0265-671X

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult…

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. The existing methods are computationally costly and/or inefficient. In this paper, we propose computationally simple estimators for the SV model, which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) test or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns of three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Book part
Publication date: 24 April 2023

Florens Odendahl, Barbara Rossi and Tatevik Sekhposyan

The authors propose novel tests for the detection of Markov switching deviations from forecast rationality. Existing forecast rationality tests either focus on constant deviations…

Abstract

The authors propose novel tests for the detection of Markov switching deviations from forecast rationality. Existing forecast rationality tests either focus on constant deviations from forecast rationality over the full sample or are constructed to detect smooth deviations based on non-parametric techniques. In contrast, the proposed tests are parametric and have an advantage in detecting abrupt departures from unbiasedness and efficiency, which the authors demonstrate with Monte Carlo simulations. Using the proposed tests, the authors investigate whether Blue Chip Financial Forecasts (BCFF) for the Federal Funds Rate (FFR) are unbiased. The tests find evidence of a state-dependent bias: forecasters tend to systematically overpredict interest rates during periods of monetary easing, while the forecasts are unbiased otherwise. The authors show that a similar state-dependent bias is also present in market-based forecasts of interest rates, but not in the forecasts of real GDP growth and GDP deflator-based inflation. The results emphasize the special role played by monetary policy in shaping interest rate expectations above and beyond macroeconomic fundamentals.

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4

Article
Publication date: 5 July 2022

Firano Zakaria and Anass Benbachir

One of the crucial issues in contemporary finance is the prediction of the volatility of financial assets. In this paper, the authors are interested in modelling the…

Abstract

Purpose

One of the crucial issues in contemporary finance is the prediction of the volatility of financial assets. In this paper, the authors are interested in modelling the stochastic volatility of the MAD/EURO and MAD/USD exchange rates.

Design/methodology/approach

For this purpose, the authors adopt a Bayesian approach based on an MCMC (Markov chain Monte Carlo) algorithm, which makes it possible to reproduce the main stylized empirical facts of the assets studied. The data used in this study are the daily historical series of MAD/EURO and MAD/USD exchange rates covering the period from February 2, 2000, to March 3, 2017, representing 4,456 observations.
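
As a rough indication of what such an MCMC scheme involves (not the authors' sampler), the sketch below applies a single-move random-walk Metropolis update to the latent log-volatilities of a basic SV model with the parameters held fixed. A full Bayesian treatment would also sample those parameters; every value here is an assumption for illustration.

```python
# Single-move random-walk Metropolis for the latent log-volatilities h_t of
#   y_t = exp(h_t / 2) * eps_t,   h_t = mu + phi * (h_{t-1} - mu) + sigma * eta_t,
# with (mu, phi, sigma) held fixed. Purely illustrative values throughout.
import math, random

random.seed(0)
mu, phi, sigma = -1.0, 0.95, 0.2

# Simulate a short return series from the model itself.
T = 300
h_true = [mu]
for _ in range(T - 1):
    h_true.append(mu + phi * (h_true[-1] - mu) + sigma * random.gauss(0, 1))
y = [math.exp(ht / 2) * random.gauss(0, 1) for ht in h_true]

def log_obs(ht, yt):       # log p(y_t | h_t), up to an additive constant
    return -0.5 * (ht + yt * yt * math.exp(-ht))

def log_trans(hprev, ht):  # AR(1) transition log density, up to a constant
    return -0.5 * ((ht - mu - phi * (hprev - mu)) / sigma) ** 2

state = [mu] * T           # start the latent chain at the unconditional mean
draws_h1 = []
for it in range(2_000):
    for t in range(T):
        prop = state[t] + random.gauss(0, 0.3)
        cur = new = 0.0
        cur += log_obs(state[t], y[t]);  new += log_obs(prop, y[t])
        if t > 0:
            cur += log_trans(state[t - 1], state[t]);  new += log_trans(state[t - 1], prop)
        if t < T - 1:
            cur += log_trans(state[t], state[t + 1]);  new += log_trans(prop, state[t + 1])
        if math.log(random.random()) < new - cur:
            state[t] = prop
    if it >= 1_000:        # keep draws after burn-in
        draws_h1.append(state[0])

print(f"posterior mean of h_1: {sum(draws_h1) / len(draws_h1):.3f} "
      f"(simulated truth {h_true[0]:.3f})")
```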

Findings

With the aid of this approach, the authors were able to estimate all the random parameters of the stochastic volatility model, which permit the prediction of future exchange rates. The authors also simulated the histograms, the posterior densities and the cumulative averages of the model parameters. The predictive efficiency of the stochastic volatility model for Morocco can facilitate the management of the exchange rate under a more flexible exchange regime and ensure better targeting of monetary and exchange policies.

Originality/value

To the best of the authors’ knowledge, the novelty of the paper lies in producing a tool for predicting the evolution of the Moroccan exchange rate and in designing a tool for the monetary authorities, who today take a proactive approach to managing the exchange rate. Cyclical policies such as monetary policy and exchange rate policy can introduce this type of modelling into the decision-making process to achieve better stabilization of the macroeconomic and financial framework.

Details

Journal of Modelling in Management, vol. 18 no. 5
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially…

Abstract

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
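
A highly simplified sketch of the sequential idea, on a toy model rather than the authors' algorithm, is given below: parameter particles are reweighted as each new block of data arrives, resampled, and refreshed with a short Metropolis move. The model (a normal mean with known variance), prior and tuning constants are assumptions for illustration.

```python
# Toy sequential posterior simulator by data tempering: weight, resample,
# then move. Model, prior and tuning values are illustrative assumptions.
import math, random

random.seed(1)
TRUE_MU, SIGMA = 0.5, 1.0
data = [random.gauss(TRUE_MU, SIGMA) for _ in range(400)]
blocks = [data[i:i + 50] for i in range(0, len(data), 50)]

N = 1000
particles = [random.gauss(0, 5) for _ in range(N)]   # draws from the N(0, 25) prior

def loglik(m, ys):
    return sum(-0.5 * ((y - m) / SIGMA) ** 2 for y in ys)

def logprior(m):
    return -m * m / 50.0                              # N(0, 25) prior kernel

seen = []
for block in blocks:
    # 1) importance weights from the likelihood of the newly arrived block
    logw = [loglik(p, block) for p in particles]
    shift = max(logw)
    weights = [math.exp(lw - shift) for lw in logw]
    # 2) multinomial resampling
    particles = random.choices(particles, weights=weights, k=N)
    seen += block
    # 3) move step: one Metropolis update targeting the posterior given `seen`
    for i in range(N):
        prop = particles[i] + random.gauss(0, 0.1)
        log_ratio = (loglik(prop, seen) + logprior(prop)
                     - loglik(particles[i], seen) - logprior(particles[i]))
        if math.log(random.random()) < log_ratio:
            particles[i] = prop

print(f"posterior mean of mu = {sum(particles) / N:.3f} (true value {TRUE_MU})")
```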

Article
Publication date: 26 September 2008

Ling Wang, Jian Chu and Weijie Mao

The purpose of this paper is to develop a condition‐based replacement and spare provisioning policy for deteriorating systems with a number of identical units.

Abstract

Purpose

The purpose of this paper is to develop a condition‐based replacement and spare provisioning policy for deteriorating systems with a number of identical units.

Design/methodology/approach

The deterioration of units is modeled using discrete-time Markov chains, so that each unit's condition can be classified into one of a finite number of states. Then, a condition-based replacement and spare provisioning policy is proposed for deteriorating systems with a number of identical units. This policy combines a condition-based replacement policy with an (S, s) type inventory policy, where S is the maximum stock level and s is the reorder level. The Monte Carlo approach is used to evaluate the average cost rate of the system under the proposed policy. Finally, numerical examples are given to illustrate the performance of the proposed policy, along with a sensitivity analysis of the cost parameters.
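
To illustrate how such a joint policy can be evaluated by simulation, the following condensed sketch (with placeholder parameters, not the paper's) deteriorates several identical units along a discrete-time Markov chain, replaces a unit once it reaches a threshold state if a spare is on hand, manages spares with an (S, s) rule and a fixed lead time, and averages the cost rate by Monte Carlo.

```python
# Joint condition-based replacement and (S, s) spare provisioning, evaluated
# by Monte Carlo. All states, probabilities and costs are placeholders.
import random

N_UNITS, N_STATES = 4, 5        # wear states 0..4; state 4 means failed
REPLACE_AT = 3                  # condition-based replacement threshold
S_MAX, s_REORDER, LEAD = 3, 1, 2
COSTS = {"replace": 100, "failure": 500, "hold": 2, "order": 50}
P_STEP = 0.25                   # per-period probability of advancing one state

def average_cost_rate(periods=50_000):
    units = [0] * N_UNITS
    stock, pipeline, total = S_MAX, [], 0.0
    for _ in range(periods):
        # spares whose lead time has elapsed arrive
        pipeline = [t - 1 for t in pipeline]
        stock += sum(1 for t in pipeline if t == 0)
        pipeline = [t for t in pipeline if t > 0]
        # deterioration step of the Markov chain
        units = [min(u + (random.random() < P_STEP), N_STATES - 1) for u in units]
        # replace worn or failed units while spares are available
        for i, u in enumerate(units):
            if u >= REPLACE_AT and stock > 0:
                total += COSTS["failure"] if u == N_STATES - 1 else COSTS["replace"]
                units[i], stock = 0, stock - 1
        # (S, s) reorder rule on the inventory position
        position = stock + len(pipeline)
        if position <= s_REORDER:
            pipeline.extend([LEAD] * (S_MAX - position))
            total += COSTS["order"]
        total += COSTS["hold"] * stock
    return total / periods

print(f"estimated average cost rate: {average_cost_rate():.1f} per period")
```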

Findings

The negative influence of a longer lead time can be reduced by optimizing the condition-based replacement and spare-ordering decisions under the proposed policy.

Practical implications

This policy would be applicable for jointly optimizing the spare provisioning decisions and the condition‐based maintenance of the units in deteriorating systems (e.g. a group of identical motors included in a fleet of vehicles).

Originality/value

The paper considers simultaneously two aspects that influence condition‐based maintenance decisions: the availability of spares and the deterioration state of units.

Details

Journal of Quality in Maintenance Engineering, vol. 14 no. 4
Type: Research Article
ISSN: 1355-2511

1 – 10 of 571