Search results

1 – 10 of over 6000
Article
Publication date: 23 October 2023

Adam Biggs and Joseph Hamilton

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle…

Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.
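The abstract stops short of code, but the core idea can be sketched in a few lines. The Python toy below (all shot times and hit probabilities are invented for illustration, not taken from the article) runs Monte Carlo trials of time-to-first-hit so that a 300-millisecond speed difference and a 10% accuracy difference land on a single comparable scale:

```python
import random

def time_to_first_hit(mean_shot_time, hit_prob, rng, max_shots=20):
    """Simulate one engagement: fire until a hit lands; return elapsed time."""
    t = 0.0
    for _ in range(max_shots):
        t += rng.gauss(mean_shot_time, 0.1 * mean_shot_time)  # per-shot time
        if rng.random() < hit_prob:
            return t
    return t  # no hit within max_shots; return elapsed time anyway

def simulate(mean_shot_time, hit_prob, trials=10_000, seed=0):
    """Average time-to-first-hit over many simulated engagements."""
    rng = random.Random(seed)
    times = [time_to_first_hit(mean_shot_time, hit_prob, rng) for _ in range(trials)]
    return sum(times) / len(times)

# Shooter A is 300 ms faster per shot; shooter B is 10 points more accurate.
a = simulate(mean_shot_time=1.2, hit_prob=0.80)
b = simulate(mean_shot_time=1.5, hit_prob=0.90)
print(f"A mean time-to-hit: {a:.2f} s, B mean time-to-hit: {b:.2f} s")
```

Because the simulation folds both factors into one outcome distribution, the trade-off question posed in the Purpose section becomes an empirical comparison rather than a judgment call.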

Findings

The review divides outcomes into theoretical overview and practical implication sections. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small arms combat modeling. This application allows new human performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can present outcomes in terms of lethality. This method should help convey the impact of any marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 2
Type: Research Article
ISSN: 2399-6439

Details

Panel Data and Structural Labour Market Models
Type: Book
ISBN: 978-0-44450-319-0

Article
Publication date: 18 May 2023

Tamara Schamberger

Structural equation modeling (SEM) is a well-established and frequently applied method in various disciplines. New methods in the context of SEM are being introduced in an ongoing…

Abstract

Purpose

Structural equation modeling (SEM) is a well-established and frequently applied method in various disciplines. New methods in the context of SEM are being introduced in an ongoing manner. Since formal proof of statistical properties is difficult or impossible, new methods are frequently justified using Monte Carlo simulations. For SEM with covariance-based estimators, several tools are available to perform Monte Carlo simulations. Moreover, several guidelines on how to conduct a Monte Carlo simulation for SEM with these tools have been introduced. In contrast, software to estimate structural equation models with variance-based estimators such as partial least squares path modeling (PLS-PM) is limited.

Design/methodology/approach

As a remedy, the R package cSEM, which allows researchers to estimate structural equation models and to perform Monte Carlo simulations for SEM with variance-based estimators, has been introduced. This manuscript provides guidelines on how to conduct a Monte Carlo simulation for SEM with variance-based estimators using the R packages cSEM and cSEM.DGP.
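cSEM and cSEM.DGP are R packages, so the manuscript's actual workflow is not reproduced here. Purely as a sketch of the generic shape such a simulation takes (specify a data-generating process, estimate the model on many simulated samples, summarize bias and RMSE), here is a minimal Python analogue with an OLS slope standing in for a structural parameter:

```python
import random
import statistics

def dgp(n, beta, rng):
    """Hypothetical data-generating process: y = beta * x + standard normal noise."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [beta * x + rng.gauss(0, 1) for x in xs]
    return xs, ys

def estimate(xs, ys):
    """OLS slope, standing in for a structural parameter estimator."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def monte_carlo(beta=0.5, n=200, reps=500, seed=1):
    """Repeat draw-and-estimate, then summarize bias and RMSE of the estimator."""
    rng = random.Random(seed)
    est = [estimate(*dgp(n, beta, rng)) for _ in range(reps)]
    bias = statistics.mean(est) - beta
    rmse = statistics.mean((e - beta) ** 2 for e in est) ** 0.5
    return bias, rmse

bias, rmse = monte_carlo()
print(f"bias={bias:+.4f}, rmse={rmse:.4f}")
```

Each of the pieces above (DGP, estimator, replication loop, summary statistics) corresponds to a choice the researcher must justify, which is precisely what step-by-step guidelines of this kind formalize.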

Findings

The author introduces and recommends a six-step procedure to be followed in conducting each Monte Carlo simulation.

Originality/value

For each of the steps, common design patterns are given. Moreover, these guidelines are illustrated by an example Monte Carlo simulation with ready-to-use R code showing that PLS-PM needs the constructs to be embedded in a nomological net to yield valuable results.

Details

Industrial Management & Data Systems, vol. 123 no. 6
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 5 October 2012

I. Doltsinis

The purpose of this paper is to present computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent…

Abstract

Purpose

The purpose of this paper is to present computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent parameters.

Design/methodology/approach

Two approaches are distinguished, both of which rely on solvers from deterministic algorithms. Probabilistic analysis approximates the response by a Taylor series expansion about the mean input. Alternatively, stochastic simulation relies on random sampling of the input and statistical evaluation of the output.
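The two approaches can be contrasted on a toy nonlinear response. The sketch below (the response function and input distribution are invented for illustration) computes the first-order Taylor mean and variance about the mean input and compares them with plain Monte Carlo sampling:

```python
import math
import random

def g(x):
    """Hypothetical nonlinear response of a system to a random input x."""
    return math.exp(x)

mu, sigma = 0.0, 0.3  # random input: x ~ N(mu, sigma^2)

# Taylor-series (perturbation) approach: expand g about the mean input.
taylor_mean = g(mu)                # first-order approximation of E[g(x)]
taylor_var = (g(mu) * sigma) ** 2  # (dg/dx at mu)^2 * Var(x); here g' = g

# Monte Carlo approach: sample the input, evaluate the solver, tally the output.
rng = random.Random(0)
samples = [g(rng.gauss(mu, sigma)) for _ in range(50_000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1)

print(f"Taylor:      mean={taylor_mean:.3f}  var={taylor_var:.3f}")
print(f"Monte Carlo: mean={mc_mean:.3f}  var={mc_var:.3f}")
```

For this response the first-order Taylor mean slightly underestimates the true mean, which is exactly the kind of discrepancy that grows with input variability and motivates checking the perturbation result against sampling within the domain of applicability.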

Findings

Beyond the characterization of random response, methods of reliability assessment are discussed. Concepts of design improvement are presented. Optimization for robustness diminishes the sensitivity of the system to fluctuating parameters.

Practical implications

Deterministic algorithms available for the primary problem are utilized for stochastic analysis by statistical Monte Carlo sampling. The computational effort for the repeated solution of the primary problem depends on the variability of the system and is usually high. Alternatively, the analytic Taylor series expansion requires extension of the primary solver to the computation of derivatives of the response with respect to the random input. The method is restricted to the computation of output mean values and variances/covariances, with the effort determined by the amount of the random input. The results of the two methods are comparable within the domain of applicability.

Originality/value

The present account addresses the main issues related to the presence of randomness in engineering systems and processes. They comprise the analysis of stochastic systems, reliability, design improvement, optimization and robustness against randomness of the data. The analytical Taylor approach is contrasted to the statistical Monte Carlo sampling throughout. In both cases, algorithms known from the primary, deterministic problem are the starting point of stochastic treatment. The reader benefits from the comprehensive presentation of the matter in a concise manner.

Article
Publication date: 9 May 2016

Anukal Chiralaksanakul

The purpose of this paper is to investigate the impact of the bias error resulting from the use of Monte Carlo simulation in evaluating American-style option values.

Abstract

Purpose

The purpose of this paper is to investigate the impact of the bias error resulting from the use of Monte Carlo simulation in evaluating American-style option values.

Design/methodology/approach

The authors develop an analytical approximation formula to quantify the bias error under the assumption of conditionally independent and identically distributed samples of asset prices. The bias arises from the nested optimization and expectation calculation. The formula is then used to numerically quantify the bias and as an objective function for bias minimization for a given budget of samples.
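The source of the bias, nested optimization over estimated expectations, can be shown in miniature. In this hypothetical example (not the authors' approximation formula), the maximum of two sample means overestimates the maximum of the true means, which are both zero, and the overestimate shrinks as the sample size grows:

```python
import random
import statistics

def max_of_means(n, rng):
    """Estimate max(E[X1], E[X2]) by the max of two sample means.
    Both true means are 0, so the true maximum is 0."""
    m1 = statistics.mean(rng.gauss(0, 1) for _ in range(n))
    m2 = statistics.mean(rng.gauss(0, 1) for _ in range(n))
    return max(m1, m2)

rng = random.Random(42)
# Average the plug-in estimate over many replications at each sample size.
bias = {n: statistics.mean(max_of_means(n, rng) for _ in range(2000))
        for n in (10, 100, 1000)}
for n, b in bias.items():
    print(f"n={n:5d}  upward bias = {b:+.4f}")
```

Optimizing over noisy estimates systematically favours whichever estimate happened to err upward; in American-option valuation the same effect appears at every exercise decision, which is why sample-size allocation across stages matters.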

Findings

Monte Carlo methods used in the valuation of American-style options can result in bias errors ranging from 2 to 10 per cent of the option value. The bias error can be reduced by up to 50 per cent, either by using a better sampling scheme or by allocating the sample size efficiently.

Research limitations/implications

The running time of the proposed procedure can be improved by using a specialized algorithm to solve the sample size allocation problem instead of the commercially available solver MINOS. Other sampling procedures for bias reduction may be extended and applied to this multi-stage problem.

Practical implications

The methodology can help to more accurately approximate the option value.

Originality/value

The paper develops an analytical approximation for the bias error and provides numerical experiments to test the methodology.

Details

Journal of Modelling in Management, vol. 11 no. 2
Type: Research Article
ISSN: 1746-5664

Book part
Publication date: 21 December 2010

Tong Zeng and R. Carter Hill

In this paper we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence…

Abstract

In this paper we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence of random parameters. We study the Lagrange multiplier (LM), likelihood ratio (LR), and Wald tests, using conditional logit as the restricted model. The LM test is the fastest of the three test procedures to implement, since it uses only the restricted (conditional logit) estimates. However, the LM-based pretest estimator has poor risk properties. The ratio of the LM-based pretest estimator's root mean squared error (RMSE) to the RPL model estimator's RMSE diverges from one as the standard deviation of the parameter distribution increases. The LR and Wald tests exhibit the properties of consistent tests, with power approaching one as the specification error increases, so that the pretest estimator is consistent. We explore the power of these three tests for detecting random parameters by calculating the empirical percentile values, size, and rejection rates of the test statistics. We find that the power of the LR and Wald tests decreases as the mean of the coefficient distribution increases. The LM test has the weakest power for detecting the presence of random coefficients in the RPL model.
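The RPL machinery itself is too heavy for a short sketch, but the mechanics of computing empirical size and rejection rates for a likelihood ratio test carry over from simpler settings. The hypothetical Python example below runs the same kind of experiment for a binomial proportion, comparing the LR statistic to the chi-squared critical value:

```python
import math
import random

def lr_stat(k, n, p0):
    """Likelihood-ratio statistic for H0: p = p0 from k successes in n trials."""
    phat = k / n
    if k in (0, n):  # boundary: one log term vanishes
        return 2 * n * math.log(1 / (p0 if k == n else 1 - p0))
    return 2 * (k * math.log(phat / p0)
                + (n - k) * math.log((1 - phat) / (1 - p0)))

def rejection_rate(p_true, p0=0.5, n=100, reps=4000, seed=7):
    """Fraction of Monte Carlo samples in which the LR test rejects H0."""
    rng = random.Random(seed)
    crit = 3.841  # chi-squared(1) critical value at the 5% level
    rejections = 0
    for _ in range(reps):
        k = sum(rng.random() < p_true for _ in range(n))
        if lr_stat(k, n, p0) > crit:
            rejections += 1
    return rejections / reps

size = rejection_rate(p_true=0.5)   # true null: should sit near the nominal 5%
power = rejection_rate(p_true=0.6)  # misspecified null: rejection rate climbs
print(f"empirical size = {size:.3f}, power at p=0.6 = {power:.3f}")
```

Under the null the rejection rate stays near the nominal level, while under the alternative it climbs with the specification error, the behaviour the chapter reports for the LR and Wald tests.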

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 19 December 2012

Lee C. Adkins and Mary N. Gade

Monte Carlo simulations are a very powerful way to demonstrate the basic sampling properties of various statistics in econometrics. The commercial software package Stata makes…

Abstract

Monte Carlo simulations are a very powerful way to demonstrate the basic sampling properties of various statistics in econometrics. The commercial software package Stata makes these methods accessible to a wide audience of students and practitioners. The purpose of this chapter is to present a self-contained primer for conducting Monte Carlo exercises as part of an introductory econometrics course. More experienced econometricians who are new to Stata may find this useful as well. Many examples are given that can be used as templates for various exercises. Examples include linear regression, confidence intervals, the size and power of t-tests, lagged dependent variable models, heteroskedastic and autocorrelated regression models, instrumental variables estimators, binary choice, censored regression, and nonlinear regression models. Stata do-files for all examples are available from the authors' website http://learneconometrics.com/pdf/MCstata/.
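The chapter's templates are Stata do-files; as a rough Python analogue of one such exercise (confidence-interval coverage for a regression slope, with made-up parameter values), one might write:

```python
import random
import statistics

def one_rep(n, beta0, beta1, rng):
    """One Monte Carlo replication: draw data, fit OLS, return the 95% slope CI."""
    xs = [rng.gauss(0, 1) for _ in range(n)]
    ys = [beta0 + beta1 * x + rng.gauss(0, 1) for x in xs]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    b0 = my - b1 * mx
    rss = sum((y - b0 - b1 * x) ** 2 for x, y in zip(xs, ys))
    se = (rss / (n - 2) / sxx) ** 0.5
    t_crit = 1.984  # t(98) two-sided critical value at 5%, hard-coded for n = 100
    return b1 - t_crit * se, b1 + t_crit * se

rng = random.Random(3)
reps, n, beta1 = 1000, 100, 2.0
covered = sum(lo <= beta1 <= hi
              for lo, hi in (one_rep(n, 1.0, beta1, rng) for _ in range(reps)))
print(f"95% CI coverage over {reps} replications: {covered / reps:.3f}")
```

When the model is correctly specified, coverage lands near the nominal 95%; the chapter's heteroskedastic and autocorrelated variants show how coverage degrades when the assumptions fail.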

Details

30th Anniversary Edition
Type: Book
ISBN: 978-1-78190-309-4

Article
Publication date: 23 March 2012

Boris Mitavskiy, Jonathan Rowe and Chris Cannings

The purpose of this paper is to establish a version of a theorem that originated in population genetics and was later adopted in evolutionary computation theory that will…

Abstract

Purpose

The purpose of this paper is to establish a version of a theorem that originated in population genetics and was later adopted in evolutionary computation theory, one that leads to novel Monte Carlo sampling algorithms that provably increase the AI potential.

Design/methodology/approach

In the current paper the authors set up a mathematical framework, then state and prove a version of a Geiringer-like theorem that is well suited to the development of Monte Carlo sampling algorithms for decision-making under randomness and incomplete information.

Findings

This work establishes an important theoretical link between classical population genetics, evolutionary computation theory and model-free reinforcement learning methodology. Not only may the theory explain the success of existing Monte Carlo tree sampling methodology, but it also leads to the development of novel Monte Carlo sampling techniques guided by rigorous mathematical foundations.

Practical implications

The theoretical foundations established in the current work provide guidance for the design of powerful Monte Carlo sampling algorithms in model-free reinforcement learning, to tackle numerous problems in computational intelligence.

Originality/value

Establishing a Geiringer-like theorem with non-homologous recombination was a long-standing open problem in evolutionary computation theory. Apart from overcoming this challenge in a mathematically elegant fashion and establishing a rather general and powerful version of the theorem, this work leads directly to the development of novel, provably powerful algorithms for decision-making in environments involving randomness and hidden or incomplete information.

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially…

Abstract

Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
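The full algorithm is considerably richer (mutation steps, marginal likelihood accumulation, parallel execution), but the basic sequential reweight-and-resample loop it builds on can be sketched in a toy conjugate setting. Everything below is a hypothetical illustration, not the paper's simulator:

```python
import math
import random

def log_lik(x, theta):
    """Log density of one observation x ~ N(theta, 1)."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2

def sequential_posterior_mean(data, n_particles=5000, seed=11):
    """Toy sequential posterior simulator for the mean of N(theta, 1) under a
    N(0, 1) prior: absorb one observation at a time, resampling whenever the
    effective sample size collapses below half the particle count."""
    rng = random.Random(seed)
    particles = [rng.gauss(0, 1) for _ in range(n_particles)]  # draws from prior
    logw = [0.0] * n_particles
    for x in data:
        logw = [lw + log_lik(x, th) for lw, th in zip(logw, particles)]
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        ess = sum(w) ** 2 / sum(wi * wi for wi in w)  # effective sample size
        if ess < n_particles / 2:
            particles = rng.choices(particles, weights=w, k=n_particles)
            logw = [0.0] * n_particles  # weights reset after resampling
    m = max(logw)
    w = [math.exp(lw - m) for lw in logw]
    return sum(wi * th for wi, th in zip(w, particles)) / sum(w)

rng = random.Random(0)
data = [rng.gauss(1.0, 1.0) for _ in range(30)]
smc_mean = sequential_posterior_mean(data)
conjugate_mean = len(data) * (sum(data) / len(data)) / (len(data) + 1)
print(f"SMC posterior mean: {smc_mean:.3f}  conjugate answer: {conjugate_mean:.3f}")
```

Because the normal-normal model has a closed-form posterior, the particle estimate can be checked against the conjugate answer; the paper's contribution lies in making this kind of machinery reliable, and its accuracy measurable, for models where no such check exists.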
