Search results

1 – 10 of over 7000
Article
Publication date: 3 August 2012

Anand Prakash, Sanjay Kumar Jha and Rajendra Prasad Mohanty

Abstract

Purpose

The purpose of this paper is to propose linking Monte Carlo simulation with scenario planning to assist strategy makers in formulating strategy under the uncertainty surrounding service quality gaps in the life insurance business, where discontinuities always remain for need-based selling.

Design/methodology/approach

The paper briefly reviews some applications of scenario planning, which emphasizes the development of a strategic plan that is robust across different scenarios. The paper provides considerable evidence to suggest a new strategic approach that uses Monte Carlo simulation to conduct scenario planning.
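
As a rough illustration of the idea, the sketch below samples hypothetical service quality gap attributes from triangular distributions and reads best-case, most likely, and worst-case scenarios off the simulated totals. The attribute names and parameter values are invented for illustration, not taken from the paper's data.

```python
import random

random.seed(7)

# Hypothetical service quality gap attributes with
# (best, most likely, worst) gap scores -- illustrative values only.
attributes = {
    "responsiveness": (0.1, 0.3, 0.8),
    "reliability":    (0.0, 0.2, 0.6),
    "empathy":        (0.2, 0.4, 0.9),
}

def simulate_total_gap(n_trials=10_000):
    """Sample each attribute from a triangular distribution and
    aggregate, yielding a distribution of the total quality gap."""
    totals = []
    for _ in range(n_trials):
        total = sum(random.triangular(lo, hi, mode)
                    for lo, mode, hi in attributes.values())
        totals.append(total)
    totals.sort()
    return {
        "best case (5th pct)":   totals[int(0.05 * n_trials)],
        "most likely (median)":  totals[n_trials // 2],
        "worst case (95th pct)": totals[int(0.95 * n_trials)],
    }

for name, value in simulate_total_gap().items():
    print(f"{name}: {value:.2f}")
```

Reading scenarios off simulated percentiles, rather than hand-picking them, is what lends the resulting scenario set its credibility.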

Findings

The paper identifies which service quality gap attributes, treated as risks, have the greatest and the least impact under the best-case, worst-case, and most likely scenarios.

Research limitations/implications

This study suffers from methodological limitations associated with convenience sampling and anonymous survey-based research.

Practical implications

The approach using Monte Carlo simulation raises the credibility of a scenario to an acceptable level, so that managers and other decision makers will actually use it.

Social implications

The paper thoroughly documents scenario planning applied to the impact of risk and uncertainty on service quality gaps, supporting rational decision making in the management of services and helping managers better justify and communicate their arguments.

Originality/value

The paper offers an empirical understanding of applying Monte Carlo simulation to scenario planning and identifies the key drivers with the greatest and the least impact on the service quality gap.

Details

Journal of Strategy and Management, vol. 5 no. 3
Type: Research Article
ISSN: 1755-425X

Article
Publication date: 21 May 2020

Osman Hürol Türkakın, Ekrem Manisalı and David Arditi

Abstract

Purpose

In smaller projects with limited resources, schedule updates are often not performed. In these situations, traditional delay analysis methods cannot be used as they all require updated schedules. The objective of this study is to develop a model that performs delay analysis by using only an as-planned schedule and the expense records kept on site.

Design/methodology/approach

This study starts out by developing an approach that estimates activity duration ranges in a network schedule by using as-planned and as-built s-curves. Monte Carlo simulation is performed to generate candidate as-built schedules using these activity duration ranges. If necessary, the duration ranges are refined by a follow-up procedure that systematically relaxes the ranges and develops new as-built schedules. The candidate schedule that has the closest s-curve to the actual s-curve is considered to be the most realistic as-built schedule. Finally, the as-planned vs. as-built delay analysis method is performed to determine which activity(ies) caused project delay. This process is automated using Matlab. A test case is used to demonstrate that the proposed automated method can work well.
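
A minimal sketch of the candidate-schedule search, assuming serial activities with constant daily costs. The activity names, costs, and duration ranges are hypothetical, and the paper's method additionally handles network logic and refines the ranges iteratively; this only shows the core Monte Carlo/s-curve-matching step.

```python
import random

random.seed(42)

# Hypothetical serial activities: (name, daily cost, planned duration,
# plausible as-built duration range inferred from expense records).
activities = [
    ("excavation", 100, 5, (4, 8)),
    ("foundation", 200, 6, (5, 9)),
    ("framing",    150, 7, (6, 12)),
]

def s_curve(durations, costs):
    """Cumulative daily expense curve (s-curve) for serial activities."""
    curve, total = [], 0.0
    for dur, cost in zip(durations, costs):
        for _ in range(dur):
            total += cost
            curve.append(total)
    return curve

def distance(a, b):
    """Sum of squared differences, padding the shorter curve."""
    n = max(len(a), len(b))
    a = a + [a[-1]] * (n - len(a))
    b = b + [b[-1]] * (n - len(b))
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_candidate(actual_curve, n_candidates=2000):
    """Monte Carlo search for the candidate as-built schedule whose
    s-curve is closest to the actual expense records."""
    costs = [c for _, c, _, _ in activities]
    best, best_dist = None, float("inf")
    for _ in range(n_candidates):
        durs = [random.randint(lo, hi) for _, _, _, (lo, hi) in activities]
        d = distance(s_curve(durs, costs), actual_curve)
        if d < best_dist:
            best, best_dist = durs, d
    return best

# The actual as-built durations (unknown to the analyst) generate the
# observed expense curve; the search should recover something close.
actual = s_curve([6, 7, 10], [100, 200, 150])
print(best_candidate(actual))
```

The candidate with the minimum s-curve distance then plays the role of the most realistic as-built schedule in the as-planned vs. as-built comparison.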

Findings

The automated process developed in this study has the capability to develop activity duration ranges, perform Monte Carlo simulation, generate a large number of candidate as-built schedules, build s-curves for each of the candidate schedules and identify the most realistic one, that is, the one whose s-curve is closest to the actual as-built s-curve. The test case confirmed that the proposed automated system works well, as it resulted in an as-built schedule with an s-curve identical to the actual as-built s-curve. Developing an as-built schedule using this method is a reasonable way to make a case in or out of a court of law.

Research limitations/implications

The activity duration ranges that practitioners specify for Monte Carlo simulation can be subjective and perhaps arbitrary. To minimize the effects of this limitation, this study proposes a method that determines duration ranges by comparing as-built and as-planned cash flows, and then by systematically modifying the search space. Another limitation is the assumption that the precedence logic in the as-planned network remains the same throughout construction. Since updated schedules are not available in the scenario considered in this study, and since in small projects the logic relationships are fairly stable over the short project duration, the assumption of a stable logic throughout construction may be reasonable, but this issue needs to be explored further in future research.

Practical implications

Delays are common in construction projects regardless of the size of the project. The critical path method (CPM) schedules of many smaller projects, especially in developing countries, are not updated during construction. In case updated schedules are not available, the method presented in this paper represents an automated, practical and easy-to-use tool that allows parties to a contract to perform delay analysis with only an as-planned schedule and the expense logs kept on site.

Originality/value

Since an as-built schedule cannot be built without updated schedules, and since the absence of an as-built schedule precludes the use of any delay analysis method that is acceptable in courts of law, using the method presented in this paper may very well be the only solution to the problem.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 10
Type: Research Article
ISSN: 0969-9988

Abstract

This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show the tests to behave as should be expected given the theory. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
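
As a simplified illustration of such a size experiment, the sketch below computes a Diebold–Mariano-style statistic on the squared-error loss differential for two equally accurate hypothetical forecasters and checks the empirical rejection rate against the nominal 5% level. The serial-correlation corrections and bootstrap critical values discussed in the survey are omitted here.

```python
import random
import statistics

random.seed(3)

def dm_statistic(errors_a, errors_b):
    """Diebold-Mariano-style statistic on the squared-error loss
    differential; under equal accuracy it is roughly standard normal
    (serial correlation in the differential is ignored here)."""
    d = [ea ** 2 - eb ** 2 for ea, eb in zip(errors_a, errors_b)]
    mean = statistics.fmean(d)
    se = statistics.stdev(d) / len(d) ** 0.5
    return mean / se

# Two hypothetical forecasters with identical accuracy: a size check.
rejections = 0
for _ in range(2000):
    e_a = [random.gauss(0, 1) for _ in range(100)]
    e_b = [random.gauss(0, 1) for _ in range(100)]
    if abs(dm_statistic(e_a, e_b)) > 1.96:
        rejections += 1
print(f"empirical size: {rejections / 2000:.3f}")
```

An empirical size close to the nominal level is exactly the behavior the article's Monte Carlo experiments verify for the more elaborate VAR-versus-AR comparisons.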

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Article
Publication date: 23 October 2023

Adam Biggs and Joseph Hamilton

Abstract

Purpose

Evaluating warfighter lethality is a critical aspect of military performance. Raw metrics such as marksmanship speed and accuracy can provide some insight, yet interpreting subtle differences can be challenging. For example, is a speed difference of 300 milliseconds more important than a 10% accuracy difference on the same drill? Marksmanship evaluations must have objective methods to differentiate between critical factors while maintaining a holistic view of human performance.

Design/methodology/approach

Monte Carlo simulations are one method to circumvent speed/accuracy trade-offs within marksmanship evaluations. They can accommodate both speed and accuracy implications simultaneously without needing to hold one constant for the sake of the other. Moreover, Monte Carlo simulations can incorporate variability as a key element of performance. This approach thus allows analysts to determine consistency of performance expectations when projecting future outcomes.
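
A toy version of such a simulation, with invented hit probabilities and shot times, shows how speed, accuracy, and variability can be folded into a single outcome distribution rather than traded off against each other:

```python
import random

random.seed(1)

def simulate_engagement(hit_prob, shot_time_mean, shot_time_sd,
                        n_targets=5, n_trials=10_000):
    """Monte Carlo of a drill: each trial fires at targets until all
    are hit, summing per-shot times; returns the mean completion time
    and its spread, so speed, accuracy, and variability enter jointly."""
    times = []
    for _ in range(n_trials):
        t, hits = 0.0, 0
        while hits < n_targets:
            t += max(0.05, random.gauss(shot_time_mean, shot_time_sd))
            if random.random() < hit_prob:
                hits += 1
        times.append(t)
    mean = sum(times) / n_trials
    var = sum((x - mean) ** 2 for x in times) / n_trials
    return mean, var ** 0.5

# Two hypothetical shooters: A is faster but less accurate than B.
a = simulate_engagement(hit_prob=0.80, shot_time_mean=0.40, shot_time_sd=0.05)
b = simulate_engagement(hit_prob=0.90, shot_time_mean=0.45, shot_time_sd=0.05)
print(f"A: {a[0]:.2f}s sd {a[1]:.2f}   B: {b[0]:.2f}s sd {b[1]:.2f}")
```

With these illustrative parameters the two shooters' expected drill times come out nearly equal, which is precisely the kind of speed/accuracy equivalence that raw metrics struggle to reveal.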

Findings

The review divides outcomes into theoretical overview and practical implication sections. Each aspect of the Monte Carlo simulation can be addressed separately, reviewed and then incorporated as a potential component of small arms combat modeling. This allows new human performance practitioners to adopt the method more quickly for different applications.

Originality/value

Performance implications are often presented as inferential statistics. By using Monte Carlo simulations, practitioners can instead present outcomes in terms of lethality. This method should convey the impact of a marksmanship evaluation to senior leadership better than current inferential statistics, such as effect size measures.

Details

Journal of Defense Analytics and Logistics, vol. 7 no. 2
Type: Research Article
ISSN: 2399-6439

Article
Publication date: 3 July 2017

Anand Prakash and Rajendra P. Mohanty

Abstract

Purpose

Automakers are engaged in manufacturing both efficient and inefficient green cars. The purpose of this paper is to categorize green cars as efficient or inefficient, and then to improve the efficiencies of the identified inefficient green cars for distribution fitting.

Design/methodology/approach

The authors used the 2014 edition of secondary data published by the Automotive Research Centre of the Automobile Club of Southern California. The paper presents a methodology applying data envelopment analysis (DEA), with 50 decision-making units (DMUs) of green cars, six input indices (emission, braking, ride quality, acceleration, turning circle, and luggage capacity) and two output indices (miles per gallon and torque), integrated with Monte Carlo simulation for drawing significant statistical inferences graphically.

Findings

The findings of this study showed 27 efficient and 23 inefficient DMUs, along with an improvement matrix. Additionally, the study highlighted the best distribution fit for the improved efficient green cars on the respective indices.

Research limitations/implications

This study suffers from limitations associated with the 2014 edition of secondary data used in this research.

Practical implications

This study may be useful to motorists through its listing of efficient green cars, whereas automakers can benefit from the distribution fitting of improved efficient green cars using Monte Carlo simulation for calibration.

Originality/value

The paper uses DEA to empirically examine the classification of green cars and applies Monte Carlo simulation for distribution fitting to the improved efficient green cars, in order to decide the appropriate ranges of their attributes for calibration.

Details

Benchmarking: An International Journal, vol. 24 no. 5
Type: Research Article
ISSN: 1463-5771

Book part
Publication date: 19 December 2012

Lee C. Adkins and Mary N. Gade

Abstract

Monte Carlo simulations are a very powerful way to demonstrate the basic sampling properties of various statistics in econometrics. The commercial software package Stata makes these methods accessible to a wide audience of students and practitioners. The purpose of this chapter is to present a self-contained primer for conducting Monte Carlo exercises as part of an introductory econometrics course. More experienced econometricians who are new to Stata may find this useful as well. Many examples are given that can be used as templates for various exercises. Examples include linear regression, confidence intervals, the size and power of t-tests, lagged dependent variable models, heteroskedastic and autocorrelated regression models, instrumental variables estimators, binary choice, censored regression, and nonlinear regression models. Stata do-files for all examples are available from the authors' website http://learneconometrics.com/pdf/MCstata/.
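
The chapter's examples are written in Stata; an analogous exercise can be sketched in Python, here tracing the sampling distribution of the OLS slope estimator in the simplest linear regression case. The model and parameter values below are illustrative, not taken from the chapter.

```python
import random
import statistics

random.seed(0)

def ols_slope(x, y):
    """Slope estimate from simple OLS: cov(x, y) / var(x)."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

def monte_carlo_slopes(beta=2.0, n=50, reps=2000):
    """Repeatedly simulate y = 1 + beta*x + e and re-estimate beta,
    tracing out the sampling distribution of the OLS estimator."""
    slopes = []
    for _ in range(reps):
        x = [random.uniform(0, 10) for _ in range(n)]
        y = [1.0 + beta * xi + random.gauss(0, 1) for xi in x]
        slopes.append(ols_slope(x, y))
    return slopes

slopes = monte_carlo_slopes()
print(f"mean: {statistics.fmean(slopes):.3f}, "
      f"sd: {statistics.stdev(slopes):.3f}")
```

The simulated mean sitting on the true beta demonstrates unbiasedness, and the spread of the slopes is the estimator's standard error made visible, which is the pedagogical point of such exercises.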

Details

30th Anniversary Edition
Type: Book
ISBN: 978-1-78190-309-4

Book part
Publication date: 6 January 2016

Laura E. Jackson, M. Ayhan Kose, Christopher Otrok and Michael T. Owyang

Abstract

We compare methods to measure comovement in business cycle data using multi-level dynamic factor models. To do so, we employ a Monte Carlo procedure to evaluate model performance for different specifications of factor models across three different estimation procedures. We consider three general factor model specifications used in applied work. The first is a single-factor model, the second a two-level factor model, and the third a three-level factor model. Our estimation procedures are the Bayesian approach of Otrok and Whiteman (1998), the Bayesian state-space approach of Kim and Nelson (1998) and a frequentist principal components approach. The latter serves as a benchmark to measure any potential gains from the more computationally intensive Bayesian procedures. We then apply the three methods to a novel dataset on house prices in advanced and emerging markets from Cesa-Bianchi, Cespedes, and Rebucci (2015) and interpret the empirical results in light of the Monte Carlo results.

Details

Dynamic Factor Models
Type: Book
ISBN: 978-1-78560-353-2

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. The existing methods are computationally costly, inefficient, or both. In this paper, we propose computationally simple estimators for the SV model, which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to make reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) test or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stock prices (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 2 November 2009

Per Hjertstrand

Abstract

Weak separability is an important concept in many fields of economic theory. This chapter uses Monte Carlo experiments to investigate the performance of newly developed nonparametric revealed preference tests for weak separability. A main finding is that the bias of the sequentially implemented test for weak separability proposed by Fleissig and Whitney (2003) is low. The theoretically unbiased test of Swofford and Whitney (1994) is found to perform better than all sequentially implemented test procedures but suffers from an empirical bias, most likely because of the complexity of executing the test procedure. As a further source of information, we also perform sensitivity analyses on the nonparametric revealed preference tests. The Fleissig and Whitney test appears to be sensitive to measurement errors in the data.

Details

Measurement Error: Consequences, Applications and Solutions
Type: Book
ISBN: 978-1-84855-902-8
