Search results

1 – 10 of over 5000


Details

Panel Data and Structural Labour Market Models
Type: Book
ISBN: 978-0-44450-319-0

Article
Publication date: 5 October 2012

I. Doltsinis


Abstract

Purpose

The purpose of this paper is to expose computational methods as applied to engineering systems and evolutionary processes with randomness in external actions and inherent parameters.

Design/methodology/approach

In total, two approaches are distinguished, both of which rely on solvers from deterministic algorithms. Probabilistic analysis refers to approximating the response by a Taylor series expansion about the mean input. Alternatively, stochastic simulation implies random sampling of the input and statistical evaluation of the output.

Findings

Beyond the characterization of random response, methods of reliability assessment are discussed. Concepts of design improvement are presented. Optimization for robustness diminishes the sensitivity of the system to fluctuating parameters.

Practical implications

Deterministic algorithms available for the primary problem are utilized for stochastic analysis by statistical Monte Carlo sampling. The computational effort for the repeated solution of the primary problem depends on the variability of the system and is usually high. Alternatively, the analytic Taylor series expansion requires extending the primary solver to compute derivatives of the response with respect to the random input. The method is restricted to computing output mean values and variances/covariances, with the effort determined by the dimension of the random input. The results of the two methods are comparable within the domain of applicability.

Originality/value

The present account addresses the main issues related to the presence of randomness in engineering systems and processes. They comprise the analysis of stochastic systems, reliability, design improvement, optimization and robustness against randomness of the data. The analytical Taylor approach is contrasted to the statistical Monte Carlo sampling throughout. In both cases, algorithms known from the primary, deterministic problem are the starting point of stochastic treatment. The reader benefits from the comprehensive presentation of the matter in a concise manner.
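The contrast drawn throughout between the two approaches can be illustrated on a toy problem. The sketch below compares statistical Monte Carlo sampling of the input against the first-order Taylor expansion about the mean input; the quadratic response g(x) = x² and the Gaussian input model are assumptions for illustration, not taken from the paper.

```python
import random

def response(x):
    """Toy nonlinear 'primary problem': deterministic response to input x."""
    return x ** 2

mu, sigma = 1.0, 0.1   # mean and std of the random input (assumed values)

# --- Approach 1: statistical Monte Carlo sampling of the input ---
random.seed(0)
samples = [response(random.gauss(mu, sigma)) for _ in range(100_000)]
mc_mean = sum(samples) / len(samples)
mc_var = sum((s - mc_mean) ** 2 for s in samples) / (len(samples) - 1)

# --- Approach 2: first-order Taylor expansion about the mean input ---
# mean ≈ g(mu), var ≈ g'(mu)^2 * sigma^2, with g'(x) = 2x here
taylor_mean = response(mu)
taylor_var = (2 * mu) ** 2 * sigma ** 2

print(f"Monte Carlo: mean={mc_mean:.4f}  var={mc_var:.4f}")
print(f"Taylor:      mean={taylor_mean:.4f}  var={taylor_var:.4f}")
```

For this smooth response with small input variance, the two sets of moments agree closely, matching the abstract's claim that the results are comparable within the domain of applicability.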

Article
Publication date: 9 May 2016

Anukal Chiralaksanakul


Abstract

Purpose

The purpose of this paper is to investigate the impact of the bias error resulting from using Monte Carlo simulation to evaluate the American-style option value.

Design/methodology/approach

The authors develop an analytical approximation formula to quantify the bias error under the assumption of conditionally independent and identically distributed samples of asset prices. The bias arises from the nested optimization and expectation calculation. The formula is then used to numerically quantify the bias and as an objective function for bias minimization for a given budget of samples.
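The bias from the nested optimization-and-expectation step can be seen in a stripped-down sketch: at a single exercise decision, the estimator max(exercise value, inner sample mean) is biased upward by Jensen's inequality, and the bias shrinks as the inner sample grows. The payoff model and all numbers below are illustrative assumptions, not the paper's formula.

```python
import random
import statistics

random.seed(1)

exercise_value = 1.0   # immediate-exercise payoff (assumed)

# Continuation value: noisy samples with mean 1.0, so the true option
# value at this decision point is max(1.0, 1.0) = 1.0.
def continuation_sample():
    return random.gauss(1.0, 0.5)

def nested_estimate(n_inner):
    """One nested estimate: max of exercise value and an inner sample mean."""
    inner_mean = statistics.fmean(continuation_sample() for _ in range(n_inner))
    return max(exercise_value, inner_mean)

true_value = 1.0
for n_inner in (2, 8, 32, 128):
    estimates = [nested_estimate(n_inner) for _ in range(20_000)]
    bias = statistics.fmean(estimates) - true_value
    print(f"inner samples={n_inner:4d}  upward bias={bias:.4f}")
```

The printed bias is always positive and decreases as the inner sample size grows, which is exactly the kind of budget-dependent bias the paper's approximation formula quantifies and minimizes.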

Findings

Monte Carlo methods used in the valuation of American-style options can result in a bias error ranging from 2 to 10 per cent of the option value. The bias error can be reduced by up to 50 per cent, either by using a better sampling scheme or by allocating the sample size efficiently.

Research limitations/implications

The running time of the proposed procedure can be improved by using a specialized algorithm to solve the sample size allocation problem instead of the commercially available subroutine MINOS. Other sampling procedures for bias reduction may be extended and applied to this multi-stage problem.

Practical implications

The methodology can help to more accurately approximate the option value.

Originality/value

The paper develops an analytical approximation for the bias error and provides a numerical experiment to test the methodology.

Details

Journal of Modelling in Management, vol. 11 no. 2
Type: Research Article
ISSN: 1746-5664


Book part
Publication date: 21 December 2010

Tong Zeng and R. Carter Hill


Abstract

In this paper we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence of random parameters. We study the Lagrange multiplier (LM), likelihood ratio (LR), and Wald tests, using conditional logit as the restricted model. The LM test is the fastest of the three procedures to implement, since it uses only the restricted (conditional logit) estimates. However, the LM-based pretest estimator has poor risk properties: the ratio of its root mean squared error (RMSE) to the RMSE of the random parameters logit estimator diverges from one as the standard deviation of the parameter distribution increases. The LR and Wald tests exhibit the properties of consistent tests, with power approaching one as the specification error increases, so that the corresponding pretest estimator is consistent. We explore the power of the three tests for random parameters by calculating the empirical percentile values, size, and rejection rates of the test statistics. We find that the power of the LR and Wald tests decreases as the mean of the coefficient distribution increases, and that the LM test has the weakest power for detecting the presence of random coefficients in the RPL model.
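The risk pattern of pretest estimators can be illustrated in a much simpler setting than the RPL model: a t-test pretest on a single regression slope, where the restricted estimate is zero. The linear model, sample size, and critical value below are illustrative assumptions, not the chapter's design.

```python
import random
import statistics

random.seed(2)

def pretest_estimate(beta_true, n=50, crit=1.96):
    """OLS slope in y = beta*x + e (no intercept); keep the estimate only if
    its t-statistic rejects beta = 0, otherwise use the restricted value 0."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [beta_true * x + random.gauss(0, 1) for x in xs]
    sxx = sum(x * x for x in xs)
    b = sum(x * y for x, y in zip(xs, ys)) / sxx
    resid_var = sum((y - b * x) ** 2 for x, y in zip(xs, ys)) / (n - 1)
    se = (resid_var / sxx) ** 0.5
    pretest = b if abs(b / se) > crit else 0.0
    return b, pretest

def rmse_ratio(beta_true, reps=5_000):
    """Monte Carlo RMSE of the pretest estimator relative to plain OLS."""
    u_err, p_err = [], []
    for _ in range(reps):
        u, p = pretest_estimate(beta_true)
        u_err.append((u - beta_true) ** 2)
        p_err.append((p - beta_true) ** 2)
    return (statistics.fmean(p_err) / statistics.fmean(u_err)) ** 0.5

for beta in (0.0, 0.3, 1.0):
    print(f"beta={beta:.1f}  RMSE(pretest)/RMSE(OLS)={rmse_ratio(beta):.2f}")
```

The ratio is below one when the restriction is true, rises above one for intermediate specification errors, and returns toward one once the test's power is near certain, mirroring the qualitative behavior the chapter documents for the LM-, LR-, and Wald-based pretest estimators.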

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 19 December 2012

Lee C. Adkins and Mary N. Gade


Abstract

Monte Carlo simulations are a very powerful way to demonstrate the basic sampling properties of various statistics in econometrics. The commercial software package Stata makes these methods accessible to a wide audience of students and practitioners. The purpose of this chapter is to present a self-contained primer for conducting Monte Carlo exercises as part of an introductory econometrics course. More experienced econometricians who are new to Stata may find this useful as well. Many examples are given that can be used as templates for various exercises, including linear regression, confidence intervals, the size and power of t-tests, lagged dependent variable models, heteroskedastic and autocorrelated regression models, instrumental variables estimators, binary choice, censored regression, and nonlinear regression models. Stata do-files for all examples are available from the authors' website http://learneconometrics.com/pdf/MCstata/.
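The chapter's examples are written in Stata; the same style of exercise, here the empirical size of a t-test, can be sketched in Python for readers without a Stata license. The sample size and replication count are arbitrary choices for illustration.

```python
import random
import statistics

random.seed(3)

def t_stat(n, mu0=0.0):
    """t-statistic for H0: mean = mu0, from an i.i.d. N(0,1) sample (H0 true)."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    xbar = statistics.fmean(xs)
    s = statistics.stdev(xs)
    return (xbar - mu0) / (s / n ** 0.5)

# Empirical size: how often |t| exceeds the 5% critical value when H0 holds.
n, reps, crit = 30, 10_000, 2.045   # 2.045 ≈ t critical value with 29 df
rejections = sum(abs(t_stat(n)) > crit for _ in range(reps))
print(f"empirical size: {rejections / reps:.3f}  (nominal 0.05)")
```

The empirical rejection rate lands close to the nominal 5 per cent, which is the basic sanity check such a Monte Carlo exercise teaches; power is studied the same way by generating the data under an alternative mean.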

Details

30th Anniversary Edition
Type: Book
ISBN: 978-1-78190-309-4


Article
Publication date: 23 March 2012

Boris Mitavskiy, Jonathan Rowe and Chris Cannings


Abstract

Purpose

The purpose of this paper is to establish a version of a theorem that originated in population genetics and was later adopted in evolutionary computation theory, one that leads to novel Monte Carlo sampling algorithms that provably increase the potential of artificial intelligence.

Design/methodology/approach

In the current paper the authors set up a mathematical framework, then state and prove a version of a Geiringer-like theorem that is very well suited to the development of Monte Carlo sampling algorithms for decision making under randomness and incomplete information.

Findings

This work establishes an important theoretical link between classical population genetics, evolutionary computation theory, and model-free reinforcement learning methodology. Not only may the theory explain the success of the currently existing Monte Carlo tree sampling methodology, but it also leads to the development of novel Monte Carlo sampling techniques guided by a rigorous mathematical foundation.

Practical implications

The theoretical foundations established in the current work provide guidance for the design of powerful Monte Carlo sampling algorithms in model-free reinforcement learning, to tackle numerous problems in computational intelligence.

Originality/value

Establishing a Geiringer-like theorem with non-homologous recombination was a long-standing open problem in evolutionary computation theory. Apart from overcoming this challenge in a mathematically elegant fashion and establishing a rather general and powerful version of the theorem, this work leads directly to the development of novel, provably powerful algorithms for decision making in environments involving randomness and hidden or incomplete information.
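As a loose illustration of the model-free Monte Carlo sampling setting the theorem targets, the toy sketch below estimates action values by averaging random playouts under hidden payoff randomness. The two-action problem and all numbers are invented for illustration and do not involve the Geiringer-style recombination machinery itself.

```python
import random
import statistics

random.seed(4)

# Toy one-state decision problem: each action's payoff is stochastic, and
# the agent only observes sampled outcomes (assumed setup).
true_means = {"a": 0.4, "b": 0.6}

def rollout(action):
    """One simulated playout: a noisy payoff of the chosen action."""
    return random.gauss(true_means[action], 1.0)

def mc_action_values(n_rollouts=5_000):
    """Estimate each action's value by averaging random playouts."""
    return {a: statistics.fmean(rollout(a) for _ in range(n_rollouts))
            for a in true_means}

values = mc_action_values()
best = max(values, key=values.get)
for a, v in sorted(values.items()):
    print(f"value({a}) ≈ {v:.3f}")
print(f"greedy choice: {best}")
```

With enough playouts the sample averages concentrate on the true values and the greedy choice is correct; Monte Carlo tree sampling applies the same averaging idea recursively down a game or decision tree.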


Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke


Abstract

Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to exploit these benefits fully, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and that works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and intrinsically provides estimates of numerical standard error and relative numerical efficiency. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
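A heavily simplified sketch of the sequential posterior simulation idea, not the paper's algorithm (which adds particle-move steps, marginal likelihood approximations, and numerical-error diagnostics): particles drawn from the prior are reweighted by each batch of data and resampled. The per-particle likelihood evaluations are the embarrassingly parallel part. The normal model and all settings below are assumptions for illustration.

```python
import math
import random
import statistics

random.seed(5)

# Data: N(theta_true, 1) observations, processed in sequential batches.
theta_true = 2.0
data = [random.gauss(theta_true, 1.0) for _ in range(200)]
batches = [data[i:i + 50] for i in range(0, len(data), 50)]

# Particles drawn from a diffuse prior N(0, 10^2): only simulation from the
# prior and density evaluation are needed, as the abstract emphasizes.
n_particles = 5_000
particles = [random.gauss(0.0, 10.0) for _ in range(n_particles)]

def log_lik(theta, batch):
    return sum(-0.5 * (y - theta) ** 2 for y in batch)   # up to a constant

for batch in batches:
    # Weighting each particle is independent work: this loop is what a
    # massively parallel machine would farm out across cores.
    logw = [log_lik(th, batch) for th in particles]
    m = max(logw)
    w = [math.exp(l - m) for l in logw]
    # Multinomial resampling proportional to the weights.
    particles = random.choices(particles, weights=w, k=n_particles)

post_mean = statistics.fmean(particles)
print(f"posterior mean ≈ {post_mean:.2f}  (data mean {statistics.fmean(data):.2f})")
```

Without a move step the particle set degenerates to few distinct values, which is precisely why practical sequential Monte Carlo simulators like the paper's interleave resampling with rejuvenation moves.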

Book part
Publication date: 12 August 2017

Jennifer McLeer


Abstract

Purpose

This paper introduces a method by which researchers can assess the strength of their status manipulations in experimental research by comparing them against Monte Carlo simulated distributions that use aggregate Status Characteristics Theory (SCT) data.

Methodology

This paper uses Monte Carlo methods to simulate the m and q parameter distributions and the proportion of stay (P(s)) score distributions for four commonly used status situations. It also presents findings from an experiment that highlight the processes by which researchers can utilize these simulated distributions in their assessment of novel status manipulations.

Findings

Findings indicate that implicitly relevant status manipulations have considerably more overlapping P(s) scores in the simulated distributions of high and low states of a status characteristic than explicitly relevant status manipulations. Findings also show that a novel status manipulation, the handedness manipulation, sufficiently creates high- and low-status differences in P(s) scores.

Research implications

Future researchers can use these simulated distributions to plot the mean P(s) scores of each of their experimental conditions on the overlapping distribution for the corresponding status manipulation. Manipulations that produce scores that fall outside of the range of overlapping values are also likely to create status differences between conditions in other settings or populations.
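The overlap check described above can be sketched with placeholder distributions. The real simulated distributions come from aggregate SCT data via the m and q parameters, which are not reproduced here; the normal sampling distributions and cutoff rule below are invented purely to show the mechanics of comparing high- and low-condition P(s) distributions.

```python
import random

random.seed(6)

# Hypothetical sampling distributions of mean P(s) scores under high- and
# low-status conditions. The means and spread are placeholders, not values
# from the paper.
def simulate_ps_means(mu, sd=0.04, n=10_000):
    return sorted(random.gauss(mu, sd) for _ in range(n))

high = simulate_ps_means(0.70)
low = simulate_ps_means(0.58)

# Rough separation check: how many high-condition draws fall below the
# low condition's 97.5th percentile?
cutoff = low[int(0.975 * len(low))]
overlap = sum(h < cutoff for h in high) / len(high)
print(f"share of 'high' draws overlapping the 'low' distribution: {overlap:.3f}")
```

A manipulation whose observed condition means fall outside the overlapping region would, by this logic, be judged to create a reliable status difference.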

Book part
Publication date: 30 April 2008

Jae J. Lee


Abstract

Many economic and business problems require a set of random variates from the posterior density of the unknown parameters. The set of random variates can be used to numerically integrate many forms of functions. Since a closed form of the posterior density of models in time series analysis is usually not available, it is not easy to generate such a set of random variates. As a sampling scheme based on probabilities proportional to the sizes of the sample space, the sampling importance resampling (SIR) method can be applied to generate a set of random variates from the posterior density. The application of SIR to a signal extraction model in time series analysis is illustrated, and, given a set of random variates, the procedures to compute the Monte Carlo estimator of the components of the signal extraction model are discussed. The procedures are illustrated with simulated data.
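The SIR scheme itself can be sketched directly: draw candidates from a tractable proposal, weight them by the ratio of the (unnormalized) posterior to the proposal density, and resample with probabilities proportional to the weights. The toy normal posterior and proposal below are assumptions for illustration, not the chapter's signal extraction model.

```python
import math
import random

random.seed(7)

# Target: an unnormalized posterior density we can evaluate but not sample
# from directly — here a N(3, 1) kernel stands in for illustration.
def posterior_kernel(theta):
    return math.exp(-0.5 * (theta - 3.0) ** 2)

# Step 1: draw candidates from a broad proposal density, N(0, 5^2).
def proposal_draw():
    return random.gauss(0.0, 5.0)

def proposal_density(theta):
    return math.exp(-0.5 * (theta / 5.0) ** 2) / (5.0 * math.sqrt(2 * math.pi))

candidates = [proposal_draw() for _ in range(50_000)]

# Step 2: importance weights proportional to posterior / proposal.
weights = [posterior_kernel(t) / proposal_density(t) for t in candidates]

# Step 3: resample with probabilities proportional to the weights
# ("probabilities proportional to sizes", as in the abstract).
variates = random.choices(candidates, weights=weights, k=5_000)

post_mean = sum(variates) / len(variates)
print(f"SIR posterior mean ≈ {post_mean:.2f}  (target 3.0)")
```

The resampled variates approximate draws from the posterior and can then be averaged to form Monte Carlo estimators of any posterior functional, which is how the chapter uses them for the signal extraction components.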

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-0-85724-787-2
