Search results

1 – 10 of 710
Book part
Publication date: 19 December 2012

Lee C. Adkins and Mary N. Gade

Abstract

Monte Carlo simulations are a very powerful way to demonstrate the basic sampling properties of various statistics in econometrics. The commercial software package Stata makes these methods accessible to a wide audience of students and practitioners. The purpose of this chapter is to present a self-contained primer for conducting Monte Carlo exercises as part of an introductory econometrics course. More experienced econometricians who are new to Stata may find this useful as well. Many examples are given that can be used as templates for various exercises. Examples include linear regression, confidence intervals, the size and power of t-tests, lagged dependent variable models, heteroskedastic and autocorrelated regression models, instrumental variables estimators, binary choice, censored regression, and nonlinear regression models. Stata do-files for all examples are available from the authors' website http://learneconometrics.com/pdf/MCstata/.
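The chapter's examples are in Stata, but the core exercise it describes — drawing many samples and inspecting the distribution of an estimator — can be sketched in a few lines of any language. Below is a minimal Python analogue for the linear-regression case; the sample size, true coefficients, and error variance are illustrative choices, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 50, 2000          # sample size and number of Monte Carlo replications
beta0, beta1 = 1.0, 2.0     # true intercept and slope
x = rng.uniform(0, 10, n)   # regressor held fixed in repeated samples

slopes = np.empty(reps)
for r in range(reps):
    y = beta0 + beta1 * x + rng.normal(0, 3, n)  # fresh error draw each replication
    slopes[r] = np.polyfit(x, y, 1)[0]           # OLS slope estimate

# The mean of the estimates should be close to the true slope (unbiasedness),
# and their spread approximates the estimator's sampling standard error.
print(np.mean(slopes), np.std(slopes))
```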

Details

30th Anniversary Edition
Type: Book
ISBN: 978-1-78190-309-4

Abstract

This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show the tests to behave as should be expected given the theory. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
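As a rough illustration of the kind of equal-accuracy testing the article surveys, the sketch below computes a t-type statistic on a loss differential and bootstraps its critical value under the null of equal accuracy. The loss series, the recentring step, and the i.i.d. resampling are deliberate simplifications; the article's tests and bootstrap schemes are more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 200
# Hypothetical squared-error losses from two competing forecasts (e.g., AR vs VAR).
loss_a = rng.chisquare(1, T)
loss_b = rng.chisquare(1, T)

d = loss_a - loss_b                                # loss differential
dm = np.mean(d) / (np.std(d, ddof=1) / np.sqrt(T))  # t-type statistic on mean(d)

# Bootstrap critical values: recentre d so its mean is zero (imposing the null
# of equal accuracy), then resample and recompute the statistic.
d0 = d - np.mean(d)
boot = np.empty(999)
for b in range(999):
    db = rng.choice(d0, T, replace=True)
    boot[b] = np.mean(db) / (np.std(db, ddof=1) / np.sqrt(T))

crit = np.quantile(np.abs(boot), 0.95)
print(abs(dm) > crit)   # reject equal accuracy at the 5% level?
```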

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Book part
Publication date: 30 August 2019

Md. Nazmul Ahsan and Jean-Marie Dufour

Abstract

Statistical inference (estimation and testing) for the stochastic volatility (SV) model of Taylor (1982, 1986) is challenging, especially for likelihood-based methods, which are difficult to apply due to the presence of latent variables. Existing methods are computationally costly, inefficient, or both. In this paper, we propose computationally simple estimators for the SV model, which are at the same time highly efficient. The proposed class of estimators uses a small number of moment equations derived from an ARMA representation associated with the SV model, along with the possibility of using “winsorization” to improve stability and efficiency. We call these ARMA-SV estimators. Closed-form expressions for ARMA-SV estimators are obtained, and no numerical optimization procedure or choice of initial parameter values is required. The asymptotic distributional theory of the proposed estimators is studied. Due to their computational simplicity, the ARMA-SV estimators allow one to conduct reliable – even exact – simulation-based inference, through the application of Monte Carlo (MC) tests or bootstrap methods. We compare them in a simulation experiment with a wide array of alternative estimation methods, in terms of bias, root mean square error and computation time. In addition to confirming the enormous computational advantage of the proposed estimators, the results show that ARMA-SV estimators match (or exceed) alternative estimators in terms of precision, including the widely used Bayesian estimator. The proposed methods are applied to daily observations on the returns for three major stocks (Coca-Cola, Walmart, Ford) and the S&P Composite Price Index (2000–2017). The results confirm the presence of stochastic volatility with strong persistence.
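The ARMA representation such estimators exploit can be illustrated with the standard log-squared-returns transformation: in the basic SV model, log y_t^2 equals an AR(1) latent process plus white noise, so its lag-k autocovariances are proportional to φ^k and a simple moment ratio identifies the persistence parameter. The Python sketch below uses that ratio; the parameter values and this particular moment choice are illustrative, not the paper's ARMA-SV estimator:

```python
import numpy as np

rng = np.random.default_rng(5)
T, phi, mu, sig = 200_000, 0.95, -1.0, 0.2

# Simulate the basic SV model: y_t = exp(h_t / 2) * eps_t,
# h_t = mu + phi * (h_{t-1} - mu) + sig * eta_t.
h = np.empty(T)
h[0] = mu
for t in range(1, T):
    h[t] = mu + phi * (h[t - 1] - mu) + sig * rng.normal()
y = np.exp(h / 2) * rng.normal(size=T)

# log y_t^2 = h_t + log eps_t^2 is an AR(1) signal plus white noise, so its
# autocovariances satisfy gamma(k) = phi**k * var(h) for k >= 1.
w = np.log(y ** 2)
wc = w - w.mean()
g1 = np.mean(wc[1:] * wc[:-1])
g2 = np.mean(wc[2:] * wc[:-2])
phi_hat = g2 / g1        # moment-ratio estimate of the persistence parameter
print(phi_hat)
```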

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Details

Panel Data and Structural Labour Market Models
Type: Book
ISBN: 978-0-44450-319-0

Book part
Publication date: 21 December 2010

Tong Zeng and R. Carter Hill

Abstract

In this paper we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence of random parameters. We study the Lagrange multiplier (LM), likelihood ratio (LR), and Wald tests, using conditional logit as the restricted model. The LM test is the fastest of the three test procedures to implement, since it uses only the restricted (conditional logit) estimates. However, the LM-based pretest estimator has poor risk properties. The ratio of the LM-based pretest estimator's root mean squared error (RMSE) to the random parameters logit model estimator's RMSE diverges from one as the standard deviation of the parameter distribution increases. The LR and Wald tests exhibit properties of consistent tests, with power approaching one as the specification error increases, so that the pretest estimator is consistent. We explore the power of these three tests for the presence of random parameters by calculating the empirical percentile values, size, and rejection rates of the test statistics. We find the power of the LR and Wald tests decreases with increases in the mean of the coefficient distribution. The LM test has the weakest power for detecting the presence of random coefficients in the RPL model.
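The risk problem documented for pretest estimators has a simple linear-regression analogue: pretest a single coefficient at the 5% level, impose the restriction when it is insignificant, and compare RMSE against always estimating the full model. The sketch below uses OLS rather than the RPL model, and the design, sample size, and coefficient value are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, crit = 100, 5000, 1.96
beta2 = 0.15                 # small true effect: the risky region for pretesting

x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
XtXinv = np.linalg.inv(X.T @ X)

est_full = np.empty(reps)
est_pre = np.empty(reps)
for r in range(reps):
    y = 1.0 + 0.5 * x1 + beta2 * x2 + rng.normal(size=n)
    b = XtXinv @ X.T @ y
    resid = y - X @ b
    se = np.sqrt(resid @ resid / (n - 3) * XtXinv[2, 2])
    est_full[r] = b[2]
    # Pretest estimator: keep the estimate only if it is significant,
    # otherwise impose the restriction beta2 = 0.
    est_pre[r] = b[2] if abs(b[2] / se) > crit else 0.0

rmse_full = np.sqrt(np.mean((est_full - beta2) ** 2))
rmse_pre = np.sqrt(np.mean((est_pre - beta2) ** 2))
print(rmse_full, rmse_pre)   # pretesting inflates RMSE near the boundary
```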

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 12 August 2017

Jennifer McLeer

Abstract

Purpose

This paper introduces a method by which researchers can assess the strength of their status manipulations in experimental research by comparing them against Monte Carlo simulated distributions that use aggregate Status Characteristics Theory (SCT) data.

Methodology

This paper uses Monte Carlo methods to simulate the m and q parameter distributions and the proportion of stay (P(s)) score distributions for four commonly used status situations. It also presents findings from an experiment that highlight the processes by which researchers can utilize these simulated distributions in their assessment of novel status manipulations.

Findings

Findings indicate that implicitly relevant status manipulations have considerably more overlapping P(s) scores in the simulated distributions of high and low states of a status characteristic than explicitly relevant status manipulations. Findings also show that a novel status manipulation, the handedness manipulation, sufficiently creates high- and low-status differences in P(s) scores.

Research implications

Future researchers can use these simulated distributions to plot the mean P(s) scores of each of their experimental conditions on the overlapping distribution for the corresponding status manipulation. Manipulations that produce scores that fall outside of the range of overlapping values are also likely to create status differences between conditions in other settings or populations.
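The overlap logic described above can be sketched generically: simulate condition-mean P(s) distributions under assumed stay probabilities and measure how often the low-status condition's mean falls inside the high-status condition's range. Every number below (trial count, group size, the two stay probabilities, and the overlap criterion) is hypothetical and not taken from the paper or from SCT parameter estimates:

```python
import numpy as np

rng = np.random.default_rng(9)
trials, n_subj, reps = 23, 30, 10_000
p_high, p_low = 0.70, 0.55      # hypothetical true stay probabilities

def sim_means(p):
    # Per-subject P(s) = share of stay responses across critical trials;
    # each replication yields one condition mean over n_subj subjects.
    scores = rng.binomial(trials, p, (reps, n_subj)) / trials
    return scores.mean(axis=1)

high = sim_means(p_high)
low = sim_means(p_low)

# One crude overlap measure: share of low-condition means that exceed
# the 5th percentile of the high-condition distribution.
overlap = np.mean(low > np.quantile(high, 0.05))
print(overlap)
```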

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Abstract

Massively parallel desktop computing capabilities, now well within the reach of individual academics, modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
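A toy version of sequential posterior simulation — particles drawn from the prior, reweighted observation by observation, with resampling and a jitter move when the weights degenerate — can be sketched as follows. The model (normal mean with known variance), the prior, and the crude jitter step are illustrative simplifications; the paper's algorithm is far more general and uses proper Metropolis move steps:

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.normal(2.0, 1.0, 100)     # observations with unknown mean, variance 1

N = 5000
particles = rng.normal(0.0, 5.0, N)  # draws from a N(0, 25) prior on the mean
logw = np.zeros(N)

for y in data:                       # introduce observations one at a time
    logw += -0.5 * (y - particles) ** 2          # log-likelihood increment
    w = np.exp(logw - logw.max())
    w /= w.sum()
    ess = 1.0 / np.sum(w ** 2)       # effective sample size
    if ess < N / 2:                  # resample when the weights degenerate
        idx = rng.choice(N, N, p=w)
        # Crude jitter to restore particle diversity (no Metropolis correction)
        particles = particles[idx] + rng.normal(0, 0.05, N)
        logw = np.zeros(N)

w = np.exp(logw - logw.max())
w /= w.sum()
post_mean = np.sum(w * particles)    # posterior mean estimate
print(post_mean)
```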

Details

Advances in Accounting Education Teaching and Curriculum Innovations
Type: Book
ISBN: 978-0-85724-052-1

Book part
Publication date: 15 April 2020

Joshua C. C. Chan, Chenghan Hou and Thomas Tao Yang

Abstract

Importance sampling is a popular Monte Carlo method used in a variety of areas in econometrics. When the variance of the importance sampling estimator is infinite, the central limit theorem does not apply and estimates tend to be erratic even when the simulation size is large. The authors consider asymptotic trimming in such a setting. Specifically, the authors propose a bias-corrected tail-trimmed estimator that is consistent and has finite variance. The authors show that the proposed estimator is asymptotically normal, and has good finite-sample properties in a Monte Carlo study.
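The basic mechanics of weight trimming can be sketched as follows. Note the abstract's setting involves infinite weight variance and a bias correction; the sketch below only illustrates plain trimming on a heavy-tailed (but finite-variance) weight distribution, and the densities and quantile choice are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
# Target N(0, 1); the proposal N(0, 0.8**2) is lighter-tailed, so the
# importance weights w = phi(x) / g(x) are heavy-tailed on the right.
x = rng.normal(0.0, 0.8, n)
logw = -0.5 * x ** 2 - (-0.5 * (x / 0.8) ** 2 - np.log(0.8))
w = np.exp(logw)
h = x ** 2                           # estimating E[X^2] = 1 under the target

plain = np.mean(w * h) / np.mean(w)  # self-normalised importance sampling

# Tail trimming: cap weights at a high empirical quantile; this trades a
# small bias for a large variance reduction when weights are heavy-tailed.
cap = np.quantile(w, 0.999)
wt = np.minimum(w, cap)
trimmed = np.mean(wt * h) / np.mean(wt)
print(plain, trimmed)
```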

Book part
Publication date: 5 December 2017

Gail P. Clarkson and Mike A. Kelly

Abstract

The implications and influence of different cognitive map structures on decision-making, reasoning, predictions about future events, affect, and behavior remain poorly understood. To date, we have not had the mechanisms to determine whether any measure of cognitive map structure picks up anything more than would be detected on a purely random basis. We report a Monte Carlo simulation method used to empirically estimate parameterized probability outcomes as a means to better understand the behavior of cognitive maps. Using worked examples, we demonstrate how the results of our simulation permit the use of exact statistics which can be applied by hand to an individual map or groups of maps, providing maximum utility for the collective and cumulative process of theory building and testing.
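The random-baseline idea can be sketched generically: generate random maps with the same node and link counts, compute a structural feature for each, and read an exact-style tail probability for an observed map off the simulated null distribution. The node/link counts, the chosen feature (reciprocated links), and the observed value below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(11)
n_nodes, n_links, reps = 12, 20, 5000

def n_reciprocal(adj):
    # count mutually linked node pairs, one structural feature of a map
    return int(np.sum(adj & adj.T)) // 2

null = np.empty(reps, dtype=int)
pairs = [(i, j) for i in range(n_nodes) for j in range(n_nodes) if i != j]
for r in range(reps):
    adj = np.zeros((n_nodes, n_nodes), dtype=bool)
    # place n_links distinct directed links uniformly at random (no self-loops)
    for k in rng.choice(len(pairs), n_links, replace=False):
        adj[pairs[k]] = True
    null[r] = n_reciprocal(adj)

observed = 4                          # hypothetical count from a real map
p_value = np.mean(null >= observed)   # tail probability under the random baseline
print(p_value)
```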

Details

Methodological Challenges and Advances in Managerial and Organizational Cognition
Type: Book
ISBN: 978-1-78743-677-0
