Search results

1 – 10 of 830

Details

Panel Data and Structural Labour Market Models
Type: Book
ISBN: 978-0-44450-319-0

Book part
Publication date: 21 December 2010

Tong Zeng and R. Carter Hill

Abstract

In this paper we use Monte Carlo sampling experiments to examine the properties of pretest estimators in the random parameters logit (RPL) model. The pretests are for the presence of random parameters. We study the Lagrange multiplier (LM), likelihood ratio (LR), and Wald tests, using conditional logit as the restricted model. The LM test is the fastest of the three to implement, since it requires only the restricted (conditional logit) estimates. However, the LM-based pretest estimator has poor risk properties: the ratio of its root mean squared error (RMSE) to the RMSE of the random parameters logit estimator diverges from one as the standard deviation of the parameter distribution increases. The LR and Wald tests behave like consistent tests, with power approaching one as the specification error grows, so the corresponding pretest estimator is consistent. We explore the power of the three tests for random parameters by calculating the empirical percentile values, size, and rejection rates of the test statistics. We find that the power of the LR and Wald tests decreases as the mean of the coefficient distribution increases, and that the LM test has the weakest power for detecting the presence of random coefficients in the RPL model.
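The risk comparison described in this abstract can be mimicked in a much simpler setting. The sketch below (plain NumPy; the normal-mean model, sample size, and critical value are illustrative assumptions, not the conditional-logit/RPL design of the paper) runs a Monte Carlo on a pretest estimator that keeps the unrestricted estimate only when the test rejects the restriction, and compares RMSEs:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretest_mc(mu, n=50, reps=2000, crit=1.96):
    """Monte Carlo RMSE of a pretest estimator of a normal mean.

    Toy stand-in for the restricted/unrestricted pair in the abstract:
    H0: mu = 0 (restricted model). Reject -> use the sample mean
    (unrestricted); fail to reject -> use 0 (restricted estimate).
    """
    y = rng.normal(mu, 1.0, size=(reps, n))
    mu_hat = y.mean(axis=1)                              # unrestricted estimator
    t = mu_hat / (y.std(axis=1, ddof=1) / np.sqrt(n))    # t statistic for H0
    pretest = np.where(np.abs(t) > crit, mu_hat, 0.0)    # pretest estimator
    rmse = lambda est: np.sqrt(np.mean((est - mu) ** 2))
    return rmse(pretest), rmse(mu_hat)

for mu in (0.0, 0.25, 0.5, 1.0):
    r_pre, r_unr = pretest_mc(mu)
    print(f"mu={mu:4.2f}  RMSE ratio (pretest/unrestricted) = {r_pre / r_unr:.2f}")
```

The characteristic pattern the paper studies shows up even here: the pretest estimator beats the unrestricted one when the restriction is true, but its relative RMSE rises above one for intermediate specification errors.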

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 19 December 2012

Lee C. Adkins and Mary N. Gade

Abstract

Monte Carlo simulations are a powerful way to demonstrate the basic sampling properties of various statistics in econometrics, and the commercial software package Stata makes these methods accessible to a wide audience of students and practitioners. The purpose of this chapter is to present a self-contained primer for conducting Monte Carlo exercises as part of an introductory econometrics course. More experienced econometricians who are new to Stata may find it useful as well. Many examples are given that can be used as templates for various exercises, including linear regression, confidence intervals, the size and power of t-tests, lagged dependent variable models, heteroskedastic and autocorrelated regression models, instrumental variables estimators, binary choice, censored regression, and nonlinear regression models. Stata do-files for all examples are available from the authors' website http://learneconometrics.com/pdf/MCstata/.
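As a rough analogue of the chapter's Stata do-file exercises, here is a Monte Carlo experiment in Python (NumPy) that estimates the empirical size of a t-test on a regression slope; the data-generating process, sample size, and replication count are illustrative choices, not taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(42)

def ttest_size(n=50, reps=5000):
    """Monte Carlo rejection rate of the t-test for H0: beta1 = 1 in
    y = beta0 + beta1*x + e, with H0 true in the DGP, so the rejection
    rate should be close to the nominal 5% size.
    """
    beta0, beta1 = 1.0, 1.0
    x = rng.uniform(0, 10, n)                 # regressor held fixed across replications
    X = np.column_stack([np.ones(n), x])
    XtX_inv = np.linalg.inv(X.T @ X)
    crit = 2.0106                             # approx. two-sided 5% critical value, t(48)
    rejections = 0
    for _ in range(reps):
        y = beta0 + beta1 * x + rng.normal(0, 1, n)
        b = XtX_inv @ X.T @ y                 # OLS estimates
        resid = y - X @ b
        s2 = resid @ resid / (n - 2)
        se = np.sqrt(s2 * XtX_inv[1, 1])      # standard error of the slope
        if abs((b[1] - beta1) / se) > crit:
            rejections += 1
    return rejections / reps

print("empirical size:", ttest_size())
```

The same template extends to the chapter's other exercises (power curves, heteroskedastic errors, lagged dependent variables) by changing the DGP inside the loop.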

Details

30th Anniversary Edition
Type: Book
ISBN: 978-1-78190-309-4

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Abstract

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
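A toy version of the reweight/resample/move cycle underlying such sequential posterior simulators can be sketched as follows (a normal-mean model in NumPy; the particle count, resampling threshold, and proposal scale are illustrative assumptions, not the authors' algorithm). Note how the marginal likelihood accumulates as an incidental by-product, as the abstract emphasizes, and how every per-particle operation is independent, which is what suits the scheme to massively parallel hardware:

```python
import numpy as np

rng = np.random.default_rng(1)

def smc_posterior(y, n_particles=2000, prior_sd=10.0):
    """Minimal data-tempered sequential Monte Carlo posterior simulator
    for the mean of N(theta, 1) data under a N(0, prior_sd^2) prior."""
    theta = rng.normal(0.0, prior_sd, n_particles)   # particles from the prior
    W = np.full(n_particles, 1.0 / n_particles)      # normalized weights
    log_ml = 0.0                                     # log marginal likelihood
    for t, y_t in enumerate(y, start=1):
        # reweight: incremental likelihood of the new observation
        log_inc = -0.5 * (y_t - theta) ** 2 - 0.5 * np.log(2 * np.pi)
        m = log_inc.max()
        inc = np.exp(log_inc - m)
        log_ml += m + np.log(W @ inc)                # by-product: marginal likelihood
        W *= inc
        W /= W.sum()
        if 1.0 / (W @ W) < n_particles / 2:          # effective sample size low?
            # resample, then diversify with one random-walk Metropolis move
            theta = theta[rng.choice(n_particles, n_particles, p=W)]
            W = np.full(n_particles, 1.0 / n_particles)
            yt = y[:t]
            def logpost(th):
                return -0.5 * (th / prior_sd) ** 2 \
                       - 0.5 * ((yt[:, None] - th) ** 2).sum(axis=0)
            prop = theta + rng.normal(0.0, max(theta.std(), 1e-3), n_particles)
            accept = np.log(rng.uniform(size=n_particles)) < logpost(prop) - logpost(theta)
            theta = np.where(accept, prop, theta)
    return W @ theta, log_ml

y = rng.normal(2.0, 1.0, 100)
post_mean, log_ml = smc_posterior(y)
print(f"posterior mean ~ {post_mean:.2f}")
```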

Book part
Publication date: 12 August 2017

Jennifer McLeer

Abstract

Purpose

This paper introduces a method by which researchers can assess the strength of their status manipulations in experimental research by comparing them against Monte Carlo simulated distributions that use aggregate Status Characteristics Theory (SCT) data.

Methodology

This paper uses Monte Carlo methods to simulate the m and q parameter distributions and the proportion of stay (P(s)) score distributions for four commonly used status situations. It also presents findings from an experiment that highlight the processes by which researchers can utilize these simulated distributions in their assessment of novel status manipulations.

Findings

Findings indicate that implicitly relevant status manipulations have considerably more overlapping P(s) scores in the simulated distributions of high and low states of a status characteristic than explicitly relevant status manipulations. Findings also show that a novel status manipulation, the handedness manipulation, is sufficient to create high- and low-status differences in P(s) scores.

Research implications

Future researchers can use these simulated distributions to plot the mean P(s) scores of each of their experimental conditions on the overlapping distribution for the corresponding status manipulation. Manipulations that produce scores that fall outside of the range of overlapping values are also likely to create status differences between conditions in other settings or populations.

Book part
Publication date: 30 April 2008

Jae J. Lee

Abstract

Many economic and business problems require a set of random variates from the posterior density of the unknown parameters; such variates can be used to integrate many forms of functions numerically. Because the posterior density of time series models is usually not available in closed form, generating these variates is not straightforward. As a scheme that samples with probabilities proportional to size, the sampling importance resampling (SIR) method can be applied to generate a set of random variates from the posterior density. The application of SIR to a signal extraction model in time series analysis is illustrated and, given a set of random variates, procedures for computing the Monte Carlo estimator of the components of the signal extraction model are discussed. The procedures are illustrated with simulated data.
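The SIR scheme itself is compact: draw candidates from the prior, weight them by the likelihood, then resample with probabilities proportional to the weights. A minimal sketch in NumPy, assuming a simple normal-mean model rather than the chapter's signal extraction model:

```python
import numpy as np

rng = np.random.default_rng(7)

def sir_posterior(y, n_draws=100_000, n_keep=5000, prior_sd=5.0):
    """Sampling importance resampling (SIR) for the mean of N(theta, 1) data."""
    theta = rng.normal(0.0, prior_sd, n_draws)            # candidates from the prior
    loglik = -0.5 * ((y[:, None] - theta) ** 2).sum(axis=0)
    w = np.exp(loglik - loglik.max())                     # importance weights
    keep = rng.choice(n_draws, size=n_keep, replace=True, p=w / w.sum())
    return theta[keep]                                    # approx. posterior draws

y = rng.normal(1.5, 1.0, 30)
draws = sir_posterior(y)
print(f"posterior mean ~ {draws.mean():.2f}, sd ~ {draws.std():.2f}")
```

The resampled draws can then be fed directly into Monte Carlo averages of any function of the parameters, which is exactly the numerical-integration use the abstract describes.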

Details

Advances in Business and Management Forecasting
Type: Book
ISBN: 978-0-85724-787-2

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 19 October 2020

Sophia Ding and Peter H. Egger

Abstract

This chapter proposes an approach toward the estimation of cross-sectional sample selection models, where the shocks on the units of observation feature some interdependence through spatial or network autocorrelation. In particular, this chapter improves on prior Bayesian work on this subject by proposing a modified approach toward sampling the multivariate-truncated, cross-sectionally dependent latent variable of the selection equation. This chapter outlines the model and implementation approach and provides simulation results documenting the better performance of the proposed approach relative to existing ones.
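The multivariate truncated draw at the heart of such samplers is often done coordinate-wise. Below is a generic textbook Gibbs scheme for a bivariate normal truncated below componentwise (NumPy/SciPy) — a baseline illustration only, not the authors' modified sampling approach:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def gibbs_tmvn(mu, Sigma, lower, n_sweeps=500):
    """Coordinate-wise Gibbs sampler for a multivariate normal truncated
    below at `lower` componentwise, via univariate truncated-normal
    conditionals derived from the precision matrix."""
    P = np.linalg.inv(Sigma)                 # precision matrix
    d = len(mu)
    z = np.maximum(mu, lower + 0.1)          # feasible starting point
    draws = []
    for _ in range(n_sweeps):
        for i in range(d):
            cond_var = 1.0 / P[i, i]
            cond_mean = mu[i] - cond_var * (P[i] @ (z - mu) - P[i, i] * (z[i] - mu[i]))
            # inverse-CDF draw from the univariate truncated normal
            a = norm.cdf((lower[i] - cond_mean) / np.sqrt(cond_var))
            u = rng.uniform(a, 1.0)
            z[i] = cond_mean + np.sqrt(cond_var) * norm.ppf(u)
        draws.append(z.copy())
    return np.array(draws)

mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.6], [0.6, 1.0]])   # cross-sectionally dependent shocks
lower = np.array([0.0, 0.0])                  # selection: latent variable > 0
draws = gibbs_tmvn(mu, Sigma, lower)
print("means of truncated draws:", draws[100:].mean(axis=0))
```

Embedded in a larger Gibbs sampler for the selection model, this step supplies the latent selection-equation draws; the chapter's contribution is a modified, better-performing version of this sampling step.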

Book part
Publication date: 12 December 2003

R. Carter Hill, Lee C. Adkins and Keith A. Bender

Abstract

The Heckman two-step estimator (Heckit) for the selectivity model is widely applied in economics and other social sciences. In this model, a non-zero outcome variable is observed only if a latent variable is positive. The asymptotic covariance matrix for a two-step estimation procedure must account for the estimation error introduced in the first stage. We examine the finite-sample size of tests based on alternative covariance matrix estimators, using Monte Carlo experiments to evaluate both bootstrap-generated critical values and critical values based on asymptotic theory.
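A minimal sketch of the Heckit two-step procedure on simulated data (NumPy/SciPy; the data-generating values are illustrative). The naive second-stage OLS standard errors here are exactly what the abstract warns about: they ignore the first-stage estimation error, which is why corrected covariance estimators are needed:

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

rng = np.random.default_rng(8)

# simulate a selectivity model: outcome observed only if latent index > 0
n = 2000
x = rng.normal(size=n)
w = rng.normal(size=n)                       # exclusion restriction for selection
e = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
d = 0.5 + 1.0 * w + e[:, 0] > 0              # selection indicator
y = np.where(d, 1.0 + 2.0 * x + e[:, 1], np.nan)

# step 1: probit of selection on (1, w) by maximum likelihood
W = np.column_stack([np.ones(n), w])
def negll(g):
    p = np.clip(norm.cdf(W @ g), 1e-10, 1 - 1e-10)
    return -(d * np.log(p) + (~d) * np.log(1 - p)).sum()
g_hat = minimize(negll, np.zeros(2)).x

# step 2: OLS of y on (1, x, inverse Mills ratio) for the selected sample
idx = W[d] @ g_hat
imr = norm.pdf(idx) / norm.cdf(idx)          # inverse Mills ratio
X2 = np.column_stack([np.ones(d.sum()), x[d], imr])
b = np.linalg.lstsq(X2, y[d], rcond=None)[0]
print("Heckit estimates (const, slope, lambda):", b.round(2))
```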

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5
