Search results

1 – 10 of 615
Book part
Publication date: 19 December 2012

Lee C. Adkins, Randall C. Campbell, Viera Chmelarova and R. Carter Hill

Abstract

The Hausman test is used in applied economic work as a test of misspecification. It is most commonly thought of as a test of whether one or more explanatory variables in a regression model are endogenous. The usual Hausman contrast test requires one estimator to be efficient under the null hypothesis. If data are heteroskedastic, the least squares estimator is no longer efficient. The first option is to estimate the covariance matrix of the difference of the contrasted estimators, as suggested by Hahn, Ham, and Moon (2011). Other options for carrying out a Hausman-like test in this case include estimating an artificial regression and using robust standard errors. Alternatively, we might seek additional power by estimating the artificial regression using feasible generalized least squares. Finally, we might stack moment conditions leading to the two estimators and estimate the resulting system by GMM. We examine these options in a Monte Carlo experiment. We conclude that the test based on the procedure by Hahn, Ham, and Moon has good properties. The generalized least squares-based tests have higher size-corrected power when heteroskedasticity is detected in the DWH regression, and the heteroskedasticity is associated with a strong external IV. We do not consider the properties of the implied pretest estimator.
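The artificial-regression (DWH) variant with robust standard errors that the abstract mentions can be sketched as follows. This is a toy illustration on simulated data, not the authors' code; all parameter values are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated data: x is endogenous (correlated with the error u), z is a valid instrument.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u  # true coefficient on x is 2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress x on the instrument(s), keep the residual v_hat.
Z = np.column_stack([np.ones(n), z])
v_hat = x - Z @ ols(Z, x)

# Stage 2 (DWH artificial regression): add v_hat to the structural equation.
X = np.column_stack([np.ones(n), x, v_hat])
b = ols(X, y)
e = y - X @ b

# Heteroskedasticity-robust (HC0) covariance for the augmented regression.
XtX_inv = np.linalg.inv(X.T @ X)
meat = (X * e[:, None] ** 2).T @ X
V = XtX_inv @ meat @ XtX_inv

# A significant t-statistic on v_hat rejects exogeneity of x.
t_vhat = b[2] / np.sqrt(V[2, 2])
```

The coefficient on x in the augmented regression equals the IV estimate, so the same fit delivers both the test and a consistent estimate under the alternative.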

Details

Essays in Honor of Jerry Hausman
Type: Book
ISBN: 978-1-78190-308-7

Details

Structural Road Accident Models
Type: Book
ISBN: 978-0-08-043061-4

Details

Panel Data Econometrics Theoretical Contributions and Empirical Applications
Type: Book
ISBN: 978-1-84950-836-0

Book part
Publication date: 5 April 2024

Ziwen Gao, Steven F. Lehrer, Tian Xie and Xinyu Zhang

Abstract

Motivated by empirical features that characterize cryptocurrency volatility data, the authors develop a forecasting strategy that can account for both model uncertainty and heteroskedasticity of unknown form. The theoretical investigation establishes the asymptotic optimality of the proposed heteroskedastic model averaging heterogeneous autoregressive (H-MAHAR) estimator under mild conditions. The authors additionally examine the convergence rate of the estimated weights of the proposed H-MAHAR estimator. This analysis sheds new light on the asymptotic properties of the least squares model averaging estimator under alternative complicated data generating processes (DGPs). To examine the performance of the H-MAHAR estimator, the authors conduct an out-of-sample forecasting application involving 22 different cryptocurrency assets. The results emphasize the importance of accounting for both model uncertainty and heteroskedasticity in practice.
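The HAR building block and the idea of weighting candidate models can be illustrated in a few lines. This sketch uses toy data and simple smoothed-AIC weights as a stand-in for the paper's optimal model-averaging weights, which are more involved:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 600

# Toy persistent realized-volatility series (stand-in for crypto RV data).
rv = np.empty(T)
rv[0] = 1.0
for t in range(1, T):
    rv[t] = 0.2 + 0.75 * rv[t - 1] + 0.1 * rng.standard_normal() ** 2

def har_design(rv, lags=(1, 5, 22)):
    """HAR regressors: averages of rv over the last 1, 5 and 22 periods."""
    m = max(lags)
    X = np.column_stack(
        [np.ones(len(rv) - m)]
        + [np.array([rv[t - k:t].mean() for t in range(m, len(rv))]) for k in lags]
    )
    return X, rv[m:]

# Candidate models: nested subsets of the HAR regressors.
X, y = har_design(rv)
subsets = [[0, 1], [0, 1, 2], [0, 1, 2, 3]]
fits, sses = [], []
for s in subsets:
    b = np.linalg.lstsq(X[:, s], y, rcond=None)[0]
    r = y - X[:, s] @ b
    fits.append(X[:, s] @ b)
    sses.append(r @ r)

# Smoothed-AIC weights (illustrative only, not the H-MAHAR weight criterion).
aic = np.array([len(y) * np.log(sse / len(y)) + 2 * len(s)
                for sse, s in zip(sses, subsets)])
w = np.exp(-0.5 * (aic - aic.min()))
w /= w.sum()
avg_fit = sum(wi * f for wi, f in zip(w, fits))
```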

Abstract

Identification of shocks of interest is a central problem in structural vector autoregressive (SVAR) modeling. Identification is often achieved by imposing restrictions on the impact or long-run effects of shocks or by considering sign restrictions for the impulse responses. In a number of articles changes in the volatility of the shocks have also been used for identification. The present study focuses on the latter device. Some possible setups for identification via heteroskedasticity are reviewed and their potential and limitations are discussed. Two detailed examples are considered to illustrate the approach.
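The core mechanics of identification via heteroskedasticity can be sketched with two covariance regimes. In the simplest setup, Sigma1 = B B' and Sigma2 = B diag(lam) B', and B is recovered (up to column order and sign) by simultaneous diagonalization; the matrix `B_true` and the variance ratios `lam` below are hypothetical:

```python
import numpy as np
from scipy.linalg import eigh

# Hypothetical structural impact matrix and regime-2 relative shock variances.
B_true = np.array([[1.0, 0.0],
                   [0.5, 1.0]])
lam = np.array([3.0, 0.5])

S1 = B_true @ B_true.T                  # reduced-form covariance, regime 1
S2 = B_true @ np.diag(lam) @ B_true.T   # reduced-form covariance, regime 2

# Generalized eigenproblem S2 v = S1 v diag(w) with v' S1 v = I; then
# B = inv(v.T) satisfies B B' = S1 and B diag(w) B' = S2 simultaneously.
w, v = eigh(S2, S1)
B_hat = np.linalg.inv(v.T)
```

Distinct eigenvalues (i.e. shock variances that actually change across regimes) are what delivers uniqueness, which is exactly the limitation the abstract alludes to.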

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Book part
Publication date: 19 December 2012

John C. Chao, Jerry A. Hausman, Whitney K. Newey, Norman R. Swanson and Tiemen Woutersen

Abstract

This chapter shows how a weighted average of a forward and reverse Jackknife IV estimator (JIVE) yields estimators that are robust against heteroscedasticity and many instruments. These estimators, called HFUL (Heteroscedasticity robust Fuller) and HLIM (Heteroskedasticity robust limited information maximum likelihood (LIML)) were introduced by Hausman, Newey, Woutersen, Chao, and Swanson (2012), but without derivation. Combining consistent estimators is a theme that is associated with Jerry Hausman and, therefore, we present this derivation in this volume. Additionally, and in order to further understand and interpret HFUL and HLIM in the context of jackknife type variance ratio estimators, we show that a new variant of HLIM, under specific grouped data settings with dummy instruments, simplifies to the Bekker and van der Ploeg (2005) MM (method of moments) estimator.
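The jackknife idea underlying JIVE (and hence HFUL/HLIM) is to remove each observation's own data from its first-stage prediction, which eliminates the many-instrument bias term. A minimal single-regressor sketch on simulated data (not the chapter's estimators themselves):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400

# Simulated design: endogenous x, a constant plus three instruments.
Z = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])
u = rng.normal(size=n)
x = Z[:, 1] + 0.5 * Z[:, 2] + 0.5 * u + rng.normal(size=n)
y = 2.0 * x + u  # true coefficient 2

# First-stage hat matrix and its diagonal (leverages).
P = Z @ np.linalg.solve(Z.T @ Z, Z.T)
h = np.diag(P)

# Leave-one-out fitted values: observation i is excluded from its own
# first-stage prediction, the defining feature of the jackknife IV estimator.
x_jive = (P @ x - h * x) / (1 - h)
beta_jive = (x_jive @ y) / (x_jive @ x)

# Ordinary 2SLS for comparison (uses the full-sample fitted values).
xhat = P @ x
beta_2sls = (xhat @ y) / (xhat @ x)
```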

Details

Essays in Honor of Jerry Hausman
Type: Book
ISBN: 978-1-78190-308-7

Book part
Publication date: 29 March 2006

Maria S. Heracleous and Aris Spanos

Abstract

This paper proposes the Student's t Dynamic Linear Regression (St-DLR) model as an alternative to the various extensions/modifications of the ARCH type volatility model. The St-DLR differs from the latter models of volatility because it can incorporate exogenous variables in the conditional variance in a natural way. Moreover, it also addresses the following issues: (i) apparent long memory of the conditional variance, (ii) distributional assumption of the error, (iii) existence of higher moments, and (iv) coefficient positivity restrictions. The model is illustrated using Dow Jones data and the three-month T-bill rate. The empirical results seem promising, as the contemporaneous variable appears to account for a large portion of the volatility.
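The distributional ingredient of the St-DLR, regression with Student's t rather than Gaussian errors, can be estimated by maximum likelihood. The sketch below is a static t-regression on toy data, not the full dynamic St-DLR specification:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(3)
n = 300

# Toy data with heavy-tailed Student's t errors.
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + stats.t.rvs(df=5, size=n, random_state=rng)

def negloglik(theta):
    """Negative log-likelihood; scale and df parameterized in logs
    to keep them positive during optimization."""
    a, b, log_s, log_df = theta
    resid = y - a - b * x
    return -np.sum(stats.t.logpdf(resid, df=np.exp(log_df), scale=np.exp(log_s)))

res = optimize.minimize(negloglik,
                        x0=np.array([0.0, 0.0, 0.0, np.log(8.0)]),
                        method="Nelder-Mead")
a_hat, b_hat = res.x[:2]
```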

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-0-76231-274-0

Book part
Publication date: 29 March 2006

Chor-yiu Sin

Abstract

Macroeconomic or financial data are often modelled with cointegration and GARCH (Generalized Auto-Regressive Conditional Heteroskedasticity). Noticeable examples include those studies of price discovery in which stock prices of the same underlying asset are cointegrated and they exhibit multivariate GARCH. It was not until recently that Li, Ling, and Wong (2001, Biometrika, 88, 1135–1152) formally derived the asymptotic distribution of the estimators for the error-correction model (ECM) parameters, in the presence of conditional heteroskedasticity. As far as ECM parameters are concerned, the efficiency gain may be huge even when the deflated error is symmetrically distributed. Taking into consideration the different rates of convergence, this paper first shows that the standard distribution applies to a portmanteau test, even when the conditional mean is an ECM. Assuming the usual null of no multivariate GARCH, the performance of this test in finite samples is examined through Monte Carlo experiments. We then apply the test for GARCH to the yearly or quarterly (extended) Nelson–Plosser data, embedded with some prototype multivariate models. We also apply the test to the intra-daily HSI (Hang Seng Index) and its derivatives, with the spread as the ECT (error-correction term). The empirical results throw doubt on the efficiency of the usual estimation of the ECM parameters, and more importantly, on the validity of the significance tests of an ECM.
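A portmanteau test for (G)ARCH of the kind discussed here amounts to a Ljung–Box statistic applied to squared residuals, approximately chi-squared with m degrees of freedom under the null of no conditional heteroskedasticity. A minimal sketch on simulated data (an ARCH(1) alternative is hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

def ljung_box(z, m):
    """Ljung-Box portmanteau statistic on the first m autocorrelations of z."""
    z = z - z.mean()
    n = len(z)
    denom = z @ z
    acf = np.array([(z[k:] @ z[:-k]) / denom for k in range(1, m + 1)])
    return n * (n + 2) * np.sum(acf ** 2 / (n - np.arange(1, m + 1)))

# Null: i.i.d. residuals, so squared residuals are serially uncorrelated.
e_iid = rng.standard_normal(2000)

# Alternative: ARCH(1), volatility driven by the lagged squared shock.
e_arch = np.empty(2000)
e_arch[0] = rng.standard_normal()
for t in range(1, 2000):
    e_arch[t] = np.sqrt(0.2 + 0.5 * e_arch[t - 1] ** 2) * rng.standard_normal()

q_iid = ljung_box(e_iid ** 2, m=10)    # should look like a chi2(10) draw
q_arch = ljung_box(e_arch ** 2, m=10)  # should be far in the right tail
```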

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-0-76231-274-0

Book part
Publication date: 1 December 2016

Peter Burridge, J. Paul Elhorst and Katarina Zigova

Abstract

This paper tests the feasibility and empirical implications of a spatial econometric model with a full set of interaction effects and weight matrix defined as an equally weighted group interaction matrix applied to research productivity of individuals. We also elaborate two extensions of this model, namely with group fixed effects and with heteroskedasticity. In our setting, the model with a full set of interaction effects is overparameterised: only the SDM and SDEM specifications produce acceptable results. They imply comparable spillover effects, but by applying a Bayesian approach taken from LeSage (2014), we are able to show that the SDEM specification is more appropriate and thus that colleague interaction effects work through observed and unobserved exogenous characteristics common to researchers within a group.
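The equally weighted group interaction matrix used as the spatial weight matrix here is straightforward to construct: within each group, every member weights every other member equally, and rows sum to one. A small sketch with a made-up group assignment:

```python
import numpy as np

def group_interaction_matrix(groups):
    """Equally weighted group interaction matrix W: W[i, j] = 1/(n_g - 1)
    when i and j (i != j) share a group of size n_g, else 0, so each row
    of a multi-member group sums to one."""
    groups = np.asarray(groups)
    n = len(groups)
    W = np.zeros((n, n))
    for g in np.unique(groups):
        idx = np.where(groups == g)[0]
        if len(idx) > 1:
            W[np.ix_(idx, idx)] = 1.0 / (len(idx) - 1)
    np.fill_diagonal(W, 0.0)  # no self-interaction
    return W

# Example: five researchers in two groups (group labels are hypothetical).
W = group_interaction_matrix([0, 0, 0, 1, 1])
```

With such block-diagonal weights, every member of a group is every other member's "neighbor," which is what makes the model with a full set of interaction effects overparameterised in this setting.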

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Book part
Publication date: 12 December 2003

R. Carter Hill, Lee C. Adkins and Keith A. Bender

Abstract

The Heckman two-step estimator (Heckit) for the selectivity model is widely applied in Economics and other social sciences. In this model a non-zero outcome variable is observed only if a latent variable is positive. The asymptotic covariance matrix for a two-step estimation procedure must account for the estimation error introduced in the first stage. We examine the finite sample size of tests based on alternative covariance matrix estimators. We do so by using Monte Carlo experiments to evaluate bootstrap generated critical values and critical values based on asymptotic theory.
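The Heckit two-step procedure itself is easy to sketch: a probit for selection, then OLS of the observed outcome augmented with the inverse Mills ratio. The example below uses simulated data and plain maximum likelihood for the probit; it does not implement the corrected covariance matrices the chapter studies:

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
n = 2000

# Simulated selection model: y is observed only when the latent s* > 0.
z = rng.normal(size=n)
x = rng.normal(size=n)
eps = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], size=n)
s = 0.5 + z + eps[:, 0] > 0        # selection equation
y = 1.0 + 2.0 * x + eps[:, 1]      # outcome equation, true slope 2
y_obs, x_obs = y[s], x[s]

# Step 1: probit of selection on z, by maximum likelihood.
Zmat = np.column_stack([np.ones(n), z])
def probit_nll(g):
    p = stats.norm.cdf(Zmat @ g)
    return -np.sum(np.where(s, np.log(p), np.log(1 - p)))
g_hat = optimize.minimize(probit_nll, x0=np.zeros(2), method="BFGS").x

# Step 2: OLS of observed y on x and the inverse Mills ratio. Its coefficient
# estimates rho * sigma; the naive OLS covariance ignores step-1 estimation
# error, which is exactly the problem the chapter's covariance estimators fix.
xb = (Zmat @ g_hat)[s]
mills = stats.norm.pdf(xb) / stats.norm.cdf(xb)
X = np.column_stack([np.ones(s.sum()), x_obs, mills])
b_heckit = np.linalg.lstsq(X, y_obs, rcond=None)[0]
```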

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5
