Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later: Volume 17

Table of contents

(14 chapters)

LIST OF CONTRIBUTORS

Pages VII-VIII

INTRODUCTION

Pages IX-XIII

In the spirit of White’s (1982) paper, this paper examines the consequences of model misspecification in a panel data regression model. Maximum likelihood, random effects, and fixed effects estimators are compared in Monte Carlo experiments under normality of the disturbances but with a possibly misspecified variance-covariance matrix. We show that the correct GLS (ML) procedure is always best in terms of MSE performance, but in practice the researcher does not have perfect foresight about the true form of the variance-covariance matrix. In this case, we show that a pretest estimator is a viable alternative: its performance is a close second to correct GLS (ML) whether the true specification is a two-way error component model, a one-way error component model, or a pooled regression model. Incorrect GLS, ML, or fixed effects estimators, by contrast, may incur a substantial loss in MSE.
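The pretest logic can be illustrated with a minimal sketch (not the chapter’s actual Monte Carlo design): a Breusch–Pagan-type LM test for individual effects, computed from pooled OLS residuals, selects between the pooled estimator and an error component estimator (here, for simplicity, the within/fixed effects estimator). All function names and the chi-squared critical value are illustrative.

```python
import numpy as np

def lm_random_effects(e, N, T):
    """Breusch-Pagan-type LM statistic for H0: no individual effects,
    computed from pooled OLS residuals e stacked by individual
    (N consecutive blocks of T). Approximately chi2(1) under H0."""
    e = e.reshape(N, T)
    num = (e.sum(axis=1) ** 2).sum()
    den = (e ** 2).sum()
    return N * T / (2.0 * (T - 1)) * (num / den - 1.0) ** 2

def pretest_estimator(y, X, N, T, crit=3.84):
    """Pretest: pooled OLS if the LM test does not reject,
    otherwise the within (fixed effects) estimator."""
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    e = y - X @ beta_ols
    if lm_random_effects(e, N, T) < crit:   # no evidence of individual effects
        return beta_ols, "pooled"
    # within transformation: demean y and X by individual
    yw = (y.reshape(N, T) - y.reshape(N, T).mean(axis=1, keepdims=True)).ravel()
    Xw = X.reshape(N, T, -1) - X.reshape(N, T, -1).mean(axis=1, keepdims=True)
    beta_fe, *_ = np.linalg.lstsq(Xw.reshape(N * T, -1), yw, rcond=None)
    return beta_fe, "within"
```

The chapter’s pretest additionally discriminates between one-way and two-way error component structures; the sketch above shows only the basic test-then-select mechanism.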

We examine the global warming temperature data sets of Jones et al. (1999) and Vinnikov et al. (1994) in the context of the multivariate deterministic trend-testing framework of Franses and Vogelsang (2002). We find that, across all seasons, global warming seems to be present for the globe and for the northern and southern hemispheres. Globally and within hemispheres, it appears that seasons are not warming equally fast. In particular, winters appear to be warming faster than summers. Across hemispheres, it appears that the winters in the northern and southern hemispheres are warming equally fast whereas the remaining seasons appear to have unequal warming rates. The results obtained here seem to coincide with the findings of Kaufmann and Stern (2002) who use cointegration analysis and find that the hemispheres are warming at different rates.


This article examines the history, development, and application of the sandwich estimate of variance. In describing this estimator, we survey the applications that have appeared in the literature and examine the nature of the problems for which it is used. We describe various small-sample adjustments to the estimate, illustrate the estimator’s construction for a variety of models, and, finally, discuss the interpretation of results.
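For the linear model, the basic construction can be sketched as follows: the "bread" is the inverse of X'X and the "meat" stacks the squared residuals. The small-sample degrees-of-freedom scaling shown here (often labeled HC1) is one example of the adjustments the chapter discusses; the function name is illustrative.

```python
import numpy as np

def sandwich_vcov(X, e, small_sample=False):
    """Basic (HC0) sandwich variance for OLS:
    (X'X)^{-1} X' diag(e_i^2) X (X'X)^{-1}.
    With small_sample=True, apply the HC1 scaling n/(n-k)."""
    n, k = X.shape
    bread = np.linalg.inv(X.T @ X)        # (X'X)^{-1}
    meat = X.T @ (X * e[:, None] ** 2)    # X' diag(e_i^2) X
    V = bread @ meat @ bread
    return V * n / (n - k) if small_sample else V
```

Under heteroskedasticity the diagonal of this matrix gives standard errors that remain valid, whereas the classical estimator s²(X'X)^{-1} does not.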

The Heckman two-step estimator (Heckit) for the selectivity model is widely applied in economics and the other social sciences. In this model, a non-zero outcome variable is observed only if a latent variable is positive. The asymptotic covariance matrix for the two-step estimation procedure must account for the estimation error introduced in the first stage. We examine the finite-sample size of tests based on alternative covariance matrix estimators, using Monte Carlo experiments to evaluate both bootstrap-generated critical values and critical values based on asymptotic theory.
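A minimal two-step sketch under the textbook setup (probit selection equation, then OLS on the selected sample augmented with the inverse Mills ratio) is shown below. It deliberately omits the corrected second-stage covariance matrix, which is exactly the quantity whose estimators the chapter compares; function names are illustrative.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def probit_mle(Z, d):
    """First stage: probit of the selection indicator d on Z."""
    def negll(g):
        p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
        return -(d * np.log(p) + (1 - d) * np.log(1 - p)).sum()
    return minimize(negll, np.zeros(Z.shape[1]), method="BFGS").x

def heckit_two_step(y, X, Z, d):
    """Second stage: OLS of y on [X, lambda] over the selected sample,
    where lambda is the inverse Mills ratio from the first-stage probit."""
    g = probit_mle(Z, d)
    zi = Z @ g
    imr = norm.pdf(zi) / norm.cdf(zi)     # inverse Mills ratio
    s = d.astype(bool)
    Xa = np.column_stack([X[s], imr[s]])
    beta, *_ = np.linalg.lstsq(Xa, y[s], rcond=None)
    return beta                           # last element: coefficient on lambda
```

The naive OLS standard errors from the second stage ignore the estimation error in the inverse Mills ratio; the corrected asymptotic covariance matrix, and bootstrap alternatives to it, are the subject of the chapter.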

To date, the literature on quantile regression and least absolute deviation regression has assumed either explicitly or implicitly that the conditional quantile regression model is correctly specified. When the model is misspecified, confidence intervals and hypothesis tests based on the conventional covariance matrix are invalid. Although misspecification is a generic phenomenon and correct specification is rare in reality, there has to date been no theory proposed for inference when a conditional quantile model may be misspecified. In this paper, we allow for possible misspecification of a linear conditional quantile regression model. We obtain consistency of the quantile estimator for certain “pseudo-true” parameter values and asymptotic normality of the quantile estimator when the model is misspecified. In this case, the asymptotic covariance matrix has a novel form, not seen in earlier work, and we provide a consistent estimator of the asymptotic covariance matrix. We also propose a quick and simple test for conditional quantile misspecification based on the quantile residuals.

We propose a quasi–maximum likelihood estimator for the location parameters of a linear regression model with bounded and symmetrically distributed errors. The error outcomes are restated as the convex combination of the bounds, and we use the method of maximum entropy to derive the quasi–log likelihood function. Under the stated model assumptions, we show that the proposed estimator is unbiased, consistent, and asymptotically normal. We then conduct a series of Monte Carlo exercises designed to illustrate the sampling properties of the quasi–maximum likelihood estimator relative to the least squares estimator. Although the least squares estimator has smaller quadratic risk under normal and skewed error processes, the proposed QML estimator dominates least squares for the bounded and symmetric error distribution considered in this paper.

In this chapter, we use the minimum cross-entropy method to derive an approximate joint probability model for a multivariate economic process based on limited information about the marginal quasi-density functions and the joint moment conditions. The modeling approach is related to joint probability models derived from copula functions, but we note that the entropy approach has some practical advantages over copula-based models. Under suitable regularity conditions, the quasi-maximum likelihood estimator (QMLE) of the model parameters is consistent and asymptotically normal. We demonstrate the procedure with an application to the joint probability model of trading volume and price variability for the Chicago Board of Trade soybean futures contract.

This paper relaxes the assumption of conditional normal innovations used by Fornari and Mele (1997) in modelling the asymmetric reaction of the conditional volatility to the arrival of news. We compare the performance of the Sign and Volatility Switching ARCH model of Fornari and Mele (1997) and the GJR model of Glosten et al. (1993) under the assumption that the innovations follow the Generalized Student’s t distribution. Moreover, we hedge against the possibility of misspecification by basing the inferences on the robust variance-covariance matrix suggested by White (1982). The results suggest that using more flexible distributional assumptions on the financial data can have a significant impact on the inferences drawn.

Most economic models in essence specify the mean of some explained variables, conditional on a number of explanatory variables. Since the publication of White’s (1982) Econometrica paper, a vast literature has been devoted to the quasi- or pseudo-maximum likelihood estimator (QMLE or PMLE). Among other results, it was shown that the QMLE based on a density from the linear exponential family (LEF) provides a consistent estimate of the true parameters of the conditional mean, despite misspecification of other aspects of the conditional distribution. In this paper, we first show that this is no longer the case when the weighting matrix of the density and the mean parameter vector are functionally related. A prominent example is an autoregressive moving-average (ARMA) model with generalized autoregressive conditional heteroscedasticity (GARCH) errors. As a result, the mean specification test cannot readily be modified to be insensitive to heteroscedasticity. However, correct specification of the conditional variance adds conditional moment conditions for estimating the parameters of the conditional mean. Building on the recent literature on efficient instrumental variables estimation (IVE) and the generalized method of moments (GMM), we propose an estimator that modifies the QMLE based on a density from the quadratic exponential family (QEF); GARCH-in-mean (GARCH-M) effects are also allowed. We then provide a detailed comparison between the quadratic exponential QMLE and the IVE. The asymptotic variance of this modified QMLE attains the lower bound for minimax risk.
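The baseline LEF property the chapter starts from can be illustrated with a Poisson QMLE sketch: even when the counts are overdispersed (so the Poisson density is misspecified), the QMLE of the conditional-mean parameters remains consistent, because the Poisson quasi-score depends on the data only through the conditional mean. The simulation design below is illustrative, not the chapter’s.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 0.8])
lam = np.exp(X @ beta_true)

# Gamma heterogeneity makes the counts negative binomial (overdispersed),
# so the Poisson likelihood is misspecified -- but the mean is correct.
g = rng.gamma(shape=2.0, scale=0.5, size=n)   # mean 1
y = rng.poisson(lam * g)

def neg_quasi_ll(b):
    """Poisson quasi-log-likelihood (the y! term is dropped as a constant)."""
    mu = np.exp(X @ b)
    return -(y * (X @ b) - mu).sum()

beta_hat = minimize(neg_quasi_ll, np.zeros(2), method="BFGS").x
```

The chapter’s point is that this robustness breaks down once the density’s weighting matrix is functionally tied to the mean parameters, as in ARMA-GARCH models.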


This paper proposes a new approach to testing in the generalized method of moments (GMM) framework. The new tests are constructed using heteroskedasticity and autocorrelation (HAC) robust standard errors computed using nonparametric spectral density estimators without truncation. While such standard errors are not consistent, a new asymptotic theory shows that they lead to valid tests nonetheless. In an over-identified linear instrumental variables model, simulations suggest that the new tests and the associated limiting distribution theory provide a more accurate first-order asymptotic null approximation than both standard nonparametric HAC robust tests and VAR-based parametric HAC robust tests. Finite sample power of the new tests is shown to be comparable to that of standard tests.
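The flavor of an untruncated estimator can be sketched for a scalar series: a Bartlett-kernel long-run variance with bandwidth equal to the full sample size. Such an estimator does not converge to the long-run variance, which is why a different (fixed-bandwidth) asymptotic theory is needed for the resulting tests; the implementation below is a generic illustration, not the chapter’s exact construction.

```python
import numpy as np

def hac_variance_no_truncation(v):
    """Bartlett-kernel long-run variance estimate for a scalar series v
    with bandwidth M = T, i.e. every lag enters with weight 1 - j/T.
    Inconsistent for the long-run variance, but positive semi-definite
    and usable under fixed-bandwidth asymptotics."""
    T = v.shape[0]
    v = v - v.mean()
    omega = (v @ v) / T                    # lag-0 autocovariance
    for j in range(1, T):
        w = 1.0 - j / T                    # Bartlett weight, no truncation
        gamma = (v[j:] @ v[:-j]) / T       # autocovariance at lag j
        omega += 2.0 * w * gamma
    return omega
```

A convenient check: for a demeaned series this estimator equals (2/T²) times the sum of squared partial sums, which makes its dependence on the whole sample path, rather than on a consistent spectral estimate, explicit.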

One way to control for the heterogeneity in panel data is to allow for time-invariant, individual-specific parameters. This fixed effects approach introduces many parameters into the model, which causes the “incidental parameters problem”: the maximum likelihood estimator is in general inconsistent. Woutersen (2001) shows how to approximately separate the parameters of interest from the fixed effects using a reparametrization, and then shows how a Bayesian method gives a general solution to the incidental parameters problem for correctly specified models. This paper extends Woutersen (2001) to misspecified models. Following White (1982), we assume that the expectation of the score of the integrated likelihood is zero at the true values of the parameters. We then derive the conditions under which a Bayesian estimator converges at rate N, where N is the number of individuals. Under these conditions, we show that the variance-covariance matrix of the Bayesian estimator has the form of White (1982). We illustrate our approach with a dynamic linear model with fixed effects and a duration model with fixed effects.

DOI: 10.1016/S0731-9053(2003)17
Book series: Advances in Econometrics
Series copyright holder: Emerald Publishing Limited
ISBN: 978-0-76231-075-3
eISBN: 978-1-84950-253-5
Book series ISSN: 0731-9053