Search results

1 – 10 of 26
Book part
Publication date: 23 June 2016

Eric Renault and Daniela Scidá

Abstract

Many information-theoretic measures have been proposed for a quantitative assessment of causality relationships. While Gouriéroux, Monfort, and Renault (1987) introduced the so-called “Kullback Causality Measures,” extending Geweke’s (1982) work in the context of Gaussian VAR processes, Schreiber (2000) set a special focus on Granger causality and dubbed the same measure “transfer entropy.” Both papers measure causality in the context of Markov processes. One contribution of this paper is to set the focus on the interplay between measurement of (non-)Markovianity and measurement of Granger causality. Both can be framed in terms of prediction: how much does forecast accuracy deteriorate when some relevant conditioning information is forgotten? In this paper we argue that this common feature between (non-)Markovianity and Granger causality has led people to overestimate the amount of causality, because what they consider a causality measure may also convey a measure of the amount of (non-)Markovianity. We set a special focus on the design of measures that properly disentangle these two components. Furthermore, this disentangling leads us to revisit the equivalence between the Sims and Granger concepts of noncausality and the log-likelihood ratio tests for each of them. We argue that Granger causality implies testing for non-nested hypotheses.
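
The transfer entropy the abstract refers to can be made concrete with a plug-in estimate for two binary series at lag one. This is an illustrative sketch, not code from the chapter; the data-generating process and flip probability are invented:

```python
import numpy as np

def transfer_entropy(x, y):
    """Plug-in transfer entropy from x to y for binary series, one lag:
    TE = sum_{a,b,c} p(y_t=a, y_{t-1}=b, x_{t-1}=c)
         * log[ p(y_t=a | y_{t-1}=b, x_{t-1}=c) / p(y_t=a | y_{t-1}=b) ]."""
    y_t, y_lag, x_lag = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):
        for b in (0, 1):
            for c in (0, 1):
                joint = np.mean((y_t == a) & (y_lag == b) & (x_lag == c))
                if joint == 0.0:
                    continue
                p_full = joint / np.mean((y_lag == b) & (x_lag == c))
                p_marg = np.mean((y_t == a) & (y_lag == b)) / np.mean(y_lag == b)
                te += joint * np.log(p_full / p_marg)
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.empty_like(x)
y[0] = 0
flip = rng.random(5000) < 0.1                 # y_t copies x_{t-1}, flipped 10% of the time
y[1:] = np.where(flip[1:], 1 - x[:-1], x[:-1])

te_xy = transfer_entropy(x, y)                # large: x Granger-causes y
te_yx = transfer_entropy(y, x)                # near zero: x is i.i.d.
```

Because the plug-in estimate is a conditional mutual information of the empirical distribution, it is nonnegative by construction; the small positive value in the non-causal direction is exactly the kind of finite-sample inflation the paper's disentangling addresses.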

Book part
Publication date: 18 January 2022

Dante Amengual, Enrique Sentana and Zhanyuan Tian

Abstract

We study the statistical properties of Pearson correlation coefficients of Gaussian ranks, and Gaussian rank regressions – ordinary least-squares (OLS) models applied to those ranks. We show that these procedures are fully efficient when the true copula is Gaussian and the margins are non-parametrically estimated, and remain consistent for their population analogs otherwise. We compare them to Spearman and Pearson correlations and their regression counterparts theoretically and in extensive Monte Carlo simulations. Empirical applications to migration and growth across US states, the augmented Solow growth model and momentum and reversal effects in individual stock returns confirm that Gaussian rank procedures are insensitive to outliers.
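
The Gaussian rank correlation studied here is simple to compute: replace each observation by its normal score Φ⁻¹(rank/(n+1)) and take the Pearson correlation of the scores. A minimal sketch (not from the chapter; the data-generating process and the outlier are invented to illustrate the robustness claim):

```python
import numpy as np
from scipy.stats import norm, rankdata

def gaussian_rank_corr(x, y):
    """Pearson correlation of Gaussian ranks (van der Waerden scores)."""
    n = len(x)
    zx = norm.ppf(rankdata(x) / (n + 1))
    zy = norm.ppf(rankdata(y) / (n + 1))
    return np.corrcoef(zx, zy)[0, 1]

rng = np.random.default_rng(1)
n = 2000
x = rng.standard_normal(n)
y = 0.8 * x + 0.6 * rng.standard_normal(n)   # population correlation 0.8
x[0], y[0] = 50.0, -50.0                     # one gross outlier

r_pearson = np.corrcoef(x, y)[0, 1]          # badly distorted by the outlier
r_grank = gaussian_rank_corr(x, y)           # barely moves
```

The outlier's rank is bounded, so its normal score is at most about Φ⁻¹(n/(n+1)) ≈ 3.3 here, which is why the rank-based coefficient stays near 0.8 while the raw Pearson coefficient collapses.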

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Book part
Publication date: 23 June 2016

Yanqin Fan and Emmanuel Guerre

Abstract

The asymptotic bias and variance of a general class of local polynomial estimators of M-regression functions are studied over the whole compact support of the multivariate covariate under a minimal assumption on the support. The support assumption ensures that the vicinity of the boundary of the support is visited by the multivariate covariate. The results show that, as in the univariate case, multivariate local polynomial estimators have good bias and variance properties near the boundary. For the local polynomial regression estimator, we establish its asymptotic normality near the boundary and the usual optimal uniform convergence rate over the whole support. For local polynomial quantile regression, we establish a uniform linearization result that allows us to obtain results similar to those for local polynomial regression. We demonstrate both theoretically and numerically that, with our uniform results, the common practice of trimming local polynomial regression or quantile estimators to avoid “the boundary effect” is not needed.
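
The boundary behavior at issue can be sketched in one dimension. This is an illustrative toy, not the authors' setup; the Epanechnikov kernel, uniform design, linear truth, and bandwidth are all assumptions. A degree-0 fit (Nadaraya-Watson) picks up first-order bias at the boundary point x = 0, while the degree-1 (local linear) fit does not:

```python
import numpy as np

def local_poly(x0, x, y, h, degree=1):
    """Local polynomial estimate of E[y | x = x0] with an Epanechnikov kernel.
    degree=0 is Nadaraya-Watson; degree=1 is local linear."""
    u = (x - x0) / h
    w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)
    X = np.vander(x - x0, degree + 1, increasing=True)   # columns 1, (x-x0), ...
    XtW = X.T * w
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]                                       # intercept = fit at x0

rng = np.random.default_rng(2)
n = 4000
x = rng.uniform(0.0, 1.0, n)
y = 2.0 * x + 0.1 * rng.standard_normal(n)   # m(x) = 2x, so m(0) = 0

h = 0.1
nw0 = local_poly(0.0, x, y, h, degree=0)     # biased upward at the boundary
ll0 = local_poly(0.0, x, y, h, degree=1)     # boundary bias of smaller order
```

No trimming near x = 0 is applied here; the local linear fit handles the one-sided kernel window on its own, which is the practical point of the chapter's uniform results.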

Book part
Publication date: 24 April 2023

Martín Almuzara, Gabriele Fiorentini and Enrique Sentana

Abstract

The authors analyze a model for N different measurements of a persistent latent time series when measurement errors are mean-reverting, which implies a common trend among the measurements. They study the consequences of overdifferencing, finding potentially large biases in maximum likelihood estimators (MLE) of the dynamics parameters and reductions in the precision of smoothed estimates of the latent variable, especially for multiperiod objects such as quinquennial growth rates. They also develop an R² measure of common trend observability that determines the severity of the misspecification. Finally, the authors apply their framework to US quarterly data on GDP and GDI, obtaining an improved aggregate output measure.
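
The overdifferencing issue can be illustrated with a toy version of the model (all parameter values invented, not the chapter's calibration): a random-walk trend observed through mean-reverting AR(1) measurement errors. The common trend cancels in the gap between measurements, and differencing the level overdifferences the stationary error component, which shows up as negative first-order autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000

def ar1(rho, T, rng):
    """Mean-reverting AR(1) measurement error with unit-variance innovations."""
    e = np.zeros(T)
    for t in range(1, T):
        e[t] = rho * e[t - 1] + rng.standard_normal()
    return e

trend = np.cumsum(rng.standard_normal(T))   # persistent latent series (random walk)
y1 = trend + ar1(0.5, T, rng)               # two measurements sharing
y2 = trend + ar1(0.5, T, rng)               # the common trend

gap = y1 - y2                               # trend cancels: gap is stationary
d = np.diff(y1)                             # differencing the level overdifferences
acf1 = np.corrcoef(d[1:], d[:-1])[0, 1]     # the AR(1) error: negative ACF(1)
```

With these values the lag-1 autocorrelation of the differenced measurement is about -1/7 in population, a signature an estimator fitted to differences must absorb into its dynamics parameters.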

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4

Book part
Publication date: 12 December 2003

Tae-Hwan Kim and Halbert White

Abstract

To date, the literature on quantile regression and least absolute deviation regression has assumed either explicitly or implicitly that the conditional quantile regression model is correctly specified. When the model is misspecified, confidence intervals and hypothesis tests based on the conventional covariance matrix are invalid. Although misspecification is a generic phenomenon and correct specification is rare in reality, there has to date been no theory proposed for inference when a conditional quantile model may be misspecified. In this paper, we allow for possible misspecification of a linear conditional quantile regression model. We obtain consistency of the quantile estimator for certain “pseudo-true” parameter values and asymptotic normality of the quantile estimator when the model is misspecified. In this case, the asymptotic covariance matrix has a novel form, not seen in earlier work, and we provide a consistent estimator of the asymptotic covariance matrix. We also propose a quick and simple test for conditional quantile misspecification based on the quantile residuals.
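
The “pseudo-true” values the abstract refers to are minimizers of expected check loss. A quick numerical illustration (not from the paper; the distribution and grid are invented) that the minimizer of the τ = 0.9 check-loss risk is the 0.9 quantile, with or without a correctly specified parametric model:

```python
import numpy as np

def check_loss(u, tau):
    """Koenker-Bassett check ("pinball") loss rho_tau(u)."""
    return np.where(u >= 0, tau * u, (tau - 1.0) * u)

rng = np.random.default_rng(4)
y = rng.standard_normal(100_000)
tau = 0.9

# The population minimizer of E[rho_tau(y - c)] over constants c is the
# tau-quantile of y; a misspecified model converges to the analogous
# pseudo-true projection.
grid = np.linspace(-3.0, 3.0, 601)
risk = np.array([check_loss(y - c, tau).mean() for c in grid])
c_star = grid[risk.argmin()]   # should sit near the N(0,1) 0.9-quantile, about 1.2816
```

The paper's point is about inference: around such pseudo-true values the asymptotic covariance takes a sandwich form rather than the conventional one.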

Details

Maximum Likelihood Estimation of Misspecified Models: Twenty Years Later
Type: Book
ISBN: 978-1-84950-253-5

Book part
Publication date: 21 November 2014

Jan F. Kiviet and Jerzy Niemczyk

Abstract

IV estimation is examined when some instruments may be invalid. This is relevant because the initial just-identifying orthogonality conditions are untestable, whereas their validity is required when testing the orthogonality of additional instruments by so-called overidentification restriction tests. Moreover, these tests have limited power when samples are small, especially when instruments are weak. Distinguishing between conditional and unconditional settings, we analyze the limiting distribution of inconsistent IV and examine normal first-order asymptotic approximations to its density in finite samples. For simple classes of models we compare these approximations with their simulated empirical counterparts over almost the full parameter space. The latter is expressed in measures of model fit, simultaneity, instrument invalidity, and instrument weakness. Our major finding is that, for the accuracy of large-sample asymptotic approximations, instrument weakness is much more detrimental than instrument invalidity. Also, IV estimators obtained from strong but possibly invalid instruments are usually much closer to the true parameter values than those obtained from valid but weak instruments.
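
The last finding can be reproduced in a toy Monte Carlo. All design constants here are invented for illustration, not taken from the chapter: a strong instrument with a small validity violation is pitted against a valid but very weak one:

```python
import numpy as np

rng = np.random.default_rng(5)
beta = 1.0

def simple_iv(y, x, z):
    """Just-identified IV estimate: sample cov(z, y) / cov(z, x)."""
    return np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

def one_draw(n):
    u = rng.standard_normal(n)
    v = 0.5 * u + rng.standard_normal(n)    # x is endogenous
    z_strong = rng.standard_normal(n)
    z_weak = rng.standard_normal(n)
    x = z_strong + 0.05 * z_weak + v        # z_weak barely enters the first stage
    y = beta * x + u
    z_invalid = z_strong + 0.05 * u         # strong but slightly invalid
    return simple_iv(y, x, z_invalid), simple_iv(y, x, z_weak)

draws = [one_draw(500) for _ in range(300)]
med_err_invalid = np.median([abs(b - beta) for b, _ in draws])
med_err_weak = np.median([abs(b - beta) for _, b in draws])
```

The slightly invalid instrument produces a small, stable inconsistency, while the valid-but-weak instrument produces wildly dispersed estimates pulled toward the OLS probability limit.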

Details

Applying Maximum Entropy to Econometric Problems
Type: Book
ISBN: 978-0-76230-187-4

Book part
Publication date: 10 April 2019

Iraj Rahmani and Jeffrey M. Wooldridge

Abstract

We extend Vuong’s (1989) model-selection statistic to allow for complex survey samples. As a further extension, we use an M-estimation setting so that the tests apply to general estimation problems – such as linear and nonlinear least squares, Poisson regression and fractional response models, to name just a few – and not only to maximum likelihood settings. With stratified sampling, we show how the difference in objective functions should be weighted in order to obtain a suitable test statistic. Interestingly, the weights are needed in computing the model-selection statistic even in cases where stratification is appropriately exogenous, in which case the usual unweighted estimators for the parameters are consistent. With cluster samples and panel data, we show how to combine the weighted objective function with a cluster-robust variance estimator in order to expand the scope of the model-selection tests. A small simulation study shows that the weighted test is promising.
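
The starting point of the extension, Vuong's (1989) statistic for i.i.d. samples, is a studentized average of per-observation log-likelihood (or objective-function) differences; the chapter's contribution is to reweight that average under stratified and clustered sampling. A sketch of the classical unweighted statistic (the models and data here are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

def vuong_statistic(ll_a, ll_b):
    """Classical Vuong (1989) statistic from per-observation log likelihoods.
    Large positive values favor model A; compare with N(0, 1) critical values."""
    d = ll_a - ll_b
    return np.sqrt(len(d)) * d.mean() / d.std(ddof=1)

rng = np.random.default_rng(6)
y = rng.standard_normal(2000)        # data generated from N(0, 1)

ll_a = norm.logpdf(y, loc=0.0)       # model A: correct N(0, 1)
ll_b = norm.logpdf(y, loc=0.5)       # model B: misspecified N(0.5, 1)
z = vuong_statistic(ll_a, ll_b)      # strongly favors model A
```

Under stratified sampling, the chapter shows the simple mean of `d` above must be replaced by a suitably weighted mean even when unweighted parameter estimators remain consistent.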

Details

The Econometrics of Complex Survey Data
Type: Book
ISBN: 978-1-78756-726-9

Book part
Publication date: 18 January 2022

Artūras Juodis

Abstract

This chapter analyzes the properties of an alternative least-squares based estimator for linear panel data models with general predetermined regressors. This approach uses backward means of regressors to approximate individual-specific fixed effects (FE). The author analyzes sufficient conditions for this estimator to be asymptotically efficient and argues that, in comparison with the FE estimator, the use of backward means leads to a non-trivial bias-variance tradeoff. The author complements the theoretical analysis with an extensive Monte Carlo study, finding that some of the currently available results for the restricted AR(1) model cannot be easily generalized and should be extrapolated with caution.
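
The building block of the approach, the backward mean, is straightforward to compute. This minimal sketch shows only the transformation itself (not the author's full estimator): at each t it uses current and past observations only, which is what makes it compatible with predetermined regressors, unlike the full-sample mean used in the within transformation:

```python
import numpy as np

def backward_means(x):
    """Backward (recursive) mean of a series: bm[t] = mean(x[0], ..., x[t]).
    Uses no future observations, in contrast with the full-sample mean
    underlying the FE (within) estimator."""
    return np.cumsum(x, axis=-1) / np.arange(1, x.shape[-1] + 1)

x = np.array([1.0, 3.0, 2.0, 6.0])
bm = backward_means(x)   # [1.0, 2.0, 2.0, 3.0]
```

With a panel stored as an (N, T) array, the same call computes each individual's backward means along the time axis.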

Details

Essays in Honor of M. Hashem Pesaran: Panel Modeling, Micro Applications, and Econometric Methodology
Type: Book
ISBN: 978-1-80262-065-8

Book part
Publication date: 24 April 2023

Saraswata Chaudhuri, Eric Renault and Oscar Wahlstrom

Abstract

The authors discuss the econometric underpinnings of Barro's (2006) defense of the rare disaster model as a way to bring an asset pricing model back “into the right ballpark for explaining the equity-premium and related asset-market puzzles.” Arbitrarily low-probability economic disasters can restore the validity of model-implied moment conditions only if the amplitude of disasters may be arbitrarily large in due proportion. The authors prove an impossibility theorem: in the case of potentially unbounded disasters, there is no such thing as a population empirical likelihood (EL)-based model-implied probability distribution. That is, one cannot identify belief distortions for which the EL-based implied probabilities in sample, as computed by Julliard and Ghosh (2012), could be a consistent estimator. This may lead one to consider alternative statistical discrepancy measures to avoid the problem with EL. Indeed, the authors prove that, under sufficient integrability conditions, power divergence Cressie-Read measures with positive power coefficients properly define a unique population model-implied probability measure. However, when this computation is useful because the reference asset pricing model is misspecified, each power divergence will deliver a different model-implied belief distortion. One way to provide economic underpinnings for the choice of a particular belief distortion is to see it as the endogenous result of the investor's choice when optimizing a recursive multiple-priors utility à la Chen and Epstein (2002). Jeong et al.'s (2015) econometric study confirms that this way of accommodating ambiguity aversion may help to address the equity premium puzzle.
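
The EL-based implied probabilities of Julliard and Ghosh (2012) that the abstract mentions can be sketched for a single scalar moment condition. This is a toy illustration, not the asset pricing application; the sample and the moment are invented:

```python
import numpy as np
from scipy.optimize import brentq

def el_implied_probs(g):
    """Empirical-likelihood implied probabilities for one moment E[g(X)] = 0:
    p_i = 1 / (n * (1 + lam * g_i)), with lam chosen so sum_i p_i * g_i = 0."""
    n = len(g)
    # lam must keep every 1 + lam * g_i strictly positive
    lo = -1.0 / g.max() + 1e-8
    hi = -1.0 / g.min() - 1e-8
    foc = lambda lam: np.mean(g / (1.0 + lam * g))
    lam = brentq(foc, lo, hi)
    return 1.0 / (n * (1.0 + lam * g))

rng = np.random.default_rng(7)
x = 0.2 + rng.standard_normal(500)   # sample mean is not exactly 0
p = el_implied_probs(x)              # tilts the weights so the moment holds
moment = np.sum(p * x)               # zero under the implied measure
```

The bracketing of `lam` makes the finite-sample problem well posed only because the sample moments are bounded; the impossibility theorem concerns exactly the failure of the population analog when disasters are unbounded.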

Details

Essays in Honor of Joon Y. Park: Econometric Methodology in Empirical Applications
Type: Book
ISBN: 978-1-83753-212-4
