Essays in Honor of Peter C. B. Phillips: Volume 33
Table of contents (27 chapters)

Abstract
The moments of the asymptotic distribution of the least-squares estimator of the local-to-unity autoregressive model are computed using computationally simple integration. These calculations show that conventional simulation estimation of moments can be substantially inaccurate unless the simulation sample size is very large. We also explore the minimax efficiency of autoregressive coefficient estimation, and show numerically that a simple Stein shrinkage estimator has uniformly lower minimax risk than least squares, even though the estimation dimension is just one.
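As a rough illustration of the simulation approach that the abstract argues requires very large replication counts, the following pure-Python sketch Monte Carlo estimates the mean of T(rho_hat - rho) in a local-to-unity AR(1), y_t = (1 + c/T) y_{t-1} + e_t. All parameter values (c = -5, T = 200, 2,000 replications) are illustrative and not taken from the chapter.

```python
import random
import statistics

def simulate_tstat_moment(c=-5.0, T=200, reps=2000, seed=0):
    """Monte Carlo estimate of the mean and spread of T*(rho_hat - rho)
    in the local-to-unity AR(1) y_t = rho*y_{t-1} + e_t, rho = 1 + c/T."""
    rng = random.Random(seed)
    rho = 1.0 + c / T
    draws = []
    for _ in range(reps):
        y, num, den = 0.0, 0.0, 0.0
        for _ in range(T):
            e = rng.gauss(0.0, 1.0)
            y_next = rho * y + e
            num += y * y_next          # accumulates sum of y_{t-1} * y_t
            den += y * y               # accumulates sum of y_{t-1}^2
            y = y_next
        draws.append(T * (num / den - rho))   # OLS slope without intercept
    return statistics.mean(draws), statistics.stdev(draws)

mean, sd = simulate_tstat_moment()
```

With only 2,000 replications, the simulation standard error of the estimated mean is sd divided by the square root of the replication count, which is precisely the inaccuracy that direct numerical integration avoids.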
Abstract
New asymptotic approximations are established for the Wald and t statistics in the presence of unknown but strong autocorrelation. The asymptotic theory extends the usual fixed-smoothing asymptotics under weak dependence to allow for near-unit-root and weak-unit-root processes. As the locality parameter that characterizes the neighborhood of the autoregressive root increases from zero to infinity, the new fixed-smoothing asymptotic distribution changes smoothly from the unit-root fixed-smoothing asymptotics to the usual fixed-smoothing asymptotics under weak dependence. Simulations show that the new approximation is more accurate than the usual fixed-smoothing approximation.
Abstract
An extensive literature in econometrics focuses on finding the exact and approximate first and second moments of the least-squares estimator in the stable first-order linear autoregressive model with normally distributed errors. Recently, Kiviet and Phillips (2005) developed approximate moments for the linear autoregressive model with a unit root and normally distributed errors. An objective of this paper is to analyze moments of the estimator in the first-order autoregressive model with a unit root and nonnormal errors. In particular, we develop new analytical approximations for the first two moments in terms of the model parameters and the distribution parameters. Through Monte Carlo simulations, we find that our approximate formulas perform quite well across different distribution specifications in small samples. However, when the noise-to-signal ratio is large, bias distortion can be quite substantial and our approximations do not fare well.
Abstract
We analyze the sizes of standard cointegration tests applied to data subject to linear interpolation, discovering evidence of substantial size distortions induced by the interpolation. We propose modifications to these tests to effectively eliminate size distortions from such tests conducted on data interpolated from end-of-period sampled low-frequency series. Our results generally do not support linear interpolation when alternatives such as aggregation or mixed-frequency-modified tests are possible.
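The mechanism behind such size distortions can be illustrated in a few lines: linear interpolation of an end-of-period sampled random walk makes its first differences strongly serially correlated, which standard tests mistake for evidence about the underlying process. The sketch below (with an illustrative interpolation ratio m = 3 and 400 low-frequency periods, not taken from the chapter) compares lag-1 autocorrelations of the true and interpolated differences.

```python
import random

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a list of observations."""
    n = len(x)
    mu = sum(x) / n
    num = sum((x[t] - mu) * (x[t - 1] - mu) for t in range(1, n))
    den = sum((v - mu) ** 2 for v in x)
    return num / den

rng = random.Random(1)
m, n_low = 3, 400                      # interpolation ratio, low-freq periods
y = [0.0]
for _ in range(m * n_low):             # high-frequency random walk
    y.append(y[-1] + rng.gauss(0.0, 1.0))
low = y[::m]                           # end-of-period sampling
interp = []                            # linear interpolation back up
for i in range(len(low) - 1):
    for j in range(m):
        interp.append(low[i] + (low[i + 1] - low[i]) * j / m)

d_true = [y[t] - y[t - 1] for t in range(1, len(y))]
d_interp = [interp[t] - interp[t - 1] for t in range(1, len(interp))]
```

The true differences are white noise, while each interpolated low-frequency increment is repeated m times, giving the interpolated differences a lag-1 autocorrelation near (m - 1)/m, serial dependence that was never in the data.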
Abstract
This paper proposes an efficient test designed to have power against alternatives where the error correction term follows Markov switching dynamics. The adjustment to long-run equilibrium differs across regimes characterized by the hidden-state Markov chain process. Using a general nonlinear MS-ECM framework, we propose an optimal test of the null of no cointegration against the alternative of globally stationary MS cointegration. Monte Carlo studies demonstrate that our proposed tests have superior power compared to the linear tests. In an application to price-dividend relationships, our test is able to find cointegration where linear-based tests fail to do so.
Abstract
This paper considers a class of parametric models with nonparametric autoregressive errors. A new test is established and studied to deal with the parametric specification of the nonparametric autoregressive errors, under either stationarity or nonstationarity. Such a test procedure avoids the misspecification that can arise from the need to parametrically specify the form of the errors at the outset. In other words, we estimate the form of the errors and test for stationarity or nonstationarity simultaneously. We establish the asymptotic distributions of the proposed test. Both the setting and the results differ from earlier work on testing for unit roots in parametric time series regression. We provide both simulated and real-data examples to show that the proposed nonparametric unit root test works in practice.
Abstract
This paper provides a selective survey of the panel macroeconometric techniques that focus on controlling the impact of “unobserved heterogeneity” across individuals and over time to obtain valid inference for “structures” that are common across individuals and over time. We consider issues of (i) estimating vector autoregressive models; (ii) testing of unit root or cointegration; (iii) statistical inference for dynamic simultaneous equations models; (iv) policy evaluation; and (v) aggregation and prediction.
Abstract
This paper proposes a new class of estimators for the autoregressive coefficient of a dynamic panel data model with random individual effects and nonstationary initial condition. The new estimators we introduce are weighted averages of the well-known first difference (FD) GMM/IV estimator and the pooled ordinary least squares (POLS) estimator. The proposed procedure seeks to exploit the differing strengths of the FD GMM/IV estimator relative to the pooled OLS estimator. In particular, the latter is inconsistent in the stationary case but is consistent and asymptotically normal with a faster rate of convergence than the former when the underlying panel autoregressive process has a unit root. By averaging the two estimators in an appropriate way, we are able to construct a class of estimators which are consistent and asymptotically standard normal, when suitably standardized, in both the stationary and the unit root case. The results of our simulation study also show that our proposed estimator has favorable finite sample properties when compared to a number of existing estimators.
Abstract
This paper is concerned with estimation and inference for difference-in-difference regressions with errors that exhibit high serial dependence, including near unit roots, unit roots, and linear trends. We propose a couple of solutions based on a parametric formulation of the error covariance. First stage estimates of autoregressive structures are obtained by using the Han, Phillips, and Sul (2011, 2013) X-differencing transformation. The X-differencing method is simple to implement and is unbiased in large N settings. Compared to similar parametric methods, the approach is computationally simple and requires fewer restrictions on the permissible parameter space of the error process. Simulations suggest that our methods perform well in the finite sample across a wide range of panel dimensions and dependence structures.
Abstract
This paper examines a nonparametric CUSUM-type test for common trends in large panel data sets with individual fixed effects. We consider, as in Zhang, Su, and Phillips (2012), a partial linear regression model with unknown functional form for the trend component, although our test does not involve local smoothing. This conveniently forgoes the need to choose a bandwidth parameter, which is difficult for testing purposes given the lack of a clear and sensible information criterion. We are able to do so by exploiting the fact that the number of individuals increases without bound. After removing the parametric component of the model, when the errors are homoscedastic, our test statistic converges to a Gaussian process whose critical values are easily tabulated. We also examine the consequences of heteroscedasticity and discuss how to compute valid critical values given the very complicated covariance structure of the limiting process. Finally, we present a small Monte Carlo experiment to shed some light on the finite sample performance of the test.
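For readers unfamiliar with CUSUM-type statistics, a univariate sketch conveys the general mechanics (the chapter's panel statistic with fixed effects is considerably more involved):

```python
import math

def cusum_stat(x):
    """Sup-norm CUSUM statistic: max_k |S_k| / (sigma_hat * sqrt(n)),
    where S_k cumulates the demeaned observations.  It stays small
    under i.i.d. noise and grows large when the series carries a trend."""
    n = len(x)
    mu = sum(x) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    s, peak = 0.0, 0.0
    for v in x:
        s += v - mu
        peak = max(peak, abs(s))
    return peak / (sigma * math.sqrt(n))
```

Under i.i.d. noise the statistic converges to the supremum of a Brownian bridge, so fixed critical values can be tabulated, while a deterministic trend drives the statistic to infinity with the sample size.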
Abstract
This paper studies tests of hypotheses for the slope parameter in a linear time trend panel data model with serially correlated error component disturbances. We propose a test statistic that uses a bias-corrected estimator of the serial correlation parameter. The proposed test statistic, which is based on the corresponding fixed effects feasible generalized least squares (FE-FGLS) estimator of the slope parameter, has a standard normal limiting distribution that is valid whether the remainder error is I(0) or I(1). The test performs well in Monte Carlo experiments and is recommended.
Abstract
We consider conditional distribution and conditional density functionals in the space of generalized functions. The approach follows Phillips (1985, 1991, 1995), who employed generalized functions to overcome non-differentiability in order to develop expansions. We obtain the limit of the kernel estimators for weakly dependent data, even under non-differentiability of the distribution function; the limit Gaussian process is characterized as a random functional (random generalized function) on a suitable function space. An alternative, simple-to-compute estimator based on the empirical distribution function is proposed for the generalized random functional. Limit properties are established for test statistics based on this estimator. A Monte Carlo experiment demonstrates good finite sample performance of the statistics for testing logit and probit specification in binary choice models.
Abstract
IV estimation is examined when some instruments may be invalid. This is relevant because the initial just-identifying orthogonality conditions are untestable, whereas their validity is required when testing the orthogonality of additional instruments by so-called overidentification restriction tests. Moreover, these tests have limited power when samples are small, especially when instruments are weak. Distinguishing between conditional and unconditional settings, we analyze the limiting distribution of inconsistent IV and examine normal first-order asymptotic approximations to its density in finite samples. For simple classes of models we compare these approximations with their simulated empirical counterparts over almost the full parameter space, which is expressed in measures of model fit, simultaneity, instrument invalidity, and instrument weakness. Our major finding is that instrument weakness is much more detrimental to the accuracy of large-sample asymptotic approximations than instrument invalidity. Also, IV estimators obtained from strong but possibly invalid instruments are usually much closer to the true parameter values than those obtained from valid but weak instruments.
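The final claim can be reproduced in a stylized simulation. In the hypothetical design below, a strong instrument with a small direct loading on the error is compared to a valid but weak instrument; all strengths and invalidity levels are illustrative choices, not those of the chapter.

```python
import random
import statistics

def iv_estimate(y, x, z):
    """Just-identified IV slope estimate: sum(z*y) / sum(z*x)."""
    num = sum(zi * yi for zi, yi in zip(z, y))
    den = sum(zi * xi for zi, xi in zip(z, x))
    return num / den

def median_abs_error(strength, invalidity, beta=1.0, n=200, reps=500, seed=42):
    """Median |IV estimate - beta| for an instrument z with first-stage
    coefficient `strength` and direct loading `invalidity` on the error."""
    rng = random.Random(seed)
    errs = []
    for _ in range(reps):
        z = [rng.gauss(0.0, 1.0) for _ in range(n)]
        e = [rng.gauss(0.0, 1.0) for _ in range(n)]
        v = [rng.gauss(0.0, 1.0) for _ in range(n)]
        u = [e[i] + invalidity * z[i] for i in range(n)]             # structural error
        x = [strength * z[i] + 0.5 * e[i] + v[i] for i in range(n)]  # endogenous regressor
        y = [beta * x[i] + u[i] for i in range(n)]
        errs.append(abs(iv_estimate(y, x, z) - beta))
    return statistics.median(errs)

strong_invalid = median_abs_error(strength=1.0, invalidity=0.2)
weak_valid = median_abs_error(strength=0.1, invalidity=0.0)
```

The strong-but-invalid instrument delivers estimates with a small, stable asymptotic bias (roughly invalidity/strength here), while the weak-but-valid instrument produces a noisy, heavy-tailed estimator whose typical estimation error is larger.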
Abstract
We provide a new characterization of the equality of two positive-definite matrices A and B, and we use this to propose several new computationally convenient statistical tests for the equality of two unknown positive-definite matrices. Our primary focus is on testing the information matrix equality (e.g. White, 1982, 1994). We characterize the asymptotic behavior of our new trace-determinant information matrix test statistics under the null and the alternative and investigate their finite-sample performance for a variety of models: linear regression, exponential duration, probit, and Tobit. The parametric bootstrap suggested by Horowitz (1994) delivers critical values that provide admirable level behavior, even in samples as small as n = 50. Our new tests often have better power than the parametric-bootstrap version of the traditional IMT; when they do not, they nevertheless perform respectably.
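One standard characterization in this spirit (the chapter's own characterization and test statistics may differ in detail) uses the fact that for positive-definite A and B, every eigenvalue lam of A^{-1}B is positive and lam - log(lam) - 1 >= 0, so tr(A^{-1}B) - log det(A^{-1}B) - n >= 0 with equality exactly when A = B. A 2x2 sketch:

```python
import math

def td_distance(A, B):
    """tr(A^{-1}B) - log det(A^{-1}B) - n for 2x2 positive-definite
    matrices (n = 2): nonnegative, and zero exactly when A equals B."""
    (a, b), (c, d) = A
    det_a = a * d - b * c
    a_inv = [[d / det_a, -b / det_a], [-c / det_a, a / det_a]]
    m = [[sum(a_inv[i][k] * B[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]                       # m = A^{-1} B
    trace = m[0][0] + m[1][1]
    det_m = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return trace - math.log(det_m) - 2
```

For any positive-definite A, td_distance(A, A) is zero, while any discrepancy between A and B makes it strictly positive, which is the property a trace-determinant test statistic exploits when A and B are replaced by estimated information matrices.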
Abstract
When a parameter of interest is nondifferentiable in the probability, the existing theory of semiparametric efficient estimation is not applicable, because such a parameter does not have an influence function. Song (2014) recently developed a local asymptotic minimax estimation theory for a parameter that is a nondifferentiable transform of a regular parameter, where the transform is a composite map of a continuous piecewise linear map with a single kink point and a translation-scale equivariant map. The contribution of this paper is twofold. First, it extends the local asymptotic minimax theory to nondifferentiable transforms that are a composite map of a Lipschitz continuous map having a finite set of nondifferentiability points and a translation-scale equivariant map. Second, it investigates the discontinuity of the local asymptotic minimax risk in the true probability and shows that the proposed estimator remains optimal even when the risk is locally robustified not only over the scores at the true probability, but also over the true probability itself. However, the local robustification does not resolve the issue of discontinuity in the local asymptotic minimax risk.
Abstract
We examine the cardinal gap between the wage distributions of incumbents and newly hired workers based on entropic distances that are well-defined welfare-theoretic measures. Decomposition of several effects is achieved by identifying several counterfactual distributions of different groups. These go beyond the usual Oaxaca–Blinder decompositions at the (linear) conditional means. Much like quantiles, these entropic distances are well-defined inferential objects and functions whose statistical properties have recently been developed. Going beyond these strong rankings and distances, we consider weak uniform ranking of these wage outcomes based on statistical tests for stochastic dominance. The empirical analysis focuses on employees with at least 35 hours of work in the 1996–2012 monthly Current Population Survey (CPS). Among other findings, incumbent workers enjoy a better distribution of wages, but the attribution of the gap to wage inequality and human capital characteristics varies across quantiles. For instance, the high wages of highly paid new workers are mainly due to human capital components and, in some years, to an even better wage structure.
Abstract
Vector autoregression (VAR) has been a standard empirical tool in macroeconomics and finance. In this paper we discuss how to compare alternative VAR models after they are estimated by Bayesian MCMC methods. In particular, we apply a robust version of the deviance information criterion (RDIC), recently developed in Li, Zeng, and Yu (2014b), to determine the best candidate model. RDIC is a better information criterion than the widely used deviance information criterion (DIC) when latent variables are involved in candidate models. Empirical analysis using US data shows that the optimal model selected by RDIC can be different from that selected by DIC.
Abstract
The variance targeting estimator (VTE) for generalized autoregressive conditionally heteroskedastic (GARCH) processes has been proposed as a computationally simpler and misspecification-robust alternative to the quasi-maximum likelihood estimator (QMLE). In this paper we investigate the asymptotic behavior of the VTE when the stationary distribution of the GARCH process has infinite fourth moment. Existing studies of historical asset returns indicate that this may be a case of empirical relevance. Under suitable technical conditions, we establish a stable limit theory for the VTE, with the rate of convergence determined by the tails of the stationary distribution. This rate is slower than that achieved by the QMLE. The limit distribution of the VTE is nondegenerate but singular. We investigate the use of subsampling techniques for inference, but find that finite sample performance is poor in empirically relevant scenarios.
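The variance targeting idea can be sketched as follows: since a stationary GARCH(1,1) with sigma2_t = omega + alpha*eps2_{t-1} + beta*sigma2_{t-1} has unconditional variance omega/(1 - alpha - beta), the VTE pins down omega as sample_variance * (1 - alpha - beta) and leaves only (alpha, beta) to the likelihood optimization. The sketch below uses illustrative parameters with a finite fourth moment for clarity; in the heavy-tailed case the chapter studies, the sample variance converges much more slowly, which is the source of the slower rate.

```python
import math
import random

def simulate_garch(omega, alpha, beta, n, seed=7):
    """Simulate a GARCH(1,1) path started from its unconditional variance."""
    rng = random.Random(seed)
    h = omega / (1.0 - alpha - beta)          # unconditional variance
    eps = []
    for _ in range(n):
        e = rng.gauss(0.0, 1.0) * math.sqrt(h)
        eps.append(e)
        h = omega + alpha * e * e + beta * h  # GARCH(1,1) recursion
    return eps

omega, alpha, beta = 0.1, 0.1, 0.8            # illustrative parameters
eps = simulate_garch(omega, alpha, beta, n=50_000)
sample_var = sum(e * e for e in eps) / len(eps)
omega_vt = sample_var * (1.0 - alpha - beta)  # variance-targeted omega
```

With finite fourth moment the targeted omega_vt is close to the true omega at the usual root-n rate; under infinite fourth moment the same plug-in step inherits the heavy-tailed, slower convergence of the sample variance.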
Abstract
We compare the finite sample power of short- and long-horizon tests in nonlinear predictive regression models of regime switching between bull and bear markets, allowing for time varying transition probabilities. As a point of reference, we also provide a similar comparison in a linear predictive regression model without regime switching. Overall, our results do not support the contention of higher power in longer horizon tests in either the linear or nonlinear regime switching models. Nonetheless, it is possible that other plausible nonlinear models provide stronger justification for long-horizon tests.
Abstract
This paper analyzes the roles of idiosyncratic risk and firm-level conditional skewness in determining cross-sectional returns. It is shown that the traditional EGARCH estimates of conditional idiosyncratic volatility may carry significant finite sample estimation bias in the presence of non-Gaussianity. We propose a new estimator that has more robust sampling performance than the EGARCH MLE in the presence of heavy-tailed or skewed innovations. Our cross-sectional portfolio analysis demonstrates that the idiosyncratic volatility puzzle documented by Ang, Hodrick, Xing, and Zhang (2006) exists intertemporally. We conduct further analysis to solve the puzzle. We show that two factors, idiosyncratic variance and individual conditional skewness, play important roles in determining cross-sectional returns. A new concept, the "expected windfall," is introduced as an alternative measure of conditional return skewness. After controlling for these two additional factors, we solve the major piece of this puzzle: our cross-sectional regression tests identify a positive relationship between conditional idiosyncratic volatility and expected returns for over 99% of the total market capitalization of the NYSE, NASDAQ, and AMEX stock exchanges.
- DOI
- 10.1108/S0731-9053201433
- Publication date
- 2014-11-21
- Book series
- Advances in Econometrics
- Series copyright holder
- Emerald Publishing Limited
- Book series ISSN
- 0731-9053