(2019), "Prelims", Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A (Advances in Econometrics, Vol. 40A), Emerald Publishing Limited, Bingley, pp. i-xi. https://doi.org/10.1108/S0731-90532019000040A001
Emerald Publishing Limited
Copyright © 2019 Emerald Publishing Limited
TOPICS IN IDENTIFICATION, LIMITED DEPENDENT VARIABLES, PARTIAL OBSERVABILITY, EXPERIMENTATION, AND FLEXIBLE MODELING: PART A
ADVANCES IN ECONOMETRICS
Series Editors: Thomas B. Fomby, R. Carter Hill, Ivan Jeliazkov, Juan Carlos Escanciano, Eric Hillebrand, Daniel L. Millimet, Rodney Strachan, David T. Jacho-Chávez, and Alicia Rambaldi
Volume 29: Essays in Honor of Jerry Hausman – Edited by Badi H. Baltagi, Whitney Newey, Hal White and R. Carter Hill
Volume 30: 30th Anniversary Edition – Edited by Dek Terrell and Daniel Millimet
Volume 31: Structural Econometric Models – Edited by Eugene Choo and Matthew Shum
Volume 32: VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims – Edited by Thomas B. Fomby, Lutz Kilian and Anthony Murphy
Volume 33: Essays in Honor of Peter C. B. Phillips – Edited by Thomas B. Fomby, Yoosoon Chang and Joon Y. Park
Volume 34: Bayesian Model Comparison – Edited by Ivan Jeliazkov and Dale J. Poirier
Volume 35: Dynamic Factor Models – Edited by Eric Hillebrand and Siem Jan Koopman
Volume 36: Essays in Honor of Aman Ullah – Edited by Gloria González-Rivera, R. Carter Hill and Tae-Hwy Lee
Volume 37: Spatial Econometrics – Edited by Badi H. Baltagi, James P. LeSage, and R. Kelley Pace
Volume 38: Regression Discontinuity Designs: Theory and Applications – Edited by Matias D. Cattaneo and Juan Carlos Escanciano
Volume 39: The Econometrics of Complex Survey Data: Theory and Applications – Edited by Kim P. Huynh, David T. Jacho-Chávez and Gautam Tripathi
ADVANCES IN ECONOMETRICS VOLUME 40, PART A
TOPICS IN IDENTIFICATION, LIMITED DEPENDENT VARIABLES, PARTIAL OBSERVABILITY, EXPERIMENTATION, AND FLEXIBLE MODELING: PART A
IVAN JELIAZKOV
University of California, USA
JUSTIN L. TOBIAS
Purdue University, USA
United Kingdom – North America – Japan – India – Malaysia – China
Emerald Publishing Limited
Howard House, Wagon Lane, Bingley BD16 1WA, UK
First edition 2019
Chapter 12 is in the public domain. All other chapters and editorial matter © Emerald Publishing Limited 2019.
Reprints and permissions service
No part of this book may be reproduced, stored in a retrieval system, transmitted in any form or by any means electronic, mechanical, photocopying, recording or otherwise without either the prior written permission of the publisher or a licence permitting restricted copying issued in the UK by The Copyright Licensing Agency and in the USA by The Copyright Clearance Center. Any opinions expressed in the chapters are those of the authors. Whilst Emerald makes every effort to ensure the quality and accuracy of its content, Emerald makes no representation, implied or otherwise, as to the chapters’ suitability and application and disclaims any warranties, express or implied, to their use.
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
ISBN: 978-1-78973-242-9 (Print)
ISBN: 978-1-78973-241-2 (Online)
ISBN: 978-1-78973-243-6 (Epub)
ISSN: 0731-9053 (Series)
Foreword to Part A
Volume 40 of Advances in Econometrics focuses on methods with particular themes surrounding identification, limited dependent variables, partial observability, experimentation, and flexible modeling. The volume contains both Bayesian and classical contributions to theory and application, and is intended to honor the scholarship of our friend and colleague, Professor Dale J. Poirier.
Assembly of Volume 40 began with a conference at the University of California, Irvine, on June 8–10, 2018. The event served the dual purpose of celebrating Dale’s contributions on the occasion of his retirement and showcasing papers that would be considered for possible publication. The volume took shape in the months that followed and reflected a large and diverse set of theoretical, modeling, computational, and applied developments. Some chapters addressed foundational and methodological issues, while others provided important modeling advances or delved into stimulating new topics in modern empirical research. As a consequence, we expect that the volume will be of significant interest to a wide audience and will have lasting impact on future work.
The final volume – one of the largest in the Advances in Econometrics series – contains 23 separate chapters that are split thematically into two parts. Part A presents novel contributions to the analysis of time series and panel data with applications in macroeconomics, finance, cognitive science, neuroscience, and labor economics. Part B examines innovations in stochastic frontier analysis, nonparametric and semiparametric modeling and estimation, A/B experiments, and quantile regression. We hope that this thematic clustering of chapters will expose readers to a wider variety of methodological approaches and applications and will facilitate extensions to new settings.
Part A of the volume begins with an interview with Dale Poirier. In this initial chapter, Dale describes his early years and family background, college experiences, and various professional appointments. He comments on a myriad of issues, including frequentist reasoning, objective Bayes, Big Data, and events leading to his text, Intermediate Statistics and Econometrics: A Comparative Approach, which students and colleagues sometimes affectionately refer to as “The Purple Monster.”
Gary Koop and Luca Onorante tackle a problem in nowcasting, or short-term forecasting of macroeconomic variables. They seek to determine whether Google data can be used to improve the forecasting of common US macroeconomic series, such as inflation and unemployment, but do so in novel ways: they allow the Google variables to enter the models in a time-varying fashion and also allow the probability of including explanatory variables to depend on the Google data, which they term “Google probabilities.” They apply these methods to forecast nine different US series. They find that, generally, the inclusion of Google data in the models tends to improve forecasting performance, and that incorporating Google data through model probabilities is generally (though not always) the preferred way of using those data.
A chapter by Fulya Ozcan employs state-of-the-art graphical modeling and text processing to uncover latent overlapping communities of Reddit’s newsfeed users. High-frequency social media data are employed to draw linkages between user reaction to news and short-term exchange rates. The hierarchical mixture model developed in the chapter clusters the communities and detects their opinions (sentiment), which is then shown to be useful in forecasting exchange rate fluctuations. The chapter describes how estimation of the model parameters can proceed by Markov-chain Monte-Carlo simulation.
Percy K. Mistry and Michael D. Lee present a generative psychological model of dynamic violent behavior, and use it to analyze data on the incidence of Israeli and Palestinian fatalities in the Second Intifada. The modeling provides interpretable structural constructs that offer important insights into the dynamics of the conflict. Due to the analytical intractability of the model, Bayesian inference is made possible by computational methods. The authors demonstrate that their model is descriptively and predictively accurate and helps explain retaliatory and repetitive violence in terms of meaningful cognitive processes for each side of the conflict.
The work of Zhe Yu, Raquel Prado, Steven C. Cramer, Erin B. Quinlan, and Hernando Ombao presents Bayesian methodology for modeling local activation and global connectivity using data on magnetic resonance signals in the brain. The approach simultaneously models activation of different brain regions, estimates region-specific hemodynamic response functions, and employs Bayesian vector autoregressions to model connectivity. Spike and slab priors are employed to address variable selection and help determine significant connectivities in networks. Evidence from a simulation study reveals the advantages of the proposed approach, while an application to a stroke study finds different connectivity patterns for task and rest conditions in certain regions of the brain.
Timothy Cogley and Richard Startz provide a procedure for dealing with an important problem in time-series analysis. In particular, it is well known that in the presence of near-root cancellation of the AR and MA components of ARMA models, standard estimation methods tend to produce spuriously accurate estimates, even though in reality the coefficients are only weakly identified. The chapter supplies a Bayesian model averaging procedure that avoids such spurious inference and performs well, without much additional computation, in both well-identified and weakly identified settings. The procedure is recommended for routine adoption in both Bayesian and frequentist analyses of ARMA models because it guards against the possibility of spurious results while agreeing with traditional estimates in cases where weak identification is not a problem.
Md. Nazmul Ahsan and Jean-Marie Dufour consider estimation of a stochastic volatility (SV) model. They exploit an ARMA representation of the SV model, yielding a small number of moment conditions and the possibility of estimation via GMM. The resulting ARMA-SV estimator of Ahsan and Dufour is computationally convenient, as it is available in closed form, while also being highly efficient. Simulation experiments are conducted to compare the ARMA-SV estimator’s performance with a variety of alternatives, including MCMC-based Bayesian approaches. Results suggest that not only does the ARMA-SV estimator offer considerable computational advantages, but it often offers greater precision than its competitors. The chapter concludes with an application involving three stock price return series, from Coca-Cola, Walmart, and Ford.
In a contribution to the rapidly evolving adaptive learning literature in macroeconomics, Eric Gaus and Srikanth Ramamurthy propose a novel approach to the modeling of expectation formation and learning in models with time-varying parameters. In particular, the chapter examines a new endogenous gain scheme in which the gain sequence is driven by changes in agents’ coefficient estimates. The approach is compared and contrasted with existing methods. Simulation evidence and an empirical example involving a New Keynesian model demonstrate that the proposed method can offer superior forecasting ability compared to existing alternatives, particularly for inflation data.
A novel approach for checking the sensitivity of predictive modeling to prior hyperparameters is presented by Joshua C. C. Chan, Liana Jacobi and Dan Zhu. In the context of popular vector autoregression models, they develop a general method, based on automatic differentiation, that examines point and interval sensitivity to the prior hyperparameters. While the importance of sensitivity analysis in general, or in predictive VAR modeling in particular, can hardly be overstated, such analysis is rarely part of current practice. Following a discussion of the theory, the approach is implemented as an automatic way to assess the robustness of the forecasts in an application to US data. The application shows that prior sensitivity in both point and density forecasts can be an issue for the VAR coefficients, but this is less so for the intercepts and the error covariance matrix.
Bai Huang, Tae-Hwy Lee, and Aman Ullah consider estimation of a panel model and suggest a Stein-type shrinkage estimator. Specifically, they consider an estimator that is a weighted combination of the OLS fixed-effect estimator and Pesaran’s (2006) common correlated effects pooled (CCEP) estimator, where the value of Hausman’s (1978) statistic informs the weights. They show, under some conditions, that the shrinkage estimator has smaller risk than the CCEP estimator, and also demonstrate that the shrinkage estimator has smaller asymptotic risk than the conventional fixed effects estimator unless endogeneity is very weak. The performance of the method is illustrated in Monte-Carlo experiments and an application involving a panel of house prices.
Gary J. Cornwall, Jeffrey A. Mills, Beau A. Sauley, and Huibin Weng introduce a new out-of-sample Granger causality testing procedure. The methods introduced combine k-fold cross-validation with Markov-chain Monte-Carlo (MCMC) techniques to perform out-of-sample testing. They demonstrate power improvements of their out-of-sample tests relative to conventional F-tests, while also showing that the in-sample performance of their procedure is often similar to that of F-testing. The chapter concludes with an application of their methods to the Phillips curve, where they find insufficient evidence to reject the null hypothesis of no Granger causality in both pre- and post-1984 samples.
Theodore F. Figinski, Alicia Lloro, and Philip Li also address issues in panel data modeling, as these authors reexamine the effect of compulsory schooling laws on both educational attainment and labor market earnings. Specifically, using new and rich data, they examine the impact such laws have had on educational attainment of both young and older workers. Using hierarchical Bayesian models, they find that such laws appear to have little effect on educational outcomes of younger workers. In addition, while compulsory schooling legislation does seem to have had an effect on educational attainment for older workers, there is little evidence pointing toward an overall effect on the earnings of these workers.
The tribute to Dale Poirier’s work continues with Part B of the volume, which offers a different sampling of topics including important contributions to stochastic frontier analysis, nonparametric and semiparametric modeling and estimation, A/B experiments, and quantile regression. The volume then concludes with a brief comment by Dale.
We would like to thank everyone involved in the production of Advances in Econometrics, Volume 40, including all contributing authors, conference presenters, as well as the referees who exacted a high standard and helped improve the quality of manuscripts. We would also like to thank Nick Wolterman and Charlotte Wilson at Emerald Publishing for their assistance and guidance. Finally, we are very grateful for financial support from Emerald Publishing, EViews, Orion Construction, the International Association for Applied Econometrics, UC Irvine’s Department of Economics, School of Social Sciences and Donald Bren School of Information and Computer Sciences, and the Purdue University Research Center in Economics.
Ivan Jeliazkov
Justin L. Tobias
- An Interview with Dale Poirier
- Macroeconomic Nowcasting Using Google Probabilities
- Sentiment-based Overlapping Community Discovery
- Violence in the Second Intifada: A Demonstration of Bayesian Generative Cognitive Modeling
- A Bayesian Model for Activation and Connectivity in Task-related fMRI Data
- Robust Estimation of ARMA Models with Near Root Cancellation
- A Simple Efficient Moment-based Estimator for the Stochastic Volatility Model
- A New Approach to Modeling Endogenous Gain Learning
- How Sensitive Are VAR Forecasts to Prior Hyperparameters? An Automated Sensitivity Analysis
- Stein-like Shrinkage Estimation of Panel Data Models with Common Correlated Effects
- Predictive Testing for Granger Causality via Posterior Simulation and Cross-validation
- New Evidence on the Effect of Compulsory Schooling Laws