Bayesian Econometrics: Volume 23


Bayesian Econometrics is a volume in the series Advances in Econometrics that illustrates the scope and diversity of modern Bayesian econometric applications, reviews some recent advances in Bayesian econometrics, and highlights many of the characteristics of Bayesian inference and computations. This first paper in the volume is the Editors' introduction, in which we summarize the contributions of each of the papers.

After briefly reviewing the history of Bayesian econometrics and Alan Greenspan's (2004) description of his use of Bayesian methods in managing policy-making risk, we discuss some of the issues and needs that he mentions and link them to past and present Bayesian econometric research. We then review some recent Bayesian econometric research and outstanding needs. Finally, we offer some thoughts on the future of Bayesian econometrics.

Our paper discusses simulation-based Bayesian inference in which information from previous draws is used to build the proposal distributions. The aim is to produce samplers that are easy to implement and computationally efficient, and that mix well and explore the target distribution effectively.
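
As a minimal illustration of this idea (a generic sketch in the spirit of Haario et al.'s adaptive Metropolis, not the specific samplers developed in the paper), a random-walk Metropolis step can rebuild its Gaussian proposal covariance from the empirical covariance of the draws collected so far:

```python
import numpy as np

def adaptive_metropolis(log_post, theta0, n_iter=5000, adapt_start=500):
    """Random-walk Metropolis whose proposal covariance is rebuilt from
    the empirical covariance of past draws (illustrative sketch only)."""
    d = len(theta0)
    draws = np.empty((n_iter, d))
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    cov = 0.1 * np.eye(d)              # initial proposal covariance
    scale = 2.38 ** 2 / d              # usual adaptive-MH rule of thumb
    for t in range(n_iter):
        if t >= adapt_start:           # adapt using the history of draws
            cov = np.atleast_2d(np.cov(draws[:t].T)) + 1e-6 * np.eye(d)
        prop = np.random.multivariate_normal(theta, scale * cov)
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:   # MH accept/reject
            theta, lp = prop, lp_prop
        draws[t] = theta
    return draws

# e.g. a bivariate Gaussian target:
samples = adaptive_metropolis(lambda th: -0.5 * th @ th, np.zeros(2))
```

Because the proposal is only a device for generating candidates, building it from past draws leaves the target distribution intact provided the adaptation is handled with care (e.g., diminishing adaptation).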

This paper analyzes the effect of dental insurance on the utilization of general dentist services by the adult US population aged 25–64 years, using an ordered probit model with endogenous selection. Our econometric framework accommodates the endogeneity of insurance and the ordered nature of the measure of dental utilization. The study finds strong evidence that dental insurance is endogenous to utilization and identifies interesting patterns of nonlinear dependence between dental insurance status and an individual's age and income. The calculated average treatment effect supports the claim of adverse selection into the treated (insured) state and indicates a strong positive incentive effect of dental insurance.

In this paper, we consider the analysis of models for univariate and multivariate ordinal outcomes in the context of the latent variable inferential framework of Albert and Chib (1993). We review several alternative modeling and identification schemes and evaluate how each aids or hampers estimation by Markov chain Monte Carlo simulation methods. For each identification scheme we also discuss the question of model comparison by marginal likelihoods and Bayes factors. In addition, we develop a simulation-based framework for analyzing covariate effects that provides interpretable results despite the nonlinearities in the model and the different identification restrictions that can be implemented. The methods are employed to analyze problems in labor economics (educational attainment), political economy (voter opinions), and health economics (consumers' reliance on alternative sources of medical information).
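
For a concrete anchor, the core of the Albert–Chib data augmentation for a univariate ordinal probit is a two-block Gibbs sampler: draw the latent utilities from truncated normals, then draw the coefficients from a normal full conditional. The sketch below assumes known cutpoints and a flat prior (names are illustrative; the chapter's schemes additionally handle cutpoint sampling, identification restrictions, and marginal likelihoods):

```python
import numpy as np
from scipy.stats import truncnorm

def ordinal_probit_gibbs(y, X, cuts, n_iter=2000):
    """Albert–Chib-style Gibbs sampler for an ordinal probit with known
    cutpoints `cuts` (padded with -inf/+inf); a simplified sketch."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)     # posterior variance, flat prior
    beta = np.zeros(k)
    out = np.empty((n_iter, k))
    for it in range(n_iter):
        mu = X @ beta
        lo, hi = cuts[y] - mu, cuts[y + 1] - mu  # standardized bounds
        z = mu + truncnorm.rvs(lo, hi)           # latent utilities
        m = XtX_inv @ (X.T @ z)                  # posterior mean of beta
        beta = np.random.multivariate_normal(m, XtX_inv)
        out[it] = beta
    return out

# e.g. three ordered categories, y coded 0/1/2:
# cuts = np.array([-np.inf, 0.0, 1.0, np.inf])
```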

WIC, the Special Supplemental Nutrition Program for Women, Infants, and Children, is a widely studied public food assistance program that aims to provide foods, nutrition education, and other services to at-risk, low-income children and pregnant, breastfeeding, and postpartum women. From a policy perspective, it is of interest to assess the efficacy of the WIC program – how much, if at all, does the program improve the nutritional outcomes of WIC families? In this paper, we address two important issues related to the WIC program that have not been extensively addressed in the past. First, although the WIC program is devised primarily to improve the nutrition of “targeted” children and mothers, it is possible that WIC may also change the consumption of foods by nontargeted individuals within the household. Second, although WIC eligibility status is predetermined, participation in the program is voluntary and therefore potentially endogenous. We make use of a treatment–response model in which the dependent variable is the requirement-adjusted calcium intake from milk consumption and the endogenous variable is WIC participation, and estimate it using Bayesian methods. Using data from the CSFII 1994–1996, we find that the correlation between the errors of our two equations is strong and positive, suggesting that families participating in WIC have an unobserved propensity for high calcium intake. The direct “structural” WIC parameters, however, do not support the idea that WIC participation leads to increased levels of calcium intake from milk.
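
In generic notation (an illustrative sketch, not necessarily the paper's exact specification), such a treatment–response system pairs an outcome equation with a binary participation equation whose errors are correlated:

```latex
y_i = x_i'\beta + \gamma\, d_i + \varepsilon_i            % calcium intake
d_i = \mathbf{1}\{\, z_i'\delta + u_i > 0 \,\}            % WIC participation
(\varepsilon_i, u_i)' \sim N\!\left(\mathbf{0},\;
  \begin{pmatrix} \sigma^2 & \rho\sigma \\ \rho\sigma & 1 \end{pmatrix}\right)
```

In this notation, the "strong and positive" error correlation reported above corresponds to a large positive \rho, while the direct "structural" WIC effect is \gamma.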

We present Bayesian models for finding the longitudinal causal effects of a randomized two-arm training program when compliance with the randomized assignment is less than perfect in the training arm (but perfect in the non-training arm) for reasons that are potentially correlated with the outcomes. We deal with this confounding problem under the principal stratification framework of Sommer and Zeger (1991), Frangakis and Rubin (1999), and others. Building on the Bayesian contributions of Imbens and Rubin (1997), Hirano et al. (2000), Yau and Little (2001), and in particular Chib (2007) and Chib and Jacobi (2007, 2008), we construct rich models of the potential outcome sequences (with and without random effects), show how informative priors can be reasonably formulated, and present tuned computational approaches for summarizing the posterior distribution. We also discuss the computation of the marginal likelihood for comparing various versions of our models. We find the causal effects of the observed intake from the predictive distribution of each potential outcome for compliers. These are calculated from the output of our estimation procedures. We illustrate the techniques and ideas with data from the 1994 JOBS II trial that was set up to test the efficacy of a job training program on subsequent mental health outcomes.

Equilibrium job search models allow labor markets with homogeneous workers and firms to yield nondegenerate wage densities. However, the resulting wage densities do not accord well with empirical regularities. Accordingly, many extensions to the basic equilibrium search model have been considered (e.g., heterogeneity in productivity, heterogeneity in the value of leisure, etc.). It is increasingly common to use nonparametric forms for these extensions and, hence, researchers can obtain a perfect fit (in a kernel smoothed sense) between theoretical and empirical wage densities. This makes it difficult to carry out model comparison of different model extensions. In this paper, we first develop Bayesian parametric and nonparametric methods which are comparable to the existing non-Bayesian literature. We then show how Bayesian methods can be used to compare various nonparametric equilibrium search models in a statistically rigorous sense.

One of the foremost objectives of the Common Agricultural Policy (CAP) in the European Union (EU) is to increase agricultural productivity through subsidization of farmers. However, little empirical research has been done to examine the effect of subsidies on farm performance and, in particular, the channels through which subsidies affect productivity. Using a Bayesian hierarchical model in which input productivity, efficiency change, and technical change depend on subsidies and other factors, including farm location, we analyze empirically how subsidies affect the performance of farms. We use an unbalanced panel from the EU's Farm Accountancy Data Network on Danish, Finnish, and Swedish dairy farms and partition the data into eight regions. The data set covers the period 1997–2003 and has a total of 6,609 observations. The results suggest that subsidies drive productivity through efficiency and input productivities, and that the magnitudes of these effects differ across regions. In contrast to existing studies, we find that subsidies have a positive impact on technical efficiency. The contribution of subsidies to output is largest for dairy farms in Denmark and Southern, Central, and Northern Sweden.

Heterogeneity in choice models is typically assumed to have a normal distribution in both Bayesian and classical setups. In this paper, we propose a semiparametric Bayesian framework for the analysis of random coefficients discrete choice models that can be applied to both individual and aggregate data. Heterogeneity is modeled using a Dirichlet process, which varies with consumers' characteristics through covariates. We develop a Markov chain Monte Carlo algorithm for fitting such models and illustrate the methodology using two different datasets: a household-level panel dataset of peanut butter purchases, and supermarket chain-level data for 31 ready-to-eat breakfast cereal brands.
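
For intuition about the nonparametric prior, here is a truncated stick-breaking draw from a generic Dirichlet process (a standard construction; the paper's process additionally varies with consumer covariates):

```python
import numpy as np

def stick_breaking_draw(alpha, base_draw, K=50, size=1000):
    """Sample from DP(alpha, G0) truncated at K sticks; `base_draw(K)`
    returns K atoms from the base measure G0. Generic sketch only."""
    v = np.random.beta(1.0, alpha, K)                          # stick fractions
    w = v * np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))  # mixture weights
    atoms = base_draw(K)                                       # atom locations
    idx = np.random.choice(K, size=size, p=w / w.sum())        # renormalize tail
    return atoms[idx]

# e.g. heterogeneity draws with a standard-normal base measure:
coeffs = stick_breaking_draw(2.0, lambda k: np.random.normal(0.0, 1.0, k))
```

The discreteness of the resulting distribution is what lets the model cluster consumers into flexible heterogeneity groups rather than forcing a single normal distribution.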

In this paper, we expand Kleibergen and Zivot's (2003) Bayesian two-stage (B2S) model by allowing for unequal variances. Our choice for modeling heteroscedasticity is a fully Bayesian parametric approach. As an application, we present a cross-country Cobb–Douglas production function estimation.

Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random-walk models, and to the clustering of several economic series into a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models, like unit root and error correction models, and further state space and panel data models, are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
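
For concreteness, the two canonical models can be written in standard textbook form (notation is illustrative, not taken from the paper); the boundary cases are \rho \to 1 (near-instability and near-random-walk behavior) and \sigma_\mu^2 \to 0 (series collapsing into a single group):

```latex
% Linear regression with AR(1) errors:
y_t = x_t'\beta + u_t, \qquad u_t = \rho u_{t-1} + \varepsilon_t,
\qquad \varepsilon_t \sim N(0, \sigma^2)

% Simple variance components model:
y_{it} = \mu_i + \varepsilon_{it}, \qquad \mu_i \sim N(\mu, \sigma_\mu^2),
\qquad \varepsilon_{it} \sim N(0, \sigma_\varepsilon^2)
```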

This paper addresses the issue of improving the forecasting performance of vector autoregressions (VARs) when the set of available predictors is too large to handle with the methods and diagnostics used in traditional small-scale models. First, available information from a large dataset is summarized by a considerably smaller set of factors estimated using standard principal components. However, even after this dimension reduction, the true number of factors may still be large. For that reason, I introduce simple and efficient Bayesian model selection methods into the analysis. Model estimation and selection of predictors is carried out automatically through a stochastic search variable selection (SSVS) algorithm which requires minimal input from the user. I apply these methods to forecast 8 main U.S. macroeconomic variables using 124 potential predictors. I find improved out-of-sample fit in high-dimensional specifications that would otherwise suffer from the proliferation of parameters.
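
The heart of SSVS is a spike-and-slab prior with a Gibbs update for each inclusion indicator. A hedged sketch of that indicator step, in the textbook form of George and McCulloch (1993) rather than the paper's exact algorithm:

```python
import numpy as np

def ssvs_indicator_step(beta, tau0=0.01, tau1=10.0, p=0.5):
    """Gibbs update of inclusion indicators given current coefficients:
    gamma_j = 1 means beta_j is drawn from the diffuse slab N(0, tau1^2),
    gamma_j = 0 from the near-zero spike N(0, tau0^2); p is the prior
    inclusion probability."""
    def normal_pdf(x, sd):
        return np.exp(-0.5 * (x / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
    slab = p * normal_pdf(beta, tau1)
    spike = (1.0 - p) * normal_pdf(beta, tau0)
    prob = slab / (slab + spike)             # P(gamma_j = 1 | beta_j)
    return (np.random.rand(len(beta)) < prob).astype(int)
```

Coefficients whose current values sit near zero are routed to the spike and effectively dropped from the forecasting model, which is how the algorithm automates predictor selection.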

This paper develops methods of Bayesian inference in a cointegrating panel data model. This model involves each cross-sectional unit having a vector error correction representation. It is flexible in the sense that different cross-sectional units can have different cointegration ranks and cointegration spaces. Furthermore, the parameters that characterize short-run dynamics and deterministic components are allowed to vary over cross-sectional units. In addition to a noninformative prior, we introduce an informative prior which allows for information about the likely location of the cointegration space and about the degree of similarity in coefficients in different cross-sectional units. A collapsed Gibbs sampling algorithm is developed which allows for efficient posterior inference. Our methods are illustrated using real and artificial data.

This paper proposes a Bayesian procedure to investigate purchasing power parity (PPP) utilizing an exponential smooth transition vector error correction model (VECM). Employing a simple Gibbs sampler, we jointly estimate the cointegrating relationship along with the nonlinearities caused by the departures from the long-run equilibrium. By allowing for nonlinear regime changes, we provide strong evidence that PPP holds between the US and each of the remaining G7 countries. The model we employ implies that the dynamics of the PPP deviations can be rather complex, which is attested to by the impulse response analysis.

We consider forecast combination and, indirectly, model selection for VAR models when there is uncertainty about which variables to include in the model in addition to the forecast variables. The key difference from traditional Bayesian variable selection is that we also allow for uncertainty regarding which endogenous variables to include in the model. That is, all models include the forecast variables but may otherwise have differing sets of endogenous variables. This is a difficult problem to tackle with a traditional Bayesian approach. Our solution is to focus on the forecasting performance for the variables of interest, constructing model weights from the predictive likelihood of the forecast variables. The procedure is evaluated in a small simulation study and found to perform competitively in applications to real-world data.
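
Schematically, the combination weights follow from normalizing each model's predictive likelihood over the evaluation window (a hedged sketch; the paper's exact construction may differ):

```python
import numpy as np

def predictive_likelihood_weights(log_pred_liks):
    """Convert per-model sums of log predictive likelihoods for the
    forecast variables into combination weights."""
    lpl = np.asarray(log_pred_liks, dtype=float)
    w = np.exp(lpl - lpl.max())      # subtract the max for stability
    return w / w.sum()

# e.g. three candidate VARs with different endogenous variables:
print(predictive_likelihood_weights([-310.2, -308.7, -312.9]))
```

Because only the forecast variables enter the predictive likelihood, models with different sets of auxiliary endogenous variables remain directly comparable.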

Time-varying proportions arise frequently in economics. Market shares show the relative importance of firms in a market. Labor economists divide populations into different labor market segments. Expenditure shares describe how consumers and firms allocate total expenditure to various categories. We introduce a state space model where unobserved states are Gaussian and observations are conditionally Dirichlet. Markov chain Monte Carlo techniques allow inference for unknown parameters and states. We draw states as a block using a multivariate Gaussian proposal distribution based on a quadratic approximation of the log conditional density of states given parameters and data. Repeated draws from the proposal distribution are particularly efficient. We illustrate using automobile production data.
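
One generic way to write such a model couples a Gaussian state transition with a Dirichlet measurement density; the softmax link and precision parameter \lambda below are illustrative assumptions, not taken from the chapter:

```latex
\alpha_t = T\alpha_{t-1} + \eta_t, \qquad \eta_t \sim N(0, Q)
y_t \mid \alpha_t \sim \mathrm{Dirichlet}\big(\lambda\,\pi_1(\alpha_t), \ldots, \lambda\,\pi_J(\alpha_t)\big),
\qquad \pi_j(\alpha_t) = \frac{\exp(\alpha_{tj})}{\sum_k \exp(\alpha_{tk})}
```

The quadratic approximation of the log conditional density of the states then supplies the mean and covariance of the multivariate Gaussian proposal used for the block draw.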

In their seminal papers on ARCH and GARCH models, Engle (1982) and Bollerslev (1986) specified parametric inequality constraints that were sufficient for non-negativity and weak stationarity of the estimated conditional variance function. This paper uses Bayesian methodology to impose these constraints on the parameters of an ARCH(3) and a GARCH(1,1) model. The two models are used to explain volatility in the London Metals Exchange Index. Model uncertainty is resolved using Bayesian model averaging. Results include estimated posterior pdfs for one-step-ahead conditional variance forecasts.
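
For the GARCH(1,1) case those sufficient conditions are \omega > 0, \alpha \ge 0, \beta \ge 0 (non-negative conditional variance) and \alpha + \beta < 1 (weak stationarity). In a Bayesian sampler they can be imposed by giving zero prior mass outside the constraint region, as in this minimal sketch (not the paper's full procedure):

```python
import numpy as np

def log_prior_garch11(omega, alpha, beta):
    """Flat prior truncated to the GARCH(1,1) constraint region; draws
    violating the constraints get -inf and are rejected in an MH step."""
    ok = omega > 0 and alpha >= 0 and beta >= 0 and alpha + beta < 1
    return 0.0 if ok else -np.inf

def garch11_log_lik(r, omega, alpha, beta):
    """Gaussian GARCH(1,1) log likelihood for demeaned returns r."""
    h = np.var(r)                      # initialize conditional variance
    ll = 0.0
    for t in range(len(r)):
        ll += -0.5 * (np.log(2.0 * np.pi * h) + r[t] ** 2 / h)
        h = omega + alpha * r[t] ** 2 + beta * h   # variance recursion
    return ll
```

The posterior kernel is the sum of these two functions, so every retained draw automatically satisfies the Engle and Bollerslev constraints.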

It is well known that volatility asymmetry exists in financial markets. This paper reviews and investigates recently developed techniques for Bayesian estimation and model selection applied to a large group of modern asymmetric heteroskedastic models. These include the GJR-GARCH, threshold autoregression with GARCH errors, TGARCH, and double threshold heteroskedastic model with auxiliary threshold variables. Further, we briefly review recent methods for Bayesian model selection, such as reversible-jump Markov chain Monte Carlo, Monte Carlo estimation via independent sampling from each model, and importance sampling methods. Seven heteroskedastic models are then compared, for three long series of daily Asian market returns, in a model selection study that illustrates the preferred model selection method. Major evidence of nonlinearity in mean and volatility is found, with the preferred model having a weighted threshold variable of local and international market news.

The normal error distribution for the observations and log-volatilities in a stochastic volatility (SV) model is replaced by the Student-t distribution for robustness considerations. The model is then called the t-t SV model throughout this paper. The objectives of the paper are twofold. First, we introduce the scale mixtures of uniform (SMU) and the scale mixtures of normal (SMN) representations to the Student-t density and show that the setup of a Gibbs sampler for the t-t SV model can be simplified. For example, the full conditional distribution of the log-volatilities has a truncated normal distribution that enables an efficient Gibbs sampling algorithm. These representations also provide a means for outlier diagnostics. Second, we consider the so-called t SV model with leverage where the observations and log-volatilities follow a bivariate t distribution. Returns on exchange rates of the Australian dollar against 10 major currencies are fitted by the t-t SV model and the t SV model with leverage, respectively.
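
The SMN representation in question is the standard one: a Student-t variate is a normal variate whose precision is rescaled by a Gamma mixing variable, which is what keeps the full conditionals tractable. A quick numerical check of the representation:

```python
import numpy as np

# If lam ~ Gamma(nu/2, rate=nu/2) and x | lam ~ N(0, 1/lam),
# then marginally x ~ Student-t with nu degrees of freedom.
rng = np.random.default_rng(0)
nu, n = 5.0, 200_000
lam = rng.gamma(nu / 2.0, 2.0 / nu, n)        # numpy uses scale = 1/rate
x_mix = rng.normal(0.0, 1.0 / np.sqrt(lam))
x_t = rng.standard_t(nu, n)
print(np.quantile(x_mix, [0.05, 0.5, 0.95]))  # should closely match...
print(np.quantile(x_t, [0.05, 0.5, 0.95]))    # ...the direct t draws
```

The scale-mixture-of-uniforms representation works similarly and is what yields the truncated normal full conditional for the log-volatilities noted above.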

In this paper we take up Bayesian inference for the consumption capital asset pricing model. The model has several econometric complications. First, it implies exact relationships between asset returns and the endowment growth rate that will be rejected by all possible realizations. Second, it was previously thought that asset returns could not be expressed in closed form. We show that Labadie's (1989) solution procedure can be applied to obtain asset returns in closed form and, therefore, that the model can be given an econometric interpretation in terms of traditional measurement error models. We apply the Bayesian inference procedures to the Mehra and Prescott (1985) dataset, provide posterior distributions of structural parameters and posterior predictive asset return distributions, and use these distributions to assess the existence of asset return puzzles. The approach developed here can be used in sampling-theory and Bayesian frameworks alike. In fact, in a sampling-theory context, maximum likelihood can be used in a straightforward manner.

DOI: 10.1016/S0731-9053(2008)23
Publication date:
Book series: Advances in Econometrics
Editors:
Series copyright holder: Emerald Publishing Limited
ISBN: 978-1-84855-308-8
eISBN: 978-1-84855-309-5
Book series ISSN: 0731-9053