This chapter compares the performance of the maximum simulated likelihood (MSL) approach with that of the composite marginal likelihood (CML) approach in multivariate ordered-response situations. The ability of the two approaches to recover model parameters in simulated data sets is examined, as are the efficiency of the estimated parameters and the computational cost. Overall, the simulation results demonstrate that the CML approach recovers the parameters very well in a 5–6 dimensional ordered-response choice model context. In addition, the CML approach recovers parameters as well as the MSL approach does in the simulation contexts used in this study, while doing so at a substantially reduced computational cost. Further, any loss of efficiency in the CML approach relative to the MSL approach ranges from nonexistent to small. Taken together with its conceptual and implementation simplicity, the CML approach appears to be a promising approach for the estimation not only of the multivariate ordered-response model considered here, but also of other analytically intractable econometric models.
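The idea behind the pairwise CML can be sketched in a few lines: instead of simulating one K-dimensional normal integral per observation, sum the exact log probabilities of all bivariate response pairs. The sketch below is illustrative only; the thresholds, latent means, and correlation matrix are invented, and finite sentinels (±8) stand in for infinite boundary thresholds.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Hypothetical sketch of a pairwise composite marginal likelihood for a
# multivariate ordered-probit model; the setup is invented for illustration.

def bvn_rect(a, b, rho):
    """P(a[0] < Z1 <= b[0], a[1] < Z2 <= b[1]) for a standard bivariate
    normal with correlation rho, via inclusion-exclusion on the CDF."""
    cov = [[1.0, rho], [rho, 1.0]]
    F = lambda u, v: multivariate_normal.cdf([u, v], mean=[0.0, 0.0], cov=cov)
    return F(b[0], b[1]) - F(a[0], b[1]) - F(b[0], a[1]) + F(a[0], a[1])

def pairwise_cml(y, mu, Sigma, cuts):
    """Sum of log bivariate rectangle probabilities over all response pairs.
    y: (n, K) integer ordinal responses; mu: (n, K) latent means;
    Sigma: (K, K) latent correlation matrix; cuts: ordered thresholds
    (finite sentinels such as +/-8 stand in for +/- infinity)."""
    n, K = y.shape
    ll = 0.0
    for i in range(n):
        for k in range(K):
            for l in range(k + 1, K):
                a = (cuts[y[i, k]] - mu[i, k], cuts[y[i, l]] - mu[i, l])
                b = (cuts[y[i, k] + 1] - mu[i, k], cuts[y[i, l] + 1] - mu[i, l])
                ll += np.log(bvn_rect(a, b, Sigma[k, l]))
    return ll
```

Maximizing this surrogate over the model parameters replaces the K-dimensional integral that MSL must simulate with a sum of cheap two-dimensional integrals, which is the source of the computational savings the chapter reports.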
We consider forecast combination and, indirectly, model selection for VAR models when there is uncertainty about which variables to include in the model in addition to the forecast variables. The key difference from traditional Bayesian variable selection is that we also allow for uncertainty regarding which endogenous variables to include in the model. That is, all models include the forecast variables, but may otherwise have differing sets of endogenous variables. This is a difficult problem to tackle with a traditional Bayesian approach. Our solution is to focus on the forecasting performance for the variables of interest, and we construct model weights from the predictive likelihood of the forecast variables. The procedure is evaluated in a small simulation study and is found to perform competitively in applications to real-world data.
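The weighting scheme described here can be sketched directly: each candidate model's weight is proportional to its predictive likelihood for the forecast variables, and the combined forecast is the weighted average. The log predictive likelihoods and point forecasts below are placeholders, not values from the paper.

```python
import numpy as np

# Illustrative sketch of predictive-likelihood model weighting for
# forecast combination; inputs are invented placeholders.

def predictive_weights(log_pred_lik, prior=None):
    """Model weights proportional to (prior x predictive likelihood),
    computed stably by subtracting the maximum before exponentiating."""
    lp = np.asarray(log_pred_lik, dtype=float)
    if prior is not None:
        lp = lp + np.log(prior)
    lp -= lp.max()                 # guard against overflow/underflow
    w = np.exp(lp)
    return w / w.sum()

def combine_forecasts(forecasts, log_pred_lik):
    """Weighted average of each model's forecast for the variables of interest."""
    w = predictive_weights(log_pred_lik)
    return w @ np.asarray(forecasts)

# e.g. three candidate VARs that differ in which endogenous variables they include
w = predictive_weights([-10.2, -9.8, -14.5])
```

Models that forecast the variables of interest poorly receive weights near zero, so the combination also acts as a soft form of model selection.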
Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits, algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities, and that works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
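The basic mechanics of a sequential posterior simulator can be illustrated with a minimal data-tempering sketch: particles drawn from the prior are reweighted one observation at a time, with resampling when the effective sample size drops. This is not the paper's algorithm; the model (normal mean with known variance and a conjugate prior) is chosen purely so the exact posterior is available for checking, and a serious implementation would add Metropolis "move" steps after each resampling to restore particle diversity.

```python
import numpy as np

# Minimal data-tempering sequential Monte Carlo sketch (illustrative only).
# Model: y_t ~ N(theta, 1) with prior theta ~ N(0, 1).

rng = np.random.default_rng(0)

def smc_normal_mean(y, n_particles=5000):
    theta = rng.normal(0.0, 1.0, n_particles)    # particles from the prior
    W = np.full(n_particles, 1.0 / n_particles)  # normalized weights
    log_ml = 0.0                                 # log marginal likelihood
    for yt in y:
        incr = np.exp(-0.5 * (yt - theta) ** 2) / np.sqrt(2.0 * np.pi)
        log_ml += np.log(np.sum(W * incr))       # incremental evidence
        W = W * incr
        W /= W.sum()
        if 1.0 / np.sum(W ** 2) < n_particles / 2:   # effective sample size
            idx = rng.choice(n_particles, size=n_particles, p=W)
            theta = theta[idx]
            W = np.full(n_particles, 1.0 / n_particles)
    return np.sum(W * theta), log_ml
```

Note that the marginal likelihood approximation highlighted in the abstract falls out as a by-product (`log_ml`), and the per-particle reweighting step is embarrassingly parallel, which is what makes the approach a natural fit for massively parallel hardware.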
For the past two and a half decades, there has been a marked shift in corporate governance regulations around the world. The change is more remarkable in developing countries, where countries with little or no corporate governance regime have adopted “world class” standards. While there can be a debate on whether law in books actually translates into law in action, in the meantime it is interesting to analyse the law in books to understand how the corporate governance regime has evolved over the past 20 years. This paper quantitatively tracks 21 countries, most of them developing and emerging economies, over a period of 20 years. The period covers 1995 to 2014 and thus traverses the pre- and post-crisis periods around 1999 and 2008. The paper thereby also provides a snapshot of the macro-legal changes that countries engage in hoping to stave off the next crisis. The paper uses over 50 parameters modelled on the OECD Principles of Corporate Governance. It confirms the suspicion that corporate governance norms across developing economies are converging on the shareholder-primacy end of the continuum. The rate of convergence was highest just before the financial crisis of 2008 and has since slowed down.
The paper uses data collected from experts, who filled out a detailed questionnaire that quizzed them on the rules relating to corporate governance norms in their country and asked them to retrospectively check their data at five-year intervals over the past 20 years. This provided an excellent overview of how the law on corporate governance has evolved over the past two decades. The data were then tabulated using a scoring sheet and aggregated using item response theory (IRT), a Bayesian method similar to factor analysis. The paper then follows a comparative approach, using heatmaps to analyse the evolution of corporate governance in developing countries.
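The flavor of the IRT aggregation step can be conveyed with a small sketch of the kind of two-parameter logistic (2PL) model often used for binary rule-present/rule-absent items; the discriminations and difficulties here are invented, and the paper's Bayesian estimation is far richer than this grid-based posterior-mode scoring.

```python
import numpy as np

# Hypothetical 2PL item-response sketch: binary "rule present" indicators
# are aggregated into a latent country-level score. All numbers invented.

def p_item(theta, a, b):
    """2PL: probability that a country with latent score theta codes an
    item 1, with discrimination a and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def map_score(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Posterior-mode latent score under a standard normal prior,
    found by brute-force grid search."""
    logpost = -0.5 * grid ** 2                  # N(0, 1) prior
    for r, ai, bi in zip(responses, a, b):
        p = p_item(grid, ai, bi)
        logpost += r * np.log(p) + (1 - r) * np.log(1 - p)
    return grid[np.argmax(logpost)]
```

A country that adopts most of the coded rules receives a high latent score; repeating the scoring for each five-year wave yields the time paths that the paper's heatmaps compare across countries.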
Corporate governance norms across developing economies are converging on the shareholder-primacy end of the continuum. The rate of convergence was highest just before the financial crisis of 2008 and has since slowed down.
This is the first time that corporate governance panel data analysis has been carried out on leading developing countries across so many parameters for such a long period. The paper also uses Bayesian IRT modelling to analyse this evolution, an approach that is novel, especially in the corporate governance literature. The paper thus provides a clear view of the evolution of corporate governance norms and of how they are converging on a particular ideology.
Within spatial econometrics, a whole family of different spatial specifications has been developed, with associated estimators and tests. This leads to issues of model comparison and model choice: measuring the relative merits of alternative specifications and then using appropriate criteria to choose the “best” model or to compute relative model probabilities. Bayesian theory provides a comprehensive and coherent framework for such model choice, including both nested and non-nested models within the choice set. The paper reviews the potential application of this Bayesian theory to spatial econometric models, examining the conditions and assumptions under which application is possible. Problems of prior distributions are outlined, and Bayes factors and marginal likelihoods are derived for a particular subset of spatial econometric specifications. These are then applied to two well-known spatial data sets to illustrate the methods. Future possibilities, and comparisons with other approaches to both Bayesian and non-Bayesian model choice, are discussed.
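The machinery of a Bayes factor can be shown on a deliberately simple (non-spatial) pair of models; the spatial case replaces the likelihood below with one containing the log-determinant term of the spatial transformation, but the marginal-likelihood-ratio logic is identical. Everything here is illustrative.

```python
import numpy as np

# Illustrative Bayes factor via direct marginal-likelihood integration.
# Model 1: y_i ~ N(mu, 1), prior mu ~ N(0, prior_sd^2).
# Model 0: point null mu = 0 (no parameters to integrate out).

def log_ml_normal(y, prior_sd):
    """Marginal likelihood by numerical integration over a grid of mu."""
    mu = np.linspace(-10, 10, 4001)
    dmu = mu[1] - mu[0]
    log_prior = (-0.5 * (mu / prior_sd) ** 2
                 - np.log(prior_sd * np.sqrt(2 * np.pi)))
    log_lik = np.sum(
        -0.5 * (np.asarray(y)[:, None] - mu[None, :]) ** 2
        - 0.5 * np.log(2 * np.pi), axis=0)
    return np.log(np.sum(np.exp(log_prior + log_lik)) * dmu)

def log_ml_null(y):
    y = np.asarray(y)
    return np.sum(-0.5 * y ** 2 - 0.5 * np.log(2 * np.pi))

y = [0.3, -0.1, 0.2]
log_bf = log_ml_normal(y, prior_sd=1.0) - log_ml_null(y)  # > 0 favors free mu
```

Because each marginal likelihood integrates the parameter out against its prior, the Bayes factor automatically penalizes the richer model when the data are consistent with the simpler one, which is why the prior specification issues discussed in the paper matter so much.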
Copula modeling enables the analysis of multivariate count data that previously required imposing potentially undesirable correlation restrictions or restricting attention to models with only a few outcomes. This article presents a method for analyzing correlated counts that is appealing because it retains well-known marginal distributions for each response while simultaneously allowing for flexible correlations among the outcomes. The proposed framework extends the applicability of the method to settings with high-dimensional outcomes and provides an efficient simulation method to generate the correlation matrix in a single step. Another open problem that is tackled is that of model comparison. In particular, the article presents techniques for estimating marginal likelihoods and Bayes factors in copula models. The methodology is implemented in a study of the joint behavior of four categories of US technology patents. The results reveal that patent counts exhibit high levels of correlation among categories and that joint modeling is crucial for eliciting the interactions among these variables.
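The core trick, keeping known marginals while letting a copula carry the dependence, can be sketched with a Gaussian copula generating correlated Poisson counts. This is a simulation sketch under invented parameters, not the article's estimation procedure.

```python
import numpy as np
from scipy.stats import norm, poisson

# Gaussian-copula sketch for correlated counts: correlated latent normals
# are pushed through their CDF (uniform margins), then through Poisson
# quantile functions, so each margin stays exactly Poisson while the
# copula induces dependence across outcomes. All parameters invented.

rng = np.random.default_rng(1)

def copula_counts(n, mus, R):
    """n draws of len(mus) Poisson counts coupled by correlation matrix R."""
    L = np.linalg.cholesky(R)
    z = rng.standard_normal((n, len(mus))) @ L.T   # correlated N(0,1) draws
    u = norm.cdf(z)                                # uniform margins
    return np.column_stack(
        [poisson.ppf(u[:, j], mus[j]) for j in range(len(mus))])
```

Running this with a high off-diagonal entry in `R` produces count vectors whose margins remain Poisson but whose empirical correlation is strongly positive, which is precisely the flexibility the article exploits for the four patent categories.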
We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
Most spatial econometrics work focuses on spatial dependence in the regressand or disturbances. However, LeSage and Pace (2009) as well as Pace and LeSage (2009) showed that the bias in β from applying OLS to a regressand generated from a spatial autoregressive process is exacerbated by spatial dependence in the regressor. Also, the marginal likelihood function or restricted maximum likelihood (REML) function includes a determinant term involving the regressors. Therefore, high dependence in the regressor may affect the likelihood through this term. In addition, Bowden and Turkington (1984) showed that regressor temporal autocorrelation had a non-monotonic effect on instrumental variable estimators.
We provide empirical evidence that many common economic variables used as regressors (e.g., income, race, and employment) exhibit high levels of spatial dependence. Based on this observation, we conduct a Monte Carlo study of maximum likelihood (ML), REML and two instrumental variable specifications for spatial autoregressive (SAR) and spatial Durbin models (SDM) in the presence of spatially correlated regressors.
Findings indicate that, as spatial dependence in the regressor rises, REML outperforms ML and the performance of the instrumental variable methods suffers. The combination of correlated regressors and the SDM specification provides a challenging environment for instrumental variable techniques.
We also examine estimates of marginal effects and show that these behave better than estimates of the underlying model parameters used to construct marginal effects estimates. Suggestions for improving design of Monte Carlo experiments are provided.
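A data-generating process of the kind used in such Monte Carlo designs can be sketched as follows; the weight matrix (a toy one-dimensional chain) and all parameter values are invented, not taken from the study. Both the regressor and the outcome follow spatial autoregressive processes, x = (I − ρₓW)⁻¹e and y = (I − ρᵧW)⁻¹(xβ + u).

```python
import numpy as np

# Illustrative SAR data-generating process with a spatially dependent
# regressor; W and all parameters are invented for the sketch.

rng = np.random.default_rng(2)

def row_norm_chain_W(n):
    """Row-normalized nearest-neighbor weights on a 1-D chain (toy W)."""
    W = np.zeros((n, n))
    for i in range(n):
        for j in (i - 1, i + 1):
            if 0 <= j < n:
                W[i, j] = 1.0
    return W / W.sum(axis=1, keepdims=True)

def sar_dgp(n, rho_x, rho_y, beta):
    """Regressor and regressand each generated by a SAR process."""
    W = row_norm_chain_W(n)
    I = np.eye(n)
    x = np.linalg.solve(I - rho_x * W, rng.standard_normal(n))
    y = np.linalg.solve(I - rho_y * W, beta * x + rng.standard_normal(n))
    return x, y

x, y = sar_dgp(400, rho_x=0.8, rho_y=0.7, beta=1.0)
ols_beta = (x @ y) / (x @ x)   # OLS that ignores spatial dependence in y
```

Comparing `ols_beta` with the true β across replications, and across values of ρₓ, is the kind of exercise that reveals how dependence in the regressor interacts with dependence in the regressand.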
For over three decades, vector autoregressions have played a central role in empirical macroeconomics. These models are general, can capture sophisticated dynamic behavior, and can be extended to include features such as structural instability, time-varying parameters, dynamic factors, threshold-crossing behavior, and discrete outcomes. Building upon growing evidence that the assumption of linearity may be undesirable in modeling certain macroeconomic relationships, this article seeks to add to recent advances in VAR modeling by proposing a nonparametric dynamic model for multivariate time series. In this model, the problems of modeling and estimation are approached from a hierarchical Bayesian perspective. The article considers the issues of identification, estimation, and model comparison, enabling nonparametric VAR (or NPVAR) models to be fit efficiently by Markov chain Monte Carlo (MCMC) algorithms and compared to parametric and semiparametric alternatives by marginal likelihoods and Bayes factors. Among other benefits, the methodology allows for a more careful study of structural instability while guarding against the possibility of unaccounted nonlinearity in otherwise stable economic relationships. Extensions of the proposed nonparametric model to settings with heteroskedasticity and other important modeling features are also considered. The techniques are employed to study the postwar U.S. economy, confirming the presence of distinct volatility regimes and supporting the contention that certain nonlinear relationships in the data can remain undetected by standard models.