This paper examines variable selection among various factors related to motor vehicle fatality rates using a rich set of panel data. Four Bayesian methods are used. These include Extreme Bounds Analysis (EBA), Stochastic Search Variable Selection (SSVS), Bayesian Model Averaging (BMA), and Bayesian Additive Regression Trees (BART). The first three of these employ parameter estimation, the last, BART, involves no parameter estimation. Nonetheless, it also has implications for variable selection. The variables examined in the models include traditional motor vehicle and socioeconomic factors along with important policy-related variables. Policy recommendations are suggested with respect to cell phone use, modernization of the fleet, alcohol use, and diminishing suicidal behavior.
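To illustrate the variable-selection logic behind a method such as BMA, the following toy sketch enumerates all predictor subsets, weights each model by a BIC approximation to its marginal likelihood, and reports posterior inclusion probabilities. The data and variable names (`fleet_age`, `cell_use`, `noise`) are hypothetical, and the sketch does not reproduce the paper's actual EBA, SSVS, BMA, or BART machinery.

```python
import itertools
import numpy as np

def bma_inclusion_probs(X, y, names):
    """Toy Bayesian model averaging over all predictor subsets.

    Each model's marginal likelihood is approximated by exp(-BIC/2);
    posterior inclusion probabilities are the summed weights of the
    models containing each predictor.
    """
    n, p = X.shape
    results = []
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
            resid = y - Z @ beta
            sigma2 = resid @ resid / n
            loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
            bic = -2 * loglik + Z.shape[1] * np.log(n)
            results.append((subset, bic))
    bics = np.array([b for _, b in results])
    w = np.exp(-0.5 * (bics - bics.min()))
    w /= w.sum()
    return {names[j]: sum(wi for (s, _), wi in zip(results, w) if j in s)
            for j in range(p)}

rng = np.random.default_rng(0)
n = 200
X = rng.standard_normal((n, 3))
y = 2.0 * X[:, 0] + rng.standard_normal(n)  # only the first variable matters
pip = bma_inclusion_probs(X, y, ["fleet_age", "cell_use", "noise"])
```

With a strong true effect on the first variable, its inclusion probability should be close to one, while the irrelevant variables are penalized by the BIC term.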
Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
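A minimal sketch of the boundary problem and the averaging idea: combine a stationary AR(1) model (parameter inside the region) with a random-walk model (parameter fixed at the boundary), weighting the two by marginal likelihood. The error variance is fixed at one and the AR coefficient gets a flat prior; this is an illustrative simplification, not the paper's full procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 120
rho_true = 0.98                      # near the unit-root boundary
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho_true * y[t - 1] + rng.standard_normal()
y0, y1 = y[:-1], y[1:]

def loglik(rho):
    e = y1 - rho * y0
    return -0.5 * (e @ e)            # error variance fixed at 1 for the sketch

# Model A: stationary AR(1), flat prior on rho over (-1, 1),
# marginal likelihood by grid integration (stabilized in logs)
grid = np.linspace(-0.999, 0.999, 2001)
ll = np.array([loglik(r) for r in grid])
shift = ll.max()
step = grid[1] - grid[0]
log_mA = shift + np.log(np.sum(np.exp(ll - shift)) * step / 2.0)

# Model B: random walk, rho fixed at the boundary value 1
log_mB = loglik(1.0)

# posterior probability of the boundary (random-walk) model, equal prior odds
post_rw = 1.0 / (1.0 + np.exp(log_mA - log_mB))
```

The posterior model probability `post_rw` is the quantity that the averaging procedure uses to mix inference "at" and "near" the boundary.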
Within spatial econometrics, a whole family of different spatial specifications has been developed, with associated estimators and tests. This leads to issues of model comparison and model choice: measuring the relative merits of alternative specifications and then using appropriate criteria to choose the “best” model, or computing relative model probabilities. Bayesian theory provides a comprehensive and coherent framework for such model choice, encompassing both nested and non-nested models within the choice set. The paper reviews the potential application of this Bayesian theory to spatial econometric models, examining the conditions and assumptions under which application is possible. Problems with prior distributions are outlined, and Bayes factors and marginal likelihoods are derived for a particular subset of spatial econometric specifications. These are then applied to two well-known spatial data sets to illustrate the methods. Future possibilities, and comparisons with other Bayesian and non-Bayesian approaches to model choice, are discussed.
Standard estimation of ARMA models in which the AR and MA roots nearly cancel, so that individual coefficients are only weakly identified, often produces inferential ranges for individual coefficients that give a spurious appearance of accuracy. We remedy this problem with a model that uses a simple mixture prior. The posterior mixing probability is derived using Bayesian methods, but we show that the method works well in both Bayesian and frequentist setups. In particular, we show that our mixture procedure weights standard results heavily when given data from a well-identified ARMA model (which does not exhibit near root cancellation) and weights heavily an uninformative inferential region when given data from a weakly-identified ARMA model (with near root cancellation). When our procedure is applied to a well-identified process the investigator gets the “usual results,” so there is no important statistical cost to using our procedure. On the other hand, when our procedure is applied to a weakly identified process, the investigator learns that the data tell us little about the parameters – and is thus protected against making spurious inferences. We recommend that mixture models be computed routinely when inference about ARMA coefficients is of interest.
Decisions pertaining to working capital management play a pivotal role in firms’ short-term financial decisions. The purpose of this paper is to examine the impact of working capital on the profitability of Indian corporate entities.
Both classical panel analysis and Bayesian techniques are employed, which provides the opportunity not only to perform a comparative analysis but also to allow flexibility in the prior distribution assumptions.
It is found that a longer cash conversion period has a detrimental influence on profitability. Financial soundness indicators play a significant role in determining firm profitability. Larger firms appear to be more profitable, an effect that is significant under the Bayesian approach, and the Bayesian approach leads to a considerable gain in estimation fit.
Observing the highly skewed distribution of the dependent variable, a multivariate Student t-distribution is considered alongside the normal distribution to model the stochastic term, and the Bayesian methodology is applied accordingly.
The analysis of working capital is performed for firms in the Indian context, applying the Bayesian methodology to a balanced panel spanning 2003 to 2012. To the best of the authors’ knowledge, this is the first study to apply a Bayesian approach with panel data to the analysis of working capital management for Indian firms.
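The Student-t error model mentioned above is commonly handled in Bayesian work via the scale-mixture-of-normals representation, which yields a simple Gibbs sampler. The sketch below shows that device for a plain cross-sectional regression with the error variance fixed at one and a flat prior on the coefficients; it is a hypothetical illustration, not the authors' panel specification.

```python
import numpy as np

def gibbs_t_regression(X, y, nu=5.0, n_iter=2000, burn=500, seed=0):
    """Gibbs sampler for linear regression with Student-t errors.

    Uses the scale-mixture representation: e_i = z_i / sqrt(lam_i),
    lam_i ~ Gamma(nu/2, rate=nu/2). Toy version: sigma^2 fixed at 1,
    flat prior on beta.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    draws = []
    for it in range(n_iter):
        resid = y - X @ beta
        # lam_i | rest ~ Gamma((nu+1)/2, rate=(nu + resid_i^2)/2)
        lam = rng.gamma((nu + 1) / 2, 2.0 / (nu + resid**2))
        # beta | rest ~ N(b_hat, V): weighted-least-squares quantities
        W = X * lam[:, None]
        V = np.linalg.inv(X.T @ W)
        b_hat = V @ (W.T @ y)
        beta = rng.multivariate_normal(b_hat, V)
        if it >= burn:
            draws.append(beta)
    return np.array(draws)

rng = np.random.default_rng(42)
n = 300
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = 1.0 + 0.5 * X[:, 1] + rng.standard_t(df=5, size=n)  # heavy-tailed errors
draws = gibbs_t_regression(X, y)
beta_mean = draws.mean(axis=0)
```

Because each latent precision `lam_i` shrinks toward zero for large residuals, outlying observations are automatically downweighted, which is the practical payoff of the t specification for skewed or heavy-tailed profitability data.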
A Monte Carlo experiment is used to examine the size and power properties of alternative Bayesian tests for unit roots. Four different prior distributions for the root that is potentially unity – a uniform prior and priors attributable to Jeffreys, Lubrano, and Berger and Yang – are used in conjunction with two testing procedures: a credible interval test and a Bayes factor test. Two extensions are also considered: a test based on model averaging with different priors and a test with a hierarchical prior for a hyperparameter. The tests are applied to both trending and non-trending series. Our results favor the use of a prior suggested by Lubrano. Outcomes from applying the tests to some Australian macroeconomic time series are presented.
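The credible-interval test can be sketched very simply: form the posterior of the AR(1) coefficient on a grid and check whether the 95% credible interval contains unity. The sketch below uses a flat prior with the error variance fixed at its OLS estimate; the paper's specific priors (Jeffreys, Lubrano, Berger and Yang) and the Bayes factor test are not implemented here.

```python
import numpy as np

def credible_interval_unit_root_test(y, level=0.95):
    """Reject a unit root if 1 lies outside the posterior credible
    interval for rho in y_t = rho * y_{t-1} + e_t (toy sketch:
    flat prior on rho, error variance fixed at its OLS estimate)."""
    y0, y1 = y[:-1], y[1:]
    rho_hat = (y0 @ y1) / (y0 @ y0)
    resid = y1 - rho_hat * y0
    sigma2 = resid @ resid / len(resid)
    grid = np.linspace(rho_hat - 0.5, rho_hat + 0.5, 4001)
    ll = np.array([-0.5 * np.sum((y1 - r * y0) ** 2) / sigma2 for r in grid])
    post = np.exp(ll - ll.max())
    post /= post.sum()
    cdf = np.cumsum(post)
    lo = grid[np.searchsorted(cdf, (1 - level) / 2)]
    hi = grid[np.searchsorted(cdf, 1 - (1 - level) / 2)]
    return lo, hi, not (lo <= 1.0 <= hi)

rng = np.random.default_rng(3)
# random walk: the interval should typically contain 1
rw = np.cumsum(rng.standard_normal(300))
lo, hi, reject_rw = credible_interval_unit_root_test(rw)
# clearly stationary AR(1): the interval should exclude 1
ar = np.zeros(300)
for t in range(1, 300):
    ar[t] = 0.5 * ar[t - 1] + rng.standard_normal()
lo2, hi2, reject_ar = credible_interval_unit_root_test(ar)
```

The size and power properties examined in the Monte Carlo experiment amount to how often such an interval excludes unity under each data-generating process and prior.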
We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
Small-scale VARs are widely used in macroeconomics for forecasting US output, prices, and interest rates. However, recent work suggests these models may exhibit instabilities. As such, a variety of estimation or forecasting methods might be used to improve their forecast accuracy. These include using different observation windows for estimation, intercept correction, time-varying parameters, break dating, Bayesian shrinkage, and model averaging, among others. This paper compares the effectiveness of such methods in real-time forecasting. We use forecasts from univariate time series models, the Survey of Professional Forecasters, and the Federal Reserve Board's Greenbook as benchmarks.
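One of the methods listed, Bayesian shrinkage, can be illustrated with a ridge-penalized VAR(1): the penalty plays the role of a crude prior pulling coefficients toward zero (a stand-in for a Minnesota-type prior, not the specific estimators compared in the paper). The data here are simulated, not the US series used in the study.

```python
import numpy as np

def fit_var1(Y, lam=0.0):
    """Fit a VAR(1), Y_t = Y_{t-1} A + e_t, by penalized least squares.

    lam > 0 shrinks the coefficient matrix toward zero (a crude proxy
    for Bayesian shrinkage); lam = 0 gives plain OLS.
    """
    X, Z = Y[:-1], Y[1:]
    k = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ Z)

rng = np.random.default_rng(7)
k, T = 3, 60
A_true = 0.4 * np.eye(k)
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A_true + rng.standard_normal(k)

train, test = Y[:40], Y[39:]          # overlap one row for one-step forecasts
A_ols = fit_var1(train, lam=0.0)
A_shrunk = fit_var1(train, lam=5.0)

def fmse(A):
    preds = test[:-1] @ A             # one-step-ahead forecasts
    return np.mean((test[1:] - preds) ** 2)

mse_ols, mse_shrunk = fmse(A_ols), fmse(A_shrunk)
```

In short samples like this, shrinkage trades a little bias for lower estimation variance, which is the mechanism by which it can improve real-time forecast accuracy.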
This paper aims to present several Bayesian specification tests for both in- and out-of-sample situations.
The authors focus on the Bayesian equivalents of the frequentist approach for testing heteroskedasticity, autocorrelation and functional form specification. For out-of-sample diagnostics, the authors consider several tests to evaluate the predictive ability of the model.
The authors demonstrate the performance of these tests using an application on the relationship between price and occupancy rate from the hotel industry. For purposes of comparison, the authors also provide evidence from traditional frequentist tests.
There certainly exist other issues and diagnostic tests that are not covered in this paper. The issues that are addressed, however, are critically important and can be applied to most modeling situations.
With the increased use of the Bayesian approach in various modeling contexts, this paper serves as an important guide for diagnostic testing in Bayesian analysis. Diagnostic analysis is essential and should always accompany the estimation of regression models.
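One standard way to build a Bayesian analogue of a frequentist specification test is a posterior predictive check. The sketch below tests for heteroskedasticity with a Breusch-Pagan-style discrepancy (the squared correlation between squared residuals and the regressor), comparing observed and replicated data under conjugate posterior draws. The data are simulated; this is an illustrative device in the spirit of the paper's diagnostics, not its exact tests.

```python
import numpy as np

def pp_het_check(X, y, n_draws=2000, seed=0):
    """Posterior predictive p-value for heteroskedasticity in a
    homoskedastic linear model (flat prior, normal-inverse-gamma
    posterior draws). Small p-values indicate misfit."""
    rng = np.random.default_rng(seed)
    n = len(y)
    Z = np.column_stack([np.ones(n), X])
    XtX_inv = np.linalg.inv(Z.T @ Z)
    b_hat = XtX_inv @ Z.T @ y
    s2 = np.sum((y - Z @ b_hat) ** 2)
    exceed = 0
    for _ in range(n_draws):
        sigma2 = s2 / rng.chisquare(n - Z.shape[1])      # draw sigma^2
        beta = rng.multivariate_normal(b_hat, sigma2 * XtX_inv)
        e_obs = y - Z @ beta
        t_obs = np.corrcoef(e_obs**2, X)[0, 1] ** 2      # observed discrepancy
        y_rep = Z @ beta + rng.normal(0, np.sqrt(sigma2), n)
        e_rep = y_rep - Z @ beta
        t_rep = np.corrcoef(e_rep**2, X)[0, 1] ** 2      # replicated discrepancy
        exceed += t_rep >= t_obs
    return exceed / n_draws

rng = np.random.default_rng(1)
n = 200
x = rng.uniform(1, 3, n)
y_het = 2 + x + rng.normal(0, x, n)    # error variance grows with x
y_hom = 2 + x + rng.normal(0, 1.0, n)  # constant error variance
p_het = pp_het_check(x, y_het)
p_hom = pp_het_check(x, y_hom)
```

A tiny p-value for the heteroskedastic data flags the misspecification, while the homoskedastic data yield an unremarkable p-value, mirroring how the in-sample diagnostics discussed above are used in practice.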