Search results

1 – 10 of 280
Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
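The comparison described above turns on posterior model probabilities computed from marginal likelihoods. A minimal sketch of that computation, not taken from the chapter — the model names and log marginal likelihood values below are made up for illustration:

```python
import math

def posterior_model_probs(log_ml, prior=None):
    """Posterior model probabilities from log marginal likelihoods.

    log_ml: dict mapping model name -> log marginal likelihood
    prior:  dict of prior model probabilities (defaults to equal weights)
    """
    names = list(log_ml)
    if prior is None:
        prior = {m: 1.0 / len(names) for m in names}
    # Work on the log scale and subtract the max for numerical stability.
    logs = {m: log_ml[m] + math.log(prior[m]) for m in names}
    mx = max(logs.values())
    w = {m: math.exp(v - mx) for m, v in logs.items()}
    z = sum(w.values())
    return {m: w[m] / z for m in w}

# Hypothetical values: the richer open-economy model fits better here.
probs = posterior_model_probs({"open_economy": -512.3, "autarky": -516.8})
```

Because the weights come from marginal likelihoods rather than maximized likelihoods, a more heavily parameterized model is automatically penalized unless the extra parameters earn their keep — the overfitting penalty the abstract refers to.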

Book part
Publication date: 1 January 2008

Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk

Abstract

Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
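The boundary problem described above can be read directly off posterior draws. A hedged sketch, assuming a set of posterior draws of an AR(1) coefficient is already available (here faked with a normal sample centred near one; nothing below is from the chapter):

```python
import random

random.seed(0)

# Hypothetical posterior draws of an AR(1) coefficient rho (e.g. from a
# posterior simulator); faked here with a normal sample centred near one.
draws = [random.gauss(0.97, 0.02) for _ in range(10_000)]

# Posterior mass near, and at or beyond, the unit-root boundary rho = 1.
near_boundary = sum(1 for r in draws if r > 0.95) / len(draws)
at_or_beyond = sum(1 for r in draws if r >= 1.0) / len(draws)
```

When `near_boundary` and `at_or_beyond` are both substantial, conditioning on a single model (stationary or unit root) understates uncertainty — which is the situation the model averaging procedure in the chapter is designed to handle.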

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 1 January 2008

Gary Koop, Roberto Leon-Gonzalez and Rodney Strachan

Abstract

This paper develops methods of Bayesian inference in a cointegrating panel data model. This model involves each cross-sectional unit having a vector error correction representation. It is flexible in the sense that different cross-sectional units can have different cointegration ranks and cointegration spaces. Furthermore, the parameters that characterize short-run dynamics and deterministic components are allowed to vary over cross-sectional units. In addition to a noninformative prior, we introduce an informative prior which allows for information about the likely location of the cointegration space and about the degree of similarity in coefficients in different cross-sectional units. A collapsed Gibbs sampling algorithm is developed which allows for efficient posterior inference. Our methods are illustrated using real and artificial data.

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 30 August 2019

Timothy Cogley and Richard Startz

Abstract

Standard estimation of ARMA models in which the AR and MA roots nearly cancel, so that individual coefficients are only weakly identified, often produces inferential ranges for individual coefficients that give a spurious appearance of accuracy. We remedy this problem with a model that uses a simple mixture prior. The posterior mixing probability is derived using Bayesian methods, but we show that the method works well in both Bayesian and frequentist setups. In particular, we show that our mixture procedure weights standard results heavily when given data from a well-identified ARMA model (which does not exhibit near root cancellation) and weights heavily an uninformative inferential region when given data from a weakly-identified ARMA model (with near root cancellation). When our procedure is applied to a well-identified process the investigator gets the “usual results,” so there is no important statistical cost to using our procedure. On the other hand, when our procedure is applied to a weakly identified process, the investigator learns that the data tell us little about the parameters – and is thus protected against making spurious inferences. We recommend that mixture models be computed routinely when inference about ARMA coefficients is of interest.
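The posterior mixing probability described above is just Bayes' rule applied to a two-component mixture. A generic sketch, with made-up log marginal likelihood values — this is not the chapter's implementation:

```python
import math

def mixture_posterior_weight(log_ml_identified, log_ml_uninformative,
                             prior_identified=0.5):
    """Posterior probability of the 'well-identified' mixture component.

    Weight is proportional to prior * marginal likelihood for each
    component, computed on the log scale for stability.
    """
    a = math.log(prior_identified) + log_ml_identified
    b = math.log(1.0 - prior_identified) + log_ml_uninformative
    m = max(a, b)
    wa = math.exp(a - m)
    wb = math.exp(b - m)
    return wa / (wa + wb)

# Well-separated AR and MA roots: data favour the standard component.
w_good = mixture_posterior_weight(-100.0, -110.0)
# Near root cancellation: the uninformative component takes the weight.
w_weak = mixture_posterior_weight(-105.0, -100.0)
```

A low weight on the identified component is the procedure's signal that the data say little about the individual ARMA coefficients.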

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Book part
Publication date: 29 February 2008

Francesco Ravazzolo, Richard Paap, Dick van Dijk and Philip Hans Franses

Abstract

This chapter develops a return forecasting methodology that allows for instability in the relationship between stock returns and predictor variables, model uncertainty, and parameter estimation uncertainty. The predictive regression specification that is put forward allows for occasional structural breaks of random magnitude in the regression parameters, uncertainty about the inclusion of forecasting variables, and uncertainty about parameter values by employing Bayesian model averaging. The implications of these three sources of uncertainty and their relative importance are investigated from an active investment management perspective. It is found that the economic value of incorporating all three sources of uncertainty is considerable. A typical investor would be willing to pay up to several hundred basis points annually to switch from a passive buy-and-hold strategy to an active strategy based on a return forecasting model that allows for model and parameter uncertainty as well as structural breaks in the regression parameters.
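At the point-forecast level, Bayesian model averaging reduces to weighting each model's forecast by its posterior model probability. A trivial hedged sketch with made-up forecasts and probabilities (not the chapter's specification, which also handles breaks and parameter uncertainty):

```python
# Hypothetical one-step-ahead return forecasts from three predictive
# regressions, and their posterior model probabilities.
forecasts = {"m1": 0.012, "m2": 0.004, "m3": -0.002}
probs = {"m1": 0.5, "m2": 0.3, "m3": 0.2}

# The BMA forecast is the probability-weighted average across models.
bma_forecast = sum(probs[m] * forecasts[m] for m in forecasts)
```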

Details

Forecasting in the Presence of Structural Breaks and Model Uncertainty
Type: Book
ISBN: 978-1-84950-540-6

Book part
Publication date: 30 December 2004

Leslie W. Hepple

Abstract

Within spatial econometrics a whole family of different spatial specifications has been developed, with associated estimators and tests. This leads to issues of model comparison and model choice: measuring the relative merits of alternative specifications and then using appropriate criteria to choose the “best” model or relative model probabilities. Bayesian theory provides a comprehensive and coherent framework for such model choice, including both nested and non-nested models within the choice set. The paper reviews the potential application of this Bayesian theory to spatial econometric models, examining the conditions and assumptions under which application is possible. Problems of prior distributions are outlined, and Bayes factors and marginal likelihoods are derived for a particular subset of spatial econometric specifications. These are then applied to two well-known spatial data-sets to illustrate the methods. Future possibilities and comparisons with other approaches to both Bayesian and non-Bayesian model choice are discussed.

Details

Spatial and Spatiotemporal Econometrics
Type: Book
ISBN: 978-0-76231-148-4

Details

Nonlinear Time Series Analysis of Business Cycles
Type: Book
ISBN: 978-0-44451-838-5

Book part
Publication date: 1 January 2008

Deborah Gefang

Abstract

This paper proposes a Bayesian procedure to investigate the purchasing power parity (PPP) utilizing an exponential smooth transition vector error correction model (VECM). Employing a simple Gibbs sampler, we jointly estimate the cointegrating relationship along with the nonlinearities caused by the departures from the long-run equilibrium. By allowing for nonlinear regime changes, we provide strong evidence that PPP holds between the US and each of the remaining G7 countries. The model we employed implies that the dynamics of the PPP deviations can be rather complex, which is attested to by the impulse response analysis.
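The Gibbs sampler mentioned above cycles through full conditional distributions, drawing each parameter block given the rest. The chapter's conditionals (cointegrating vector, transition parameters) are far richer; the sketch below only shows the mechanics on a toy one-regressor model with simulated data, and is not from the chapter:

```python
import random

random.seed(42)

# Toy data from y = 0.5 * x + noise.
n = 200
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [0.5 * xi + random.gauss(0.0, 0.3) for xi in x]

xx = sum(xi * xi for xi in x)
xy = sum(xi * yi for xi, yi in zip(x, y))

# Gibbs sampler: alternate draws from b | s2, data and s2 | b, data.
b, s2 = 0.0, 1.0
draws = []
for it in range(3000):
    # b | s2, data ~ N(xy / xx, s2 / xx) under a flat prior on b.
    b = random.gauss(xy / xx, (s2 / xx) ** 0.5)
    # s2 | b, data ~ inverse gamma; draw via the reciprocal of a gamma.
    resid2 = sum((yi - b * xi) ** 2 for xi, yi in zip(x, y))
    s2 = resid2 / (2.0 * random.gammavariate(n / 2.0, 1.0))
    if it >= 500:                      # discard burn-in draws
        draws.append(b)

post_mean_b = sum(draws) / len(draws)
```

The posterior mean of `b` should recover the simulated slope of 0.5; in the chapter's application the same alternation is what delivers the joint posterior over the cointegrating relationship and the nonlinear adjustment dynamics.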

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 1 January 2008

Christopher J. O’Donnell and Vanessa Rayner

Abstract

In their seminal papers on ARCH and GARCH models, Engle (1982) and Bollerslev (1986) specified parametric inequality constraints that were sufficient for non-negativity and weak stationarity of the estimated conditional variance function. This paper uses Bayesian methodology to impose these constraints on the parameters of an ARCH(3) and a GARCH(1,1) model. The two models are used to explain volatility in the London Metals Exchange Index. Model uncertainty is resolved using Bayesian model averaging. Results include estimated posterior pdfs for one-step-ahead conditional variance forecasts.
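One simple way to impose such inequality constraints in a Bayesian setup is to discard posterior draws that violate them, which is equivalent to truncating the prior to the restricted region. A hedged sketch for GARCH(1,1), with hypothetical draws — not the chapter's estimation code:

```python
def satisfies_garch_constraints(omega, alpha, beta):
    """Engle/Bollerslev-style restrictions for GARCH(1,1):
    non-negativity of the conditional variance and weak stationarity."""
    return omega > 0 and alpha >= 0 and beta >= 0 and alpha + beta < 1

# Hypothetical posterior draws of (omega, alpha, beta).
draws = [
    (0.05, 0.08, 0.90),   # admissible
    (0.02, 0.15, 0.88),   # alpha + beta >= 1: not weakly stationary
    (-0.01, 0.05, 0.90),  # negative omega: variance can go negative
]
kept = [d for d in draws if satisfies_garch_constraints(*d)]
```

Forecast densities (such as the one-step-ahead conditional variance pdfs the abstract mentions) are then built from the retained draws only.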

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Details

New Directions in Macromodelling
Type: Book
ISBN: 978-1-84950-830-8
