Search results

1 – 10 of over 2000
Book part
Publication date: 1 January 2008

Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk

Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior…

Abstract

Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
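
As a rough, self-contained illustration of the kind of model averaging described above (not the chapter's code), the Python sketch below weights a stationary AR(1) model against a random-walk model sitting on the boundary of the parameter region, assuming a known unit error variance, a flat prior for the autoregressive coefficient on (-1, 1), and equal prior model probabilities.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))      # simulated series; the true process is a random walk

def log_lik(rho, y, sigma=1.0):
    """Conditional Gaussian log-likelihood of an AR(1) with coefficient rho."""
    e = y[1:] - rho * y[:-1]
    return norm.logpdf(e, scale=sigma).sum()

# Marginal likelihood of the stationary model: average the likelihood over a
# flat prior on (-1, 1), using a simple grid approximation.
grid = np.linspace(-0.999, 0.999, 2001)
ll = np.array([log_lik(r, y) for r in grid])
log_ml_stationary = ll.max() + np.log(np.mean(np.exp(ll - ll.max())))

# The boundary model fixes rho = 1 (a random walk), so its marginal
# likelihood is just the likelihood evaluated at the boundary.
log_ml_boundary = log_lik(1.0, y)

# Posterior model probabilities with equal prior weights; model-averaged
# forecasts would combine the two models with these weights.
logs = np.array([log_ml_stationary, log_ml_boundary])
post = np.exp(logs - logs.max())
post /= post.sum()
print("P(stationary | y) = %.3f,  P(random walk | y) = %.3f" % tuple(post))
```

With data generated by a random walk, most of the posterior probability typically lands on the boundary model, which is exactly the situation the averaging procedure is designed to handle.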

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of…

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world, with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model against the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
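
To make the overfitting-penalty point concrete, here is a back-of-the-envelope sketch with made-up log-likelihoods (not the chapter's DSGE estimates), in which a BIC-type approximation stands in for the log marginal likelihood: the heavier parameterization of the richer model is penalized more strongly as the sample grows, so the posterior model probability can swing toward the parsimonious specification.

```python
import numpy as np

def log_marginal_bic(rel_loglik_hat, n_params, n_obs):
    """Schwarz (BIC-type) approximation to the log marginal likelihood,
    up to a constant common to all models."""
    return rel_loglik_hat - 0.5 * n_params * np.log(n_obs)

def posterior_model_probs(log_marginals, prior=None):
    log_m = np.asarray(log_marginals, dtype=float)
    if prior is None:
        prior = np.full(log_m.size, 1.0 / log_m.size)   # equal prior model probabilities
    w = np.exp(log_m - log_m.max()) * prior
    return w / w.sum()

# Made-up numbers: the richer model fits better by a fixed 40 log-likelihood
# points at every n (a stylized way of capturing two specifications that stay
# close as the sample grows), but carries 16 extra parameters, so the penalty
# 0.5 * 16 * log(n) eventually dominates.
for n in (80, 200, 1000):
    p = posterior_model_probs([
        log_marginal_bic(rel_loglik_hat=40.0, n_params=36, n_obs=n),  # richer open-economy model
        log_marginal_bic(rel_loglik_hat=0.0, n_params=20, n_obs=n),   # parsimonious alternative
    ])
    print(f"n = {n:4d}:  P(richer) = {p[0]:.3f},  P(parsimonious) = {p[1]:.3f}")
```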

Book part
Publication date: 19 November 2014

Garland Durham and John Geweke

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially…

Abstract

Massively parallel desktop computing capabilities now well within the reach of individual academics modify the environment for posterior simulation in fundamental and potentially quite advantageous ways. But to fully exploit these benefits algorithms that conform to parallel computing environments are needed. This paper presents a sequential posterior simulator designed to operate efficiently in this context. The simulator makes fewer analytical and programming demands on investigators, and is faster, more reliable, and more complete than conventional posterior simulators. The paper extends existing sequential Monte Carlo methods and theory to provide a thorough and practical foundation for sequential posterior simulation that is well suited to massively parallel computing environments. It provides detailed recommendations on implementation, yielding an algorithm that requires only code for simulation from the prior and evaluation of prior and data densities and works well in a variety of applications representative of serious empirical work in economics and finance. The algorithm facilitates Bayesian model comparison by producing marginal likelihood approximations of unprecedented accuracy as an incidental by-product, is robust to pathological posterior distributions, and provides estimates of numerical standard error and relative numerical efficiency intrinsically. The paper concludes with an application that illustrates the potential of these simulators for applied Bayesian inference.
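
A minimal sketch of the general idea, under strong simplifying assumptions and not the paper's algorithm or code: a sequential (data-tempering) simulator for a trivial model, i.i.d. N(theta, 1) data with a N(0, 10²) prior, that only requires simulation from the prior and evaluation of prior and data densities, and that produces a marginal likelihood estimate as an incidental by-product of the incremental weights.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
y = rng.normal(loc=1.5, size=400)        # simulated data
batches = np.array_split(y, 20)          # introduce the data in blocks

N = 5000
theta = rng.normal(0.0, 10.0, size=N)    # particles drawn from the prior
logw = np.zeros(N)
log_ml = 0.0                             # running log marginal likelihood
seen = np.empty(0)

def loglik(th, data):
    """Log-likelihood of N(theta, 1) data, evaluated for every particle in th."""
    return norm.logpdf(data[:, None], loc=th[None, :]).sum(axis=0)

for batch in batches:
    W = np.exp(logw - logw.max()); W /= W.sum()
    incr = loglik(theta, batch)                          # incremental log-weights
    c = incr.max()
    log_ml += c + np.log(np.sum(W * np.exp(incr - c)))   # marginal likelihood by-product
    logw += incr
    seen = np.concatenate([seen, batch])
    W = np.exp(logw - logw.max()); W /= W.sum()
    if 1.0 / np.sum(W ** 2) < N / 2:                     # effective sample size too low
        idx = rng.choice(N, size=N, p=W)
        theta, logw = theta[idx], np.zeros(N)
        # One random-walk Metropolis move to rejuvenate the resampled particles.
        prop = theta + rng.normal(0.0, 0.1, size=N)
        def log_post(th):
            return norm.logpdf(th, loc=0.0, scale=10.0) + loglik(th, seen)
        accept = np.log(rng.uniform(size=N)) < log_post(prop) - log_post(theta)
        theta = np.where(accept, prop, theta)

W = np.exp(logw - logw.max()); W /= W.sum()
print("posterior mean of theta:", np.sum(W * theta))
print("log marginal likelihood estimate:", log_ml)
```

Because the particle operations are embarrassingly parallel across particles, the same structure maps naturally onto the massively parallel hardware discussed in the paper.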

Book part
Publication date: 1 January 2008

Gary Koop, Roberto Leon-Gonzalez and Rodney Strachan

This paper develops methods of Bayesian inference in a cointegrating panel data model. This model involves each cross-sectional unit having a vector error correction…

Abstract

This paper develops methods of Bayesian inference in a cointegrating panel data model. This model involves each cross-sectional unit having a vector error correction representation. It is flexible in the sense that different cross-sectional units can have different cointegration ranks and cointegration spaces. Furthermore, the parameters that characterize short-run dynamics and deterministic components are allowed to vary over cross-sectional units. In addition to a noninformative prior, we introduce an informative prior which allows for information about the likely location of the cointegration space and about the degree of similarity in coefficients in different cross-sectional units. A collapsed Gibbs sampling algorithm is developed which allows for efficient posterior inference. Our methods are illustrated using real and artificial data.
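
The sketch below is not the paper's collapsed Gibbs sampler; it only illustrates, on a toy hierarchical regression with known variances, the idea of a prior that expresses similarity of coefficients across cross-sectional units by shrinking unit-specific coefficients toward a common mean. The cointegration-space prior and the collapsed sampling steps are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
G, T = 10, 40                              # cross-sectional units, time periods
sigma2, tau2 = 1.0, 0.25                   # error and cross-unit variances, treated as known
x = rng.normal(size=(G, T))
beta_true = rng.normal(0.8, np.sqrt(tau2), size=G)
y = beta_true[:, None] * x + rng.normal(scale=np.sqrt(sigma2), size=(G, T))

S = 3000
beta, mu = np.zeros(G), 0.0
beta_draws, mu_draws = np.zeros((S, G)), np.zeros(S)
for s in range(S):
    # Unit-specific coefficients given the common mean mu (conjugate normal update):
    # the prior beta_g ~ N(mu, tau2) shrinks each unit toward mu.
    prec = (x ** 2).sum(axis=1) / sigma2 + 1.0 / tau2
    mean = ((x * y).sum(axis=1) / sigma2 + mu / tau2) / prec
    beta = mean + rng.normal(size=G) / np.sqrt(prec)
    # Common mean given the unit coefficients (flat prior on mu).
    mu = rng.normal(beta.mean(), np.sqrt(tau2 / G))
    beta_draws[s], mu_draws[s] = beta, mu

print("posterior means of the unit coefficients:", beta_draws[1000:].mean(axis=0).round(2))
print("posterior mean of the common mean mu:    ", mu_draws[1000:].mean().round(2))
```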

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 23 June 2016

Jean-Jacques Forneron and Serena Ng

This paper considers properties of an optimization-based sampler for targeting the posterior distribution when the likelihood is intractable. It uses auxiliary statistics to…

Abstract

This paper considers properties of an optimization-based sampler for targeting the posterior distribution when the likelihood is intractable. It uses auxiliary statistics to summarize information in the data and does not directly evaluate the likelihood associated with the specified parametric model. Our reverse sampler approximates the desired posterior distribution by first solving a sequence of simulated minimum distance problems. The solutions are then reweighted by an importance ratio that depends on the prior and the volume of the Jacobian matrix. By a change of variable argument, the output consists of draws from the desired posterior distribution. Optimization always results in acceptable draws. Hence, when the minimum distance problem is not too difficult to solve, combining importance sampling with optimization can be much faster than the method of Approximate Bayesian Computation that by-passes optimization.
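
For intuition, here is a hedged sketch of the reverse-sampler mechanics on a toy model with a known answer (i.i.d. exponential data, a Gamma prior, and the sample mean as the auxiliary statistic), not the authors' implementation. Each simulated minimum distance problem has a closed-form solution, and in this just-identified scalar case the importance ratio reduces to the prior divided by the absolute Jacobian of the simulated auxiliary statistic; the reweighted draws can be checked against the exact conjugate posterior.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(2)
n, a, b = 50, 2.0, 1.0                          # sample size and Gamma(a, b) prior (shape, rate)
theta_true = 3.0
y = rng.exponential(scale=1.0 / theta_true, size=n)
psi_hat = y.mean()                              # auxiliary statistic from the observed data

B = 10000
theta_rs = np.empty(B)
logw = np.empty(B)
for s in range(B):
    u = rng.exponential(scale=1.0, size=n)      # simulation shocks, Exp(1)
    # Simulated auxiliary statistic: psi(theta; u) = mean(u) / theta, so the
    # minimum distance problem argmin_theta (psi_hat - psi(theta; u))^2 has
    # the closed-form solution below.
    theta_s = u.mean() / psi_hat
    jac = u.mean() / theta_s**2                 # |d psi / d theta| at the solution
    logw[s] = gamma.logpdf(theta_s, a, scale=1.0 / b) - np.log(jac)
    theta_rs[s] = theta_s

w = np.exp(logw - logw.max()); w /= w.sum()
post_mean_rs = np.sum(w * theta_rs)
post_mean_exact = (a + n) / (b + n * psi_hat)   # conjugate Gamma posterior mean
print(f"reverse-sampler posterior mean {post_mean_rs:.3f} vs exact {post_mean_exact:.3f}")
```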

Details

Essays in Honor of Aman Ullah
Type: Book
ISBN: 978-1-78560-786-8

Book part
Publication date: 21 February 2008

Mingliang Li and Justin L. Tobias

We describe a new Bayesian estimation algorithm for fitting a binary treatment, ordered outcome selection model in a potential outcomes framework. We show how recent advances in…

Abstract

We describe a new Bayesian estimation algorithm for fitting a binary treatment, ordered outcome selection model in a potential outcomes framework. We show how recent advances in simulation methods, namely data augmentation, the Gibbs sampler and the Metropolis-Hastings algorithm can be used to fit this model efficiently, and also introduce a reparameterization to help accelerate the convergence of our posterior simulator. Conventional “treatment effects” such as the Average Treatment Effect (ATE), the effect of treatment on the treated (TT) and the Local Average Treatment Effect (LATE) are adapted for this specific model, and Bayesian strategies for calculating these treatment effects are introduced. Finally, we review how one can potentially learn (or at least bound) the non-identified cross-regime correlation parameter and use this learning to calculate (or bound) parameters of interest beyond mean treatment effects.
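
A minimal, purely illustrative sketch of the reporting step: once a posterior simulator has produced parameter draws, treatment effects such as the ATE and TT are computed draw by draw, yielding a full posterior distribution for each effect. The "posterior draws" below are random placeholders standing in for output of the authors' data-augmentation/Gibbs/Metropolis-Hastings sampler, which is not reproduced here, and the effects are reported on the latent-index scale of a generic potential-outcomes model.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k, S = 500, 3, 2000
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])   # covariates
D = rng.integers(0, 2, size=n)                                   # treatment indicator

# Placeholder posterior draws of the potential-outcome coefficients
# (hypothetical; in practice these come from the posterior simulator).
beta1_draws = rng.normal([1.0, 0.5, -0.2], 0.05, size=(S, k))
beta0_draws = rng.normal([0.4, 0.5, -0.2], 0.05, size=(S, k))

diff = (beta1_draws - beta0_draws).T                 # (k, S) coefficient contrasts
ate_draws = (X @ diff).mean(axis=0)                  # ATE, one value per posterior draw
tt_draws = (X[D == 1] @ diff).mean(axis=0)           # TT, averaged over the treated only

lo, hi = np.percentile(ate_draws, [2.5, 97.5])
print(f"ATE: posterior mean {ate_draws.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
lo, hi = np.percentile(tt_draws, [2.5, 97.5])
print(f"TT : posterior mean {tt_draws.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```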

Details

Modelling and Evaluating Treatment Effects in Econometrics
Type: Book
ISBN: 978-0-7623-1380-8

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Article
Publication date: 15 August 2019

Sandeep W. Dahake, Abhaykumar M. Kuthe and Mahesh B. Mawale

This study aims to evaluate the usefulness of a customized surgical osteotomy guide (CSOG) for accurate mandibular tumor resection and for improving the accuracy of prefabricated…

Abstract

Purpose

This study aims to evaluate the usefulness of a customized surgical osteotomy guide (CSOG) for accurate mandibular tumor resection and for improving the accuracy of prefabricated customized implant fixation in mandibular reconstruction.

Design/methodology/approach

In all, 30 diseased mandibular RP models (biomodels) were allocated to the study (experimental group, n = 15; control group, n = 15). In the experimental group, CSOGs were used to achieve accurate tumor resection before reconstructing the mandible with a customized implant; in the control group, no CSOG was used and only the preoperative virtual surgical planning (VSP) and a reconstructed RP mandible model served as references. For each patient in both groups, the postoperative mandibular reconstruction data were superimposed onto the corresponding preoperative VSP by registering the images on the non-surgical side of the mandible. 3D measurements were then taken on the reconstructed side, and the preoperative VSP and postoperative reconstructed mandible data were compared. The sum of the differences between the pre- and postoperative data was taken as the total error, and the total errors of the two groups were compared using statistical analysis.
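
A minimal sketch of the error computation and group comparison with entirely made-up numbers (the measurement values and the choice of a Welch t-test are assumptions, not the paper's data or analysis): the total error per model is taken here as the sum of absolute differences between planned (VSP) and postoperative 3D measurements, and the two groups of 15 are then compared.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def total_error(vsp, postop):
    """Sum of absolute differences between planned and achieved 3D measurements (mm)."""
    return np.abs(np.asarray(vsp) - np.asarray(postop)).sum()

def one_model(deviation_sd):
    vsp = rng.normal(40.0, 5.0, 5)                   # planned measurements (mm), made up
    postop = vsp + rng.normal(0.0, deviation_sd, 5)  # achieved measurements (mm), made up
    return total_error(vsp, postop)

experimental = [one_model(0.8) for _ in range(15)]   # with CSOG: smaller deviations assumed
control = [one_model(2.0) for _ in range(15)]        # without CSOG: larger deviations assumed

t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
print(f"mean total error with CSOG:    {np.mean(experimental):.2f} mm")
print(f"mean total error without CSOG: {np.mean(control):.2f} mm")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_value:.4f}")
```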

Findings

The use of a CSOG for accurate tumor resection and exact implant fixation in mandibular reconstruction produced a smaller total error than reconstruction without a CSOG.

Originality/value

The results showed that the benefits provided by the CSOG in mandibular reconstruction justify its use over reconstruction without a CSOG, even compared with freehand tumor resection using a rotating burr.

Book part
Publication date: 30 August 2019

Timothy Cogley and Richard Startz

Standard estimation of ARMA models in which the AR and MA roots nearly cancel, so that individual coefficients are only weakly identified, often produces inferential ranges for…

Abstract

Standard estimation of ARMA models in which the AR and MA roots nearly cancel, so that individual coefficients are only weakly identified, often produces inferential ranges for individual coefficients that give a spurious appearance of accuracy. We remedy this problem with a model that uses a simple mixture prior. The posterior mixing probability is derived using Bayesian methods, but we show that the method works well in both Bayesian and frequentist setups. In particular, we show that our mixture procedure weights standard results heavily when given data from a well-identified ARMA model (which does not exhibit near root cancellation) and weights heavily an uninformative inferential region when given data from a weakly-identified ARMA model (with near root cancellation). When our procedure is applied to a well-identified process the investigator gets the “usual results,” so there is no important statistical cost to using our procedure. On the other hand, when our procedure is applied to a weakly identified process, the investigator learns that the data tell us little about the parameters – and is thus protected against making spurious inferences. We recommend that mixture models be computed routinely when inference about ARMA coefficients is of interest.
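
As a rough stand-in for the posterior mixing probability (not the chapter's derivation), the sketch below weighs a standard ARMA(1,1) component against a root-cancellation component in which the process collapses to white noise, using a BIC-type approximation to each component's evidence: near-cancellation data push the weight away from the standard component, while well-identified data push it toward one.

```python
import numpy as np
from scipy.optimize import minimize

def arma11_loglik(params, y):
    """Conditional (CSS) Gaussian log-likelihood of an ARMA(1,1)."""
    phi, theta, log_sigma = params
    sigma2 = np.exp(2 * log_sigma)
    e = np.zeros_like(y)
    for t in range(1, len(y)):
        e[t] = y[t] - phi * y[t - 1] - theta * e[t - 1]
    return -0.5 * np.sum(np.log(2 * np.pi * sigma2) + e[1:] ** 2 / sigma2)

def mixing_weight(y):
    """Approximate weight on the standard ARMA(1,1) component versus a
    white-noise (root-cancellation) component, via a BIC approximation."""
    n = len(y)
    fit = minimize(lambda p: -arma11_loglik(p, y), x0=[0.1, 0.1, 0.0], method="Nelder-Mead")
    ll_arma = -fit.fun
    ll_wn = -0.5 * n * (np.log(2 * np.pi * y.var()) + 1.0)   # concentrated white-noise likelihood
    bic_arma = ll_arma - 0.5 * 3 * np.log(n)
    bic_wn = ll_wn - 0.5 * 1 * np.log(n)
    m = max(bic_arma, bic_wn)
    return np.exp(bic_arma - m) / (np.exp(bic_arma - m) + np.exp(bic_wn - m))

rng = np.random.default_rng(5)
eps = rng.normal(size=600)
# An ARMA(1,1) with phi = 0.5, theta = -0.5 has a common root and is
# observationally white noise, so the near-cancellation series is just eps.
weak = eps.copy()
strong = np.zeros(600)                     # phi = 0.9, theta = 0.3: well identified
for t in range(1, 600):
    strong[t] = 0.9 * strong[t - 1] + eps[t] + 0.3 * eps[t - 1]

print("weight on standard ARMA results, near-cancellation data:", round(mixing_weight(weak), 3))
print("weight on standard ARMA results, well-identified data:  ", round(mixing_weight(strong), 3))
```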

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Book part
Publication date: 18 October 2019

John Geweke

Bayesian A/B inference (BABI) is a method that combines subjective prior information with data from A/B experiments to provide inference for lift – the difference in a measure of…

Abstract

Bayesian A/B inference (BABI) is a method that combines subjective prior information with data from A/B experiments to provide inference for lift – the difference in a measure of response in control and treatment, expressed as its ratio to the measure of response in control. The procedure is embedded in stable code that can be executed in a few seconds for an experiment, regardless of sample size, and caters to the objectives and technical background of the owners of experiments. BABI provides more powerful tests of the hypothesis of the impact of treatment on lift, and sharper conclusions about the value of lift, than do legacy conventional methods. In application to 21 large online experiments, the credible interval is 60% to 65% shorter than the conventional confidence interval in the median case, and by close to 100% in a significant proportion of cases; in rare cases, BABI credible intervals are longer than conventional confidence intervals and then by no more than about 10%.
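
A minimal sketch of the flavor of calculation involved, under simple assumptions the chapter does not necessarily make: binary responses, independent Beta priors on the control and treatment response rates, and Monte Carlo draws from the resulting conjugate posteriors, from which the posterior of lift and its credible interval follow directly.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical experiment: conversions / exposures in control and treatment.
n_c, x_c = 50_000, 1_000
n_t, x_t = 50_000, 1_080

# Subjective prior information, expressed as Beta(alpha, beta) pseudo-counts
# centered on roughly a 2% response rate (an assumption for illustration).
alpha0, beta0 = 20.0, 980.0

S = 200_000
p_c = rng.beta(alpha0 + x_c, beta0 + n_c - x_c, size=S)   # posterior draws, control rate
p_t = rng.beta(alpha0 + x_t, beta0 + n_t - x_t, size=S)   # posterior draws, treatment rate
lift = (p_t - p_c) / p_c                                  # lift: difference as a ratio to control

lo, hi = np.percentile(lift, [2.5, 97.5])
print(f"posterior mean lift: {lift.mean():.3%}")
print(f"95% credible interval: ({lo:.3%}, {hi:.3%})")
print(f"P(lift > 0 | data): {np.mean(lift > 0):.3f}")
```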

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9
