Search results
1 – 10 of 169

Markus Neumayer, Thomas Suppan and Thomas Bretterklieber
Abstract
Purpose
The application of statistical inversion theory provides a powerful approach for solving estimation problems, including the ability to perform uncertainty quantification (UQ) by means of Markov chain Monte Carlo (MCMC) methods and Monte Carlo integration. This paper aims to analyze the application of a state reduction technique within different MCMC techniques to improve the computational efficiency and the tuning process of these algorithms.
Design/methodology/approach
A reduced state representation is constructed from a general prior distribution. For sampling, the Metropolis-Hastings (MH) algorithm and the Gibbs sampler are used. Efficient proposal generation techniques and techniques for conditional sampling are proposed and evaluated on an exemplary inverse problem.
Findings
For the MH algorithm, high acceptance rates can be obtained with a simple proposal kernel. For the Gibbs sampler, an efficient technique for conditional sampling is identified. The state reduction scheme stabilizes the ill-posed inverse problem, allowing a solution without a dedicated prior distribution, and is suitable for representing general material distributions.
Practical implications
The state reduction scheme and the MCMC techniques can be applied in different imaging problems. The stabilizing nature of the state reduction improves the solution of ill-posed problems. The tuning of the MCMC methods is simplified.
Originality/value
The paper presents a method to improve the solution process of inverse problems within the Bayesian framework. The stabilization of the inverse problem due to the state reduction improves the solution. The approach simplifies the tuning of MCMC methods.
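The random-walk MH step described in this abstract can be sketched as follows. This is a minimal illustration with a simple Gaussian proposal kernel over a low-dimensional (reduced) state, not the authors' implementation; the function name, the toy standard-normal target, and all tuning values are assumptions for the example.

```python
import math
import random

def metropolis_hastings(log_post, x0, step, n_iter=5000, rng=random):
    """Random-walk Metropolis-Hastings with a simple Gaussian proposal kernel."""
    x = list(x0)
    chain = []
    accepted = 0
    for _ in range(n_iter):
        # Propose a Gaussian perturbation of every coordinate of the state.
        prop = [xi + rng.gauss(0.0, step) for xi in x]
        # Symmetric proposal, so the acceptance ratio reduces to the
        # posterior ratio: accept with probability min(1, pi(prop)/pi(x)).
        if math.log(1.0 - rng.random()) < log_post(prop) - log_post(x):
            x = prop
            accepted += 1
        chain.append(list(x))
    return chain, accepted / n_iter

# Toy target: standard normal posterior over a 2-dimensional state.
random.seed(42)
chain, acc_rate = metropolis_hastings(
    lambda x: -0.5 * sum(xi * xi for xi in x), x0=[0.0, 0.0], step=1.0
)
```

The abstract's point about tuning shows up here directly: the only knob is the proposal scale `step`, and the reported high acceptance rates correspond to this ratio staying large even for a simple kernel.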
Ivan Jeliazkov and Esther Hee Lee
Abstract
A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
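The paper's adaptive sampler targets multivariate truncated normals; as a minimal sketch of the underlying idea, a univariate truncated normal can be drawn by inverse-CDF sampling on the truncated range, the same building block used in GHK-style probability simulators. The function name and parameter values below are illustrative assumptions, not the authors' algorithm.

```python
import random
from statistics import NormalDist

def truncnorm_draw(mu, sigma, a, b, rng=random):
    """Draw from N(mu, sigma^2) truncated to [a, b] via the inverse CDF."""
    nd = NormalDist()
    lo = nd.cdf((a - mu) / sigma)    # CDF value at the lower bound
    hi = nd.cdf((b - mu) / sigma)    # CDF value at the upper bound
    u = rng.uniform(lo, hi)          # uniform draw on the truncated CDF range
    return mu + sigma * nd.inv_cdf(u)

random.seed(0)
draws = [truncnorm_draw(0.0, 1.0, 0.5, 2.0) for _ in range(1000)]
```

Every draw lands inside [a, b] by construction, which is why such samplers slot naturally into the Gibbs-style conditional updates the paper builds on.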
Abstract
This paper proposes a Bayesian procedure to investigate the purchasing power parity (PPP) utilizing an exponential smooth transition vector error correction model (VECM). Employing a simple Gibbs sampler, we jointly estimate the cointegrating relationship along with the nonlinearities caused by the departures from the long-run equilibrium. By allowing for nonlinear regime changes, we provide strong evidence that PPP holds between the US and each of the remaining G7 countries. The model we employed implies that the dynamics of the PPP deviations can be rather complex, which is attested to by the impulse response analysis.
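The "simple Gibbs sampler" idea of alternating draws from full conditional distributions can be illustrated on a toy target; the bivariate normal below is an assumption for exposition, not the paper's VECM posterior.

```python
import math
import random

def gibbs_bivariate_normal(rho, n_iter=20000, rng=random):
    """Gibbs sampler for a standard bivariate normal with correlation rho.

    Full conditionals: x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y.
    """
    x = y = 0.0
    s = math.sqrt(1.0 - rho * rho)
    draws = []
    for _ in range(n_iter):
        x = rng.gauss(rho * y, s)   # draw x from its full conditional
        y = rng.gauss(rho * x, s)   # draw y given the freshly updated x
        draws.append((x, y))
    return draws

random.seed(7)
draws = gibbs_bivariate_normal(0.8)
```

In the paper's setting the blocks are the cointegrating parameters and the nonlinear transition parameters rather than two scalars, but the alternating-conditional structure is the same.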
Mohammad Arshad Rahman and Angela Vossmeyer
Abstract
This chapter develops a framework for quantile regression in binary longitudinal data settings. A novel Markov chain Monte Carlo (MCMC) method is designed to fit the model and its computational efficiency is demonstrated in a simulation study. The proposed approach is flexible in that it can account for common and individual-specific parameters, as well as multivariate heterogeneity associated with several covariates. The methodology is applied to study female labor force participation and home ownership in the United States. The results offer new insights at the various quantiles, which are of interest to policymakers and researchers alike.
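Quantile regression, Bayesian or frequentist, is built on the quantile "check" loss; a minimal sketch is below. This is the generic loss function, not the chapter's MCMC scheme, and the grid-search demonstration is an illustrative assumption.

```python
def check_loss(u, p):
    """Quantile check loss rho_p(u) = u * (p - 1{u < 0})."""
    return u * (p - (1.0 if u < 0.0 else 0.0))

# Minimizing the average check loss over a constant recovers the p-th
# sample quantile; for p = 0.5 this reduces to the median (absolute loss).
data = [1.0, 2.0, 3.0, 4.0, 100.0]
best = min((sum(check_loss(d - c, 0.5) for d in data), c)
           for c in [x / 10 for x in range(0, 1100)])[1]
```

In Bayesian treatments this loss typically enters through an asymmetric Laplace working likelihood, which is what makes Gibbs-style MCMC for quantile models tractable; the chapter's specific augmentation is not reproduced here.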
Abstract
In a Bayesian approach, we compare the forecasting performance of five classes of models: ARCH, GARCH, SV, SV-STAR and MSSV, using daily Tehran Stock Exchange (TSE) market data. To estimate the parameters of the models, Markov chain Monte Carlo (MCMC) methods are applied. The results show that the models in the fourth and fifth classes perform better than the models in the other classes.
Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk
Abstract
Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models like unit root and error correction models and further state space and panel data models are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
S.T. Boris Choy, Wai-yin Wan and Chun-man Chan
Abstract
The normal error distribution for the observations and log-volatilities in a stochastic volatility (SV) model is replaced by the Student-t distribution for robustness considerations. The model is then called the t-t SV model throughout this paper. The objectives of the paper are twofold. First, we introduce the scale mixtures of uniform (SMU) and the scale mixtures of normal (SMN) representations of the Student-t density and show that the setup of a Gibbs sampler for the t-t SV model can be simplified. For example, the full conditional distribution of the log-volatilities has a truncated normal distribution that enables an efficient Gibbs sampling algorithm. These representations also provide a means for outlier diagnostics. Second, we consider the so-called t SV model with leverage, where the observations and log-volatilities follow a bivariate t distribution. Returns on exchange rates of the Australian dollar to 10 major currencies are fitted by the t-t SV model and the t SV model with leverage, respectively.
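The SMN representation mentioned here can be sketched in a few lines: a Student-t draw is a normal draw whose precision is scaled by a gamma mixing variable. This shows only the representation itself, under assumed parameter values; the paper's SMU variant and the full Gibbs sampler built on it are not reproduced.

```python
import math
import random

def student_t_via_smn(nu, rng=random):
    """Draw from Student-t(nu) using its scale-mixture-of-normals form:
    lam ~ Gamma(nu/2, rate=nu/2), z ~ N(0, 1)  =>  z / sqrt(lam) ~ t(nu).
    """
    # random.gammavariate takes (shape, scale); scale = 1/rate = 2/nu.
    lam = rng.gammavariate(nu / 2.0, 2.0 / nu)
    return rng.gauss(0.0, 1.0) / math.sqrt(lam)

random.seed(1)
draws = [student_t_via_smn(10.0) for _ in range(20000)]
```

Conditioning on the mixing variables turns the t model back into a conditionally Gaussian one, which is exactly what makes the full conditionals in the paper's Gibbs sampler tractable.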
Gary Koop, Roberto Leon-Gonzalez and Rodney Strachan
Abstract
This paper develops methods of Bayesian inference in a cointegrating panel data model. This model involves each cross-sectional unit having a vector error correction representation. It is flexible in the sense that different cross-sectional units can have different cointegration ranks and cointegration spaces. Furthermore, the parameters that characterize short-run dynamics and deterministic components are allowed to vary over cross-sectional units. In addition to a noninformative prior, we introduce an informative prior which allows for information about the likely location of the cointegration space and about the degree of similarity in coefficients in different cross-sectional units. A collapsed Gibbs sampling algorithm is developed which allows for efficient posterior inference. Our methods are illustrated using real and artificial data.