Search results

11 – 20 of over 9000

Details

Panel Data and Structural Labour Market Models
Type: Book
ISBN: 978-0-44450-319-0

Book part
Publication date: 13 December 2013

Ivan Jeliazkov

Abstract

For over three decades, vector autoregressions have played a central role in empirical macroeconomics. These models are general, can capture sophisticated dynamic behavior, and can be extended to include features such as structural instability, time-varying parameters, dynamic factors, threshold-crossing behavior, and discrete outcomes. Building upon growing evidence that the assumption of linearity may be undesirable in modeling certain macroeconomic relationships, this article seeks to add to recent advances in VAR modeling by proposing a nonparametric dynamic model for multivariate time series. In this model, the problems of modeling and estimation are approached from a hierarchical Bayesian perspective. The article considers the issues of identification, estimation, and model comparison, enabling nonparametric VAR (or NPVAR) models to be fit efficiently by Markov chain Monte Carlo (MCMC) algorithms and compared to parametric and semiparametric alternatives by marginal likelihoods and Bayes factors. Among other benefits, the methodology allows for a more careful study of structural instability while guarding against the possibility of unaccounted nonlinearity in otherwise stable economic relationships. Extensions of the proposed nonparametric model to settings with heteroskedasticity and other important modeling features are also considered. The techniques are employed to study the postwar U.S. economy, confirming the presence of distinct volatility regimes and supporting the contention that certain nonlinear relationships in the data can remain undetected by standard models.
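
As a schematic of the modeling idea (the notation here is illustrative, not necessarily the article's): a parametric VAR posits $y_t = B_1 y_{t-1} + \cdots + B_p y_{t-p} + \varepsilon_t$, whereas the nonparametric version replaces the linear map with an unknown function, $y_t = g(y_{t-1}, \ldots, y_{t-p}) + \varepsilon_t$, where $g$ carries a hierarchical Bayesian prior. Competing specifications $M_1, M_2$ are then compared through the Bayes factor $m(y \mid M_1)/m(y \mid M_2)$ formed from their marginal likelihoods.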

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Book part
Publication date: 21 December 2010

Ivan Jeliazkov and Esther Hee Lee

Abstract

A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
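
One ingredient above is drawing from multivariate truncated normal distributions. The chapter's adaptive sampler is not reproduced here; the sketch below is the standard coordinate-wise Gibbs scheme such samplers build on, with all names and values illustrative.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_tmvn(mu, Sigma, lower, upper, n_draws, seed=0):
    """Gibbs sampler for N(mu, Sigma) truncated to the box [lower, upper]:
    each coordinate is drawn in turn from its univariate truncated-normal
    full conditional."""
    rng = np.random.default_rng(seed)
    d = len(mu)
    Q = np.linalg.inv(Sigma)                # precision matrix
    x = np.clip(mu, lower, upper)           # feasible starting point
    draws = np.empty((n_draws, d))
    for it in range(n_draws):
        for j in range(d):
            rest = np.arange(d) != j
            v = 1.0 / Q[j, j]               # conditional variance of x_j
            m = mu[j] - v * Q[j, rest] @ (x[rest] - mu[rest])
            s = np.sqrt(v)
            a, b = (lower[j] - m) / s, (upper[j] - m) / s
            x[j] = truncnorm.rvs(a, b, loc=m, scale=s, random_state=rng)
        draws[it] = x
    return draws

# Example: a trivariate normal restricted to the positive orthant.
draws = gibbs_tmvn(np.zeros(3), 0.5 + 0.5 * np.eye(3),
                   np.zeros(3), np.full(3, np.inf), n_draws=1000)
```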

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4

Book part
Publication date: 1 January 2008

Michiel de Pooter, Francesco Ravazzolo, Rene Segers and Herman K. van Dijk

Abstract

Several lessons learnt from a Bayesian analysis of basic macroeconomic time-series models are presented for the situation where some model parameters have substantial posterior probability near the boundary of the parameter region. This feature refers to near-instability within dynamic models, to forecasting with near-random walk models, and to clustering of several economic series in a small number of groups within a data panel. Two canonical models are used: a linear regression model with autocorrelation and a simple variance components model. Several well-known time-series models, such as unit root and error correction models, as well as state space and panel data models, are shown to be simple generalizations of these two canonical models for the purpose of posterior inference. A Bayesian model averaging procedure is presented in order to deal with models with substantial probability both near and at the boundary of the parameter region. Analytical, graphical, and empirical results using U.S. macroeconomic data, in particular on GDP growth, are presented.
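
For concreteness, the two canonical models can be sketched as (notation illustrative): a regression with AR(1) errors, $y_t = x_t'\beta + u_t$ with $u_t = \rho u_{t-1} + \varepsilon_t$, and a one-way variance components model, $y_{it} = \mu + \eta_i + \varepsilon_{it}$ with $\eta_i \sim N(0, \sigma_\eta^2)$. The boundary problems discussed above arise as $\rho \to 1$ (near unit root, near-random walk forecasting) and as $\sigma_\eta^2 \to 0$ (no group-level variation, so clustering collapses).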

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 30 August 2019

Timothy Cogley and Richard Startz

Abstract

Standard estimation of ARMA models in which the AR and MA roots nearly cancel, so that individual coefficients are only weakly identified, often produces inferential ranges for individual coefficients that give a spurious appearance of accuracy. We remedy this problem with a model that uses a simple mixture prior. The posterior mixing probability is derived using Bayesian methods, but we show that the method works well in both Bayesian and frequentist setups. In particular, we show that our mixture procedure weights standard results heavily when given data from a well-identified ARMA model (which does not exhibit near root cancellation) and weights heavily an uninformative inferential region when given data from a weakly identified ARMA model (with near root cancellation). When our procedure is applied to a well-identified process, the investigator gets the “usual results,” so there is no important statistical cost to using our procedure. On the other hand, when our procedure is applied to a weakly identified process, the investigator learns that the data tell us little about the parameters, and is thus protected against making spurious inferences. We recommend that mixture models be computed routinely when inference about ARMA coefficients is of interest.
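
The identification problem can be illustrated with an ARMA(1,1), a sketch rather than the chapter's full setup: in $y_t = \phi y_{t-1} + \varepsilon_t - \theta \varepsilon_{t-1}$, the AR and MA roots nearly cancel when $\phi \approx \theta$, leaving the process close to white noise; the likelihood is then nearly flat along the ridge $\phi = \theta$, so $(\phi, \theta)$ is weakly identified even though the model fits well. The mixture prior places weight on both the unrestricted ARMA and the cancelled (white-noise) specification, and the posterior mixing probability adjudicates between them.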

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2

Book part
Publication date: 1 January 2008

Ivan Jeliazkov, Jennifer Graves and Mark Kutzbach

Abstract

In this paper, we consider the analysis of models for univariate and multivariate ordinal outcomes in the context of the latent variable inferential framework of Albert and Chib (1993). We review several alternative modeling and identification schemes and evaluate how each aids or hampers estimation by Markov chain Monte Carlo simulation methods. For each identification scheme we also discuss the question of model comparison by marginal likelihoods and Bayes factors. In addition, we develop a simulation-based framework for analyzing covariate effects that can provide interpretability of the results despite the nonlinearities in the model and the different identification restrictions that can be implemented. The methods are employed to analyze problems in labor economics (educational attainment), political economy (voter opinions), and health economics (consumers’ reliance on alternative sources of medical information).
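
The Albert and Chib (1993) representation invoked above can be sketched as (notation illustrative): a latent $z_i = x_i'\beta + \varepsilon_i$ with $\varepsilon_i \sim N(0, 1)$, and an observed $y_i = j$ whenever $\gamma_{j-1} < z_i \le \gamma_j$ for ordered cutpoints $\gamma_0 < \gamma_1 < \cdots < \gamma_J$. Because the scale of $z_i$ and the location of the cutpoints are not separately identified, restrictions such as fixing a cutpoint or the error variance must be imposed, and the choice among such schemes affects how well the MCMC sampler mixes.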

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 18 October 2019

John Geweke

Abstract

Bayesian A/B inference (BABI) is a method that combines subjective prior information with data from A/B experiments to provide inference for lift – the difference in a measure of response in control and treatment, expressed as its ratio to the measure of response in control. The procedure is embedded in stable code that can be executed in a few seconds for an experiment, regardless of sample size, and caters to the objectives and technical background of the owners of experiments. BABI provides more powerful tests of the hypothesis of the impact of treatment on lift, and sharper conclusions about the value of lift, than do legacy conventional methods. In application to 21 large online experiments, the credible interval is 60% to 65% shorter than the conventional confidence interval in the median case, and by close to 100% in a significant proportion of cases; in rare cases, BABI credible intervals are longer than conventional confidence intervals and then by no more than about 10%.
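
The abstract does not spell out BABI's model, so the following is only a generic conjugate sketch of the lift quantity it targets, with all counts and priors hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical A/B counts: conversions / trials in control and treatment.
conv_c, n_c = 480, 10_000
conv_t, n_t = 540, 10_000

# Beta(1, 1) priors stand in for the subjective priors BABI elicits.
p_c = rng.beta(1 + conv_c, 1 + n_c - conv_c, size=100_000)
p_t = rng.beta(1 + conv_t, 1 + n_t - conv_t, size=100_000)

# Lift: difference in response expressed as a ratio to the control response.
lift = (p_t - p_c) / p_c
lo, hi = np.percentile(lift, [2.5, 97.5])
print(f"posterior mean lift {lift.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```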

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Book part
Publication date: 1 January 2008

Siddhartha Chib and Liana Jacobi

Abstract

We present Bayesian models for finding the longitudinal causal effects of a randomized two-arm training program when compliance with the randomized assignment is less than perfect in the training arm (but perfect in the non-training arm) for reasons that are potentially correlated with the outcomes. We deal with the latter confounding problem under the principal stratification framework of Sommer and Zeger (1991), Frangakis and Rubin (1999), and others. Building on the Bayesian contributions of Imbens and Rubin (1997), Hirano et al. (2000), Yau and Little (2001), and in particular Chib (2007) and Chib and Jacobi (2007, 2008), we construct rich models of the potential outcome sequences (with and without random effects), show how informative priors can be reasonably formulated, and present tuned computational approaches for summarizing the posterior distribution. We also discuss the computation of the marginal likelihood for comparing various versions of our models. We find the causal effects of the observed intake from the predictive distribution of each potential outcome for compliers, calculated from the output of our estimation procedures. We illustrate the techniques and ideas with data from the 1994 JOBS II trial, which was set up to test the efficacy of a job training program on subsequent mental health outcomes.
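
Schematically (notation illustrative, not the authors'): with one-sided noncompliance, each subject belongs to a principal stratum $s \in \{\text{complier}, \text{never-taker}\}$ defined by potential treatment intake, and the estimand is the longitudinal complier effect $\Delta_t = E[Y_t(1) - Y_t(0) \mid s = \text{complier}]$, recoverable here because compliance is perfect in the non-training arm.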

Details

Bayesian Econometrics
Type: Book
ISBN: 978-1-84855-308-8

Book part
Publication date: 4 October 2018

William W. Chow

Abstract

This chapter proposes augmenting a simple income stock price model with spatial structures to evaluate the significance of real and financial linkages in instigating stock market contagion. The treatment is premised upon the clustering of excess returns and volatilities during the subprime crisis, identified by our regime-switching analysis over a long time span, and upon the presence of spatial autocorrelation in the baseline income stock model. With the channel factors manifested as spatial weights, this chapter explores specifications with explicit interrelation of stock price returns and with implicit spatial autocorrelation in the error term for the three-year period from 2007 to 2009. Model validity is assessed by way of model choice and spatial weight selection. The findings show that spatial dependence in either specification is not very sizable, indicating that contagion did not spread quickly in the sample period. Of the various factors considered, non-performing loans, market liquidity, and the credit-to-deposit ratio turn out to be the most important transmission factors, while the current account balance, net FDI flows, and the size of GDP are among the least significant channels. In sum, these results suggest that financial linkages could play a more important role in facilitating shock transmission than real linkages such as trade.
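
Schematically, the two specifications are the spatial lag model $r = \rho W r + X\beta + \varepsilon$ (explicit interdependence of returns) and the spatial error model $r = X\beta + u$ with $u = \lambda W u + \varepsilon$ (implicit spatial autocorrelation), where $W$ is a spatial weight matrix constructed from a candidate channel factor and the magnitude of $\rho$ or $\lambda$ gauges the strength of contagion. The notation is illustrative rather than the chapter's own.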

Details

Banking and Finance Issues in Emerging Markets
Type: Book
ISBN: 978-1-78756-453-4

Article
Publication date: 1 July 2006

George Chang

Abstract

Purpose

The purpose of this paper is to investigate whether the Markov mixture of normals (MMN) model is a viable approach to modeling financial returns.

Design/methodology/approach

This paper adopts a full Bayesian estimation approach based on Gibbs sampling, together with the latent state variable simulation algorithm developed by Chib.
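
As a hedged sketch of the state simulation step (Chib's algorithm draws the full hidden state sequence jointly by forward filtering and backward sampling; the paper's exact implementation is not reproduced, and all names below are illustrative):

```python
import numpy as np

def ffbs_states(y, mu, sigma, P, pi0, rng):
    """Jointly draw the hidden Markov state sequence given the parameters:
    forward-filter the state probabilities, then sample states backward."""
    T, K = len(y), len(mu)
    # Gaussian likelihood of each return under each state
    like = np.exp(-0.5 * ((y[:, None] - mu) / sigma) ** 2) / sigma
    filt = np.zeros((T, K))
    pred = pi0
    for t in range(T):                      # forward filtering
        f = pred * like[t]
        filt[t] = f / f.sum()
        pred = filt[t] @ P
    s = np.empty(T, dtype=int)
    s[T - 1] = rng.choice(K, p=filt[T - 1])
    for t in range(T - 2, -1, -1):          # backward sampling
        b = filt[t] * P[:, s[t + 1]]
        s[t] = rng.choice(K, p=b / b.sum())
    return s

# A full Gibbs sweep would alternate this draw with conjugate updates of
# (mu, sigma) given the states and Dirichlet updates of the rows of P
# given the state-transition counts.
```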

Findings

Using data from the S&P 500 index, the paper first demonstrates that the MMN model is able to capture the unconditional features of the S&P 500 daily returns. It further conducts formal model comparisons to examine the performance of the Markov mixture structures relative to two well-known alternatives, the GARCH and t-GARCH models. The results clearly indicate that MMN models are viable alternatives for modeling financial returns.

Research limitations/implications

The univariate MMN structure in this paper can be generalized to a multivariate setting, which can provide a flexible yet practical approach to modeling multiple time series of asset returns.

Practical implications

Given the encouraging empirical performance of the MMN models, it is hoped that they will prove successful in financial applications such as Value-at-Risk and option pricing.

Originality/value

The paper explicitly formulates the Gibbs sampling procedures for estimating MMN models in a Bayesian framework. It also shows empirically that MMN models are able to capture the stylized features of financial returns. The MMN models and their estimation method in this paper can be applied to other financial data, especially data for which tail probability is of major interest or concern.

Details

Studies in Economics and Finance, vol. 23 no. 2
Type: Research Article
ISSN: 1086-7376
