Search results

1 – 10 of over 4000
Book part
Publication date: 1 July 2015

Enrique Martínez-García

Abstract

The global slack hypothesis is central to the discussion of the trade-offs that monetary policy faces in an increasingly integrated world. The workhorse New Open Economy Macro (NOEM) model of Martínez-García and Wynne (2010), which fleshes out this hypothesis, shows how expected future local inflation and global slack affect current local inflation. In this chapter, I propose the use of the orthogonalization method of Aoki (1981) and Fukuda (1993) on the workhorse NOEM model to further decompose local inflation into a global component and an inflation differential component. I find that the log-linearized rational expectations model of Martínez-García and Wynne (2010) can be solved through two separate subsystems, one describing each of these two components of inflation.

I estimate the full NOEM model with Bayesian techniques using data for the United States and an aggregate of its 38 largest trading partners from 1980Q1 until 2011Q4. The Bayesian estimation recognizes the parameter uncertainty surrounding the model and calls on the data (inflation and output) to discipline the parameterization. My findings show that the strength of the international spillovers through trade – even in the absence of common shocks – is reflected in the response of global inflation and is incorporated into local inflation dynamics. Furthermore, I find that key features of the economy can have different impacts on global and local inflation – in particular, I show that the parameters that determine the import share and the price-elasticity of trade matter in explaining the inflation differential component but not the global component of inflation.
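The Aoki (1981)/Fukuda (1993) orthogonalization the chapter relies on can be illustrated numerically. The Python sketch below uses a made-up symmetric two-country linear system (the matrices D and S are illustrative, not the NOEM model's): averaging and differencing the two countries' state vectors yields two decoupled subsystems, with D + S governing the global component and D - S the differential component.

```python
# Minimal sketch of the Aoki/Fukuda orthogonalization on a hypothetical
# symmetric two-country system x' = A x; D and S are illustrative.
import numpy as np

D = np.array([[0.9, 0.1], [0.0, 0.8]])   # domestic dynamics (assumed)
S = np.array([[0.2, 0.0], [0.1, 0.1]])   # cross-border spillovers (assumed)
A = np.block([[D, S], [S, D]])           # stacked [home; foreign] transition

x_home = np.array([1.0, 0.5])
x_foreign = np.array([0.2, -0.3])
x_next = A @ np.concatenate([x_home, x_foreign])   # one step of the full system

# Orthogonalization: the average and the difference evolve separately
world = 0.5 * (x_home + x_foreign)       # global component
diff = x_home - x_foreign                # differential component
world_next = (D + S) @ world             # global subsystem
diff_next = (D - S) @ diff               # differential subsystem

# The two decoupled subsystems reproduce the full system's step exactly
assert np.allclose(x_next[:2], world_next + 0.5 * diff_next)
assert np.allclose(x_next[2:], world_next - 0.5 * diff_next)
```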

Details

Monetary Policy in the Context of the Financial Crisis: New Challenges and Lessons
Type: Book
ISBN: 978-1-78441-779-6

Article
Publication date: 17 July 2009

Emmanuel Blanchard, Adrian Sandu and Corina Sandu

Abstract

Purpose

The purpose of this paper is to propose a new computational approach for parameter estimation in the Bayesian framework. A posteriori probability density functions are obtained using the polynomial chaos theory for propagating uncertainties through system dynamics. The new method has the advantage of being able to deal with large parametric uncertainties, non‐Gaussian probability densities and nonlinear dynamics.

Design/methodology/approach

The maximum likelihood estimates are obtained by minimizing a cost function derived from the Bayesian theorem. Direct stochastic collocation is used as a less computationally expensive alternative to the traditional Galerkin approach to propagate the uncertainties through the system in the polynomial chaos framework.
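A minimal sketch of this pipeline, under assumptions that are not the paper's (a scalar exponential-decay model in place of the mechanical systems, a Gaussian prior on the uncertain parameter, a made-up noise level), might look as follows: the forward model is evaluated at Gauss-Hermite collocation nodes, projected onto a Hermite polynomial chaos basis, and the resulting surrogate is plugged into a Bayesian cost function that is then minimized.

```python
# Hedged sketch: Bayesian point estimation through a polynomial chaos
# surrogate built by direct stochastic collocation. Model, prior and
# noise level are illustrative assumptions, not the paper's setup.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from scipy.optimize import minimize_scalar

t = np.linspace(0.0, 2.0, 20)
k_true, sigma = 1.3, 0.02                      # "true" decay rate, noise level
rng = np.random.default_rng(0)
y_obs = np.exp(-k_true * t) + sigma * rng.standard_normal(t.size)

mu, s = 1.0, 0.3                               # prior: k = mu + s*xi, xi ~ N(0,1)
P = 5                                          # chaos order

# Direct stochastic collocation: run the model at Gauss-Hermite nodes
nodes, weights = hermegauss(P + 1)
weights = weights / np.sqrt(2.0 * np.pi)       # normalize to the N(0,1) measure
Y = np.exp(-np.outer(mu + s * nodes, t))       # forward-model runs at the nodes

# Project onto probabilists' Hermite polynomials: E[He_n^2] = n!
coeffs = np.zeros((P + 1, t.size))
for n in range(P + 1):
    Hn = hermeval(nodes, np.eye(P + 1)[n])
    coeffs[n] = (weights * Hn) @ Y / math.factorial(n)

def surrogate(xi):
    """Cheap polynomial chaos stand-in for the forward model."""
    basis = np.array([hermeval(xi, np.eye(P + 1)[n]) for n in range(P + 1)])
    return basis @ coeffs

def bayes_cost(xi):
    """Negative log posterior: Gaussian likelihood plus standard normal prior."""
    r = y_obs - surrogate(xi)
    return 0.5 * (r @ r) / sigma**2 + 0.5 * xi**2

xi_map = minimize_scalar(bayes_cost).x
print("MAP estimate of k:", round(mu + s * xi_map, 3))   # close to 1.3
```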

Findings

The new approach is explained and applied to very simple mechanical systems in order to illustrate how the Bayesian cost function can be affected by the noise level in the measurements, by undersampling, by non-identifiability of the system, by non-observability, and by excitation signals that are not rich enough. When the system is non-identifiable and a priori knowledge of the parameter uncertainties is available, regularization techniques can still yield the most likely values among the possible combinations of uncertain parameters that result in the same time responses as the ones observed.

Originality/value

The polynomial chaos method has been shown to be considerably more efficient than Monte Carlo in the simulation of systems with a small number of uncertain parameters. This is believed to be the first time the polynomial chaos theory has been applied to Bayesian estimation.

Details

Engineering Computations, vol. 26 no. 5
Type: Research Article
ISSN: 0264-4401

Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky, by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model over the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
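The last computational step of such a comparison, going from marginal likelihoods to posterior model probabilities, is small enough to sketch. The numbers and model labels below are invented for illustration; only the arithmetic is the point.

```python
# Hedged sketch: posterior model probabilities from log marginal
# likelihoods under equal prior odds. All numbers are made up.
import numpy as np

models = ["full NOEM", "no monetary frictions", "autarky"]
log_ml = np.array([-512.4, -515.1, -530.8])    # hypothetical log marginal likelihoods

log_post = log_ml + np.log(1.0 / len(models))  # equal prior model probabilities
log_post -= log_post.max()                     # stabilize before exponentiating
post = np.exp(log_post)
post /= post.sum()

for name, p in zip(models, post):
    print(f"P({name} | data) = {p:.3f}")
```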

Article
Publication date: 18 January 2016

Huajun Liu, Cailing Wang and Jingyu Yang

Abstract

Purpose

This paper aims to present a novel scheme of multiple vanishing points (VPs) estimation and corresponding lanes identification.

Design/methodology/approach

The scheme proposed here includes two main stages: VP estimation and lane identification. The foremost contribution is VP estimation based on a vanishing-direction hypothesis and Bayesian posterior probability estimation in the image Hough space; the VPs are then estimated through an optimal objective function. In the lane identification stage, the selected linear samples, supervised by the estimated VPs, are clustered based on the gradient direction of linear features to separate lanes, and finally all the lanes are identified through an identification function.
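As a rough illustration of the posterior idea (not the authors' algorithm, whose details are in the paper): each detected line contributes a likelihood term penalizing its distance from a candidate VP, a prior favors VPs near an expected horizon row, and the candidate maximizing the posterior wins. The lines, image geometry, and noise scales below are all invented.

```python
# Hedged sketch of Bayesian VP scoring; lines and scales are invented.
import numpy as np

# Hypothetical lane edges as (point on line, direction), converging near (320, 200)
lines = [(np.array([100.0, 400.0]), np.array([1.10, -1.0])),
         (np.array([500.0, 400.0]), np.array([-0.90, -1.0])),
         (np.array([300.0, 420.0]), np.array([0.091, -1.0]))]

def dist_to_line(vp, p, d):
    d = d / np.linalg.norm(d)
    r = vp - p
    return abs(r[0] * d[1] - r[1] * d[0])      # perpendicular distance

def log_posterior(vp, horizon_y=200.0, sigma_d=8.0, sigma_h=40.0):
    # Likelihood: every line should pass close to the vanishing point
    loglik = sum(-0.5 * (dist_to_line(vp, p, d) / sigma_d) ** 2 for p, d in lines)
    # Prior: the VP should sit near the expected horizon row
    return loglik - 0.5 * ((vp[1] - horizon_y) / sigma_h) ** 2

# Maximize the objective over a coarse grid of candidate VPs
candidates = [(x, y) for x in np.arange(0, 640, 4.0) for y in np.arange(100, 300, 4.0)]
best = max(candidates, key=lambda v: log_posterior(np.array(v)))
print("estimated VP:", best)                   # near (320, 200)
```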

Findings

The scheme and algorithms are tested on real data sets collected from an intelligent vehicle. The approach is more efficient and more accurate than recent similar methods for structured roads; in particular, the multiple VPs of branch roads can be identified and estimated, and the lanes of branch roads can be identified in complex scenarios, based on the Bayesian posterior probability verification framework. Experimental results demonstrate that the estimated VPs and lanes are practical for challenging structured and semi-structured complex road scenarios.

Originality/value

A Bayesian posterior probability verification framework is proposed to estimate multiple VPs and the corresponding lanes for road-scene understanding from monocular images of structured or semi-structured roads on intelligent vehicles.

Details

Industrial Robot: An International Journal, vol. 43 no. 1
Type: Research Article
ISSN: 0143-991X

Book part
Publication date: 6 January 2016

Laura E. Jackson, M. Ayhan Kose, Christopher Otrok and Michael T. Owyang

Abstract

We compare methods to measure comovement in business cycle data using multi-level dynamic factor models. To do so, we employ a Monte Carlo procedure to evaluate model performance for different specifications of factor models across three different estimation procedures. We consider three general factor model specifications used in applied work. The first is a single-factor model, the second a two-level factor model, and the third a three-level factor model. Our estimation procedures are the Bayesian approach of Otrok and Whiteman (1998), the Bayesian state-space approach of Kim and Nelson (1998) and a frequentist principal components approach. The latter serves as a benchmark to measure any potential gains from the more computationally intensive Bayesian procedures. We then apply the three methods to a novel dataset on house prices in advanced and emerging markets from Cesa-Bianchi, Cespedes, and Rebucci (2015) and interpret the empirical results in light of the Monte Carlo results.
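Of the three procedures, only the frequentist principal components benchmark is compact enough to sketch here. The sketch below runs it on simulated data with a single common factor; loadings, sample sizes, and noise are all invented, and the Bayesian procedures are not shown.

```python
# Hedged sketch: one-factor extraction by principal components on
# simulated data; this is only the chapter's frequentist benchmark.
import numpy as np

rng = np.random.default_rng(1)
T, N = 120, 30                                 # periods, series
f = 0.3 * np.cumsum(rng.standard_normal(T))    # hypothetical common factor
lam = rng.uniform(0.5, 1.5, N)                 # loadings
X = np.outer(f, lam) + rng.standard_normal((T, N))

Xs = (X - X.mean(0)) / X.std(0)                # standardize each series
U, S, _ = np.linalg.svd(Xs, full_matrices=False)
f_hat = U[:, 0] * S[0] / np.sqrt(N)            # first principal component

f_hat *= np.sign(np.corrcoef(f_hat, f)[0, 1])  # sign is unidentified
print("corr(f_hat, f) =", round(np.corrcoef(f_hat, f)[0, 1], 3))
```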

Details

Dynamic Factor Models
Type: Book
ISBN: 978-1-78560-353-2

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 18 October 2019

Gholamreza Hajargasht and William E. Griffiths

Abstract

We consider a semiparametric panel stochastic frontier model where one-sided firm effects representing inefficiencies are correlated with the regressors. A form of the Chamberlain-Mundlak device is used to relate the logarithm of the effects to the regressors, resulting in a lognormal distribution for the effects. The function describing the technology is modeled nonparametrically using penalized splines. Both Bayesian and non-Bayesian approaches to estimation are considered, with an emphasis on Bayesian estimation. A Monte Carlo experiment is used to investigate the consequences of ignoring correlation between the effects and the regressors, and of choosing the wrong functional form for the technology.
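A small simulation makes the setup concrete. The sketch below generates data of the kind the chapter studies, with a smooth technology standing in for the penalized spline and made-up parameter values; the log of the one-sided effect depends on the firm's regressor mean (a Chamberlain-Mundlak device), so inefficiency and regressors are correlated by construction. Estimation itself is not shown.

```python
# Hedged sketch: simulating a panel stochastic frontier with lognormal
# one-sided effects correlated with the regressors. Values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 8                                  # firms, periods
x = rng.uniform(0, 1, (N, T))
x_bar = x.mean(axis=1, keepdims=True)          # firm-level regressor means

# Chamberlain-Mundlak device: log u_i depends on x_bar_i, so u is
# lognormal and correlated with the regressors
log_u = -1.0 + 1.5 * x_bar + 0.2 * rng.standard_normal((N, 1))
u = np.exp(log_u)                              # one-sided inefficiency

f = lambda z: 0.5 + 0.25 * np.sin(2 * np.pi * z)  # smooth technology (spline target)
v = 0.1 * rng.standard_normal((N, T))             # two-sided noise
y = f(x) + v - u                                  # frontier output less inefficiency

print("corr(u, x_bar):", round(np.corrcoef(u.ravel(), x_bar.ravel())[0, 1], 3))
```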

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Article
Publication date: 1 January 2008

Peng Liu, Elia El‐Darzi, Lei Lei, Christos Vasilakis, Panagiotis Chountas and Wei Huang

Abstract

Purpose

Data preparation plays an important role in data mining, as most real-life data sets contain missing data. This paper aims to investigate different treatment methods for missing data.

Design/methodology/approach

This paper introduces, analyses and compares well-established treatment methods for missing data and proposes new methods based on the naïve Bayesian classifier. These methods have been implemented and compared using a real-life geriatric hospital dataset.

Findings

In the case where a large proportion of the data is missing and many attributes have missing data, treatment methods based on the naïve Bayesian classifier perform very well.

Originality/value

This paper proposes an effective missing data treatment method and offers a viable approach to predict inpatient length of stay from a data set with many missing values.
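One simple treatment in this family can be sketched as follows: fill a missing categorical attribute with the value that is most probable given the record's class, with the conditional frequencies estimated from the complete rows. The toy records and column names below are invented; the paper's geriatric hospital data set is not reproduced.

```python
# Hedged sketch of a naive-Bayes-style missing data treatment on toy data.
import pandas as pd

df = pd.DataFrame({
    "los_class": ["short", "short", "long", "long", "long", "short"],
    "mobility":  ["good",  "good",  "poor", "poor", None,   None],
})

# Most probable attribute value per class, estimated from complete rows
cond = (df.dropna(subset=["mobility"])
          .groupby("los_class")["mobility"]
          .agg(lambda s: s.mode().iloc[0]))

mask = df["mobility"].isna()
df.loc[mask, "mobility"] = df.loc[mask, "los_class"].map(cond)
print(df)   # missing entries filled with the class-conditional mode
```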

Details

Journal of Enterprise Information Management, vol. 21 no. 1
Type: Research Article
ISSN: 1741-0398

Abstract

This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then examines in Monte Carlo experiments the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show that the tests behave as expected given the theory. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
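The core of such a test of equal forecast accuracy is a t-type statistic on the loss differential between the two forecasts, with a HAC variance estimate. The sketch below uses simulated stand-in forecast errors rather than actual VAR and AR forecasts; the bootstrap critical values used in the chapter's experiments are not implemented.

```python
# Hedged sketch: Diebold-Mariano-type test of equal squared-error
# accuracy; forecast errors here are simulated stand-ins.
import numpy as np

rng = np.random.default_rng(3)
P = 80                                         # forecast origins
e_ar = rng.standard_normal(P)                  # stand-in AR forecast errors
e_var = 0.95 * rng.standard_normal(P)          # stand-in VAR forecast errors

d = e_ar**2 - e_var**2                         # loss differential
dbar = d.mean()

# Newey-West (HAC) variance of the mean differential
L = 4
var = np.mean((d - dbar) ** 2)
for l in range(1, L + 1):
    gamma = np.mean((d[l:] - dbar) * (d[:-l] - dbar))
    var += 2.0 * (1.0 - l / (L + 1)) * gamma

t_stat = dbar / np.sqrt(var / P)
print("DM-type t statistic:", round(t_stat, 3))
```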

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Article
Publication date: 29 November 2019

A. George Assaf and Mike G. Tsionas

Abstract

Purpose

This paper aims to present several Bayesian specification tests for both in- and out-of-sample situations.

Design/methodology/approach

The authors focus on the Bayesian equivalents of the frequentist approach for testing heteroskedasticity, autocorrelation and functional form specification. For out-of-sample diagnostics, the authors consider several tests to evaluate the predictive ability of the model.
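One Bayesian route to such a diagnostic is a posterior predictive check. The plug-in sketch below tests for heteroskedasticity by comparing an observed discrepancy (the correlation between squared residuals and a regressor) against replications under homoskedastic errors. The regression and data mimic, but are not, the paper's price-occupancy application, and for brevity the replications condition on point estimates rather than full posterior draws.

```python
# Hedged sketch: plug-in posterior predictive check for heteroskedasticity
# on simulated data standing in for the price-occupancy regression.
import numpy as np

rng = np.random.default_rng(4)
n = 200
x = rng.uniform(0, 1, n)                       # e.g. occupancy rate (simulated)
y = 1.0 + 2.0 * x + 0.3 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - 2)

def disc(res):
    """Discrepancy: |corr| between squared residuals and the regressor."""
    return abs(np.corrcoef(res**2, x)[0, 1])

obs = disc(resid)
# Replicated residuals under the fitted homoskedastic model
pval = np.mean([disc(np.sqrt(s2) * rng.standard_normal(n)) >= obs
                for _ in range(2000)])
print("predictive p-value for heteroskedasticity:", round(pval, 3))
```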

Findings

The authors demonstrate the performance of these tests using an application on the relationship between price and occupancy rate from the hotel industry. For purposes of comparison, the authors also provide evidence from traditional frequentist tests.

Research limitations/implications

There certainly exist other issues and diagnostic tests that are not covered in this paper. The issues that are addressed, however, are critically important and can be applied to most modeling situations.

Originality/value

With the increased use of the Bayesian approach in various modeling contexts, this paper serves as an important guide for diagnostic testing in Bayesian analysis. Diagnostic analysis is essential and should always accompany the estimation of regression models.

Details

International Journal of Contemporary Hospitality Management, vol. 32 no. 4
Type: Research Article
ISSN: 0959-6119
