Search results

1 – 10 of 701
Book part
Publication date: 1 July 2015

Enrique Martínez-García

Abstract

The global slack hypothesis is central to the discussion of the trade-offs that monetary policy faces in an increasingly integrated world. The workhorse New Open Economy Macro (NOEM) model of Martínez-García and Wynne (2010), which fleshes out this hypothesis, shows how expected future local inflation and global slack affect current local inflation. In this chapter, I propose the use of the orthogonalization method of Aoki (1981) and Fukuda (1993) on the workhorse NOEM model to further decompose local inflation into a global component and an inflation differential component. I find that the log-linearized rational expectations model of Martínez-García and Wynne (2010) can be solved as two separate subsystems, one describing each of these two components of inflation.

I estimate the full NOEM model with Bayesian techniques using data for the United States and an aggregate of its 38 largest trading partners from 1980Q1 until 2011Q4. The Bayesian estimation recognizes the parameter uncertainty surrounding the model and calls on the data (inflation and output) to discipline the parameterization. My findings show that the strength of the international spillovers through trade – even in the absence of common shocks – is reflected in the response of global inflation and is incorporated into local inflation dynamics. Furthermore, I find that key features of the economy can have different impacts on global and local inflation – in particular, I show that the parameters that determine the import share and the price-elasticity of trade matter in explaining the inflation differential component but not the global component of inflation.
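
As background for the decomposition described above, here is the average-difference transformation on which Aoki (1981)-style orthogonalization rests, written in my own notation and under the assumption of a symmetric two-country system with equal country weights; the chapter's exact weighting may differ.

```latex
% For any Home variable x_t and its Foreign counterpart x_t^*, define
% a world (global) component and a difference component:
\[
  x_t^{W} \equiv \tfrac{1}{2}\bigl(x_t + x_t^{*}\bigr), \qquad
  x_t^{D} \equiv \tfrac{1}{2}\bigl(x_t - x_t^{*}\bigr), \qquad
  x_t = x_t^{W} + x_t^{D}.
\]
% Under symmetry, the log-linearized system block-diagonalizes in the
% W and D variables, so the two subsystems can be solved separately.
% Applied to inflation, pi_t^W is the global component and pi_t^D the
% inflation differential component.
```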

Details

Monetary Policy in the Context of the Financial Crisis: New Challenges and Lessons
Type: Book
ISBN: 978-1-78441-779-6

Book part
Publication date: 19 November 2014

Enrique Martínez-García and Mark A. Wynne

Abstract

We investigate the Bayesian approach to model comparison within a two-country framework with nominal rigidities, using the workhorse New Keynesian open-economy model of Martínez-García and Wynne (2010). We discuss the trade-offs that monetary policy – characterized by a Taylor-type rule – faces in an interconnected world with perfectly flexible exchange rates. We then use posterior model probabilities to evaluate the weight of evidence in support of such a model when it is estimated against more parsimonious specifications that either abstract from monetary frictions or assume autarky, by means of controlled experiments that employ simulated data. We argue that Bayesian model comparison with posterior odds is sensitive to sample size and the choice of observable variables for estimation. We show that posterior model probabilities strongly penalize overfitting, which can lead us to favor a less parameterized model over the true data-generating process when the two become arbitrarily close to each other. We also illustrate that the spillovers from monetary policy across countries have an added confounding effect.
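
For readers who want the mechanics behind "posterior model probabilities" and "posterior odds", the standard relationships are below, in my notation rather than the chapter's. The marginal likelihood integrates the likelihood over the prior, which is the source of the overfitting penalty the abstract describes.

```latex
\[
  P(M_i \mid y) \;=\; \frac{p(y \mid M_i)\, P(M_i)}{\sum_j p(y \mid M_j)\, P(M_j)},
  \qquad
  p(y \mid M_i) \;=\; \int p(y \mid \theta_i, M_i)\, p(\theta_i \mid M_i)\, d\theta_i ,
\]
\[
  \underbrace{\frac{P(M_1 \mid y)}{P(M_2 \mid y)}}_{\text{posterior odds}}
  \;=\;
  \underbrace{\frac{p(y \mid M_1)}{p(y \mid M_2)}}_{\text{Bayes factor}}
  \times
  \underbrace{\frac{P(M_1)}{P(M_2)}}_{\text{prior odds}} .
\]
```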

Book part
Publication date: 6 January 2016

Laura E. Jackson, M. Ayhan Kose, Christopher Otrok and Michael T. Owyang

Abstract

We compare methods to measure comovement in business cycle data using multi-level dynamic factor models. To do so, we employ a Monte Carlo procedure to evaluate model performance for different specifications of factor models across three different estimation procedures. We consider three general factor model specifications used in applied work. The first is a single-factor model, the second a two-level factor model, and the third a three-level factor model. Our estimation procedures are the Bayesian approach of Otrok and Whiteman (1998), the Bayesian state-space approach of Kim and Nelson (1998), and a frequentist principal components approach. The latter serves as a benchmark to measure any potential gains from the more computationally intensive Bayesian procedures. We then apply the three methods to a new dataset on house prices in advanced and emerging markets from Cesa-Bianchi, Cespedes, and Rebucci (2015) and interpret the empirical results in light of the Monte Carlo results.
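
The frequentist principal components benchmark mentioned in the abstract can be sketched in a few lines. This is a generic single-level illustration with invented names, not the authors' code; the multi-level specifications require block-wise extensions of the same idea.

```python
import numpy as np

def pc_factors(X, n_factors):
    """Estimate latent factors by principal components.

    X : (T, N) array of observed series (T periods, N series).
    Returns (T, n_factors) factor estimates and (N, n_factors) loadings.
    """
    # Standardize each series so no single series dominates the factors.
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    # Eigen-decompose the sample covariance of the standardized data.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    # eigh returns ascending eigenvalues; take the largest n_factors.
    loadings = eigvecs[:, ::-1][:, :n_factors]
    factors = Z @ loadings
    return factors, loadings

# Example: 200 quarters of 30 series driven by one common factor.
rng = np.random.default_rng(0)
f = rng.standard_normal((200, 1))
X = f @ rng.standard_normal((1, 30)) + 0.5 * rng.standard_normal((200, 30))
factors, loadings = pc_factors(X, n_factors=1)
```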

Details

Dynamic Factor Models
Type: Book
ISBN: 978-1-78560-353-2

Details

Functional Structure and Approximation in Econometrics
Type: Book
ISBN: 978-0-44450-861-4

Book part
Publication date: 18 October 2019

Gholamreza Hajargasht and William E. Griffiths

Abstract

We consider a semiparametric panel stochastic frontier model where one-sided firm effects representing inefficiencies are correlated with the regressors. A form of the Chamberlain-Mundlak device is used to relate the logarithm of the effects to the regressors resulting in a lognormal distribution for the effects. The function describing the technology is modeled nonparametrically using penalized splines. Both Bayesian and non-Bayesian approaches to estimation are considered, with an emphasis on Bayesian estimation. A Monte Carlo experiment is used to investigate the consequences of ignoring correlation between the effects and the regressors, and choosing the wrong functional form for the technology.
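
A schematic of the model class described above, in my own notation; the chapter's exact specification may differ.

```latex
\[
  y_{it} = m(x_{it}) + v_{it} - u_i, \qquad
  v_{it} \sim N(0, \sigma_v^2), \qquad
  \ln u_i = \alpha + \bar{x}_i' \delta + \varepsilon_i, \quad
  \varepsilon_i \sim N(0, \sigma_\varepsilon^2).
\]
% m(.) is the technology, modeled nonparametrically with penalized
% splines; u_i >= 0 is the one-sided inefficiency effect. The
% Chamberlain-Mundlak term \bar{x}_i' \delta (time averages of the
% regressors) induces correlation between u_i and the regressors
% while keeping u_i lognormal.
```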

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Abstract

This article surveys recent developments in the evaluation of point and density forecasts in the context of forecasts made by vector autoregressions. Specific emphasis is placed on highlighting those parts of the existing literature that are applicable to direct multistep forecasts and those parts that are applicable to iterated multistep forecasts. This literature includes advancements in the evaluation of forecasts in population (based on true, unknown model coefficients) and the evaluation of forecasts in the finite sample (based on estimated model coefficients). The article then uses Monte Carlo experiments to examine the finite-sample properties of some tests of equal forecast accuracy, focusing on the comparison of VAR forecasts to AR forecasts. These experiments show that the tests behave as the theory predicts. For example, using critical values obtained by bootstrap methods, tests of equal accuracy in population have empirical size about equal to nominal size.
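
As a concrete illustration of an equal-accuracy test in this spirit, here is a minimal Diebold-Mariano-style comparison under squared-error loss. It is a generic sketch with invented names, not the specific population or finite-sample tests the article studies, and it uses asymptotic normal critical values rather than the bootstrap values discussed there.

```python
import numpy as np
from scipy import stats

def dm_test(e1, e2, h=1):
    """Diebold-Mariano-style test of equal MSE for two forecast error series.

    e1, e2 : arrays of forecast errors from the two models.
    h      : forecast horizon; the HAC variance uses h-1 autocovariance
             lags, a common rectangular-kernel choice for h-step forecasts.
    """
    d = e1**2 - e2**2                 # loss differential under squared error
    T = d.size
    d_bar = d.mean()
    dc = d - d_bar
    # Rectangular-kernel HAC estimate of the long-run variance of d.
    lrv = dc @ dc / T
    for k in range(1, h):
        lrv += 2.0 * (dc[k:] @ dc[:-k]) / T
    dm = d_bar / np.sqrt(lrv / T)
    return dm, 2 * stats.norm.sf(abs(dm))  # two-sided asymptotic p-value

# Example with artificial errors: the second model slightly more accurate.
rng = np.random.default_rng(1)
e_var = rng.standard_normal(200) * 1.1
e_ar = rng.standard_normal(200)
print(dm_test(e_var, e_ar, h=1))
```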

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Book part
Publication date: 18 October 2019

Mohammad Arshad Rahman and Shubham Karnawat

Abstract

This article is motivated by the lack of flexibility in Bayesian quantile regression for ordinal models where the error follows an asymmetric Laplace (AL) distribution. The inflexibility arises because the skewness of the distribution is completely specified once a quantile is chosen. To overcome this shortcoming, we derive the cumulative distribution function (and the moment-generating function) of the generalized asymmetric Laplace (GAL) distribution – a generalization of the AL distribution that separates the skewness from the quantile parameter – and construct a working likelihood for the ordinal quantile model. The resulting framework is termed flexible Bayesian quantile regression for ordinal (FBQROR) models. However, its estimation is not straightforward. We address the estimation issues and propose an efficient Markov chain Monte Carlo (MCMC) procedure based on Gibbs sampling and a joint Metropolis–Hastings algorithm. The advantages of the proposed model are demonstrated in multiple simulation studies, and the framework is applied to analyze public opinion on homeownership as the best long-term investment in the United States following the Great Recession.
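
For context, the inflexibility mentioned above comes from the shape of the standard asymmetric Laplace working likelihood; a sketch in my notation, with location zero and unit scale:

```latex
\[
  f_{AL}(u \mid p) = p(1-p)\,\exp\{-\rho_p(u)\}, \qquad
  \rho_p(u) = u\bigl(p - \mathbb{1}\{u < 0\}\bigr).
\]
% Choosing the quantile p fully pins down the skewness of the error
% distribution. The GAL distribution adds a separate shape parameter,
% freeing skewness from p; the chapter derives its CDF and MGF to
% build the working likelihood for ordinal outcomes.
```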

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part B
Type: Book
ISBN: 978-1-83867-419-9

Book part
Publication date: 30 December 2004

Leslie W. Hepple

Abstract

Within spatial econometrics a whole family of different spatial specifications has been developed, with associated estimators and tests. This leads to issues of model comparison and model choice: measuring the relative merits of alternative specifications and then using appropriate criteria to choose the “best” model or compute relative model probabilities. Bayesian theory provides a comprehensive and coherent framework for such model choice, including both nested and non-nested models within the choice set. The paper reviews the potential application of this Bayesian theory to spatial econometric models, examining the conditions and assumptions under which application is possible. Problems of prior distributions are outlined, and Bayes factors and marginal likelihoods are derived for a particular subset of spatial econometric specifications. These are then applied to two well-known spatial datasets to illustrate the methods. Future possibilities and comparisons with other approaches to both Bayesian and non-Bayesian model choice are discussed.
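
Once log marginal likelihoods are in hand for each candidate specification, converting them into posterior model probabilities is mechanical. A minimal sketch, assuming equal prior model probabilities and hypothetical log marginal likelihood values:

```python
import numpy as np

def posterior_model_probs(log_ml, log_prior=None):
    """Posterior model probabilities from log marginal likelihoods.

    log_ml    : array of log marginal likelihoods, one per model.
    log_prior : optional array of log prior model probabilities
                (defaults to a uniform prior over the choice set).
    """
    log_ml = np.asarray(log_ml, dtype=float)
    if log_prior is None:
        log_prior = np.full_like(log_ml, -np.log(log_ml.size))
    log_post = log_ml + log_prior
    # Log-sum-exp trick for numerical stability with large-magnitude values.
    log_post -= log_post.max()
    w = np.exp(log_post)
    return w / w.sum()

# Example: three hypothetical specifications. The Bayes factor of model 0
# versus model 1 is exp(-1340.2 - (-1342.9)) ~ 14.9, so model 0 dominates.
print(posterior_model_probs([-1340.2, -1342.9, -1351.5]))
```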

Details

Spatial and Spatiotemporal Econometrics
Type: Book
ISBN: 978-0-76231-148-4

Book part
Publication date: 18 April 2018

Simon Washington, Amir Pooyan Afghari and Mohammed Mazharul Haque

Abstract

Purpose – The purpose of this chapter is to review the methodological and empirical underpinnings of transport network screening, or management, as it relates to improving road safety. As jurisdictions around the world are charged with transport network management in order to reduce externalities associated with road crashes, identifying potential blackspots or hotspots is an important, if not critical, function and responsibility of transport agencies.

Methodology – Key references from within the literature are summarised and discussed, along with a discussion of the evolution of thinking around hotspot identification and management. The theoretical developments that correspond with the evolution in thinking are provided, sprinkled with examples along the way.

Findings – Hotspot identification methodologies have evolved considerably over the past 30 or so years, correcting for methodological deficiencies along the way. Despite these significant advancements, identifying hotspots remains a reactive approach to managing road safety – it relies on crashes accruing in order to mitigate their occurrence. The most fruitful directions for future research lie in establishing reliable relationships between surrogate measures of road safety, such as ‘near misses’, and actual crashes, so that safety can be managed proactively without the need for crashes to accrue.

Research implications – Research in hotspot identification will continue; however, it is likely to shift over time to both closer to ‘real-time’ crash risk detection and considering safety improvements using surrogate measures of road safety – described in Chapter 17.

Practical implications – There are two types of errors in hotspot detection: identifying a ‘risky’ site as ‘safe’ and identifying a ‘safe’ site as ‘risky’. In the former case no investments will be made to improve safety, while in the latter case ineffective or inefficient safety improvements could be made. To minimise these errors, transport network safety managers should apply current state-of-the-practice methods for hotspot detection. Moreover, they should be eager to transition to proactive methods of network safety management that avoid the need for crashes to occur. While such work is in its infancy, the use of surrogate measures of safety holds significant promise for the future.
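
As one concrete example of the reactive, state-of-the-practice methods the chapter reviews, here is a minimal empirical Bayes screening sketch. The function names, the safety performance function values, and the overdispersion parameter are all hypothetical; the chapter covers a broader family of methods than this single illustration.

```python
import numpy as np

def eb_ranking(observed, mu_spf, k):
    """Empirical Bayes screening of sites for hotspot identification.

    observed : observed crash counts per site over the analysis period.
    mu_spf   : counts predicted by a safety performance function (e.g.,
               a negative binomial regression on traffic and geometry).
    k        : overdispersion parameter of the negative binomial SPF.

    The EB estimate shrinks each site's observed count toward the SPF
    prediction, damping the regression-to-the-mean effect that plagues
    naive rankings by raw counts.
    """
    w = 1.0 / (1.0 + k * mu_spf)            # shrinkage weight per site
    eb = w * mu_spf + (1.0 - w) * observed  # EB expected crash frequency
    return np.argsort(eb)[::-1], eb         # site indices, riskiest first

# Hypothetical data: five sites with similar exposure.
observed = np.array([12, 3, 7, 15, 5])
mu_spf = np.array([6.0, 4.0, 6.5, 14.0, 5.5])
ranking, eb = eb_ranking(observed, mu_spf, k=0.3)
print(ranking, np.round(eb, 1))
```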

Details

Safe Mobility: Challenges, Methodology and Solutions
Type: Book
ISBN: 978-1-78635-223-1

Book part
Publication date: 21 December 2010

Ivan Jeliazkov and Esther Hee Lee

Abstract

A major stumbling block in multivariate discrete data analysis is the problem of evaluating the outcome probabilities that enter the likelihood function. Calculation of these probabilities involves high-dimensional integration, making simulation methods indispensable in both Bayesian and frequentist estimation and model choice. We review several existing probability estimators and then show that a broader perspective on the simulation problem can be afforded by interpreting the outcome probabilities through Bayes’ theorem, leading to the recognition that estimation can alternatively be handled by methods for marginal likelihood computation based on the output of Markov chain Monte Carlo (MCMC) algorithms. These techniques offer stand-alone approaches to simulated likelihood estimation but can also be integrated with traditional estimators. Building on both branches in the literature, we develop new methods for estimating response probabilities and propose an adaptive sampler for producing high-quality draws from multivariate truncated normal distributions. A simulation study illustrates the practical benefits and costs associated with each approach. The methods are employed to estimate the likelihood function of a correlated random effects panel data model of women's labor force participation.
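
One classic member of the family of probability estimators reviewed here is the GHK simulator for multivariate normal rectangle probabilities. A minimal sketch with invented names, not the authors' implementation and not their proposed adaptive sampler:

```python
import numpy as np
from scipy.stats import norm

def ghk_probability(lower, upper, Sigma, n_draws=1000, seed=0):
    """GHK simulator for P(lower <= Y <= upper), Y ~ N(0, Sigma).

    Factor Sigma = L L' and sample the latent vector coordinate by
    coordinate, truncating each draw to the region implied by the
    bounds; the estimate is the mean product of truncation masses.
    """
    L = np.linalg.cholesky(Sigma)
    J = Sigma.shape[0]
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=(n_draws, J))
    eta = np.zeros((n_draws, J))
    weight = np.ones(n_draws)
    for j in range(J):
        cond = eta[:, :j] @ L[j, :j]          # conditional mean shift
        a = norm.cdf((lower[j] - cond) / L[j, j])
        b = norm.cdf((upper[j] - cond) / L[j, j])
        weight *= (b - a)                      # mass of truncation region
        # Inverse-CDF draw from the truncated standard normal.
        eta[:, j] = norm.ppf(a + u[:, j] * (b - a))
    return weight.mean()

# Example: orthant probability for a bivariate normal, correlation 0.5.
Sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
print(ghk_probability(np.zeros(2), np.full(2, np.inf), Sigma))
# Exact value is 1/4 + arcsin(0.5)/(2*pi) ~ 0.3333.
```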

Details

Maximum Simulated Likelihood Methods and Applications
Type: Book
ISBN: 978-0-85724-150-4
