Search results

1 – 10 of over 6000
Book part
Publication date: 29 February 2008

Anindya Banerjee, Massimiliano Marcellino and Igor Masten

Abstract

We conduct a detailed simulation study of the forecasting performance of diffusion index-based methods in short samples with structural change. We consider several data generation processes to mimic different types of structural change, and compare the relative forecasting performance of factor models and more traditional time series methods. We find that changes in the loading structure of the factors into the variables of interest are extremely important in determining the performance of factor models. We complement the analysis with an empirical evaluation of forecasts for the key macroeconomic variables of the Euro area and Slovenia, for which relatively short samples are officially available and structural changes are likely. The results are consistent with the findings of the simulation exercise and confirm the relatively good performance of factor-based forecasts in short samples with structural change.
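
The diffusion-index approach evaluated above can be sketched in a few lines: extract principal-component factors from a large panel, then regress the future target on the current factors. The sketch below is an illustrative simulation, not the authors' code; the two-factor data generating process and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated panel: T periods, N series driven by r common factors plus noise.
T, N, r = 200, 30, 2
F = rng.standard_normal((T, r))              # latent factors
L = rng.standard_normal((r, N))              # factor loadings
X = F @ L + 0.5 * rng.standard_normal((T, N))
y = X[:, 0]                                  # target series to forecast

def estimate_factors(X, r):
    """Extract r principal-component factor estimates from a standardized panel."""
    Z = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :r] * np.sqrt(len(Z))        # conventional normalization

def di_forecast(y, X, r, h=1):
    """Diffusion-index forecast: regress y_{t+h} on the current factors."""
    Fhat = estimate_factors(X, r)
    W = np.column_stack([np.ones(len(y) - h), Fhat[:-h]])
    beta, *_ = np.linalg.lstsq(W, y[h:], rcond=None)
    return float(np.append(1.0, Fhat[-1]) @ beta)

fc = di_forecast(y, X, r)
print(round(fc, 3))
```

A break in the loadings `L` partway through the sample, of the kind studied in the chapter, would be simulated by switching to a second loading matrix after some assumed break date.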

Details

Forecasting in the Presence of Structural Breaks and Model Uncertainty
Type: Book
ISBN: 978-1-84950-540-6

Book part
Publication date: 13 December 2013

Refet S. Gürkaynak, Burçin Kısacıkoğlu and Barbara Rossi

Abstract

Recently, it has been suggested that macroeconomic forecasts from estimated dynamic stochastic general equilibrium (DSGE) models tend to be more accurate out-of-sample than random walk forecasts or Bayesian vector autoregression (VAR) forecasts. Del Negro and Schorfheide (2013) in particular suggest that the DSGE model forecast should become the benchmark for forecasting horse-races. We compare the real-time forecasting accuracy of the Smets and Wouters (2007) DSGE model with that of several reduced-form time series models. We first demonstrate that none of the forecasting models is efficient. Our second finding is that there is no single best forecasting method. For example, simple AR models are typically most accurate at short horizons and DSGE models at long horizons when forecasting output growth, while for inflation forecasts the results are reversed. Moreover, the relative accuracy of all models tends to evolve over time. Third, we show that there is no support for the common practice of using large-scale Bayesian VAR models as the forecast benchmark when evaluating DSGE models. Indeed, low-dimensional unrestricted AR and VAR forecasts may be more accurate.
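
The kind of pseudo-out-of-sample horse race described above can be illustrated with a minimal recursive comparison of an AR(1) forecast against the random-walk (no-change) benchmark. The AR(1) data generating process and the evaluation window are assumptions for illustration, not the chapter's real-time setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# AR(1) data generating process.
T, phi = 300, 0.4
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()

def ar1_forecast(history):
    """One-step forecast from an AR(1) fitted by OLS on the data seen so far."""
    x, z = history[:-1], history[1:]
    return float((x @ z) / (x @ x)) * history[-1]

# Recursive pseudo-out-of-sample evaluation over the last third of the sample.
errs_ar, errs_rw = [], []
for t in range(200, T - 1):
    errs_ar.append(y[t + 1] - ar1_forecast(y[: t + 1]))
    errs_rw.append(y[t + 1] - y[t])          # random walk: no-change forecast

rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
print("AR RMSE:", round(rmse(errs_ar), 3), "RW RMSE:", round(rmse(errs_rw), 3))
```

In a persistent DGP like this one the AR model will usually win; the chapter's point is that such rankings depend on the variable, the horizon, and the evaluation period.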

Details

VAR Models in Macroeconomics – New Developments and Applications: Essays in Honor of Christopher A. Sims
Type: Book
ISBN: 978-1-78190-752-8

Article
Publication date: 20 January 2021

Athanasios Fassas, Stephanos Papadamou and Dimitrios Kenourgios

Abstract

Purpose

This study examines the forecasting performance of the professional analysts participating in the Blue Chip Economic Indicators Survey using an alternative methodological research design.

Design/methodology/approach

This work employs two methodologies: a panel specification, in which the cross-section is the forecast horizon (from 1-month- to 18-month-ahead forecasts) and the time dimension is the date at which the forecast was made, and a quantile regression technique, which uncovers hidden nonmonotonic relations between the forecasts and the target variables being forecasted.

Findings

The empirical findings of this study show that survey-based forecasts of certain key macroeconomic variables are generally biased but still efficient predictors of target variables. In particular, we find that survey participants are more efficient in predicting long-term interest rates in the long run and short-term interest rates in the short run, while forecasts of medium-term interest rates are the least accurate. Finally, our empirical analysis suggests that currency fluctuations are very hard to predict in the short run, while survey-based forecasts are among the most accurate predictors of the GDP deflator and GDP growth.
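
Forecast unbiasedness and efficiency of the kind assessed in these findings are commonly checked with a Mincer-Zarnowitz regression (actual on forecast, testing intercept 0 and slope 1). The sketch below uses that standard device on simulated data; it is not the paper's panel or quantile specification, and the bias parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated target and a biased but informative forecast of it.
n = 500
actual = rng.standard_normal(n)
forecast = 0.8 * actual + 0.2 + 0.3 * rng.standard_normal(n)

# Mincer-Zarnowitz regression: actual_t = a + b * forecast_t + e_t.
# An unbiased, efficient forecast corresponds to (a, b) = (0, 1).
X = np.column_stack([np.ones(n), forecast])
(a, b), *_ = np.linalg.lstsq(X, actual, rcond=None)
print("intercept:", round(a, 2), "slope:", round(b, 2))
```

Running such a regression per horizon, or per quantile of the target, is the direction the paper's panel and quantile designs take.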

Practical implications

Evaluating the accuracy of economic forecasts is critical since market participants and policymakers utilize such data (as one of several inputs) for making investment, financial and policy decisions. Therefore, the quality of a decision depends, in part, on the quality of the forecast. Our empirical results should have immediate implications for asset pricing models that use interest rates and inflation forecasts as variables.

Originality/value

The present study marks a methodological departure from existing empirical attempts as it proposes a simpler yet powerful approach in order to investigate the efficiency of professional forecasts. The employed empirical specifications enable market participants to investigate the information content of forecasts over different forecast horizons and the temporal evolution of forecast quality.

Details

Journal of Economic Studies, vol. 49 no. 1
Type: Research Article
ISSN: 0144-3585

Book part
Publication date: 6 January 2016

Alessandro Giovannelli and Tommaso Proietti

Abstract

We address the problem of selecting the common factors that are relevant for forecasting macroeconomic variables. In economic forecasting using diffusion indexes, the factors are ordered, according to their importance, in terms of relative variability, and are the same for each variable to predict; that is, the process of selecting the factors is not supervised by the predictand. We propose a simple and operational supervised method, based on selecting the factors on the basis of their significance in the regression of the predictand on the predictors. Given a potentially large number of predictors, we consider linear transformations obtained by principal components analysis. The orthogonality of the components implies that the standard t-statistics for the inclusion of a particular component are independent, and thus applying a selection procedure that takes into account the multiplicity of the hypothesis tests is both correct and computationally feasible. We focus on three main multiple testing procedures: Holm's sequential method, controlling the familywise error rate; the Benjamini–Hochberg method, controlling the false discovery rate; and a procedure for incorporating prior information on the ordering of the components, based on weighting the p-values according to the eigenvalues associated with the components. We compare the empirical performances of these methods with the classical diffusion index (DI) approach proposed by Stock and Watson, conducting a pseudo-real-time forecasting exercise that assesses the predictions of eight macroeconomic variables using factors extracted from a U.S. dataset consisting of 121 quarterly time series. The overall conclusion is that nature is tricky, but essentially benign: the information that is relevant for prediction is effectively condensed by the first few factors. However, variable selection, leading to the exclusion of some of the low-order principal components, can lead to a sizable improvement in forecasting in specific cases. Only in one instance, real personal income, were we able to detect a significant contribution from high-order components.
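
The supervised selection idea — t-statistics on orthogonal principal components, screened by a multiple-testing rule — can be sketched with the Benjamini-Hochberg step as follows. The data generating process (a target loading on the first two components) and the 5% FDR level are assumptions for illustration.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)

# Panel whose first two principal components drive the target variable.
T, N, k = 200, 40, 10
X = rng.standard_normal((T, N))
Z = (X - X.mean(0)) / X.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
pcs = U * s                                   # orthogonal principal components
y = 1.5 * pcs[:, 0] - 1.0 * pcs[:, 1] + rng.standard_normal(T)

# Regress y on the first k components; orthogonality makes the t-statistics
# independent, so multiple-testing corrections apply cleanly.
beta = pcs[:, :k].T @ y / s[:k] ** 2
resid = y - pcs[:, :k] @ beta
sigma = np.sqrt(resid @ resid / (T - k))
tstat = beta * s[:k] / sigma
pval = np.array([erfc(abs(t) / sqrt(2)) for t in tstat])   # normal approx.

# Benjamini-Hochberg: keep components whose sorted p-value clears the FDR line.
order = np.argsort(pval)
passed = pval[order] <= 0.05 * np.arange(1, k + 1) / k
cut = int(np.max(np.nonzero(passed)[0])) + 1 if passed.any() else 0
selected = sorted(order[:cut].tolist())
print("selected components:", selected)
```

Holm's method or eigenvalue-weighted p-values, the other two procedures compared in the chapter, would replace only the screening step at the end.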

Details

Dynamic Factor Models
Type: Book
ISBN: 978-1-78560-353-2

Book part
Publication date: 31 December 2010

Dominique Guégan and Patrick Rakotomarolahy

Abstract

Purpose – The purpose of this chapter is twofold: to forecast gross domestic product (GDP) using a nonparametric method, the multivariate k-nearest neighbors method, and to provide asymptotic properties for this method.

Methodology/approach – We consider monthly and quarterly macroeconomic variables, and to match the quarterly GDP, we estimate the missing monthly economic variables using the multivariate k-nearest neighbors method and parametric vector autoregressive (VAR) modeling. Then, linking these monthly macroeconomic variables through bridge equations, we produce nowcasts and forecasts of GDP.

Findings – Using the multivariate k-nearest neighbors method, we provide a forecast of the euro area monthly economic indicator and quarterly GDP that is better than the one obtained with a competing linear VAR model. We also establish the asymptotic normality of this k-nearest neighbors regression estimator for dependent time series, which yields confidence intervals for point forecasts.

Originality/value of chapter – We provide a new theoretical result for a nonparametric method and propose a novel methodology for forecasting using macroeconomic data.
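
A minimal univariate version of the k-nearest neighbors forecast: embed the series in lagged windows, find the k past windows closest to the current one, and average their one-step continuations. The nonlinear test series, embedding dimension, and k below are assumptions; the chapter's multivariate, mixed-frequency setting is richer.

```python
import numpy as np

rng = np.random.default_rng(4)

# Nonlinear autoregressive series: k-NN can pick up the regime structure.
T = 400
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.9 * abs(y[t - 1]) - 0.4 * y[t - 1] + 0.3 * rng.standard_normal()

def knn_forecast(series, m=2, k=10):
    """k-nearest neighbors forecast: average the one-step continuations of
    the k past length-m patterns closest to the current one."""
    emb = np.array([series[t - m:t] for t in range(m, len(series))])
    nxt = series[m:]
    query = series[-m:]
    d = np.linalg.norm(emb[:-1] - query, axis=1)   # exclude the newest window
    nn = np.argsort(d)[:k]
    return float(nxt[nn].mean())

fc = knn_forecast(y)
print(round(fc, 3))
```

The multivariate version used for interpolating missing monthly indicators applies the same pattern-matching idea to vectors of series rather than a single one.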

Details

Nonlinear Modeling of Economic and Financial Time-Series
Type: Book
ISBN: 978-0-85724-489-5

Details

Nonlinear Time Series Analysis of Business Cycles
Type: Book
ISBN: 978-0-44451-838-5

Book part
Publication date: 18 January 2022

Andreas Pick and Matthijs Carpay

Abstract

This chapter investigates the performance of different dimension reduction approaches for large vector autoregressions in multi-step-ahead forecasts. The authors consider factor-augmented VAR models using principal components and partial least squares, random subset regression, random projection, random compression, and estimation via LASSO and Bayesian VAR. The authors compare the accuracy of iterated and direct multi-step point and density forecasts. The comparison is based on macroeconomic and financial variables from the FRED-MD database. Our findings suggest that random subspace methods and LASSO estimation deliver the most precise forecasts.
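
Of the dimension-reduction devices compared above, random compression is the simplest to sketch: replace the N predictors with k random linear combinations and estimate a small OLS forecasting regression. The sparse data generating process and all dimensions below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Many predictors, modest sample: compress before estimating.
T, N, k = 120, 80, 8
X = rng.standard_normal((T, N))
beta_true = np.zeros(N)
beta_true[:5] = 0.5                      # only a few predictors matter
y = np.empty(T)
y[0] = 0.0
y[1:] = X[:-1] @ beta_true + rng.standard_normal(T - 1)

def random_compression_forecast(y, X, k, seed=0):
    """Compress N predictors into k random linear combinations, then run a
    one-step-ahead OLS forecasting regression in the compressed space."""
    g = np.random.default_rng(seed)
    P = g.standard_normal((X.shape[1], k)) / np.sqrt(k)
    Z = X @ P                            # compressed predictors
    W = np.column_stack([np.ones(len(y) - 1), Z[:-1]])
    b, *_ = np.linalg.lstsq(W, y[1:], rcond=None)
    return float(np.append(1.0, Z[-1]) @ b)

fc = random_compression_forecast(y, X, k)
print(round(fc, 3))
```

Random subset regression and random projection differ only in how the compression matrix is drawn; averaging the forecast over many random draws of the projection is the usual refinement.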

Details

Essays in Honor of M. Hashem Pesaran: Prediction and Macro Modeling
Type: Book
ISBN: 978-1-80262-062-7

Book part
Publication date: 1 July 2015

Nikolay Markov

Abstract

This chapter investigates the predictability of European monetary policy through the eyes of professional forecasters from a large investment bank. The analysis is based on forward-looking Actual and Perceived Taylor Rules for the European Central Bank, estimated in real time using a newly constructed database for the period April 2000–November 2009. The former policy rule is based on the actual refi rate set by the Governing Council, while the latter is estimated for the bank’s economists using their main point forecast for the upcoming refi rate decision as the dependent variable. The empirical evidence shows that the path of the refi rate is broadly well predicted by the professional forecasters, although they anticipated rate increases more accurately than cuts. Second, the results point to an increasing responsiveness of the ECB to macroeconomic fundamentals along the forecast horizon. Third, the rolling window regressions suggest that the estimated coefficients changed after the bankruptcy of Lehman Brothers in October 2008: the ECB responded less strongly to macroeconomic fundamentals, and the degree of policy inertia decreased. A sensitivity analysis shows that the baseline results are robust to applying a recursive window methodology, and some of the findings are qualitatively unaltered when Consensus Economics forecasts are used in the regressions.
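
A forward-looking Taylor rule with interest-rate smoothing, of the general form estimated in such exercises, can be illustrated on simulated data: regress the policy rate on its own lag and on the inflation and output-gap forecasts, then unwind the smoothing to recover the long-run responses. All parameter values below are assumptions, not estimates from the chapter.

```python
import numpy as np

rng = np.random.default_rng(6)

# Simulated expected inflation, expected output gap, and a smoothed policy rate.
T = 200
pi_e = 2.0 + 0.5 * rng.standard_normal(T)      # inflation forecast
gap_e = 0.8 * rng.standard_normal(T)           # output-gap forecast
rho, alpha, beta, gamma = 0.8, 1.0, 1.5, 0.5   # assumed true parameters
i = np.zeros(T)
for t in range(1, T):
    target = alpha + beta * pi_e[t] + gamma * gap_e[t]
    i[t] = rho * i[t - 1] + (1 - rho) * target + 0.1 * rng.standard_normal()

# OLS on i_t = c + rho*i_{t-1} + b1*pi_e_t + b2*gap_e_t, then unwind smoothing.
X = np.column_stack([np.ones(T - 1), i[:-1], pi_e[1:], gap_e[1:]])
coef, *_ = np.linalg.lstsq(X, i[1:], rcond=None)
rho_hat = coef[1]
beta_hat = coef[2] / (1 - rho_hat)             # long-run inflation response
print("smoothing:", round(rho_hat, 2), "inflation response:", round(beta_hat, 2))
```

The Actual versus Perceived distinction amounts to swapping the dependent variable: the refi rate set by the Governing Council versus the bank's point forecast of it.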

Details

Monetary Policy in the Context of the Financial Crisis: New Challenges and Lessons
Type: Book
ISBN: 978-1-78441-779-6

Book part
Publication date: 29 February 2008

Todd E. Clark and Michael W. McCracken

Abstract

Small-scale VARs are widely used in macroeconomics for forecasting US output, prices, and interest rates. However, recent work suggests these models may exhibit instabilities. As such, a variety of estimation or forecasting methods might be used to improve their forecast accuracy. These include using different observation windows for estimation, intercept correction, time-varying parameters, break dating, Bayesian shrinkage, model averaging, etc. This paper compares the effectiveness of such methods in real-time forecasting. We use forecasts from univariate time series models, the Survey of Professional Forecasters, and the Federal Reserve Board's Greenbook as benchmarks.
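
The simplest of the devices listed — shortening the estimation window after a suspected break — can be demonstrated directly: with a mid-sample break in an AR(1) coefficient, a short rolling window tracks the new regime while the full-sample estimate blends the two. The break date, coefficients, and window length below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)

# AR(1) whose persistence breaks halfway through the sample.
T = 300
y = np.zeros(T)
for t in range(1, T):
    phi = 0.9 if t < 150 else 0.2
    y[t] = phi * y[t - 1] + rng.standard_normal()

def ar1_coef(seg):
    """OLS AR(1) coefficient on a sample segment."""
    x, z = seg[:-1], seg[1:]
    return float((x @ z) / (x @ x))

# Full-sample estimate blends the two regimes; a short window tracks
# the post-break coefficient more closely.
full = ar1_coef(y)
rolling = ar1_coef(y[-60:])                    # last 60 observations only
print("full sample:", round(full, 2), "rolling window:", round(rolling, 2))
```

Intercept correction, time-varying parameters, and Bayesian shrinkage are alternative responses to the same instability, traded off against the variance cost of discarding data.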

Details

Forecasting in the Presence of Structural Breaks and Model Uncertainty
Type: Book
ISBN: 978-1-84950-540-6

Book part
Publication date: 30 August 2019

Gary Koop and Luca Onorante

Abstract

Many recent chapters have investigated whether data from internet search engines such as Google can help improve nowcasts or short-term forecasts of macroeconomic variables. These chapters construct variables based on Google searches and use them as explanatory variables in regression models. We add to this literature by nowcasting using dynamic model selection (DMS) methods which allow for model switching between time-varying parameter regression models. This is potentially useful in an environment of coefficient instability and over-parameterization which can arise when forecasting with Google variables. We extend the DMS methodology by allowing for the model switching to be controlled by the Google variables through what we call “Google probabilities”: instead of using Google variables as regressors, we allow them to determine which nowcasting model should be used at each point in time. In an empirical exercise involving nine major monthly US macroeconomic variables, we find DMS methods to provide large improvements in nowcasting. Our use of Google model probabilities within DMS often performs better than conventional DMS methods.
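
The core of dynamic model selection can be sketched with two candidate models and a forgetting factor: model probabilities are updated with each observation's likelihood, discounted so that old evidence fades, and the top-probability model is used at each date. The two constant-mean "models", the mid-sample break, and the forgetting factor are assumptions for illustration; the chapter's Google-probability extension replaces this likelihood-based switching with probabilities driven by search data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Data whose mean shifts halfway through; DMS should switch models after it.
T = 200
y = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(3.0, 1.0, 100)])
model_means = np.array([0.0, 3.0])            # two candidate "models"
alpha = 0.95                                  # forgetting factor

prob = np.array([0.5, 0.5])
chosen = []
for t in range(T):
    chosen.append(int(np.argmax(prob)))       # DMS: use the top model today
    like = np.exp(-0.5 * (y[t] - model_means) ** 2)   # Gaussian likelihoods
    prob = prob ** alpha * like               # discounted Bayesian update
    prob /= prob.sum()

print("model used at t=50:", chosen[50], " at t=150:", chosen[150])
```

Dynamic model averaging would weight the models' forecasts by `prob` instead of picking the argmax.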

Details

Topics in Identification, Limited Dependent Variables, Partial Observability, Experimentation, and Flexible Modeling: Part A
Type: Book
ISBN: 978-1-78973-241-2
