Forecasting in the Presence of Structural Breaks and Model Uncertainty: Volume 3

Table of contents (24 chapters)

This series is aimed at economists and financial economists worldwide and will provide an in-depth look at current global topics. Each volume in the series will focus on a specialized topic for greater understanding of the chosen subject and will provide a detailed discussion of emerging issues. The target audiences are professional researchers, graduate students, and policy makers. The series will offer cutting-edge views on new horizons and deepen understanding of these emerging topics.

We thank the Simon Center for Regional Forecasting at the John Cook School of Business at Saint Louis University – especially Jack Strauss, Director of the Simon Center, and Ellen Harshman, Dean of the Cook School – for its generosity and hospitality in hosting a conference during the summer of 2006 where many of the chapters appearing in this volume were presented. The conference provided a forum for discussing many important issues relating to forecasting in the presence of structural breaks and model uncertainty, and participants viewed the conference as helping to significantly improve the quality of the research appearing in the chapters of this volume. This volume is part of Elsevier's new series, Frontiers of Economics and Globalization, and we also thank Hamid Beladi for his support as an Editor of the series.

In recent work, we have developed a theory of economic forecasting for empirical econometric models when there are structural breaks. This research shows that well-specified models may forecast poorly, whereas it is possible to design forecasting devices that are more robust to the effects of breaks. In this chapter, we summarise key aspects of that theory, describe the models and data, and then provide an empirical illustration of some of these developments when the goal is to generate sequences of inflation forecasts over a long historical period, starting with the model of annual inflation in the UK over 1875–1991 in Hendry (2001a).

Structural models' inflation forecasts are often inferior to those of naïve devices. This chapter assesses this theoretically and empirically for UK annual and quarterly inflation, using the theoretical framework in Clements and Hendry (1998, 1999). Forecasts from equilibrium-correction mechanisms, built by automatic model selection, are compared to various robust devices. Forecast-error taxonomies for aggregated and time-disaggregated information reveal that the impacts of structural breaks are identical across the two information sets, which helps to interpret the empirical findings. Forecast failures in structural models are driven by their deterministic terms, confirming location shifts as a pernicious cause of such failure and explaining the success of robust devices.
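
To see why location shifts cause forecast failure in models anchored on an estimated equilibrium mean while "no change" devices survive them, consider the following minimal simulation. It is an illustration, not the chapter's own models: an AR(1) for inflation whose mean shifts mid-sample, with all parameter values chosen arbitrarily.

```python
import numpy as np

rng = np.random.default_rng(0)

# AR(1) inflation around a mean that shifts from 2 to 5 at t = 150:
# pi_t = mu_t + rho * (pi_{t-1} - mu_{t-1}) + eps_t
T, rho = 200, 0.7
mu = np.where(np.arange(T) < 150, 2.0, 5.0)
pi = np.empty(T)
pi[0] = mu[0]
for t in range(1, T):
    pi[t] = mu[t] + rho * (pi[t - 1] - mu[t - 1]) + rng.normal(scale=0.5)

# "Structural" forecast: equilibrium correction toward the pre-break mean
mu_hat = pi[:150].mean()
f_struct = mu_hat + rho * (pi[150:-1] - mu_hat)

# Robust device: random-walk ("no change") forecast adapts after one period
f_robust = pi[150:-1]

actual = pi[151:]
print("RMSE structural:", float(np.sqrt(np.mean((actual - f_struct) ** 2))))
print("RMSE robust    :", float(np.sqrt(np.mean((actual - f_robust) ** 2))))
```

After the break, the structural forecast keeps correcting toward the obsolete mean, so its errors are systematically biased, while the robust device tracks the new location within one period.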

Small-scale VARs are widely used in macroeconomics for forecasting US output, prices, and interest rates. However, recent work suggests these models may exhibit instabilities. As such, a variety of estimation or forecasting methods might be used to improve their forecast accuracy. These include using different observation windows for estimation, intercept correction, time-varying parameters, break dating, Bayesian shrinkage, model averaging, etc. This paper compares the effectiveness of such methods in real-time forecasting. We use forecasts from univariate time series models, the Survey of Professional Forecasters, and the Federal Reserve Board's Greenbook as benchmarks.
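
As a minimal sketch of the window-choice issue (not the paper's real-time multivariate setup), the following compares recursive (expanding) and rolling estimation of an AR(1) whose intercept shifts mid-sample; the window length and break design are arbitrary choices for illustration.

```python
import numpy as np

def ar1_forecasts(y, scheme="recursive", window=40):
    """One-step AR(1) forecasts under a recursive (expanding)
    or rolling estimation window."""
    preds, actuals = [], []
    for t in range(window, len(y) - 1):
        start = 0 if scheme == "recursive" else t - window
        ys = y[start:t + 1]
        X = np.column_stack([np.ones(len(ys) - 1), ys[:-1]])
        beta = np.linalg.lstsq(X, ys[1:], rcond=None)[0]   # OLS fit
        preds.append(beta[0] + beta[1] * y[t])
        actuals.append(y[t + 1])
    return np.array(preds), np.array(actuals)

rng = np.random.default_rng(1)
# AR(1) whose intercept shifts mid-sample, so shorter windows can help
y = np.empty(200)
y[0] = 0.0
for t in range(1, 200):
    y[t] = (0.0 if t < 120 else 1.5) + 0.6 * y[t - 1] + rng.normal()

for scheme in ("recursive", "rolling"):
    p, a = ar1_forecasts(y, scheme)
    print(scheme, "RMSE:", round(float(np.sqrt(np.mean((a - p) ** 2))), 3))
```

The trade-off the paper studies is visible even here: the rolling window discards pre-break data faster but pays a variance cost in stable periods.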

We conduct a detailed simulation study of the forecasting performance of diffusion index-based methods in short samples with structural change. We consider several data generation processes to mimic different types of structural change and compare the relative forecasting performance of factor models and more traditional time series methods. We find that changes in the loadings that relate the factors to the variables of interest are extremely important in determining the performance of factor models. We complement the analysis with an empirical evaluation of forecasts for key macroeconomic variables of the Euro area and Slovenia, for which only relatively short samples are officially available and structural changes are likely. The results are coherent with the findings of the simulation exercise and confirm the relatively good performance of factor-based forecasts in short samples with structural change.
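
A bare-bones version of the two-step diffusion-index forecast: factors are extracted by principal components from a panel of predictors, and the target is regressed on lagged factors. The function, panel dimensions, and data below are illustrative, not the chapter's setup.

```python
import numpy as np

def diffusion_index_forecast(X, y, n_factors=2):
    """Two-step diffusion-index forecast: PCA factors from the panel X,
    then OLS of y on lagged factors."""
    Xs = (X - X.mean(0)) / X.std(0)                 # standardize the panel
    U, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = U[:, :n_factors] * S[:n_factors]            # estimated factors
    Z = np.column_stack([np.ones(len(y) - 1), F[:-1]])  # lagged factors
    beta = np.linalg.lstsq(Z, y[1:], rcond=None)[0]
    return beta[0] + F[-1] @ beta[1:]               # one-step-ahead forecast

rng = np.random.default_rng(2)
T, N = 120, 30
f = rng.normal(size=(T, 2))                         # two latent factors
lam = rng.normal(size=(2, N))                       # factor loadings
X = f @ lam + rng.normal(scale=0.5, size=(T, N))    # observed panel
y = 0.8 * f[:, 0] + rng.normal(scale=0.3, size=T)   # target loads on factor 1
print("forecast of y_{T+1}:", round(float(diffusion_index_forecast(X, y)), 3))
```

The chapter's key finding maps onto this sketch directly: if the loadings `lam` shift over the sample, the estimated factor space no longer matches the target's loading, and the forecast regression degrades.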

In this chapter we discuss model selection and predictive accuracy tests in the context of parameter and model uncertainty under recursive and rolling estimation schemes. We begin by summarizing some recent theoretical findings, with particular emphasis on the construction of valid bootstrap procedures for calculating the impact of parameter estimation error. We then discuss the Corradi and Swanson (2002) (CS) test of (non)linear out-of-sample Granger causality. Thereafter, we carry out a series of Monte Carlo experiments examining the properties of the CS and a variety of other related predictive accuracy and model selection type tests. Finally, we present the results of an empirical investigation of the marginal predictive content of money for income, in the spirit of Stock and Watson (1989), Swanson (1998) and Amato and Swanson (2001).
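
A simplified stand-in for such bootstrap procedures (not the CS test itself): a Diebold–Mariano-type comparison of two forecast loss series, with a circular block bootstrap used to obtain the p-value. Block length and sample sizes are arbitrary.

```python
import numpy as np

def block_bootstrap_dm(loss1, loss2, block=10, n_boot=999, seed=0):
    """Circular block bootstrap p-value for H0: equal expected loss.
    A simplified stand-in for the bootstrap procedures in the chapter."""
    rng = np.random.default_rng(seed)
    d = loss1 - loss2
    n, stat = len(d), float((loss1 - loss2).mean())
    boots = np.empty(n_boot)
    for b in range(n_boot):
        idx = np.concatenate([(rng.integers(n) + np.arange(block)) % n
                              for _ in range(n // block + 1)])[:n]
        boots[b] = d[idx].mean() - stat        # center at the sample mean
    return stat, float(np.mean(np.abs(boots) >= abs(stat)))  # two-sided p

rng = np.random.default_rng(3)
e1 = rng.normal(size=200) ** 2           # squared forecast errors, model 1
e2 = (1.2 * rng.normal(size=200)) ** 2   # model 2 is noisier
stat, p = block_bootstrap_dm(e1, e2)
print("mean loss differential:", round(stat, 3), "p-value:", p)
```

The blocks preserve serial dependence in the loss differential, which is the basic reason naive i.i.d. resampling is invalid in out-of-sample comparisons.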

In contrast to recent forecasting developments, “Old School” forecasting techniques, such as exponential smoothing and the Box–Jenkins methodology, do not attempt to explicitly model or estimate breaks in a time series. Adherents of the “New School” methodology argue that once breaks are well estimated, it is possible to control for regime shifts when forecasting. We compare the forecasts of monthly unemployment rates in 10 OECD countries using various Old School and New School methods. Although each method seems to have drawbacks and no one method dominates the others, the Old School methods often outperform the New School methods for forecasting the unemployment rates.
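
An "Old School" baseline takes only a few lines: simple exponential smoothing, whose one-step forecast is the current smoothed level. The unemployment numbers and smoothing constant below are made up for illustration.

```python
import numpy as np

def exp_smooth_forecast(y, alpha=0.3):
    """Simple exponential smoothing:
    level_t = alpha * y_t + (1 - alpha) * level_{t-1};
    the one-step-ahead forecast is the current level."""
    level = y[0]
    for obs in y[1:]:
        level = alpha * obs + (1 - alpha) * level
    return level

# Hypothetical monthly unemployment rates (percent)
u = np.array([5.1, 5.2, 5.0, 5.3, 5.6, 5.8, 6.1, 6.0, 6.2, 6.4])
print("next-month forecast:", round(float(exp_smooth_forecast(u)), 2))
```

Because the level is a geometrically weighted average of the recent past, the forecast adapts to a regime shift automatically, without the shift ever being estimated, which is one intuition for the Old School methods' resilience.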

The empirical properties of benchmark revisions to key US macroeconomic aggregates are examined. Whether revisions represent news or noise is interpreted via the cointegration properties of successive benchmark vintages. Cointegration breaks down in the last two years before a benchmark revision; hence, we conclude that benchmark revisions have some information content. This last point is illustrated by showing that inflation forecasts can be improved by adding a time series that reflects benchmark revisions to real GDP. Standard backward- and forward-looking Phillips curves are used to explore the statistical significance of benchmark revisions.
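
A stylized version of the cointegration check on simulated vintages (the chapter uses actual US benchmark revisions and more careful ADF regressions): an Engle–Granger two-step with a lag-free Dickey–Fuller regression on the residuals.

```python
import numpy as np

def engle_granger_t(x, y):
    """Two-step Engle-Granger check: regress y on x, then run a lag-free
    Dickey-Fuller regression on the residuals (illustrative only)."""
    X = np.column_stack([np.ones_like(x), x])
    u = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    du, u1 = np.diff(u), u[:-1]
    rho = (u1 @ du) / (u1 @ u1)
    se = np.sqrt(np.sum((du - rho * u1) ** 2) / (len(du) - 1) / (u1 @ u1))
    return rho / se    # strongly negative values indicate cointegration

rng = np.random.default_rng(4)
early = np.cumsum(rng.normal(size=300))                  # first-release vintage
final = 1.01 * early + rng.normal(scale=0.5, size=300)   # revised vintage
print("Engle-Granger t-statistic:", round(float(engle_granger_t(early, final)), 2))
```

If successive vintages cointegrate, revisions are stationary "noise" around a common trend; a breakdown of cointegration, as the chapter finds before benchmark dates, signals that revisions carry news.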

In this chapter, we outline the statistical consequences of neglecting structural breaks and regime switches in autoregressive and GARCH models and propose two strategies to approach the problem. The first strategy is to identify regimes of constant unconditional volatility using a change-point detector and to estimate a separate GARCH model on each of the resulting segments. The second is to use a multiple-regime GARCH model, such as the Flexible Coefficient GARCH (FCGARCH) specification, in which the regime switches are governed by an observable variable. We apply both alternatives to an array of financial time series and compare their forecast performance.
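
A sketch of the first strategy under simplifying assumptions: a single-break version of the cumulative-sum-of-squares detector (the full ICSS algorithm applies this iteratively), followed by a separate GARCH(1,1) fit on each segment using the third-party arch package. The simulated series and break location are illustrative.

```python
import numpy as np
from arch import arch_model  # pip install arch

def cusum_sq_break(r):
    """Locate a single change point in unconditional variance via the
    centered cumulative sum of squares (the building block of ICSS)."""
    c = np.cumsum(r ** 2)
    k = np.arange(1, len(r) + 1)
    D = c / c[-1] - k / len(r)
    t_star = int(np.argmax(np.abs(D)))
    stat = np.sqrt(len(r) / 2) * np.abs(D[t_star])
    return t_star, stat      # stat > 1.358 rejects constant variance (5%)

rng = np.random.default_rng(5)
r = np.concatenate([rng.normal(scale=1.0, size=500),
                    rng.normal(scale=2.5, size=500)])
t_star, stat = cusum_sq_break(r)
print("break at", t_star, "statistic", round(float(stat), 2))

# Strategy 1: fit a separate GARCH(1,1) on each constant-variance segment
for seg in (r[:t_star], r[t_star:]):
    res = arch_model(seg, vol="GARCH", p=1, q=1).fit(disp="off")
    print(res.params[["omega", "alpha[1]", "beta[1]"]].round(3))
```

Fitting one GARCH model across the break would instead absorb the level shift into spuriously high persistence, which is exactly the neglected-break consequence the chapter outlines.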

This paper compares the out-of-sample forecasting performance of three long-memory volatility models (fractionally integrated (FI), break, and regime-switching) against three short-memory models (GARCH, GJR, and volatility component). Using S&P 500 returns, we find that structural break models produce the best out-of-sample forecasts if future volatility breaks are known. Without knowledge of future breaks, GJR models produce the best short-horizon forecasts, while FI models dominate for volatility forecasts of 10 days and beyond. The results suggest that S&P 500 volatility is non-stationary in at least some time periods. Controlling for extreme events (e.g., the 1987 crash) significantly improves forecasting performance.
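
For intuition on the long-memory side of this comparison, the weights of the fractional differencing operator (1 − L)^d decay hyperbolically rather than exponentially, which is why FI models can dominate at horizons of 10 days and beyond; a quick illustration of those weights:

```python
import numpy as np

def frac_diff_weights(d, n):
    """First n weights of the fractional differencing operator (1 - L)^d,
    the core of FI long-memory volatility models."""
    w = [1.0]
    for k in range(1, n):
        w.append(-w[-1] * (d - k + 1) / k)   # binomial-expansion recursion
    return np.array(w)

# Hyperbolic decay: distant shocks keep non-negligible weight, unlike the
# exponential decay implied by a short-memory GARCH model.
print(frac_diff_weights(0.4, 8).round(3))
```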

We examine the role of structural breaks in forecasting stock return volatility. We begin by testing for structural breaks in the unconditional variance of daily returns for the S&P 500 market index and ten sectoral stock indices for 9/12/1989–1/19/2006 using an iterative cumulative sum of squares procedure. We find evidence of multiple variance breaks in almost all of the return series, indicating that structural breaks are an empirically relevant feature of return volatility. We then undertake an out-of-sample forecasting exercise to analyze how instabilities in unconditional variance affect the forecasting performance of asymmetric volatility models, focusing on procedures that employ a variety of estimation window sizes designed to accommodate potential structural breaks. The exercise demonstrates that structural breaks present important challenges to forecasting stock return volatility. We find that averaging across volatility forecasts generated by individual forecasting models estimated using different window sizes performs well in many cases and appears to offer a useful approach to forecasting stock return volatility in the presence of structural breaks.
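
A toy version of the window-averaging idea: combine rolling-variance forecasts computed over several window lengths instead of committing to a single window whose suitability depends on an unknown break date. The window lengths and simulated break below are arbitrary.

```python
import numpy as np

def window_average_forecast(r2, windows=(63, 126, 252, 504)):
    """Average a simple variance forecast (a rolling mean of squared
    returns) over several window sizes, rather than betting on one
    window calibrated to an unknown break date."""
    forecasts = [r2[-w:].mean() for w in windows if len(r2) >= w]
    return float(np.mean(forecasts))

rng = np.random.default_rng(6)
# Volatility doubles 100 observations before the forecast origin
r = np.concatenate([rng.normal(scale=1.0, size=900),
                    rng.normal(scale=2.0, size=100)])
print("combined variance forecast:", round(window_average_forecast(r ** 2), 3))
print("true post-break variance  :", 4.0)
```

Short windows adapt to the break but are noisy; long windows are stable but contaminated by pre-break data; averaging hedges between the two, which is the intuition behind the combination results in the chapter.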

We extend earlier work on the NoVaS transformation approach introduced by Politis (2003a, 2003b). The proposed approach is model-free and especially relevant when making forecasts in the context of model uncertainty and structural breaks. We introduce a new implied distribution in the context of NoVaS, a number of additional methods for implementing NoVaS, and we examine the relative forecasting performance of NoVaS for making volatility predictions using real and simulated time series. We pay particular attention to data-generating processes with varying coefficients and structural breaks. Our results clearly indicate that the NoVaS approach outperforms GARCH model forecasts in all cases we examined, except (as expected) when the data-generating process is itself a GARCH model.
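
A minimal sketch of the simple NoVaS studentization with equal weights (the chapter develops richer weight schemes and implied distributions): each return is divided by a weighted norm of its own recent squared values, taming the tails without fitting a parametric volatility model. The weight choice and sample below are illustrative.

```python
import numpy as np

def novas_transform(x, p=10, alpha=0.0):
    """Simple NoVaS studentization with equal weights on the current and
    p lagged squared values (weights sum to one); alpha optionally mixes
    in the trailing sample variance."""
    a = np.full(p + 1, 1.0 / (p + 1))
    w = np.empty(len(x) - p)
    for t in range(p, len(x)):
        denom = alpha * x[:t].var() + a @ (x[t - p:t + 1][::-1] ** 2)
        w[t - p] = x[t] / np.sqrt(denom)
    return w

rng = np.random.default_rng(7)
x = rng.standard_t(df=4, size=1000) * np.sqrt(0.5)   # heavy-tailed returns
w = novas_transform(x)
# Kurtosis about zero, before and after the transformation
print("kurtosis before:", round(float(np.mean(x**4) / np.mean(x**2)**2), 2))
print("kurtosis after :", round(float(np.mean(w**4) / np.mean(w**2)**2), 2))
```

Because the current squared return enters its own normalization, the transformed series is bounded and close to Gaussian, which is what makes the approach model-free and robust to breaks in the volatility dynamics.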

We propose a new discrete-time model of returns in which jumps capture persistence in the conditional variance and higher-order moments. Jump arrival is governed by a heterogeneous Poisson process. The intensity is directed by a latent stochastic autoregressive process, while the jump-size distribution allows for conditional heteroskedasticity. Model evaluation focuses on the dynamics of the conditional distribution of returns using density and variance forecasts. Predictive likelihoods provide a period-by-period comparison of the performance of our heterogeneous jump model relative to conventional SV and GARCH models. Furthermore, in contrast to previous studies on the importance of jumps, we utilize realized volatility to assess out-of-sample variance forecasts.
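
A simulation sketch of the mechanism, with arbitrary parameter values and a deliberately simpler structure than the chapter's model: jump counts are Poisson with an intensity driven by a latent AR(1) in logs, so jumps cluster over time and fatten the tails of returns.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 1000
lam = np.empty(T)
r = np.empty(T)

# Latent log-intensity follows an AR(1); jump counts are Poisson(lam_t);
# jump sizes are Gaussian with a negative mean (crash-like jumps).
h = 1.5
for t in range(T):
    h = 0.3 + 0.8 * h + 0.2 * rng.normal()   # autoregressive log-intensity
    lam[t] = np.exp(h)
    jumps = rng.normal(loc=-0.1, scale=0.8, size=rng.poisson(lam[t])).sum()
    r[t] = rng.normal(scale=0.5) + jumps     # diffusive noise + jump part

print("mean jump intensity:", round(float(lam.mean()), 2))
print("excess kurtosis of returns:",
      round(float(np.mean(r**4) / np.mean(r**2)**2 - 3), 2))
```

Persistence in the latent intensity is what lets jumps, rather than a GARCH variance, carry the persistence in the conditional distribution of returns.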

Bagging (bootstrap aggregating) is a smoothing method for improving predictive ability in the presence of parameter estimation uncertainty and model uncertainty. In Lee and Yang (2006), we examined how (equal-weighted and BMA-weighted) bagging works for one-step-ahead binary prediction with an asymmetric cost function for time series, considering simple cases with a particular linlin tick loss function and an algorithm to estimate a linear quantile regression model. In the present chapter, we examine how bagging predictors work with different aggregating (averaging) schemes, for multi-step forecast horizons, with a general class of tick loss functions, with different estimation algorithms, for nonlinear quantile regression models, and for different data frequencies. Bagging quantile predictors are constructed via (weighted) averaging over predictors trained on bootstrapped training samples, and bagging binary predictors are conducted via (majority) voting on predictors trained on the bootstrapped training samples. We find that median bagging and trimmed-mean bagging can alleviate the problem of extreme predictors from bootstrap samples and perform better than equally weighted bagging predictors; that bagging works better at longer forecast horizons; and that bagging works well with highly nonlinear quantile regression models (e.g., artificial neural networks) and with general tick loss functions. We also find that the performance of bagging may be affected by the choice of quantile estimation algorithm (in small samples, even if estimation is consistent) and by the frequency of the time series data.
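
A stripped-down illustration of bagging a quantile predictor, with a plain empirical quantile standing in for the chapter's quantile-regression learners: re-estimate on bootstrap resamples, aggregate by median, and score with the tick loss. Sample sizes and the target quantile are arbitrary.

```python
import numpy as np

def tick_loss(e, tau):
    """Asymmetric 'tick' (linlin) loss for quantile forecast errors e."""
    return float(np.mean((tau - (e < 0)) * e))

def bagged_quantile(y, tau=0.9, n_boot=100, agg="median", seed=0):
    """Bagging a quantile estimate: recompute the empirical quantile on
    bootstrap resamples, then aggregate by median (or mean). A stand-in
    for the quantile-regression learners studied in the chapter."""
    rng = np.random.default_rng(seed)
    preds = [np.quantile(rng.choice(y, size=len(y)), tau)
             for _ in range(n_boot)]
    return float(np.median(preds) if agg == "median" else np.mean(preds))

rng = np.random.default_rng(9)
y = rng.standard_t(df=3, size=60)           # small, heavy-tailed sample
test = rng.standard_t(df=3, size=10_000)    # out-of-sample draws

q_plain = float(np.quantile(y, 0.9))
q_bag = bagged_quantile(y)
print("tick loss, plain quantile :", round(tick_loss(test - q_plain, 0.9), 4))
print("tick loss, median-bagged  :", round(tick_loss(test - q_bag, 0.9), 4))
```

Median aggregation is exactly the device the chapter credits with damping extreme predictors thrown up by individual bootstrap samples.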

This paper investigates forecasting US Treasury bond and Dollar Eurocurrency rates using the stochastic unit root (STUR) model of Leybourne et al. (1996), and the stochastic cointegration (SC) model of Harris et al. (2002, 2006). Both models have time-varying parameter representations and are conceptually attractive for modelling interest rates as both allow for conditional heteroscedasticity. I find that for many of the series considered STUR and SC models generate statistically significant gains in out-of-sample forecasting accuracy relative to simple orthodox models. The results obtained highlight the usefulness of these extensions and raise some issues for future research.
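
A quick simulation of a STUR-type process, with the coefficient volatility chosen arbitrarily: the autoregressive root fluctuates randomly around unity, which induces precisely the kind of conditional heteroscedasticity the abstract points to.

```python
import numpy as np

rng = np.random.default_rng(10)
T = 500

# Stochastic unit root: y_t = (1 + delta_t) * y_{t-1} + eps_t, where the
# root 1 + delta_t varies randomly around unity (E[delta_t] = 0), so the
# process is a unit root "on average" but locally stable or explosive.
delta = 0.02 * rng.normal(size=T)
eps = rng.normal(size=T)
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = (1.0 + delta[t]) * y[t - 1] + eps[t]

# Increments inherit conditional heteroscedasticity from delta_t, since
# var(dy_t | y_{t-1}) rises with y_{t-1}^2:
dy = np.diff(y)
print("corr(|dy_t|, |y_{t-1}|):",
      round(float(np.corrcoef(np.abs(dy), np.abs(y[:-1]))[0, 1]), 2))
```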

This chapter develops a return forecasting methodology that allows for instability in the relationship between stock returns and predictor variables, model uncertainty, and parameter estimation uncertainty. The predictive regression specification that is put forward allows for occasional structural breaks of random magnitude in the regression parameters, uncertainty about the inclusion of forecasting variables, and uncertainty about parameter values by employing Bayesian model averaging. The implications of these three sources of uncertainty and their relative importance are investigated from an active investment management perspective. It is found that the economic value of incorporating all three sources of uncertainty is considerable. A typical investor would be willing to pay up to several hundreds of basis points annually to switch from a passive buy-and-hold strategy to an active strategy based on a return forecasting model that allows for model and parameter uncertainty as well as structural breaks in the regression parameters.
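
A BIC-approximated sketch of Bayesian model averaging over predictor subsets, addressing only the model-uncertainty ingredient; the chapter's Bayesian treatment of breaks and parameter uncertainty is far more elaborate, and all data and names below are illustrative.

```python
import numpy as np
from itertools import combinations

def bma_forecast(X, y, x_new):
    """BIC-approximated Bayesian model averaging over all predictor
    subsets: each OLS model's forecast is weighted by exp(-BIC/2)."""
    n, k = X.shape
    fors, bics = [], []
    for size in range(k + 1):
        for subset in combinations(range(k), size):
            Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
            beta = np.linalg.lstsq(Z, y, rcond=None)[0]
            rss = float(np.sum((y - Z @ beta) ** 2))
            bics.append(n * np.log(rss / n) + Z.shape[1] * np.log(n))
            fors.append(np.concatenate([[1.0], x_new[list(subset)]]) @ beta)
    bics = np.array(bics)
    w = np.exp(-0.5 * (bics - bics.min()))   # stabilized posterior weights
    return float(np.array(fors) @ (w / w.sum()))

rng = np.random.default_rng(11)
X = rng.normal(size=(120, 4))                # stand-ins for predictors
y = 0.5 * X[:, 0] + rng.normal(size=120)     # only the first one matters
print("BMA return forecast:", round(bma_forecast(X, y, rng.normal(size=4)), 3))
```

Averaging over subsets means no single inclusion decision is ever bet on outright, which is what buys the economic gains relative to picking one specification.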

We address an interesting case – the predictability of excess US asset returns from macroeconomic factors within a flexible regime-switching VAR framework – in which the presence of regimes may lead to superior forecasting performance from forecast combinations. After documenting that forecast combinations provide gains in predictive accuracy and that these gains are statistically significant, we show that forecast combinations may substantially improve portfolio selection. We find that the best-performing forecast combinations are those that either avoid estimating the pooling weights or that minimize the need for estimation. In practice, we report that the best-performing combination schemes are based on the principle of relative past forecasting performance. The economic gains from combining forecasts in portfolio management applications appear to be large, stable over time, and robust to the introduction of realistic transaction costs.
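
A minimal example of a relative-past-performance combination scheme: inverse-MSE weighting needs only a short window of each model's past forecast errors, and requires far less estimation than optimal pooling weights. The forecasters, window length, and data below are all illustrative.

```python
import numpy as np

def combine_forecasts(F, y, window=20):
    """Inverse-MSE combination: weight each forecast (column of F) by the
    inverse of its mean squared error over the last `window` periods."""
    combo = np.full(len(y), np.nan)
    for t in range(window, len(y)):
        mse = np.mean((F[t - window:t] - y[t - window:t, None]) ** 2, axis=0)
        w = (1.0 / mse) / np.sum(1.0 / mse)
        combo[t] = F[t] @ w
    return combo

rng = np.random.default_rng(12)
T = 300
y = np.sin(np.arange(T) / 10) + rng.normal(scale=0.3, size=T)
# Three hypothetical forecasters of differing accuracy
F = np.column_stack([y + rng.normal(scale=s, size=T) for s in (0.2, 0.5, 1.0)])

c = combine_forecasts(F, y)
ok = ~np.isnan(c)
print("combined RMSE   :", round(float(np.sqrt(np.mean((c[ok] - y[ok]) ** 2))), 3))
print("best single RMSE:", round(float(np.sqrt(np.mean((F[ok, 0] - y[ok]) ** 2))), 3))
```

Because the weights depend only on recent realized accuracy, the scheme sidesteps the weight-estimation error that the chapter identifies as the main obstacle to more ambitious pooling methods.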

DOI: 10.1016/S1574-8715(2008)3
Publication date: 2008-02-29
Book series: Frontiers of Economics and Globalization
Editors: David E. Rapach and Mark E. Wohar
Series copyright holder: Emerald Publishing Limited
ISBN: 978-0-444-52942-8
eISBN: 978-1-84950-540-6
Book series ISSN: 1574-8715