Search results
1 – 10 of 988
T.J. Brailsford, J.H.W. Penm and R.D. Terrell
Abstract
Vector error-correction models (VECMs) have become increasingly important in their application to financial markets. Standard full-order VECM models assume non-zero entries in all their coefficient matrices. However, applications of VECM models to financial market data have revealed that zero entries are often a necessary part of efficient modelling. In such cases, the use of full-order VECM models may lead to incorrect inferences. Specifically, if indirect causality or Granger non-causality exists among the variables, the use of over-parameterised full-order VECM models may weaken the power of statistical inference. In this paper, it is argued that the zero–non-zero (ZNZ) patterned VECM is a more straightforward and effective means of testing for both indirect causality and Granger non-causality. For a ZNZ patterned VECM framework for time series integrated of order two, we provide a new algorithm to select cointegrating and loading vectors that can contain zero entries. Two case studies are used to demonstrate the usefulness of the algorithm in tests of purchasing power parity and a three-variable system involving the stock market.
Abstract
This study proposed an optimal model to examine the relationship between the Bitcoin price and macroeconomic variables in a six-variable system – the Bitcoin price, the Standard and Poor's 500 volatility index, the US Treasury 10-year yield, the US consumer price index, the gold price and the dollar index. It also examined the effectiveness of the vector error correction model (VECM) in analyzing the interrelationship among these variables. The authors employed the following approach: first, the authors sampled the period August 2010–February 2022. This is because Bitcoin achieved a market capitalization of more than US$1 tn over this period, gaining market attention and acceptance from retail, corporate and institutional investors. Second, the authors employed a VECM with the six variables. Finally, the authors expanded the long-run equilibrium relationship (time-invariant cointegration)-based VECM to develop a time-varying cointegration (TVC) VECM. The authors estimated the TVC VECM using the Chebyshev polynomial specification based on various information criteria. The results showed that the Bitcoin price can be modeled with the VECM (p = 1, r = 1). The TVC approach generated more explanatory power for Bitcoin pricing, indicating the effectiveness of the approach for modeling the long-run relationship between the Bitcoin price and macroeconomic variables.
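The TVC step rests on expanding the cointegrating vector in Chebyshev time polynomials (the specification the abstract mentions). A sketch of that basis, assuming a sample length of 140 observations and polynomial order 3 purely for illustration:

```python
# Sketch: Chebyshev time polynomials used to let a cointegrating vector
# vary smoothly over time. Sample length and order are illustrative.
import numpy as np

def chebyshev_time_poly(T, m):
    """Return a (T, m+1) matrix of Chebyshev time polynomials.
    Column 0 is the constant 1; column i is sqrt(2)*cos(i*pi*(t-0.5)/T)."""
    t = np.arange(1, T + 1)
    P = np.ones((T, m + 1))
    for i in range(1, m + 1):
        P[:, i] = np.sqrt(2.0) * np.cos(i * np.pi * (t - 0.5) / T)
    return P

P = chebyshev_time_poly(140, 3)
print(P.shape)
```

The columns are orthonormal in the sample (their cross-product divided by T is the identity), which is what makes information-criterion selection of the polynomial order tractable: each added column enters the time-varying cointegrating vector as an extra, uncorrelated regressor.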
T.J. Brailsford, J.H.W. Penm and R.D. Terrell
Abstract
Conventional methods to test for long-term PPP based on the theory of cointegration are typically undertaken in the framework of vector error correction models (VECM). The standard approach in the use of VECMs is to employ a model of full order, which assumes nonzero entries in all the coefficient matrices. However, the use of full-order VECM models may lead to incorrect inferences if zero entries are required in the coefficient matrices. Specifically, if we wish to test for indirect causality, instantaneous causality, or Granger non-causality, and employ “overparameterised” full-order VECM models that ignore entries assigned a priori to be zero, then the power of statistical inference is weakened and the resultant specifications can produce different conclusions concerning the cointegrating relationships among the variables. In this paper, an approach is presented that incorporates zero entries in the VECM analysis. This approach is a more straightforward and effective means of testing for causality and cointegrating relations. The paper extends prior work on PPP through an investigation of causality between the U.S. Dollar and the Japanese Yen. The results demonstrate the inconsistencies that can arise in the area and show that bi-directional feedback exists between prices, interest rates and the exchange rate such that adjustment mechanisms are complete within the context of PPP.
Abstract
Purpose
The existing literature on the Black-Litterman (BL) model does not offer adequate guidance on how to generate investors’ views in an objective manner. Therefore, the purpose of this paper is to establish a generalized multivariate Vector Error Correction Model (VECM)/Vector Auto-Regressive (VAR)-Dynamic Conditional Correlation (DCC)/Asymmetric DCC (ADCC) framework and apply it to generate objective views to improve the practicality of the BL model.
Design/methodology/approach
This paper establishes a generalized VECM/VAR-DCC/ADCC framework that can be utilized to model multivariate financial time series in general, and produce objective views as inputs to the BL model in particular. To test the VECM/VAR-DCC/ADCC preconditioned BL model’s practical utility, it is applied to a six-asset China portfolio (including one risk-free asset).
Findings
With dynamically optimized view confidence parameters, the VECM/VAR-DCC/ADCC preconditioned BL model offers a clear advantage over the standard mean-variance method, and provides an automated portfolio optimization alternative to the classic BL approach.
Originality/value
The VECM/VAR-DCC/ADCC framework and its application in the BL model proposed by this paper provide an alternative approach to the classic BL method. Since all the view parameters, including estimated mean return vectors, conditional covariance matrices and pick matrices, are generated in the VECM/VAR and DCC/ADCC preconditioning stage, the model improves the objectiveness of the inputs to the BL stage. In conclusion, the proposed model offers a practical choice for automated portfolio balancing and optimization in a China context.
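The BL stage itself combines equilibrium returns with the model-generated views (mean vector Q, pick matrix P, view covariance Ω). A minimal numpy sketch of the standard BL posterior; all numbers below are toy values, whereas in the paper the inputs come from the VECM/VAR and DCC/ADCC preconditioning stage:

```python
# Sketch of the standard Black-Litterman posterior. Toy inputs only:
# in the paper's framework pi, P, Q and Omega are produced by the
# VECM/VAR-DCC/ADCC stage rather than set by hand.
import numpy as np

def bl_posterior(pi, Sigma, P, Q, Omega, tau=0.05):
    """Posterior mean/covariance of returns given prior N(pi, tau*Sigma)
    and views Q = P @ mu + eps with eps ~ N(0, Omega)."""
    tS_inv = np.linalg.inv(tau * Sigma)
    O_inv = np.linalg.inv(Omega)
    M = np.linalg.inv(tS_inv + P.T @ O_inv @ P)     # posterior covariance
    mu = M @ (tS_inv @ pi + P.T @ O_inv @ Q)        # posterior mean
    return mu, M

pi = np.array([0.05, 0.04])                    # equilibrium returns (toy)
Sigma = np.array([[0.04, 0.01], [0.01, 0.03]])
P_pick = np.array([[1.0, -1.0]])               # one relative view
Q = np.array([0.02])                           # asset 0 outperforms by 2%
Omega = np.array([[0.001]])
mu_bl, _ = bl_posterior(pi, Sigma, P_pick, Q, Omega)
print(mu_bl)
```

The posterior return spread lands between the prior spread (1%) and the view (2%), weighted by the relative precisions, which is exactly the channel through which the view confidence parameters the paper optimizes enter the allocation.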
Panayiotis F. Diamandis, Anastassios A. Drakos and Georgios P. Kouretas
Abstract
Purpose
The purpose of this paper is to provide an extensive review of the monetary model of exchange rate determination, which has been the main theoretical framework for analyzing exchange rate behavior over the last 40 years. Furthermore, we test the flexible price monetarist variant and the sticky price Keynesian variant of the monetary model. We conduct our analysis employing a sample of 14 advanced economies using annual data spanning the period 1880–2012.
Design/methodology/approach
The theoretical background of the paper relies on the monetary model of exchange rate determination. We provide a thorough econometric analysis using a battery of unit root and cointegration testing techniques. We test the price-flexible monetarist version and the sticky-price version of the model using annual data from 1880 to 2012 for a group of industrialized countries.
Findings
We provide strong evidence of the existence of a nonlinear relationship between exchange rates and fundamentals. Therefore, we model the time-varying nature of this relationship by allowing for Markov regime switches for the exchange rate regimes. Modeling exchange rates within this context can be motivated by the fact that the change in regime should be considered a random event and not predictable. These results show that linearity is rejected in favor of an MS-VECM specification, which forms a statistically adequate representation of the data. Two regimes are implied by the model; one of the estimated regimes describes the monetary model, whereas the other in most cases matches the constant-coefficient model with wrong signs. Furthermore, it is shown that, depending on the nominal exchange rate regime in operation, the adjustment to the long run implied by the monetary model of exchange rate determination came either from the exchange rate or from the monetary fundamentals. Moreover, based on a Regime Classification Measure, we showed that our chosen Markov-switching specification performed well in distinguishing between the two regimes for all cases. Finally, it is shown that fundamentals are not only significant within each regime but are also significant for the switches between the two regimes.
Practical implications
The results are of interest to practitioners and policy makers since understanding the evolution and determination of exchange rates is of crucial importance. Furthermore, our results are linked to forecasting performance of exchange rate models.
Originality/value
The present analysis extends previous analyses on exchange rate determination and it provides further support in favor of the monetary model as a long-run framework to understand the evolution of exchange rates.
Ali F. Darrat, M. Zhong, R.M. Shelor and R.N. Dickens
Abstract
This study uses the Vector Error Correction Model (VECM) to forecast ex post changes in earnings and stock prices of six major Dow companies as well as of the S&P 500 market index. Compared with ARIMA and GARCH models, results from four decades of data support the forecasting ability of the VECM process.
Matt Larriva and Peter Linneman
Abstract
Purpose
Establishing the strength of a novel variable – mortgage debt as a fraction of US gross domestic product (GDP) – in forecasting capitalisation rates in both the US office and multifamily sectors.
Design/methodology/approach
The authors specify a vector error correction model (VECM) for the data. VECMs are used to address the nonstationarity of financial variables while retaining the information embedded in the levels of the data, as opposed to their differences. The cap rate series are from Green Street Advisors and represent transaction cap rates, which avoids the artificial smoothness found in appraisal-based cap rates.
Findings
A VECM specified with the novel variable, unemployment and past cap rates contains enough information to produce more robust forecasts than one based on the traditional variables (return expectations and risk premiums). The method is robust both in and out of sample.
Practical implications
This has direct implications for governmental policy, offering a path to real estate price stability and growth through mortgage access – functions largely influenced by the Fed and the quasi-federal agencies Fannie Mae and Freddie Mac. It also offers a timely alternative to interest rate-based forecasting models, which are likely to be less useful while interest rates are held low for the foreseeable future.
Originality/value
This study offers a new and highly explanatory variable to the literature, while being among the only studies to model (1) transaction cap rates (versus appraisal-based), (2) out-of-sample data (versus in-sample) and (3) forecasts without the traditional variables thought to be integral to cap rate modelling (return expectations and risk premiums).
Nidhaleddine Ben Cheikh and Waël Louhichi
Abstract
This chapter analyzes the exchange rate pass-through (ERPT) into different prices for 12 euro area (EA) countries. We provide new up-to-date estimates of ERPT, paying attention both to the time-series properties of the data and to the endogeneity of variables. Using a VECM framework, we examine the pass-through at different stages along the distribution chain, that is, import prices, producer prices and consumer prices. When carrying out impulse response function analysis, we find a higher pass-through to import prices, with a complete pass-through (after one year) detected for roughly half of the EA countries. These estimates are relatively large compared with the single-equation literature. We note that the magnitude of the pass-through of exchange rate shocks declines along the distribution chain of pricing, with the most modest effect recorded for consumer prices. When assessing the determinants of cross-country differences in ERPT, we find that the inflation level, inflation volatility and exchange rate persistence are the main macroeconomic factors influencing the pass-through along most of the pricing chain. Thereafter, we test for a decline in the response of consumer prices across EA countries. According to a multivariate time-series Chow test, the stability of the ERPT coefficients is rejected, and the impulse responses of consumer prices over 1990–2010 provide evidence of a general decline in rates of pass-through in most of the EA countries. Finally, using historical decompositions, our results reveal that external factors, that is, exchange rate and import price shocks, have had important impacts on inflation since 1999 compared with the pre-EMU period.
Nikolay Gospodinov, Alex Maynard and Elena Pesavento
Abstract
It is widely documented that while contemporaneous spot and forward financial prices trace each other extremely closely, their difference is often highly persistent and the conventional cointegration tests may suggest lack of cointegration. This chapter studies the possibility of having cointegrated errors that are characterized simultaneously by high persistence (near-unit root behavior) and very small (near zero) variance. The proposed dual parameterization induces the cointegration error process to be stochastically bounded which prevents the variables in the cointegrating system from drifting apart over a reasonably long horizon. More specifically, this chapter develops the appropriate asymptotic theory (rate of convergence and asymptotic distribution) for the estimators in unconditional and conditional vector error correction models (VECM) when the error correction term is parameterized as a dampened near-unit root process (local-to-unity process with local-to-zero variance). The important differences in the limiting behavior of the estimators and their implications for empirical analysis are discussed. Simulation results and an empirical analysis of the forward premium regressions are also provided.
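The dual parameterization the chapter develops can be simulated directly: an error-correction term that is highly persistent (autoregressive root 1 + c/T with c < 0) yet has local-to-zero innovation variance (scale sigma/T), so the spread between spot and forward prices never drifts far. The constants below are illustrative assumptions:

```python
# Sketch: a dampened near-unit-root process, u_t = (1 + c/T) u_{t-1}
# + (sigma/T) e_t. Persistence is near one but the innovation variance
# shrinks with the sample, keeping u_t stochastically bounded.
# c, sigma and the sample length are illustrative choices.
import numpy as np

def dampened_near_unit_root(T, c=-5.0, sigma=1.0, seed=0):
    """Simulate u_t = (1 + c/T) u_{t-1} + (sigma/T) e_t, e_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    rho = 1.0 + c / T
    u = np.zeros(T)
    for t in range(1, T):
        u[t] = rho * u[t - 1] + (sigma / T) * rng.normal()
    return u

u = dampened_near_unit_root(1000)
print(np.abs(u).max())  # remains small despite the near-unit root
```

A conventional cointegration test sees only the near-unit root and may conclude "no cointegration", even though the tiny innovation variance keeps the spot-forward spread bounded over any reasonable horizon, which is exactly the puzzle the chapter's asymptotics address.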
Dennis W. Jansen and Zijun Wang
Abstract
The “Fed Model” postulates a cointegrating relationship between the equity yield on the S&P 500 and the bond yield. We evaluate the Fed Model as a vector error correction forecasting model for stock prices and for bond yields. We compare out-of-sample forecasts of each of these two variables from a univariate model and various versions of the Fed Model including both linear and nonlinear vector error correction models. We find that for stock prices the Fed Model improves on the univariate model for longer-horizon forecasts, and the nonlinear vector error correction model performs even better than its linear version.