Search results
1 – 10 of 11
Xunfa Lu, Kang Sheng and Zhengjun Zhang
Abstract
Purpose
This paper aims to better jointly estimate Value at Risk (VaR) and expected shortfall (ES) by using the joint regression combined forecasting (JRCF) model.
Design/methodology/approach
Combining different forecasting models in financial risk measurement can improve prediction accuracy by integrating the information of the individual models. This paper applies the JRCF model to measure VaR and ES at the 5%, 2.5% and 1% probability levels in the Chinese stock market. While ES is not elicitable on its own, VaR and ES are jointly elicitable via joint consistent scoring functions, which further refines the backtesting of ES. In addition, a variety of backtesting and evaluation methods are used to analyze and compare the alternative risk measurement models.
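The joint consistent scoring functions for (VaR, ES) described above can be illustrated with the FZ0 loss, one member of the Fissler–Ziegel class (the abstract does not name the specific member used, so treating FZ0 as representative is an assumption). A minimal sketch, using a simulated standard-normal return sample:

```python
import math, random

def fz0_loss(y, v, e, alpha):
    """FZ0 joint loss for a (VaR, ES) pair (v, e), both negative, at level alpha.
    Its expectation is minimised exactly at the true alpha-quantile and tail mean."""
    hit = 1.0 if y <= v else 0.0
    return -hit * (v - y) / (alpha * e) + v / e + math.log(-e) - 1.0

def avg_fz0(sample, v, e, alpha):
    return sum(fz0_loss(y, v, e, alpha) for y in sample) / len(sample)

random.seed(0)
alpha = 0.05
returns = [random.gauss(0.0, 1.0) for _ in range(20000)]
srt = sorted(returns)
k = int(alpha * len(returns))
var_hat = srt[k - 1]              # empirical 5% quantile (negative)
es_hat = sum(srt[:k]) / k         # empirical tail mean (more negative than VaR)

# a forecast pair closer to the truth scores lower than a perturbed one
s_true = avg_fz0(returns, var_hat, es_hat, alpha)
s_off = avg_fz0(returns, 1.2 * var_hat, 1.2 * es_hat, alpha)
```

Comparing average scores in this way is the basic mechanism behind both the combined-forecast evaluation and the Murphy diagrams mentioned in the findings.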
Findings
The empirical results show that the JRCF model outperforms the competing models. Based on the evaluation results of the joint scoring functions, the proposed model obtains the minimum scoring function value overall compared to the individual forecasting models and the average combined forecasting model. Moreover, the results of the Murphy diagrams further reveal that this model holds a consistent comparative advantage over all considered models.
Originality/value
The JRCF model of risk measures is proposed, and the application of the joint scoring functions of VaR and ES is expanded. Additionally, this paper comprehensively backtests and evaluates the competing risk models and examines the characteristics of Chinese financial market risks.
Xunfa Lu, Cheng Liu, Kin Keung Lai and Hairong Cui
Abstract
Purpose
The purpose of the paper is to better measure the risks and volatility of the Bitcoin market by using the proposed novel risk measurement model.
Design/methodology/approach
The joint regression analysis of value at risk (VaR) and expected shortfall (ES) can effectively overcome the non-elicitability problem of ES to better measure the risks and volatility of financial markets. Because of the advantages of the long short-term memory (LSTM) model in processing non-linear time series, the paper embeds LSTM into the joint regression combined forecasting framework of VaR and ES, constructs a joint regression combined forecasting model based on LSTM for jointly measuring VaR and ES, i.e. the LSTM-joint-combined (LSTM-J-C) model, and uses it to investigate the risks of the Bitcoin market.
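The LSTM component embedded in the framework above can be illustrated with a single LSTM cell in pure Python. This is a didactic sketch with illustrative scalar weights (not the paper's trained network; all weight values here are assumptions), showing how the gates carry state across a return series:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x, h, c, w):
    """One LSTM step: gates computed from input x and previous state (h, c)."""
    i = sigmoid(w["wi"] * x + w["ui"] * h + w["bi"])    # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h + w["bf"])    # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h + w["bo"])    # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h + w["bg"])  # candidate cell state
    c_new = f * c + i * g                               # long-term memory update
    h_new = o * math.tanh(c_new)                        # short-term output
    return h_new, c_new

# illustrative, untrained weights
w = {k: 0.5 for k in
     ("wi", "ui", "bi", "wf", "uf", "bf", "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for r in (0.01, -0.02, 0.015, -0.03):   # a short Bitcoin return series (toy data)
    h, c = lstm_cell(r, h, c, w)
```

In the LSTM-J-C model, outputs of this kind feed the joint VaR/ES regression rather than being risk estimates themselves.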
Findings
Empirical results show that the proposed LSTM-J-C model can improve forecasting performance of VaR and ES in the Bitcoin market more effectively compared with the historical simulation, the GARCH model and the joint regression combined forecasting model.
Social implications
The proposed LSTM-J-C model can provide theoretical support and practical guidance to cryptocurrency market investors, policy makers and regulatory agencies for measuring and controlling cryptocurrency market risks.
Originality/value
A novel risk measurement model, namely the LSTM-J-C model, is proposed to jointly estimate the VaR and ES of Bitcoin. In addition, the proposed LSTM-J-C model provides risk managers with more accurate forecasts of volatility in the Bitcoin market.
Abstract
Purpose
This study aims to implement a novel approach of using the Realized generalized autoregressive conditional heteroskedasticity (GARCH) model within the conditional extreme value theory (EVT) framework to generate quantile forecasts. The Realized GARCH-EVT models are estimated with different realized volatility measures. The forecasting ability of the Realized GARCH-EVT models is compared with that of the standard GARCH-EVT models.
Design/methodology/approach
One-step-ahead forecasts of Value-at-Risk (VaR) and expected shortfall (ES) for five European stock indices, using different two-stage GARCH-EVT models, are generated. The forecasting ability of the standard GARCH-EVT model and the asymmetric exponential GARCH (EGARCH)-EVT model is compared with that of the Realized GARCH-EVT model. Additionally, five realized volatility measures are used to test whether the choice of realized volatility measure affects the forecasting performance of the Realized GARCH-EVT model.
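The second stage of a two-stage GARCH-EVT forecast, as in the design above, applies the peaks-over-threshold quantile formula to standardized residuals and then rescales by the GARCH volatility forecast. A minimal sketch with illustrative GPD parameters (all numbers below are assumptions, not the paper's estimates):

```python
def evt_var(u, beta, xi, n, n_u, q):
    """POT quantile at level q from a GPD(beta, xi) fitted above threshold u,
    with n observations of which n_u exceed u (upper-tail loss convention)."""
    return u + (beta / xi) * (((1.0 - q) * n / n_u) ** (-xi) - 1.0)

def evt_es(z_q, u, beta, xi):
    """Expected shortfall implied by the GPD tail, valid for xi < 1."""
    return (z_q + beta - xi * u) / (1.0 - xi)

# stage 2: GPD fitted to standardized residuals (illustrative parameters)
u, beta, xi, n, n_u = 1.2, 0.6, 0.15, 2500, 250
z99 = evt_var(u, beta, xi, n, n_u, 0.99)     # 99% quantile of residuals
es99 = evt_es(z99, u, beta, xi)

# stage 1 supplies the conditional mean and volatility forecast
mu, sigma = 0.0, 1.8
var_forecast = mu + sigma * z99              # one-step-ahead VaR forecast
es_forecast = mu + sigma * es99
```

The Realized GARCH variant changes only stage 1 (the volatility forecast `sigma`), which is why different realized measures can be swapped in without touching the EVT tail.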
Findings
In terms of the out-of-sample comparisons, the Realized GARCH-EVT models generally outperform the standard GARCH-EVT and EGARCH-EVT models. However, the choice of the realized estimator does not affect the forecasting ability of the Realized GARCH-EVT model.
Originality/value
It is one of the earliest implementations of the two-stage Realized GARCH-EVT model for generating quantile forecasts. To the best of the authors’ knowledge, this is the first study that compares the performance of different realized estimators within the Realized GARCH-EVT framework. In the context of high-frequency data-based forecasting studies, a sample period of around 11 years is reasonably large. More importantly, the data set has a cross-sectional dimension with multiple European stock indices, whereas most of the earlier studies are based on the US market.
Ngoc Quynh Anh Nguyen and Thi Ngoc Trang Nguyen
Abstract
Purpose
The purpose of this paper is to present the method for efficient computation of risk measures using Fourier transform technique. Another objective is to demonstrate that this technique enables an efficient computation of risk measures beyond value-at-risk and expected shortfall. Finally, this paper highlights the importance of validating assumptions behind the risk model and describes its application in the affine model framework.
Design/methodology/approach
The method proposed is based on Fourier transform methods for computing risk measures. The authors obtain the loss distribution by fitting a cubic spline through the points where Fourier inversion of the characteristic function is applied. From the loss distribution, the authors calculate value-at-risk and expected shortfall. The calculation of the entropic value-at-risk involves the moment generating function, which is closely related to the characteristic function. The expectile risk measure is calculated based on call and put option prices, which are available in a semi-closed form by Fourier inversion of the characteristic function. The authors also consider mean loss, standard deviation and semivariance, which are calculated in a similar manner.
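The core inversion step above can be sketched with the Gil-Pelaez formula, which recovers the distribution function from the characteristic function; VaR then follows by root finding. This sketch omits the paper's cubic-spline interpolation and uses a normal characteristic function purely as a checkable example (the affine-model characteristic functions in the paper are more involved):

```python
import cmath, math

def normal_cf(t, mu=0.0, sigma=1.0):
    """Characteristic function of a normal loss distribution (example choice)."""
    return cmath.exp(1j * mu * t - 0.5 * sigma * sigma * t * t)

def cdf_gil_pelaez(x, cf, t_max=50.0, steps=5000):
    """Gil-Pelaez inversion: F(x) = 1/2 - (1/pi) * int_0^inf Im(e^{-itx} cf(t))/t dt,
    approximated by the midpoint rule (which avoids the singularity at t = 0)."""
    dt = t_max / steps
    total = 0.0
    for k in range(1, steps + 1):
        t = (k - 0.5) * dt
        total += (cmath.exp(-1j * t * x) * cf(t)).imag / t
    return 0.5 - total * dt / math.pi

# 95% quantile (VaR of a standard-normal loss) by bisection on the inverted CDF
lo, hi = -10.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if cdf_gil_pelaez(mid, normal_cf) < 0.95:
        lo = mid
    else:
        hi = mid
var95 = 0.5 * (lo + hi)
```

Because the entire loss distribution is recovered, ES and the other measures mentioned can be computed from the same inversion output, which is what makes the approach efficient.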
Findings
The study offers practical insights into the efficient computation of risk measures as well as validation of the risk models. It also provides a detailed description of algorithms to compute each of the risk measures considered. While the main focus of the paper is on portfolio-level risk metrics, all algorithms are also applicable to single instruments.
Practical implications
The algorithms presented in this paper require little computational effort which makes them very suitable for real-world applications. In addition, the mathematical setup adopted in this paper provides a natural framework for risk model validation which makes the approach presented in this paper particularly appealing in practice.
Originality/value
This is the first study to consider the computation of the entropic value-at-risk, semivariance and the expectile risk measure using the Fourier transform method.
Abstract
Purpose
This study aims to analyse the conditional volatility of the Vietnam Index (Ho Chi Minh City) and the Hanoi Exchange Index (Hanoi) with a specific focus on their application to risk management tools such as Expected Shortfall (ES).
Design/methodology/approach
First, the author tests both indices for long memory in their returns and squared returns. Second, the author applies several generalised autoregressive conditional heteroskedasticity (GARCH) models to account for asymmetry and long memory effects in conditional volatility. Finally, the author backtests the GARCH models’ forecasts for Value-at-Risk (VaR) and ES.
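The first step, checking squared returns for long memory, typically starts from their sample autocorrelations: long memory appears as slow, hyperbolic decay across many lags, whereas an i.i.d. series shows none. A minimal sketch on simulated noise (the formal tests in the paper are more powerful; this only illustrates the diagnostic):

```python
import random

def acf(x, lag):
    """Sample autocorrelation of series x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(5000)]
sq = [v * v for v in noise]
# i.i.d. squared returns: autocorrelations near zero at every lag;
# long memory would show them staying visibly positive out to long lags
rho = [acf(sq, k) for k in (1, 5, 20)]
```

On real index returns the same computation applied to squared returns motivates the long-memory GARCH specifications the paper then fits.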
Findings
The author does not find long memory in returns, but does find long memory in the squared returns. The results suggest differences in both indices for the asymmetric impact of negative and positive news on volatility and the persistence of shocks (long memory). Long memory models perform best when estimating risk measures for both series.
Practical implications
Short time horizons for estimating the variance should be avoided. A combination of long memory GARCH models with a skewed Student’s t-distribution is recommended to forecast VaR and ES.
Originality/value
Up to now, no analysis has examined asymmetry and long memory effects jointly. Moreover, studies on Vietnamese stock market volatility do not take ES into consideration. This study attempts to overcome this gap. The author contributes by offering more insight into the Vietnamese stock market properties and shows the necessity of considering ES in risk management. The findings of this study are important to domestic and foreign practitioners, particularly for risk management, as well as banks and researchers investigating international markets.
Abstract
Purpose
This paper investigates how various strategies for combining forecasts, both simple and optimised approaches, compare with popular individual risk models in estimating value-at-risk (VaR) and expected shortfall (ES) in an emerging market at alternative risk levels.
Design/methodology/approach
Using the case study of the Vietnamese stock market, the author produced one-day-ahead VaR and ES forecasts from seven individual risk models and ten alternative forecast combinations. Next, the author employed a battery of backtesting procedures and alternative loss functions to evaluate the global predictive accuracy of the different methods. Finally, the author investigated the relative performance over time of the VaR and ES forecasts using a fluctuation test.
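Two of the combining strategies contrasted above can be sketched for VaR: a simple equal-weight average, and an optimised scheme with weights inversely proportional to each model's historical tick loss (the forecast streams and returns below are hypothetical, and inverse-loss weighting is one illustrative choice among the optimised schemes the paper considers):

```python
# two hypothetical one-day-ahead 5% VaR forecast streams
var_a = [-1.9, -2.1, -1.8, -2.0, -2.2]
var_b = [-1.5, -1.6, -1.4, -1.7, -1.5]

# simple combination: equal-weight average of the individual forecasts
var_avg = [0.5 * (a + b) for a, b in zip(var_a, var_b)]

def tick_loss(returns, var_seq, alpha=0.05):
    """Asymmetric quantile (tick) loss at level alpha; lower is better."""
    total = 0.0
    for r, v in zip(returns, var_seq):
        total += (alpha - (1.0 if r < v else 0.0)) * (r - v)
    return total / len(returns)

# optimised combination: inverse-tick-loss weights from a past window
past_returns = [-0.4, -2.3, 0.8, -1.1, 0.2]   # hypothetical realized returns
la = tick_loss(past_returns, var_a)
lb = tick_loss(past_returns, var_b)
wa = (1.0 / la) / (1.0 / la + 1.0 / lb)
var_comb = [wa * a + (1.0 - wa) * b for a, b in zip(var_a, var_b)]
```

The paper's finding is that added weight-optimisation machinery of this kind does not reliably beat the simple average, largely because estimated weights break down in unstable periods.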
Findings
The empirical results indicate that, although combined forecasts have reasonable predictive abilities, they are often outperformed by one individual risk model. Furthermore, the author showed that the complex combining methods with optimised weighting functions do not perform better than simple combining methods. The fluctuation test suggests that the poor performance of combined forecasts is mainly due to their inability to cope with periods of instability.
Research limitations/implications
This study reveals the limitation of combining strategies in one-day-ahead VaR and ES forecasts in emerging markets. A possible direction for further research is to investigate whether this finding holds for multi-day-ahead forecasts. Moreover, the inferior performance of combined forecasts during periods of instability motivates further research on combining strategies that account for potential structural breaks in the performance of individual risk models. A potential approach is to improve the individual risk models with macroeconomic variables using a mixed-data sampling approach.
Originality/value
First, the author contributes to the literature on forecast combinations for VaR and ES measures. Second, the author explores a wide range of alternative risk models to forecast both VaR and ES with recent data including periods of the COVID-19 pandemic. Although forecast combination strategies have provided good results in several fields, the literature on forecast combination in the VaR and ES context is surprisingly limited, especially for emerging market returns. To the best of the author’s knowledge, this is the first study investigating the predictive power of combining methods for VaR and ES in an emerging market.
Ying L. Becker, Lin Guo and Odilbek Nurmamatov
Abstract
Value at risk (VaR) and expected shortfall (ES) are popular market risk measurements. The former is not coherent but robust, whereas the latter is coherent but less interpretable, only conditionally backtestable and less robust. In this chapter, we compare an innovative artificial neural network (ANN) model with a time series model in the context of forecasting VaR and ES of the univariate time series of four asset classes: US large capitalization equity index, European large cap equity index, US bond index, and US dollar versus euro exchange rate price index for the period of January 4, 1999, to December 31, 2018. In general, the ANN model has more favorable backtesting results as compared to the autoregressive moving average, generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) time series model. In terms of forecasting accuracy, the ANN model has far fewer in-sample and out-of-sample exceptions than the ARMA-GARCH model.
Carsten Lausberg and Patrick Krieger
Abstract
Purpose
Scoring is a widely used, long-established, and universally applicable method of measuring risks, especially those that are difficult to quantify. Unfortunately, the scoring method is often misused in real estate practice and underestimated in academia. The purpose of this paper is to supplement the literature with general rules under which scoring systems should be designed and validated, so that they can become reliable risk instruments.
Design/methodology/approach
The paper combines the rules, or axioms, for coherent risk measures known from the literature with those for scoring instruments. The result is a system of rules that a risk scoring system should fulfil. The approach is theoretical, based on a literature survey and reasoning.
Findings
First, the paper clarifies that a risk score should express the variation of a property’s yield and not its quality, as is often done in practice. Then the axioms for a coherent risk scoring are derived, e.g. the independence of the risk factors. Finally, the paper proposes procedures for valid and reliable risk scoring systems, e.g. out-of-time validation.
Practical implications
Although it is a theoretical work, the paper also focuses on practical applicability. The findings are illustrated with examples of scoring systems.
Originality/value
Rules for risk measures and for scoring systems have been established long ago, but the combination is a first. In this way, the paper contributes to real estate risk research and risk management practice.
Sharif Mozumder, Michael Dempsey and M. Humayun Kabir
Abstract
Purpose
The purpose of the paper is to back-test value-at-risk (VaR) models for conditional distributions belonging to a Generalized Hyperbolic (GH) family of Lévy processes – Variance Gamma, Normal Inverse Gaussian, Hyperbolic distribution and GH – and compare their risk-management features with a traditional unconditional extreme value (EV) approach, using futures contract return data for the S&P 500, FTSE 100, DAX, Hang Seng and Nikkei 225 indices.
Design/methodology/approach
The authors apply tail-based and Lévy-based calibration to estimate the parameters of the models as part of the initial data analysis. While the authors utilize the peaks-over-threshold approach for the generalized Pareto distribution, the conditional maximum likelihood method is followed in the case of the Lévy models. As the Lévy models do not have closed-form expressions for VaR, the authors follow a bootstrap method to determine the VaR and the confidence intervals. Finally, for back-testing, the authors use both static calibration (on the entire data) and dynamic calibration (on a four-year rolling window) to test the unconditional, independence and conditional coverage hypotheses, implemented with 95 and 99 per cent VaRs.
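The unconditional coverage hypothesis named above is commonly tested with Kupiec's likelihood-ratio statistic, which compares the observed violation frequency with the nominal one (the violation counts below are hypothetical):

```python
import math

def kupiec_lr(n, x, p):
    """Kupiec unconditional-coverage LR statistic for x VaR violations in n days
    against nominal violation probability p; asymptotically chi-square, 1 df."""
    pi = x / n
    if x in (0, n):   # degenerate MLE: the unrestricted log-likelihood is 0
        log_l1 = 0.0
    else:
        log_l1 = (n - x) * math.log(1.0 - pi) + x * math.log(pi)
    log_l0 = (n - x) * math.log(1.0 - p) + x * math.log(p)
    return -2.0 * (log_l0 - log_l1)

# hypothetical backtest: 18 violations of a 99% VaR over 1000 days (10 expected)
lr = kupiec_lr(n=1000, x=18, p=0.01)
reject = lr > 3.841   # 5% critical value of chi-square with 1 df
```

The independence and conditional coverage hypotheses extend this by also testing whether violations cluster in time, which the LR statistic above cannot detect on its own.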
Findings
Both the EV and Lévy models provide the authors with a conservative proportion of violations for the VaR forecasts. Whether a model targets the tail or fits the entire distribution has little effect on either the VaR calculation or a VaR model’s back-testing performance.
Originality/value
To the best of the authors’ knowledge, this is the first study to explore the back-testing performance of Lévy-based VaR models. The authors conduct various calibration and bootstrap techniques to test the unconditional, independence and conditional coverage hypotheses for the VaRs.
Hemant Kumar Badaye and Jason Narsoo
Abstract
Purpose
This study aims to use a novel methodology to investigate the performance of several multivariate value at risk (VaR) and expected shortfall (ES) models implemented to assess the risk of an equally weighted portfolio consisting of high-frequency (1-min) observations for five foreign currencies, namely, EUR/USD, GBP/USD, EUR/JPY, USD/JPY and GBP/JPY.
Design/methodology/approach
By applying the multiplicative component generalised autoregressive conditional heteroskedasticity (MC-GARCH) model on each return series and by modelling the dependence structure using copulas, the 95 per cent intraday portfolio VaR and ES are forecasted for an out-of-sample set using Monte Carlo simulation.
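The Monte Carlo step described above can be sketched for a two-currency portfolio with a Gaussian copula and normal marginals (the paper also considers Student's t and Clayton copulas and seven innovation distributions; the correlation and the per-minute volatilities below are illustrative stand-ins for MC-GARCH forecasts):

```python
import math, random

random.seed(42)
rho = 0.6          # assumed dependence parameter of the Gaussian copula
n_sim = 20000

def correlated_normals(rho):
    """Draw a correlated standard-normal pair via a Cholesky factor."""
    z1 = random.gauss(0.0, 1.0)
    z2 = rho * z1 + math.sqrt(1.0 - rho * rho) * random.gauss(0.0, 1.0)
    return z1, z2

losses = []
for _ in range(n_sim):
    z1, z2 = correlated_normals(rho)
    # marginal 1-min returns scaled by illustrative volatility forecasts
    r1, r2 = 0.0004 * z1, 0.0006 * z2
    losses.append(-(0.5 * r1 + 0.5 * r2))   # equally weighted portfolio loss

losses.sort()
k = int(0.95 * n_sim)
var95 = losses[k]                           # 95% intraday portfolio VaR
es95 = sum(losses[k:]) / (n_sim - k)        # average loss beyond the VaR
```

Swapping the copula changes only how `z1, z2` are drawn; the empirical VaR and ES are read off the simulated loss distribution in the same way, which is why the choice of copula drives the ES backtest results reported below.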
Findings
In terms of VaR forecasting performance, the backtesting results indicated that four out of the five models implemented could not be rejected at the 5 per cent level of significance. However, when the models were further evaluated for their ES forecasting power, only the Student’s t and Clayton models could not be rejected. The fact that some ES models were rejected at the 5 per cent significance level highlights the importance of selecting an appropriate copula model for the dependence structure.
Originality/value
To the best of the authors’ knowledge, this is the first study to use the MC-GARCH and copula models to forecast, for the next 1 min, the VaR and ES of an equally weighted portfolio of foreign currencies. It is also the first study to analyse the performance of the MC-GARCH model under seven distributional assumptions for the innovation term.