Search results
1 – 10 of 151
Hemant Kumar Badaye and Jason Narsoo
Abstract
Purpose
This study aims to use a novel methodology to investigate the performance of several multivariate value at risk (VaR) and expected shortfall (ES) models implemented to assess the risk of an equally weighted portfolio consisting of high-frequency (1-min) observations for five foreign currencies, namely, EUR/USD, GBP/USD, EUR/JPY, USD/JPY and GBP/JPY.
Design/methodology/approach
By applying the multiplicative component generalised autoregressive conditional heteroskedasticity (MC-GARCH) model on each return series and by modelling the dependence structure using copulas, the 95 per cent intraday portfolio VaR and ES are forecasted for an out-of-sample set using Monte Carlo simulation.
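The last step of such a simulation-based approach, reading the portfolio VaR and ES off the simulated return distribution, can be sketched as follows. This is a minimal stdlib-only illustration, not the authors' MC-GARCH/copula implementation: the single-factor dependence, the volatility figure and the function names are assumptions made for the sketch.

```python
import math
import random

def simulate_portfolio_returns(n_sims, n_assets=5, vol=0.0005, seed=42):
    """Toy stand-in for the simulation step: draws loosely dependent
    1-min returns and averages them into an equally weighted portfolio.
    A real implementation would simulate each currency from a fitted
    MC-GARCH model and couple the series with a copula."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        common = rng.gauss(0.0, vol)  # shared factor: crude dependence proxy
        rets = [0.5 * common + 0.5 * rng.gauss(0.0, vol) for _ in range(n_assets)]
        sims.append(sum(rets) / n_assets)  # equal weights
    return sims

def var_es(sim_returns, alpha=0.05):
    """alpha-tail VaR and ES, reported as positive losses, read off the
    empirical distribution of the simulated portfolio returns."""
    losses = sorted(-r for r in sim_returns)
    k = math.ceil((1 - alpha) * len(losses)) - 1  # VaR order statistic
    var = losses[k]
    es = sum(losses[k:]) / len(losses[k:])  # mean loss beyond VaR
    return var, es

var95, es95 = var_es(simulate_portfolio_returns(10_000))
assert es95 >= var95 > 0  # ES always sits deeper in the tail than VaR
```

By construction ES is the average loss beyond the VaR quantile, so it can never be smaller than VaR, which the final assertion checks.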
Findings
In terms of VaR forecasting performance, the backtesting results indicated that four out of the five models implemented could not be rejected at the 5 per cent level of significance. However, when the models were further evaluated for their ES forecasting power, only the Student’s t and Clayton models could not be rejected. The fact that some ES models were rejected at the 5 per cent significance level highlights the importance of selecting an appropriate copula model for the dependence structure.
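The abstract does not name the specific VaR backtest used, but Kupiec's proportion-of-failures (POF) test is the standard unconditional-coverage check and illustrates the mechanics of such a rejection decision; a sketch:

```python
import math

def kupiec_pof(n_obs, n_exceptions, p=0.05):
    """Kupiec proportion-of-failures (POF) likelihood-ratio statistic for
    an unconditional-coverage VaR backtest. Under the null of correct
    coverage it is asymptotically chi-squared with one degree of freedom,
    so the 5% critical value is 3.841."""
    x, n = n_exceptions, n_obs

    def loglik(q):
        # Binomial log-likelihood of x exceptions in n periods, exception prob q
        ll = 0.0
        if x > 0:
            ll += x * math.log(q)
        if x < n:
            ll += (n - x) * math.log(1.0 - q)
        return ll

    return -2.0 * (loglik(p) - loglik(x / n))

# Roughly 12.5 exceptions are expected in 250 periods at the 95% level:
assert kupiec_pof(250, 13) < 3.841  # coverage consistent with 5%: not rejected
assert kupiec_pof(250, 25) > 3.841  # twice the expected exceptions: rejected
```

A model "could not be rejected" in the sense above when its realised exception rate is statistically compatible with the nominal 5 per cent coverage.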
Originality/value
To the best of the authors’ knowledge, this is the first study to use the MC-GARCH and copula models to forecast, for the next 1 min, the VaR and ES of an equally weighted portfolio of foreign currencies. It is also the first study to analyse the performance of the MC-GARCH model under seven distributional assumptions for the innovation term.
Mariarosaria Coppola and Valeria D'Amato
Abstract
Purpose
The determination of the capital requirements represents the first Pillar of Solvency II. The main purpose of the new solvency regulation is to obtain more realistic modelling and assessment of the different risks insurance companies are exposed to in a balance‐sheet perspective. In this context, the Solvency Capital Requirement (SCR) standard calculation is based on a modular approach, where the overall risk is split into several modules and submodules. In Solvency II, standard formula longevity risk is explicitly considered. The purpose of this paper is to look at the backtesting approach for measuring the consistency of SCR calculations for life insurance policies.
Design/methodology/approach
A multiperiod approach is suggested for correctly calculating the SCR from a risk management perspective: the amount of capital necessary to meet the company's future obligations has to be assessed year by year for as long as the contract remains in force. The backtesting approach for measuring the consistency of SCR calculations for life insurance policies represents the main contribution of the research. This kind of model performance check is generally specified in VaR validation analysis; in this paper, the approach is used to test the ex post performance of the SCR calculation methodology.
Findings
The backtesting framework is able to measure, from time to time, whether the insurer has allocated more or less capital to support its in‐force business, with adverse effects on free reserves and profitability or solvency.
Practical implications
The paper shows that forecasting performance is an important aspect in assessing the effectiveness of the model, with poor performance corresponding to a biased allocation of capital.
Originality/value
The backtesting approach for measuring the consistency of SCR calculations for life insurance policies represents the main contribution of the research. This kind of model performance check is generally specified in VaR validation analysis. Recently, Dowd et al. proposed it for verifying the goodness of mortality models; in this paper, the approach is applied to testing the ex post performance of the SCR calculation methodology.
Ying L. Becker, Lin Guo and Odilbek Nurmamatov
Abstract
Value at risk (VaR) and expected shortfall (ES) are popular market risk measurements. The former is not coherent but robust, whereas the latter is coherent but less interpretable, only conditionally backtestable and less robust. In this chapter, we compare an innovative artificial neural network (ANN) model with a time series model in the context of forecasting VaR and ES of the univariate time series of four asset classes: US large capitalization equity index, European large cap equity index, US bond index, and US dollar versus euro exchange rate price index for the period of January 4, 1999, to December 31, 2018. In general, the ANN model has more favorable backtesting results than the autoregressive moving average, generalized autoregressive conditional heteroscedasticity (ARMA-GARCH) time series model. In terms of forecasting accuracy, the ANN model has far fewer in-sample and out-of-sample exceptions than the ARMA-GARCH model.
Xunfa Lu, Kang Sheng and Zhengjun Zhang
Abstract
Purpose
This paper aims to better jointly estimate Value at Risk (VaR) and expected shortfall (ES) by using the joint regression combined forecasting (JRCF) model.
Design/methodology/approach
Combining different forecasting models in financial risk measurement can improve prediction accuracy by integrating the individual models’ information. This paper applies the JRCF model to measure VaR and ES at the 5%, 2.5% and 1% probability levels in the Chinese stock market. While ES is not elicitable on its own, VaR and ES are jointly elicitable via joint consistent scoring functions, which further refines the backtesting of ES. In addition, a variety of backtesting and evaluation methods are used to analyze and compare the alternative risk measurement models.
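A widely used joint consistent scoring function for the (VaR, ES) pair is the FZ0 loss of Patton, Ziegel and Chen, a member of the Fissler–Ziegel family. The sketch below, with simulated data and forecast values chosen purely for illustration, shows the property the abstract relies on: a lower average score identifies the better joint forecast.

```python
import math
import random

def fz0_score(var_f, es_f, y, alpha=0.05):
    """FZ0 joint score for a (VaR, ES) forecast pair evaluated at return y.
    Forecasts are quoted as (negative) return levels with es_f <= var_f < 0.
    Consistency means the expected score is minimised by the true VaR and ES."""
    hit = 1.0 if y <= var_f else 0.0
    return (-hit * (var_f - y) / (alpha * es_f)
            + var_f / es_f + math.log(-es_f) - 1.0)

rng = random.Random(7)
returns = [rng.gauss(0.0, 1.0) for _ in range(20_000)]
mean_score = lambda v, e: sum(fz0_score(v, e, y) for y in returns) / len(returns)

true_var, true_es = -1.6449, -2.0627  # 5% quantile and ES of N(0, 1)
bad_var, bad_es = -0.8, -1.0          # grossly over-optimistic forecasts
assert mean_score(true_var, true_es) < mean_score(bad_var, bad_es)
```

Averaging such scores over the out-of-sample period, as the evaluation in this paper does, ranks competing (VaR, ES) forecasters on a single scale.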
Findings
The empirical results show that the JRCF model outperforms the competing models. Based on the evaluation results of the joint scoring functions, the proposed model obtains the minimum scoring function value compared to the individual forecasting models and the average combined forecasting model overall. Moreover, Murphy diagrams’ results further reveal that this model has consistent comparative advantages among all considered models.
Originality/value
The JRCF model of risk measures is proposed, and the application of the joint scoring functions of VaR and ES is expanded. Additionally, this paper comprehensively backtests and evaluates the competing risk models and examines the characteristics of Chinese financial market risks.
Abstract
Purpose
The purpose of this paper is to compare the ability of popular temperature models, namely, the models given by Alaton et al., by Benth and Benth, by Campbell and Diebold and by Brody et al., to forecast the prices of heating/cooling degree days (HDD/CDD) futures for New York, Atlanta, and Chicago.
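HDD and CDD indices accumulate daily deviations of average temperature from a base level, 65°F under the standard CME convention. A minimal sketch of the settlement index computation on which such futures are based (the temperature figures are illustrative):

```python
def degree_days(daily_avg_temps_f, base=65.0):
    """Cumulative heating and cooling degree days over a measurement
    period, using the standard 65 degrees Fahrenheit base. HDD accrues
    on cold days, CDD on hot days; the futures settle on these sums."""
    hdd = sum(max(0.0, base - t) for t in daily_avg_temps_f)
    cdd = sum(max(0.0, t - base) for t in daily_avg_temps_f)
    return hdd, cdd

# A cold snap followed by a warm spell:
assert degree_days([50.0, 60.0, 65.0, 70.0, 80.0]) == (20.0, 20.0)
```

Forecasting the futures price therefore reduces to forecasting the distribution of these cumulative sums over the contract's measurement period, which is what the competing temperature models are asked to do.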
Design/methodology/approach
To verify the forecasting power of various temperature models, a statistical backtesting approach is utilised. The backtesting sample consists of the market data of daily settlement futures prices for New York, Atlanta, and Chicago. Settlement prices are separated into two groups, namely, “in‐period” and “out‐of‐period”.
Findings
The findings show that the models of Alaton et al. and Benth and Benth forecast the futures prices more accurately. The difference in the forecasting performance of models between “in‐period” and “out‐of‐period” valuation can be attributed to the meteorological temperature forecasts during the contract measurement periods.
Research limitations/implications
In future studies, it may be useful to utilise historical meteorological forecast data to assess the forecasting power of the new hybrid model considered.
Practical implications
Out‐of‐period backtesting helps reduce the effect of any meteorological forecast on the formation of futures prices. The out‐of‐period performance of the models is observed to improve consistently, which indicates that the effects of available weather forecasts should be incorporated into the considered models.
Originality/value
To the best of the author's knowledge, this is the first study to compare some of the popular temperature models in forecasting HDD/CDD futures. Furthermore, a new temperature modelling approach is proposed for incorporating available temperature forecasts into the considered dynamic models.
Leo M. Tilman and Pavel Brusilovskiy
Abstract
Value‐at‐Risk (VaR) has become a mainstream risk management technique employed by a large proportion of financial institutions. There exists a substantial amount of research dealing with the validation of VaR models, most commonly referred to as VaR backtesting. A new generation of “self‐learning” VaR models (Conditional Autoregressive Value‐at‐Risk, or CAViaR) combines backtesting results with ex ante VaR estimates in an ARIMA framework in order to forecast P/L distributions more accurately. In this commentary, the authors present a systematic overview of several classes of applied statistical techniques that can make VaR backtesting more comprehensive and provide valuable insights into the analytical properties of VaR models in various market environments. In addition, they discuss the challenges associated with extending traditional backtesting approaches to VaR horizons longer than one day and propose solutions to this important problem.
Abstract
Purpose
This paper aims to propose a scenario-based approach for measuring interest rate risks. Many regulatory capital standards in banking and insurance make use of similar approaches. The authors provide a theoretical justification and extensive backtesting of their approach.
Design/methodology/approach
The authors theoretically derive a scenario-based value-at-risk for interest rate risks based on a principal component analysis. The authors calibrate their approach based on the Nelson–Siegel model, which is modified to account for lower bounds for interest rates. The authors backtest the model outcomes against historical yield curve changes for a large number of generated asset–liability portfolios. In addition, the authors backtest the scenario-based value-at-risk against the stochastic model.
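For reference, the Nelson–Siegel zero curve that underlies the calibration has the closed form sketched below. The optional floor is only a crude stand-in for a lower-bound adjustment; the abstract does not specify the exact form of the authors' modification.

```python
import math

def nelson_siegel(tau, beta0, beta1, beta2, lam, floor=None):
    """Nelson-Siegel zero rate at maturity tau (years): beta0 is the
    long-run level, beta1 the short-end slope, beta2 the curvature and
    lam the decay parameter. The optional floor only crudely mimics a
    lower-bound adjustment and is an assumption of this sketch."""
    x = tau / lam
    loading = (1.0 - math.exp(-x)) / x
    y = beta0 + beta1 * loading + beta2 * (loading - math.exp(-x))
    return y if floor is None else max(y, floor)

# The short end tends to beta0 + beta1, the long end to beta0:
assert abs(nelson_siegel(1e-6, 0.03, -0.02, 0.01, 2.0) - 0.01) < 1e-4
assert abs(nelson_siegel(100.0, 0.03, -0.02, 0.01, 2.0) - 0.03) < 1e-3
```

Shocking the factors (level, slope, curvature) one at a time yields exactly the kind of deterministic yield-curve scenarios the paper's scenario-based value-at-risk aggregates.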
Findings
The backtesting results of the adjusted Nelson–Siegel model (accounting for a lower bound) are similar to those of the traditional Nelson–Siegel model. The suitability of the scenario-based value-at-risk can be substantially improved by allowing for correlation parameters in the aggregation of the scenario outcomes. Implementing those parameters is straightforward with the replacement of Pearson correlations by value-at-risk-implied tail correlations in situations where risk factors are not elliptically distributed.
Research limitations/implications
The paper assumes deterministic cash flow patterns. The authors discuss the applicability of their approach, e.g. for insurance companies.
Practical implications
The authors’ approach can be used to better communicate interest rate risks using scenarios. Discussing risk measurement results with decision makers can help to backtest stochastic term-structure models.
Originality/value
The authors’ adjustment of the Nelson–Siegel model to account for lower bounds makes the model more useful in the current low-yield environment when unjustifiably high negative interest rates need to be avoided. The proposed scenario-based value-at-risk allows for a pragmatic measurement of interest rate risks, which nevertheless closely approximates the value-at-risk according to the stochastic model.
Tomáš Mrkvička, Martina Krásnická, Ludvík Friebel, Tomáš Volek and Ladislav Rolínek
Abstract
Purpose
Small- and medium-sized enterprises can be highly affected by losses caused by exchange rate changes. The aim of this paper was to find the optimal Value-at-Risk (VaR) method for estimating future exchange rate losses within one year.
Design/methodology/approach
The analysis focuses on five VaR methods, some traditional and some more recent, integrating extreme value theory (EVT) or GARCH. The analysis of the VaR methods concentrated on the time horizon (1–12 months), overestimation of predictions and six scenarios based on trends and variability of exchange rates. This study used three currency pairs, EUR/CZK, EUR/USD and EUR/JPY, for backtesting.
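The "parametric VaR with random walk" presumably amounts to square-root-of-time scaling of a normal loss quantile over the 1–12 month horizon; a hedged sketch, with exposure and volatility figures made up for illustration:

```python
import math
from statistics import NormalDist

def parametric_var(exposure, monthly_vol, horizon_months, alpha=0.05):
    """Parametric VaR of a currency position under a random walk: the
    h-month log return is treated as normal with standard deviation
    monthly_vol * sqrt(h), so the loss quantile grows with the square
    root of the horizon (1-12 months in the study)."""
    z = NormalDist().inv_cdf(alpha)  # about -1.645 at alpha = 0.05
    return exposure * -z * monthly_vol * math.sqrt(horizon_months)

one_month = parametric_var(1_000_000, 0.02, 1)
one_year = parametric_var(1_000_000, 0.02, 12)
assert abs(one_year - one_month * math.sqrt(12)) < 1e-6  # sqrt-of-time scaling
```

Whether this square-root scaling remains adequate at longer horizons is precisely what the backtesting over 1–12 months examines.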
Findings
In compliance with the backtesting results, the parametric VaR with random walk was chosen, despite its shortcomings, as the most accurate for estimating future losses over a medium-term period. The nonparametric VaR confirmed its insensitivity to current exchange rate developments. The EVT-based methods proved overconservative (overestimated predictions). Every parametric or semiparametric method revealed a severe increase in liberality as the horizon lengthened.
Research limitations/implications
This research is limited to the analysis of suitable VaR models in a long- and short-run period without using artificial intelligence.
Practical implications
The result of this paper is the choice of a proper VaR method for the online application for estimating the future exchange rate for enterprises.
Originality/value
The orientation of medium-term period makes the research original and useful for small- and medium-sized enterprises.
Abstract
Purpose
A misplaced reliance on value at risk (VaR) has been focused on in the media as one of the main reasons for the current financial crisis, and the recently published Turner Review by the UK Financial Services Authority concurs. The purpose of this paper is to present an introductory overview of VaR and its weaknesses which will be easily understood by non‐technical readers.
Design/methodology/approach
Simple numerical examples utilising real and simulated data are employed to reinforce the main arguments.
Findings
This paper explains that some of the main approaches employed by banks for computing VaR have serious weaknesses. These weaknesses have contributed to the current financial crisis.
Research limitations/implications
Consistent with the introductory nature of this paper, the empirical research is limited to simple examples.
Practical implications
The evidence here suggests that if VaR is to play a major role under future financial regulation, then research is required to develop improved estimation techniques and backtesting procedures.
Originality/value
This paper differs from many academic papers on VaR by assuming only a very basic knowledge of mathematics and statistics.
Yurun Yang, Ahmet Goncu and Athanasios Pantelous
Abstract
Purpose
The purpose of this paper is to compare the profitability of different pairs selection and spread trading methods using the complete data set of commodity futures from Dalian Commodity Exchange, Shanghai Futures Exchange and Zhengzhou Commodity Exchange.
Design/methodology/approach
Pairs trading methods proposed in the literature are compared in terms of risk-adjusted returns via in-sample and out-of-sample backtesting, with bootstrapping for robustness.
Findings
The empirical results show that pairs trading in the Chinese commodity futures market offers high returns; however, the profitability of these strategies primarily depends on the identification of suitable pairs. The observed high returns are a compensation for the spread divergence risk during the potentially longer holding periods, which implies that the maximum drawdown is more crucial than other risk-adjusted return measures such as the Sharpe ratio.
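Maximum drawdown, the risk measure these findings single out, is the largest peak-to-trough loss of the cumulative equity curve. A minimal sketch, with an illustrative equity series:

```python
def max_drawdown(equity_curve):
    """Largest peak-to-trough decline of a cumulative equity series,
    expressed as a fraction of the running peak. For pairs trades whose
    spreads can stay divergent for long stretches, this captures the
    holding risk that a Sharpe ratio averages away."""
    peak = equity_curve[0]
    worst = 0.0
    for value in equity_curve:
        peak = max(peak, value)
        worst = max(worst, (peak - value) / peak)
    return worst

# The slide from the 120 peak down to 90 is a 25% drawdown:
assert max_drawdown([100, 120, 90, 110, 130]) == 0.25
```

Because the statistic tracks the running peak rather than average variability, two strategies with identical Sharpe ratios can differ sharply in drawdown, which is the distinction the findings emphasise.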
Originality/value
Complementing the existing literature, it is shown for the Chinese commodity futures market that introducing shorter maximum holding periods for the spread positions decreases the pairs trading profits. Therefore, the returns do not necessarily imply market inefficiency when the higher maximum drawdown associated with the holding period of the spread position is taken into account.