Santiago Gamba-Santamaria, Oscar Fernando Jaulin-Mendez, Luis Fernando Melo-Velandia and Carlos Andrés Quicazán-Moreno
Abstract
Purpose
Value at risk (VaR) is a market risk measure widely used by risk managers and market regulatory authorities, and various methods are proposed in the literature for its estimation. However, limited studies discuss its distribution or its confidence intervals. The purpose of this paper is to compare different techniques for computing such intervals to identify the scenarios under which such confidence interval techniques perform properly.
Design/methodology/approach
The methods that are included in the comparison are based on asymptotic normality, extreme value theory and subsample bootstrap. The evaluation is done by computing the coverage rates for each method through Monte Carlo simulations under certain scenarios. The scenarios consider different persistence degrees in mean and variance, sample sizes, VaR probability levels, confidence levels of the intervals and distributions of the standardized errors. Additionally, an empirical application for the stock market index returns of G7 countries is presented.
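The coverage-rate evaluation described above can be sketched in a few lines: simulate returns from a known distribution, build a confidence interval for VaR by a percentile bootstrap, and record how often the interval contains the true VaR. This is only a minimal illustration assuming i.i.d. standard normal returns, not the persistent mean/variance scenarios or the asymptotic-normality and extreme-value constructions compared in the paper; the helper name `var_bootstrap_ci` is our own.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def var_bootstrap_ci(returns, p=0.99, level=0.95, n_boot=500):
    """Percentile-bootstrap confidence interval for VaR(p), taken here as
    the p-quantile of the loss (negative return) distribution."""
    losses = -returns
    idx = rng.integers(0, losses.size, size=(n_boot, losses.size))
    boot = np.quantile(losses[idx], p, axis=1)   # bootstrap VaR estimates
    alpha = 1.0 - level
    return np.quantile(boot, alpha / 2), np.quantile(boot, 1 - alpha / 2)

# Monte Carlo coverage rate under i.i.d. N(0, 1) returns, where the true
# VaR(99 per cent) is simply the standard normal 0.99-quantile.
true_var = NormalDist().inv_cdf(0.99)
n_rep = 200
hits = 0
for _ in range(n_rep):
    returns = rng.standard_normal(1000)
    lo, hi = var_bootstrap_ci(returns)
    hits += lo <= true_var <= hi
coverage = hits / n_rep
print(f"coverage of the nominal 95 per cent interval: {coverage:.2f}")
```

A coverage rate close to the nominal 95 per cent indicates the interval method works at this quantile; the paper runs this kind of comparison across methods, sample sizes, VaR levels and error distributions.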
Findings
The simulation exercises show that the methods considered in the study are only valid for high quantiles. In particular, in terms of coverage rates, the methods perform well for VaR(99 per cent) but poorly for VaR(95 per cent) and VaR(90 per cent). These results are confirmed by an empirical application to the stock market index returns of the G7 countries.
Practical implications
The findings of the study suggest that the methods considered for estimating VaR confidence intervals are appropriate for high quantiles such as VaR(99 per cent). However, using these methods for smaller quantiles, such as VaR(95 per cent) and VaR(90 per cent), is not recommended.
Originality/value
To the best of the authors' knowledge, this study is the first to identify the scenarios under which the methods for estimating VaR confidence intervals perform properly. The findings are supported by simulation and empirical exercises.
Ai Han, Yongmiao Hong, Shouyang Wang and Xin Yun
Abstract
Modelling and forecasting interval-valued time series (ITS) have received increasing attention in statistics and econometrics. An interval-valued observation contains more information than a point-valued observation in the same time period. The previous literature has mainly considered modelling and forecasting a univariate ITS. However, few works attempt to model a vector process of ITS. In this paper, we propose an interval-valued vector autoregressive moving average (IVARMA) model to capture the cross-dependence dynamics within an ITS vector system. A minimum-distance estimation method is developed to estimate the parameters of an IVARMA model, and consistency, asymptotic normality and asymptotic efficiency of the proposed estimator are established. A two-stage minimum-distance estimator is shown to be asymptotically most efficient among the class of minimum-distance estimators. Simulation studies show that the two-stage estimator indeed outperforms other minimum-distance estimators for various data-generating processes considered.
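As a loose illustration of the modelling idea (not the authors' IVARMA model or their minimum-distance estimator), one can represent each interval-valued observation by its centre and half-range and fit an ordinary point-valued VAR(1) to the stacked pair by least squares; the data-generating process and all names below are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one interval-valued series: the centre follows an AR(1) with
# coefficient 0.6, and the half-range is positive noise around 0.5.
T = 300
center = np.zeros(T)
for t in range(1, T):
    center[t] = 0.6 * center[t - 1] + rng.standard_normal()
half_range = 0.5 + 0.1 * np.abs(rng.standard_normal(T))

# Stack (centre, half-range) into a bivariate state and fit a VAR(1) by OLS:
# X_t ~= A X_{t-1}, estimated by regressing X[1:] on X[:-1].
X = np.column_stack([center, half_range])
coef, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
A = coef.T                       # A[0, 0] should be close to 0.6
print(np.round(A, 2))
```

This centre/range reduction discards the interval structure that the paper's minimum-distance estimator exploits directly; it is only meant to make the vector-autoregressive dynamics concrete.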
Federico Echenique and Ivana Komunjer
Abstract
In this article we design an econometric test for monotone comparative statics (MCS) often found in models with multiple equilibria. Our test exploits the observable implications of the MCS prediction: that the extreme (high and low) conditional quantiles of the dependent variable increase monotonically with the explanatory variable. The main contribution of the article is to derive a likelihood-ratio test, which to the best of our knowledge is the first econometric test of MCS proposed in the literature. The test is an asymptotic “chi-bar squared” test for order restrictions on intermediate conditional quantiles. The key features of our approach are: (1) we do not need to estimate the underlying nonparametric model relating the dependent and explanatory variables to the latent disturbances; (2) we make few assumptions on the cardinality, location, or probabilities over equilibria. In particular, one can implement our test without assuming an equilibrium selection rule.
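The observable implication the test builds on is easy to eyeball on simulated data: bin the explanatory variable and check that an extreme conditional quantile of the dependent variable is non-decreasing across bins. The sketch below is only this naive binned check under a made-up linear design, not the chi-bar-squared likelihood-ratio test of the article.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated design where conditional quantiles of y genuinely increase in x.
n = 5000
x = rng.uniform(0, 1, n)
y = 2.0 * x + rng.standard_normal(n)

# Bin x into five intervals and compute the 0.9-quantile of y in each bin.
edges = np.linspace(0, 1, 6)
q_hi = [np.quantile(y[(x >= lo) & (x < hi)], 0.9)
        for lo, hi in zip(edges[:-1], edges[1:])]

monotone = all(a <= b for a, b in zip(q_hi, q_hi[1:]))
print(monotone)
```

Under MCS this monotonicity should hold for the high (and, symmetrically, the low) conditional quantiles; the article turns this into a formal test with order restrictions.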
Abstract
Purpose
The purpose of this paper is to show that multivariate t-distribution assumption provides a better description of stock return data than multivariate normality assumption.
Design/methodology/approach
The EM algorithm is applied to solve the statistical estimation problem almost analytically, and the asymptotic theory is provided for inference.
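A standard EM iteration for a multivariate t fits the location vector and scatter matrix by alternating latent-scale weights (E-step) with weighted moment updates (M-step). The sketch below fixes the degrees of freedom `nu` for simplicity, whereas the paper's procedure also handles inference and is tailored to the asset-pricing setting; `fit_mvt_em` is our own helper name, and the test data are i.i.d. t-distributed margins rather than real stock returns.

```python
import numpy as np

rng = np.random.default_rng(3)

def fit_mvt_em(X, nu=5.0, n_iter=50):
    """EM estimates of the location vector and scatter matrix of a
    multivariate t with known degrees of freedom nu (simplified sketch)."""
    n, p = X.shape
    mu, Sigma = X.mean(axis=0), np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
        w = (nu + p) / (nu + d2)                     # E-step: scale weights
        mu = (w[:, None] * X).sum(axis=0) / w.sum()  # M-step: weighted mean
        diff = X - mu
        Sigma = (w[:, None] * diff).T @ diff / n     # M-step: weighted scatter
    return mu, Sigma

# Synthetic heavy-tailed data with location zero (independent t_5 margins).
X = rng.standard_t(df=5, size=(2000, 3))
mu_hat, Sigma_hat = fit_mvt_em(X, nu=5.0)
print(np.round(mu_hat, 2))
```

Downweighting outlying observations via `w` is what makes the t-based estimates robust relative to the sample mean and covariance computed under normality.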
Findings
The authors find that the multivariate normality assumption is almost always rejected by real stock return data, while the multivariate t-distribution assumption can often be adequate. Conclusions under normality versus under the t-distribution can differ drastically when estimating expected returns and Jensen's αs and when testing asset pricing models.
Practical implications
The results provide improved estimates of cost of capital and asset moment parameters that are useful for corporate project evaluation and portfolio management.
Originality/value
The authors propose new procedures that make it easy to use a multivariate t-distribution, which models the data well, as a simple and viable alternative in practice for examining the robustness of many existing results.
Saida Mancer, Abdelhakim Necir and Souad Benchaira
Abstract
Purpose
The purpose of this paper is to propose a semiparametric estimator for the tail index of Pareto-type random truncated data that improves the existing ones in terms of mean square error. Moreover, we establish its consistency and asymptotic normality.
Design/methodology/approach
To construct a root mean squared error (RMSE)-reduced estimator of the tail index, the authors use the semiparametric estimator of the underlying distribution function given by Wang (1989). This allows them to define the corresponding tail process and to provide a weak approximation to it. By means of a functional representation of the given estimator of the tail index and by using this weak approximation, the authors establish the asymptotic normality of the aforementioned RMSE-reduced estimator.
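For orientation, the classical Hill estimator below is the standard tail-index estimator for complete (untruncated) Pareto-type samples; the paper's contribution is a semiparametric estimator that additionally corrects for random right-truncation, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)

def hill_estimator(data, k):
    """Classical Hill estimator of the tail index gamma from the k largest
    order statistics (baseline for complete, untruncated data)."""
    x = np.sort(data)[::-1]                      # descending order statistics
    return np.mean(np.log(x[:k])) - np.log(x[k])

# Exact Pareto sample with shape alpha = 2, so the tail index gamma = 1/2.
alpha = 2.0
sample = rng.pareto(alpha, size=20000) + 1.0     # Pareto on [1, infinity)
gamma_hat = hill_estimator(sample, k=500)
print(round(gamma_hat, 2))
```

Under random right-truncation this naive estimator is biased, which is precisely the situation the paper's Wang (1989)-based semiparametric construction is designed to handle.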
Findings
On the basis of a semiparametric estimator of the underlying distribution function, the authors propose a new method for estimating the tail index of Pareto-type distributions under random right-truncation. Compared with existing estimators, the proposed one behaves well in terms of both bias and RMSE. A useful weak approximation of the corresponding tail empirical process allows the authors to establish both the consistency and the asymptotic normality of the proposed estimator.
Originality/value
A new semiparametric tail (empirical) process for truncated data is introduced, a new estimator for the tail index of Pareto-type truncated data is proposed and the asymptotic normality of this estimator is established.