Search results
Neeraj Joshi, Sudeep R. Bapat and Raghu Nandan Sengupta
Abstract
Purpose
The purpose of this paper is to develop optimal estimation procedures for the stress-strength reliability (SSR) parameter R = P(X > Y) of an inverse Pareto distribution (IPD).
Design/methodology/approach
We estimate the SSR parameter R = P(X > Y) of the IPD under the minimum risk and bounded risk point estimation problems, where X and Y are the strength and stress variables, respectively. The total loss function considered combines estimation error (squared error) and sampling cost, and we minimize the associated risk to estimate the reliability parameter. As no fixed-sample technique can solve the proposed point estimation problems, we propose some “cost and time efficient” adaptive sampling techniques (two-stage and purely sequential sampling methods) to tackle them.
Findings
We state important results based on the proposed sampling methodologies, including estimates of the expected sample size and of the standard deviation (SD) and mean square error (MSE) of the terminal estimator of the reliability parameter. The theoretical values of the reliability parameters and the associated sample sizes and risk functions are well supported by exhaustive simulation analyses. The applicability of the suggested methodology is further corroborated using a real dataset based on insurance claims.
Originality/value
This study will be useful for scenarios where various logistical concerns are involved in the reliability analysis. The methodologies proposed in this study can reduce the number of sampling operations substantially and save time and cost to a great extent.
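As a rough illustration of the quantity being estimated, the SSR parameter R = P(X > Y) admits a simple fixed-sample Monte Carlo plug-in estimate. This is only a sketch of the target quantity, not the paper's sequential or two-stage procedure, and it uses illustrative exponential strength/stress draws in place of the inverse Pareto distribution; the function name `estimate_ssr` is my own.

```python
import numpy as np

def estimate_ssr(x_samples, y_samples):
    """Plug-in Monte Carlo estimate of R = P(X > Y)."""
    # Fraction of paired draws in which strength exceeds stress.
    return float(np.mean(np.asarray(x_samples) > np.asarray(y_samples)))

rng = np.random.default_rng(42)
# Illustrative exponential draws (not the paper's inverse Pareto).
x = rng.exponential(scale=2.0, size=100_000)  # strength X, rate 0.5
y = rng.exponential(scale=1.0, size=100_000)  # stress Y, rate 1.0
r_hat = estimate_ssr(x, y)
# For independent exponentials, R = rate_Y / (rate_X + rate_Y) = 2/3 here.
```

The paper's point is that this fixed-sample approach cannot control the risk of the estimator in advance, which is what motivates the adaptive (two-stage and sequential) sampling rules.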
Pournima Sridarran, Kaushal Keraminiyage and Leon Herszon
Abstract
Purpose
Project-based industries face major challenges in controlling project cost and completing within budget. This is a critical issue, as it often connects to the main objectives of any project. However, accurate estimation at the beginning of a project is difficult, and scholars argue that project complexity is a major contributor to cost estimation inaccuracies. Recognising how similar industries prioritise complexity dimensions in cost estimation therefore helps identify effective practices for reducing cost implications. Hence, the purpose of this paper is to identify the level of importance given to different complexity dimensions in cost estimation and to recognise best practices for improving cost estimation accuracy.
Design/methodology/approach
An online questionnaire survey was conducted among professionals, including estimators, project managers and quantity surveyors, to rank the identified complexity dimensions by their impact on cost estimation accuracy. In addition, in-depth interviews were conducted with experts and practitioners from different industries to extract effective practices for improving the cost estimation process of complex projects.
Findings
The results show that risk, project and product size, and time frame are the complexity dimensions with the highest impact on cost estimation, and these need more attention to reduce unforeseen cost implications. Moreover, the study suggests that implementing a knowledge-sharing system would help acquire reliable and adequate information for cost estimation. Appropriate staffing, network enhancement, risk management and circumspect estimation are further suggestions for improving the cost estimation of complex projects.
Originality/value
The study finally provides suggestions to improve cost estimation in complex projects. Further, the results are expected to be beneficial to learn lessons from different industries and to exchange best practices.
Luiz Eduardo Gaio, Tabajara Pimenta Júnior, Fabiano Guasti Lima, Ivan Carlin Passos and Nelson Oliveira Stefanelli
Abstract
Purpose
The purpose of this paper is to evaluate the predictive capacity of market risk estimation models in times of financial crises.
Design/methodology/approach
For this, value-at-risk (VaR) estimation models were tested on the daily returns of portfolios composed of stock indexes of developed and emerging countries. The historical simulation VaR model, multivariate ARCH models (BEKK, VECH and constant conditional correlation), artificial neural networks and copula functions were tested. The data sample covers the periods of two international financial crises: the Asian crisis of 1997 and the US subprime crisis of 2008.
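Of the models compared, historical simulation is the simplest: VaR is read off as an empirical quantile of past losses. A minimal sketch, assuming VaR is reported as a positive loss at confidence level alpha and using synthetic normal returns in place of the index data (the function name `historical_var` is my own):

```python
import numpy as np

def historical_var(returns, alpha=0.99):
    """Historical-simulation VaR: the empirical alpha-quantile of losses."""
    losses = -np.asarray(returns)            # losses are negated returns
    return float(np.quantile(losses, alpha)) # loss exceeded with prob 1 - alpha

rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=2_500)  # synthetic daily returns
var_99 = historical_var(rets, alpha=0.99)    # near 2.326 * sigma - mu for normal data
```

The multivariate ARCH and copula models in the comparison replace this unconditional quantile with a quantile of a fitted conditional distribution.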
Findings
The results pointed out that the multivariate ARCH models (VECH and BEKK) and the Copula-Clayton model had similar performance, with good adjustments in 100 percent of the tests. Contrary to expectations, no significant differences were perceived between the adjustments for developed and emerging countries, or between crisis and normal periods.
Originality/value
Previous studies focus on the estimation of VaR by a group of models. One of the contributions of this paper is to use several forms of estimation.
Pankaj Sinha and Shalini Agnihotri
Abstract
Purpose
This paper aims to investigate the effect of non-normality in returns and of the market capitalization of stock portfolios and stock indices on value at risk (VaR) and conditional VaR (CVaR) estimation. It is well documented that the returns of stocks and stock indices are not normally distributed, as Indian financial markets are more prone to shocks caused by regulatory changes, exchange rate fluctuations, financial instability, political uncertainty and inadequate economic reforms. Further, the relationship between liquidity, represented by the volume traded of stocks, and market risk, calculated by the VaR of the firms, is studied.
Design/methodology/approach
In this paper, VaR is estimated by fitting the empirical distribution of returns, by the parametric method and by using GARCH(1,1) with Student's t innovations.
Findings
It is observed that stocks, stock indices and their residuals all exhibit non-normality; therefore, conventional methods of VaR calculation are not accurate in real-world situations. The parametric method underestimates VaR and CVaR, whereas VaR estimated by fitting the empirical distribution of returns and taking the (1 − α) percentile gives better results, as the non-normality in returns is accounted for. The distributions best fitting the return series are the logistic, Weibull and Laplace. It is also observed that VaR violations increase with decreasing market capitalization, so market capitalization also affects accurate VaR calculation. Further, studying the relationship between liquidity, represented by the volume traded of stocks, and market risk, calculated by the VaR of the firms, shows that a decrease in liquidity increases the value at risk of the firms.
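The underestimation pattern reported here can be reproduced on synthetic heavy-tailed data. A sketch, assuming Student-t returns (df = 5) as a stand-in for the non-normal return series, with a hard-coded standard normal 99% quantile; the variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Heavy-tailed synthetic returns (Student-t, df = 5, scaled to ~1% moves).
rets = 0.01 * rng.standard_t(df=5, size=50_000)

alpha = 0.99
z99 = 2.326  # standard normal 99% quantile

# Parametric (normal) VaR: fit mean/SD, apply the normal quantile.
var_normal = z99 * rets.std(ddof=1) - rets.mean()

# Empirical VaR: the (1 - alpha) percentile of the fitted return distribution.
var_empirical = -np.quantile(rets, 1.0 - alpha)

# With fat tails, the normal fit understates the 99% loss quantile,
# so var_empirical exceeds var_normal.
```

This mirrors the paper's finding: matching only mean and variance misses the tail mass, while the empirical percentile captures it.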
Research limitations/implications
This methodology can further be extended to other assets’ VaR calculation like foreign exchange rates, commodities and bank loan portfolios, etc.
Practical implications
This finding can help risk managers and mutual fund managers (as they hold portfolios of different asset sizes) estimate the VaR of portfolios with non-normal returns and different market capitalizations with precision. VaR is used as a tool for setting trading limits at trading desks; if the VaR calculation takes into account the non-normality of the underlying return distribution, trading limits can be set with precision. Hence, both risk management and risk measurement through VaR are enhanced when VaR is calculated accurately.
Originality/value
This paper considers the joint issue of non-normality in returns and the effect of market capitalization on VaR estimation.
Khaled Mokni and Faysal Mansouri
Abstract
In this chapter, we investigate the effect of long memory in volatility on the accuracy of emerging stock market risk estimation during the recent global financial crisis. For this purpose, we use short-memory (GJR-GARCH) and long-memory (FIAPARCH) volatility models to compute in-sample and out-of-sample one-day-ahead VaR. Using six emerging stock market indices, we show that taking the long memory property into account in volatility modelling generally provides more accurate VaR estimation and prediction. Conservative risk managers may therefore adopt long-memory GARCH-type models to assess emerging market risks, especially when crisis periods are included.
Gordon Newlove Asamoah and Anthony Quartey‐Papafio
Abstract
Purpose
The purpose of this paper is to estimate the Beta Risk Coefficient of 32 listed companies (shares), which are included in the Ghana Stock Exchange (GSE All Share Index).
Design/methodology/approach
This research investigated some of the issues that can affect beta estimates (the measurement of returns, the choice of market index, the length of the estimation period, the sampling interval, the issue of normality, autocorrelation, the effect of thin trading, seasonality and stability) using 32 listed companies. The methodology used was the market model, and the beta estimation techniques included Scholes‐Williams' beta, Dimson's beta and Fowler‐Rorke's beta.
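The market model underlying all of these estimators regresses stock returns on market returns, with beta as the OLS slope; the Scholes-Williams, Dimson and Fowler-Rorke corrections add lead/lag market terms on top of this to handle thin trading. A minimal sketch of the baseline OLS beta on synthetic data (the function name and parameter values are illustrative):

```python
import numpy as np

def market_model_beta(stock_rets, market_rets):
    """OLS beta from the market model r_i = alpha + beta * r_m + e."""
    c = np.cov(np.asarray(stock_rets), np.asarray(market_rets))
    return float(c[0, 1] / c[1, 1])  # beta = Cov(r_i, r_m) / Var(r_m)

rng = np.random.default_rng(7)
r_m = rng.normal(0.0, 0.01, size=1_000)                   # market returns
r_i = 0.001 + 1.3 * r_m + rng.normal(0.0, 0.005, 1_000)   # stock, true beta 1.3
beta_hat = market_model_beta(r_i, r_m)
```

On thinly traded exchanges such as the GSE, this plain OLS slope is biased toward zero, which is why the lead/lag adjusted estimators are examined.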
Findings
The empirical results generally confirm the evidence reported by various researchers in the literature reviewed. However, the tests for the effect of thin trading and the effect of seasonality reject the null hypothesis (H0: β(Monday) = β(Tuesday) = β(Wednesday) = β(Thursday) = β(Friday)).
Originality/value
The study has about 95 per cent originality since the authors went into the field to gather all the data needed and did all the analysis.
Abstract
Purpose
Amid increased size and complexity of the banking industry, operational risk has a greater potential to occur in more harmful ways than many other sources of risk. This paper seeks to provide a succinct overview of the current regulatory framework of operational risk under the New Basel Accord with a view to inform a critical debate about the influence of data collection, loss reporting, and model specification on the consistency of risk‐sensitive capital rules.
Design/methodology/approach
The paper's approach is to investigate the regulatory implications of varying characteristics of operational risk and different methods to identify operational risk exposure.
Findings
The findings reveal that effective operational risk measurement hinges on how the reporting of operational risk losses and the model sensitivity of quantitative methods affect the generation of consistent risk estimates.
Originality/value
The presented findings offer tractable recommendations for a more coherent and consistent regulation of operational risk.
Ramona Serrano Bautista and José Antonio Núñez Mora
Abstract
Purpose
This paper tests the accuracies of the models that predict the Value-at-Risk (VaR) for the Market Integrated Latin America (MILA) and Association of Southeast Asian Nations (ASEAN) emerging stock markets during crisis periods.
Design/methodology/approach
Many VaR estimation models have been presented in the literature. In this paper, the VaR is estimated using Generalized Autoregressive Conditional Heteroskedasticity (GARCH), EGARCH and GJR-GARCH models under normal, skewed-normal, Student-t and skewed-Student-t distributional assumptions, and the results are compared with the predictive performance of the Conditional Autoregressive Value-at-Risk (CAViaR) model under the four alternative specifications proposed by Engle and Manganelli (2004).
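The GARCH-family forecasts in this comparison share the same one-day-ahead logic: filter a conditional variance through the sample, then scale the forecast volatility by a distributional quantile. A sketch of the plain GARCH(1,1) recursion with illustrative fixed parameters (not MLE-fitted, and using a normal quantile rather than the skewed or Student-t cases; the function name is my own):

```python
import numpy as np

def garch11_var(returns, omega, alpha, beta, z=2.326):
    """One-day-ahead VaR from a GARCH(1,1) variance recursion.

    sigma2_t = omega + alpha * r_{t-1}**2 + beta * sigma2_{t-1};
    VaR is z times the forecast volatility, reported as a positive loss.
    """
    r = np.asarray(returns)
    sigma2 = r.var()                    # initialize at the sample variance
    for ret in r:                       # filter through the whole sample
        sigma2 = omega + alpha * ret**2 + beta * sigma2
    return float(z * np.sqrt(sigma2))   # next-day VaR forecast

rng = np.random.default_rng(5)
rets = rng.normal(0.0, 0.01, size=1_000)  # synthetic daily returns
var_1d = garch11_var(rets, omega=2e-6, alpha=0.08, beta=0.90)
```

The CAViaR specifications dispense with the variance model entirely and instead update the quantile itself autoregressively, which is the contrast the paper's backtests evaluate.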
Findings
The results support the robustness of the CAViaR model for out-of-sample VaR forecasting in the MILA and ASEAN-5 emerging stock markets during crisis periods. This evidence is based on a backtesting approach that analyzed the predictive performance of the models according to their accuracy.
Originality/value
An important issue in market risk is the inaccurate estimation of risk since different VaR models lead to different risk measures, which means that there is not yet an accepted method for all situations and markets. In particular, quantifying and forecasting the risk for the MILA and ASEAN-5 stock markets is crucial for evaluating global market risk since the MILA is the biggest stock exchange in Latin America and the ASEAN region accounted for 11% of the total global foreign direct investment inflows in 2014. Furthermore, according to the Asian Development Bank, this region is projected to average 7% annual growth by 2025.
Subal C. Kumbhakar and Efthymios G. Tsionas
Abstract
This paper deals with estimation of risk and the risk preference function when producers face uncertainties in production (usually labeled as production risk) and output price. These uncertainties are modeled in the context of production theory where the objective of the producers is to maximize expected utility of normalized anticipated profit. Models are proposed to estimate risk preference of individual producers under (i) only production risk, (ii) only price risk, (iii) both production and price risks, (iv) production risk with technical inefficiency, (v) price risk with technical inefficiency, and (vi) both production and price risks with technical inefficiency. We discuss estimation of the production function, the output risk function, and the risk preference functions in some of these cases. Norwegian salmon farming data is used for an empirical application of some of the proposed models. We find that salmon farmers are, in general, risk averse. Labor is found to be risk decreasing while capital and feed are found to be risk increasing.
Fotios C. Harmantzis, Linyan Miao and Yifan Chien
Abstract
Purpose
This paper aims to test empirically the performance of different models in measuring VaR and ES in the presence of heavy tails in returns using historical data.
Design/methodology/approach
Daily returns of popular indices (S&P500, DAX, CAC, Nikkei, TSE and FTSE) and currencies (US dollar vs euro, yen, pound and Canadian dollar) over more than ten years are modeled with the empirical (historical), Gaussian, Generalized Pareto (the peak-over-threshold (POT) technique of extreme value theory (EVT)) and Stable Paretian (both symmetric and asymmetric) distributions. Experiments on factors that affect the modeling, e.g. rolling window size and confidence level, were also conducted.
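The POT technique mentioned here fits a Generalized Pareto distribution (GPD) to losses exceeding a high threshold and extrapolates the tail quantile from that fit. A sketch using a moment-based GPD fit as a simple stand-in for the maximum-likelihood fit typically used in EVT work, on synthetic exponential losses rather than the index data (the function name `pot_var` is my own):

```python
import numpy as np

def pot_var(losses, u, q=0.99):
    """Peaks-over-threshold VaR: GPD fitted to exceedances over threshold u.

    Shape xi and scale beta come from the GPD method-of-moments estimator
    (a simple stand-in for MLE); the tail quantile then follows the standard
    POT formula VaR_q = u + (beta/xi) * (((n/N_u) * (1-q))**(-xi) - 1).
    """
    losses = np.asarray(losses)
    exc = losses[losses > u] - u                 # exceedances over u
    m, s2 = exc.mean(), exc.var(ddof=1)
    xi = 0.5 * (1.0 - m**2 / s2)                 # GPD shape (moments)
    beta = 0.5 * m * (m**2 / s2 + 1.0)           # GPD scale (moments)
    tail_prob = (1.0 - q) * losses.size / exc.size
    if abs(xi) < 1e-8:                           # exponential-tail limit
        return float(u - beta * np.log(tail_prob))
    return float(u + (beta / xi) * (tail_prob ** (-xi) - 1.0))

rng = np.random.default_rng(3)
losses = rng.exponential(1.0, size=20_000)   # synthetic loss data
u = np.quantile(losses, 0.95)                # threshold choice (95th pct)
var_99 = pot_var(losses, u, q=0.99)          # near ln(100) ~ 4.6 for Exp(1)
```

Because only the tail exceedances are modeled, the fit is insensitive to the center of the distribution, which is what makes POT competitive for ES and extreme-quantile estimation in this study.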
Findings
In estimating VaR, the results show that models capturing rare events predict risk more accurately than non-fat-tailed models. For ES estimation, the historical model (as expected) and the POT method prove to give more accurate estimates; the Gaussian model underestimates ES, while the Stable Paretian framework overestimates it.
Practical implications
Research findings are useful to investors and the way they perceive market risk, risk managers and the way they measure risk and calibrate their models, e.g. shortcomings of VaR, and regulators in central banks.
Originality/value
A comparative, thorough empirical study on a number of financial time series (currencies, indices) that aims to reveal the pros and cons of Gaussian versus fat‐tailed models and Stable Paretian versus EVT, in estimating two popular risk measures (VaR and ES), in the presence of extreme events. The effects of model assumptions on different parameters have also been studied in the paper.