Search results
1 – 10 of over 34,000
Abstract
Purpose
The purpose of this paper is to show that the multivariate t-distribution assumption provides a better description of stock return data than the multivariate normality assumption.
Design/methodology/approach
The EM algorithm is applied to solve the statistical estimation problem almost analytically, and the asymptotic theory is provided for inference.
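As an illustration of how such an EM scheme works, the location and scatter of a multivariate t-distribution can be estimated as below. This is a minimal sketch with fixed degrees of freedom nu, not the authors' full procedure, which also estimates nu and supplies asymptotic inference.

```python
import numpy as np

def fit_mvt_em(X, nu=5.0, n_iter=100, tol=1e-8):
    """EM estimation of the location mu and scatter Sigma of a
    multivariate t-distribution with fixed degrees of freedom nu."""
    n, p = X.shape
    mu = X.mean(axis=0)
    Sigma = np.cov(X, rowvar=False)
    for _ in range(n_iter):
        diff = X - mu
        # Mahalanobis distances under the current scatter estimate
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(Sigma), diff)
        # E-step: latent precision weights; outlying rows are down-weighted
        w = (nu + p) / (nu + d2)
        # M-step: weighted mean and weighted scatter
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        diff = X - mu_new
        Sigma_new = (w[:, None, None] *
                     np.einsum('ij,ik->ijk', diff, diff)).sum(axis=0) / n
        converged = np.linalg.norm(mu_new - mu) < tol
        mu, Sigma = mu_new, Sigma_new
        if converged:
            break
    return mu, Sigma
```

The down-weighting of high-Mahalanobis observations is what makes the t-based estimates robust to the heavy tails of stock returns.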
Findings
The authors find that the multivariate normality assumption is almost always rejected by real stock return data, while the multivariate t-distribution assumption can often be adequate. Conclusions under normality and under the t-distribution can differ drastically when estimating expected returns and Jensen’s αs, and when testing asset pricing models.
Practical implications
The results provide improved estimates of cost of capital and asset moment parameters that are useful for corporate project evaluation and portfolio management.
Originality/value
The authors propose new procedures that make it easy to use a multivariate t-distribution, which models the data well, as a simple and viable alternative in practice for examining the robustness of many existing results.
C.H. Wong, J. Nicholas and G.D. Holt
Abstract
Today’s growing number of contractor selection methodologies reflects the construction industry’s increasing awareness of the need to improve its procurement process and performance. This paper investigates contractor classification methods that link clients’ selection aspirations to contractor performance. Multivariate techniques were used to study the intrinsic link between clients’ selection preferences, i.e. project‐specific criteria (PSC) and their respective levels of importance assigned (LIA) during tender evaluation, for modelling contractor classification in a data set of 68 case studies of UK construction projects. Logistic regression (LR) and multivariate discriminant analysis (MDA) were used. Results revealed that both techniques produced good predictions of contractor performance and indicated that suitability of the equipment, past performance in cost and time on similar projects, contractor relationship with the local authority, and contractor reputation/image are the most predominant of the 34 PSC in the LR and MDA models. The results suggest that contractor classification models using multivariate techniques could be developed further.
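A minimal sketch of the logistic-regression side of such a classification exercise is shown below, on hypothetical synthetic data rather than the paper's 68 UK case studies, with criterion scores as predictors and a binary performance outcome.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=2000):
    """Plain gradient-descent logistic regression: a minimal stand-in
    for the LR contractor-classification models (illustrative only)."""
    X1 = np.c_[np.ones(len(X)), X]          # add an intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))   # predicted probabilities
        w -= lr * X1.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

def predict(w, X):
    """Classify as good/poor performer at the 0.5 probability cut-off."""
    X1 = np.c_[np.ones(len(X)), X]
    return (1.0 / (1.0 + np.exp(-X1 @ w)) >= 0.5).astype(int)
```

In practice the fitted coefficients would indicate which criteria (PSC) dominate the classification, mirroring the variable-importance reading in the paper.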
Abstract
Purpose
The purpose of this paper is to address the problem that qualitative relative factors cannot be employed in traditional multivariate grey models.
Design/methodology/approach
First, a new model is constructed by introducing dummy drivers. Then, the parameter estimation method and recursive function of the model are discussed. Furthermore, methods for setting dummy drivers and for testing them before and after estimation are proposed. Finally, the proposed model is applied to forecasting the per capita income of rural residents in Henan province, China.
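For readers unfamiliar with grey models, the univariate GM(1,1) base case that multivariate grey models extend can be sketched as follows; this is an illustrative implementation of the classic model, not the paper's dummy-driver extension.

```python
import numpy as np

def gm11_forecast(x, steps=1):
    """Classic univariate GM(1,1) grey model: fit on series x and
    return fitted values plus `steps` out-of-sample forecasts."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                       # accumulated generating operation (AGO)
    z = 0.5 * (x1[:-1] + x1[1:])            # background values
    B = np.c_[-z, np.ones(len(z))]
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]   # grey parameters
    k = np.arange(len(x) + steps)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a  # time-response function
    return np.diff(x1_hat, prepend=0.0)     # inverse AGO back to the series
```

GM(1,1) reproduces near-exponential series almost exactly, which is why grey models suit short, smooth socioeconomic series such as per capita income.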
Findings
The proposed model is a reasonable extension of the original one and achieves higher accuracy. In the case study, the forecasts of the proposed model are compared with those of other grey forecasting models, showing that the proposed model has not only high accuracy but also clear physical meaning.
Practical implications
The method proposed in the paper can be used in policy effect measurement, marketing forecasting, etc., when the predictor variables are influenced by qualitative variables.
Originality/value
The proposed method improves the accuracy of the multivariate grey forecasting model.
Burcu Tunga and Metin Demiralp
Abstract
Purpose
The plain High Dimensional Model Representation (HDMR) method needs Dirac delta type weights to partition the given multivariate data set for modelling an interpolation problem. A Dirac delta type weight imposes a different importance level on each node of this set during the partitioning procedure, which directly affects the performance of HDMR. The purpose of this paper is to develop a new method by using fluctuation free integration and HDMR methods to obtain optimized weight factors needed for identifying these importance levels for the multivariate data partitioning and modelling procedure.
Design/methodology/approach
A common problem in multivariate interpolation, where the sought function values are given at the nodes of a rectangular prismatic grid, is to determine an analytical structure for the function under consideration. As the multivariance of an interpolation problem increases, standard numerical methods become inadequate and computer‐based applications run into memory limitations. To overcome these multivariance problems, it is better to deal with less‐variate structures. HDMR methods, which are based on a divide‐and‐conquer philosophy, can be used for this purpose. This corresponds to multivariate data partitioning in which at most univariate components of the plain HDMR are taken into consideration. Obtaining these components requires a number of integrals to be evaluated, and the Fluctuation Free Integration method is used to evaluate them. This new form of HDMR, integrated with Fluctuation Free Integration, allows the Dirac delta type weights used in multivariate data partitioning to be discarded and the weight factors corresponding to the importance level of each node of the given set to be optimized.
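The partitioning idea can be illustrated with a toy first-order HDMR of data on a 2D grid under uniform (non-Dirac) node weights; this sketches the general decomposition F ≈ f0 + f1(x) + f2(y), not the authors' optimized-weight method.

```python
import numpy as np

def hdmr_first_order(F):
    """First-order HDMR of values F on a rectangular grid with
    uniform node weights: constant plus two univariate components."""
    f0 = F.mean()                            # constant component
    f1 = F.mean(axis=1) - f0                 # univariate component in x
    f2 = F.mean(axis=0) - f0                 # univariate component in y
    approx = f0 + f1[:, None] + f2[None, :]  # at-most-univariate reconstruction
    return f0, f1, f2, approx
```

For purely additive functions the first-order truncation is exact; interaction terms are what the higher-order HDMR components would capture.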
Findings
The method developed in this study is applied to six numerical examples with different structures, and very encouraging results were obtained. In addition, the new method is compared with methods that use a Dirac delta type weight function, and the results are given in the numerical implementations section.
Originality/value
The authors' new method allows an optimized weight structure for modelling to be determined for the given problem, instead of imposing a certain weight function such as the Dirac delta type weight. This gives the HDMR philosophy the chance of flexible weight utilization in multivariate data modelling problems.
Ryan Larsen, David Leatham and Kunlapath Sukcharoen
Abstract
Purpose
Portfolio theory suggests that geographical diversification of production units could potentially help manage the risks associated with farming, yet little research has been done to evaluate the effectiveness of a geographical diversification strategy in agriculture. The paper aims to discuss this issue.
Design/methodology/approach
The paper utilizes several tools from modern finance theory, including Conditional Value-at-Risk (CVaR) and copulas, to construct a model for the evaluation of a diversification strategy. The proposed model – the copula-based mean-CVaR model – is then applied to the producer’s acreage allocation problem to examine the potential benefits of risk reduction from a geographical diversification strategy in US wheat farming. Along with the copula-based model, the multivariate-normal mean-CVaR model is also estimated as a benchmark.
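A rough sketch of the two ingredients, a Gaussian copula draw and an empirical CVaR, is given below. This is illustrative only: the paper couples marginal profit-margin models with copulas that may be non-Gaussian, and optimises acreage weights on top of these pieces.

```python
import math
import numpy as np

def cvar(losses, alpha=0.95):
    """Conditional Value-at-Risk: mean loss beyond the alpha-quantile."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def gaussian_copula_uniforms(corr, n, rng):
    """Draw dependent U(0,1) samples through a Gaussian copula by
    mapping correlated normals through the standard normal CDF."""
    z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)
    phi = np.vectorize(lambda v: 0.5 * (1.0 + math.erf(v / math.sqrt(2.0))))
    return phi(z)
```

The uniforms would then be pushed through the inverse CDFs of the fitted marginal profit-margin distributions before computing the portfolio CVaR.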
Findings
The mean-CVaR optimization results suggest that geographical diversification is a viable risk management strategy from a farm’s profit margin perspective. In addition, the copula-based model appears more appropriate than the traditional multivariate-normal model for conservative agricultural producers who are concerned with extreme losses of farm profitability, in that the latter model tends to underestimate the minimum level of risk faced by the producers for a given level of profitability.
Originality/value
The effectiveness of geographical diversification in US wheat farming is evaluated. As a methodological contribution, the copula approach is used to model the joint distribution of profit margins and CVaR is employed as a measure of downside risk.
Yang Li and Tianxiang Lan
Abstract
Purpose
This paper aims to employ a multivariate nonlinear regression analysis to establish a predictive model for the final fracture area, while accounting for the impact of individual parameters.
Design/methodology/approach
This analysis is based on numerical simulation data obtained using the hybrid finite element–discrete element (FE–DE) method. The forecasting model was compared with the numerical results, and its accuracy was evaluated by the root mean square error (RMSE), the mean absolute error and the mean absolute percentage error.
Findings
The multivariate nonlinear regression model can accurately predict the nonlinear relationships between injection rate, leakoff coefficient, elastic modulus, permeability, Poisson’s ratio, pore pressure and the final fracture area. The regression equations obtained from the Newton iteration of the least squares method fit the six sensitive parameters well, and the model follows essentially the same trend as the numerical simulation data, with no systematic divergence detected. Least absolute deviation performs significantly worse than the least squares method. The percentage contribution of each sensitive parameter to the final fracture area is available from the simulation results and the forecast model: as they increase gradually, injection rate, leakoff coefficient, permeability, elastic modulus, pore pressure and Poisson’s ratio contribute 43.4%, −19.4%, 24.8%, −19.2%, −21.3% and 10.1% to the final fracture area, respectively. In summary, (1) the fluid injection rate has the greatest influence on the final fracture area; (2) the multivariate nonlinear regression equation was obtained after 59 iterations of the least-squares-based Newton method and 27 derivative evaluations, with a coefficient of determination R² = 0.711 representing the model’s reliability, and the regression equations fit the four parameters of leakoff coefficient, permeability, elastic modulus and pore pressure very satisfactorily; (3) the nonlinear forecasting model can be applied as a standard for optimizing the fracturing strategy and predicting fracturing efficiency in the field and in numerical simulations.
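The least-squares-based Newton (Gauss-Newton) iteration can be sketched on a toy one-predictor model y = a·exp(b·x); the model and data here are hypothetical, not the paper's six-parameter fracture-area regression.

```python
import numpy as np

def gauss_newton_fit(x, y, theta0, n_iter=100):
    """Damped Gauss-Newton least squares for the toy model y = a*exp(b*x).
    Returns the fitted parameters and the coefficient of determination."""
    a, b = theta0
    sse = np.sum((y - a * np.exp(b * x)) ** 2)
    for _ in range(n_iter):
        f = a * np.exp(b * x)
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([np.exp(b * x), a * x * np.exp(b * x)])
        step, *_ = np.linalg.lstsq(J, y - f, rcond=None)
        t = 1.0
        # simple step halving keeps the Newton iteration from overshooting
        while t > 1e-10:
            a_t, b_t = a + t * step[0], b + t * step[1]
            sse_t = np.sum((y - a_t * np.exp(b_t * x)) ** 2)
            if sse_t < sse:
                a, b, sse = a_t, b_t, sse_t
                break
            t *= 0.5
        else:
            break  # no improving step found: converged
    r2 = 1.0 - sse / np.sum((y - y.mean()) ** 2)
    return (a, b), r2
```

The same iteration generalises directly to more parameters by widening the Jacobian, which is how a six-parameter regression equation would be fitted.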
Originality/value
The nonlinear forecasting model of the physical parameters of hydraulic fracturing established in this paper can be applied as a standard for optimizing the fracturing strategy and predicting fracturing efficiency in the field and in numerical simulations. Its effectiveness can be further trained and optimized with experimental and simulation data; incorporating more basic data and establishing regression equations containing more fracturing parameters are directions for further research.
Abstract
Purpose
The purpose of this paper is to examine the potential gains in hedge ratio calculation for agricultural commodities by incorporating market linkages and prices of related commodities into the hedge ratio estimation process.
Design/methodology/approach
A vector autoregressive multivariate generalized autoregressive conditional heteroskedasticity (VAR‐MGARCH) model is used to construct a time‐varying correlation matrix for commodity prices across linked markets and across linked commodities. The MGARCH model is estimated using a two‐step approach, which allows for a large system of related prices to be estimated.
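For context, the static minimum-variance hedge ratio that the MGARCH approach generalises to time-varying conditional moments is h = Cov(s, f)/Var(f); a sketch on simulated spot and futures returns:

```python
import numpy as np

def min_variance_hedge_ratio(spot_returns, fut_returns):
    """Static minimum-variance hedge ratio h = Cov(s, f) / Var(f).
    The MGARCH approach replaces these sample moments with
    time-varying conditional moments; this is the textbook benchmark."""
    c = np.cov(spot_returns, fut_returns)
    return c[0, 1] / c[1, 1]
```

Holding -h futures per unit of the spot position minimises the variance of the hedged portfolio s - h·f, which is the quantity compared in-sample and out-of-sample in the paper.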
Findings
In‐sample and out‐of‐sample portfolio variance comparisons among the no-hedge, bivariate GARCH and MGARCH models indicate that hedge ratios estimated using the MGARCH approach reduce agricultural producers' and commercial consumers' risks in futures market participation.
Research limitations/implications
The application is limited to an examination of Montana wheat markets.
Practical implications
Agricultural producers who use futures markets to reduce market risk will have a better method for determining hedging positions, because MGARCH estimated hedge ratios incorporate more information than hedge ratios estimated using existing practices.
Social implications
Portfolio variance reduction is analogous to utility improvement for agricultural producers. More efficient hedging strategies can lead to better implementation of futures markets and increased social welfare.
Originality/value
This research substantially extends the current literature on agricultural hedging strategies by illustrating the advantages of a hedge ratio estimation approach that incorporates important information about prices at linked markets and prices of other commodities. By providing evidence that market portfolio variance can be lowered using the multivariate estimation approach, the research offers commercial agricultural producers and consumers a practical tool for improving futures market strategies.
Hemant Kumar Badaye and Jason Narsoo
Abstract
Purpose
This study aims to use a novel methodology to investigate the performance of several multivariate value at risk (VaR) and expected shortfall (ES) models implemented to assess the risk of an equally weighted portfolio consisting of high-frequency (1-min) observations for five foreign currencies, namely, EUR/USD, GBP/USD, EUR/JPY, USD/JPY and GBP/JPY.
Design/methodology/approach
By applying the multiplicative component generalised autoregressive conditional heteroskedasticity (MC-GARCH) model on each return series and by modelling the dependence structure using copulas, the 95 per cent intraday portfolio VaR and ES are forecasted for an out-of-sample set using Monte Carlo simulation.
Findings
In terms of VaR forecasting performance, the backtesting results indicated that four of the five models implemented could not be rejected at the 5 per cent level of significance. However, when the models were further evaluated for their ES forecasting power, only the Student’s t and Clayton models could not be rejected. The fact that some ES models were rejected at the 5 per cent significance level highlights the importance of selecting an appropriate copula model for the dependence structure.
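A commonly used VaR backtest of this kind is the Kupiec proportion-of-failures test, sketched below; the abstract does not specify the exact battery of backtests the authors ran, so this is illustrative.

```python
import math

def kupiec_pof(violations, n, p=0.05):
    """Kupiec proportion-of-failures likelihood-ratio statistic for a VaR
    backtest (assumes 0 < violations < n).  Compare against the
    chi-square(1) critical value 3.841 at the 5 per cent level."""
    x = int(violations)
    pi = x / n                                            # observed violation rate
    log_l0 = (n - x) * math.log(1 - p) + x * math.log(p)  # under the nominal rate
    log_l1 = (n - x) * math.log(1 - pi) + x * math.log(pi)
    return 2.0 * (log_l1 - log_l0)
```

A 95 per cent VaR model whose out-of-sample violation rate is close to 5 per cent yields a small statistic and is not rejected; materially higher or lower rates are.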
Originality/value
To the best of the authors’ knowledge, this is the first study to use the MC-GARCH and copula models to forecast, for the next 1 min, the VaR and ES of an equally weighted portfolio of foreign currencies. It is also the first study to analyse the performance of the MC-GARCH model under seven distributional assumptions for the innovation term.
Fernando Rojas and Victor Leiva
Abstract
Purpose
The objective of this paper is to propose a methodology based on random demand inventory models and dependence structures for a set of raw materials, referred to as “components”, used by food services that produce food rations referred to as “menus”.
Design/methodology/approach
The contribution margins of food services that produce menus are optimised using random dependent demand inventory models. The statistical dependence between the demand for components and/or menus is incorporated into the model through the multivariate Gaussian (or normal) distribution. The contribution margins are optimised by using probabilistic inventory models for each component and stochastic programming with a differential evolution algorithm.
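The single-period ("uniperiodic") piece of such an optimisation can be sketched as a Monte Carlo newsvendor search. Prices, costs and the demand model below are hypothetical; the paper optimises the full system via stochastic programming with a differential evolution algorithm.

```python
import numpy as np

def expected_margin(Q, demand, price, cost):
    """Simulated expected contribution margin for single-period stocking:
    sell min(Q, D) units at `price`, having bought Q units at `cost`."""
    sales = np.minimum(Q, demand)
    return (price * sales - cost * Q).mean()

def best_order_quantity(demand, price, cost, grid):
    """Pick the order quantity on `grid` maximising the simulated margin."""
    margins = [expected_margin(Q, demand, price, cost) for Q in grid]
    return grid[int(np.argmax(margins))]
```

To capture the statistical dependence between components, the `demand` draws would come from a joint model such as `rng.multivariate_normal`, one column per component, rather than independent univariate samples.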
Findings
When compared to the non-optimised system previously used by the company, the (average) expected contribution margin increases by 18.32 per cent when using a continuous review inventory model for groceries and uniperiodic models for perishable components (optimised system).
Research limitations/implications
The multivariate modelling can be improved by using: (a) other non-Gaussian (marginal) univariate probability distributions, by means of the copula method, which accommodates more complex statistical dependence structures; (b) time dependence, through autoregressive and moving-average time-series structures; (c) random modelling of lead time; and (d) zero-inflated or adjusted probability distributions for components whose demand can equal zero.
Practical implications
Professional management of the supply chain allows users to register data on component identification, demand and stock levels, which can subsequently be used with the proposed methodology; the methodology must be implemented computationally.
Originality/value
The proposed multivariate methodology allows demand dependence structures to be described through inventory models applicable to components used to produce menus in food services.
Keywords
- Contribution margins
- Multivariate distribution
- Optimization methods
- Probabilistic inventory models
- Statistical dependence
Abstract
Purpose
This paper examines and forecasts correlations between cryptocurrencies and major fiat currencies using Generalized Autoregressive Score (GAS) time-varying copulas. The authors examine to what extent the multivariate GAS method captures the volatility persistence and the nonlinear interaction effects between cryptocurrencies and major fiat currencies.
Design/methodology/approach
The authors model tail dependence between conventional currencies and Bitcoin utilizing a Glosten-Jagannathan-Runkle Generalized Autoregressive Conditional Heteroscedastic model (GJR-GARCH)-GAS copula specification, which allows the leptokurtic feature and clustering effects of the currency returns distribution to be detected.
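The GJR-GARCH(1,1) conditional-variance recursion underlying the marginal models can be sketched as follows. The parameter values in the usage are placeholders; in practice they come from maximum-likelihood estimation, which is omitted here.

```python
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) conditional-variance recursion: negative shocks
    receive an extra `gamma` loading (the leverage effect)."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = returns.var()              # initialise at the sample variance
    for t in range(1, len(returns)):
        neg = 1.0 if returns[t - 1] < 0 else 0.0
        sigma2[t] = (omega + (alpha + gamma * neg) * returns[t - 1] ** 2
                     + beta * sigma2[t - 1])
    return sigma2
```

The standardised residuals from such marginals are what the GAS copula then links to capture time-varying tail dependence.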
Findings
The authors' results show evidence of multiple tail dependence regimes, implying the unsuitability of applying static models to entirely describe the extreme dependence between Bitcoin and fiat currencies. Compared to the most common constant copulas, the authors find that the multivariate GAS copulas better forecast the volatility and dependency between cryptocurrencies and foreign exchange markets. Furthermore, based on the value-at-risk (VaR) and expected shortfall (ES) analyses, the authors show that the multivariate GAS models produce accurate risk measures when cryptocurrencies are added to a portfolio of fiat currencies.
Originality/value
This paper has two main contributions to the existing literature on cryptocurrencies. First, the authors empirically examine the tail dependence structure between common conventional currencies and bitcoin using GJR-GARCH GAS copulas which consider the leptokurtic feature and clustering effects of currency returns distribution. Second, by modeling VaR and ES, the authors test the implication of using time-varying models on the performance of currency portfolios, including cryptocurrencies.