Search results

1 – 10 of 705
Book part
Publication date: 30 May 2018

Francesco Moscone, Veronica Vinciotti and Elisa Tosetti

Abstract

This chapter reviews graphical modeling techniques for estimating large covariance matrices and their inverse. The chapter provides a selective survey of different models and estimators proposed by the graphical modeling literature and offers some practical examples where these methods could be applied in the area of health economics.
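
The following is a minimal sketch of one estimator from this literature, the graphical lasso, which recovers a sparse inverse covariance (precision) matrix whose zero entries correspond to missing edges in a Gaussian graphical model. The data, dimensions and penalty value are illustrative assumptions, not taken from the chapter.

```python
# Sketch only: sparse precision-matrix estimation with the graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 15))      # 200 observations, 15 variables (toy data)

model = GraphicalLasso(alpha=0.1)       # alpha controls the sparsity penalty
model.fit(X)

precision = model.precision_            # estimated inverse covariance matrix
# Zero off-diagonal entries are conditional independencies, i.e. missing edges.
edges = np.argwhere(np.triu(np.abs(precision) > 1e-8, k=1))
print(f"{len(edges)} edges retained out of {15 * 14 // 2} possible")
```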

Book part
Publication date: 19 November 2014

Daniel Felix Ahelegbey and Paolo Giudici

Abstract

The latest financial crisis has stressed the need to understand the world financial system as a network of interconnected institutions, where financial linkages play a fundamental role in the spread of systemic risk. In this paper we propose to enrich the topological perspective of network models with a more structured statistical framework, that of Bayesian Gaussian graphical models. From a statistical viewpoint, we propose a new class of hierarchical Bayesian graphical models that can split correlations between institutions into country-specific and idiosyncratic ones, in a way that parallels the decomposition of returns in the well-known Capital Asset Pricing Model. From a financial economics viewpoint, we suggest a way to model systemic risk that can explicitly take into account frictions between different financial markets, particularly suited to studying the ongoing banking union process in Europe. From a computational viewpoint, we develop a novel Markov chain Monte Carlo algorithm based on Bayes factor thresholding.
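
As a loose illustration of the correlation decomposition described above (and only of that idea; the hierarchical Bayesian estimation and the Bayes-factor MCMC of the paper are not reproduced), the sketch below splits simulated bank returns, CAPM-style, into a country-factor component and an idiosyncratic residual and compares the resulting correlation matrices. All returns, betas and dimensions are invented for illustration.

```python
# Sketch only: CAPM-style split of correlations into country-driven and idiosyncratic parts.
import numpy as np

rng = np.random.default_rng(6)
T, n_banks = 500, 4
country_factor = 0.01 * rng.standard_normal(T)            # common country return
betas = np.array([0.8, 1.1, 0.9, 1.2])                    # exposure of each bank
idiosyncratic = 0.01 * rng.standard_normal((T, n_banks))
bank_returns = country_factor[:, None] * betas + idiosyncratic

# Least-squares exposure to the country factor, then the residual (idiosyncratic) part.
beta_hat = (bank_returns.T @ country_factor) / (country_factor @ country_factor)
residuals = bank_returns - country_factor[:, None] * beta_hat

print("total correlations:\n", np.round(np.corrcoef(bank_returns.T), 2))
print("idiosyncratic correlations:\n", np.round(np.corrcoef(residuals.T), 2))
```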

Book part
Publication date: 30 May 2018

Details

Health Econometrics
Type: Book
ISBN: 978-1-78714-541-2

Article
Publication date: 11 July 2008

Vivian W.Y. Tam and Khoa N. Le

Abstract

Purpose

Various methods have been used by organisations in the construction industry to improve quality, employing mainly two major types of technique: management techniques such as quality control, quality assurance and total quality management; and statistical techniques such as cost of quality, customer satisfaction and the six sigma principle. The purpose of this paper is to show that it is possible to employ the six sigma principle in the field of construction management provided that sufficient information on a particular population is obtained.

Design/methodology/approach

Statistical properties of the hyperbolic distribution are given and quality factors such as population in range, number of defects, yield percentage and defects per million opportunities are estimated. Graphical illustrations of the hyperbolic and Gaussian distributions are also given. From these, detailed comparisons of the two distributions are obtained numerically. The impacts of these quality factors are briefly discussed to give rough guidance to organisations in the construction industry on how to lower cost and improve project quality through prevention. A case study on a construction project is given in which it is shown that the hyperbolic distribution is better suited to the cost data than the Gaussian distribution. Cost and quality data of all projects in the company are collected over a period of eight years. Each project may consist of a number of phases, typically spanning about three months. Each phase can be considered as a member of the project population. Quality factors of this population are estimated using the six sigma principle.
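
The quality factors mentioned above follow directly from a fitted distribution and a pair of specification limits. The sketch below computes population in range, yield and defects per million opportunities (DPMO) for a Gaussian fit to toy heavy-tailed cost data; the paper's point is that a hyperbolic fit would describe such data better, but the spec limits, data and Gaussian choice here are illustrative assumptions only.

```python
# Sketch only: six sigma quality factors from a fitted distribution.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
cost_deviation = 2.0 * rng.standard_t(df=5, size=500)    # heavy-tailed toy "cost" data

lsl, usl = -6.0, 6.0                                     # assumed specification limits

mu, sigma = stats.norm.fit(cost_deviation)               # Gaussian fit to the data
in_range = stats.norm.cdf(usl, mu, sigma) - stats.norm.cdf(lsl, mu, sigma)

yield_pct = 100 * in_range                               # yield percentage
dpmo = (1 - in_range) * 1_000_000                        # defects per million opportunities

print(f"population in range: {in_range:.4f}")
print(f"yield: {yield_pct:.2f}%   DPMO: {dpmo:.0f}")
```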

Findings

The paper finds that by using a suitable distribution, it is possible to improve quality factors such as population in range, yield percentage and number of defects per million opportunities.

Originality/value

This paper is of value in assessing the suitability of the hyperbolic and Gaussian distributions in modelling the population and showing that hyperbolic distribution can be more effectively used to model the cost data than the Gaussian distribution.

Details

Journal of Engineering, Design and Technology, vol. 6 no. 2
Type: Research Article
ISSN: 1726-0531

Book part
Publication date: 19 November 2014

Esther Hee Lee

Abstract

Copula modeling enables the analysis of multivariate count data that has previously required imposition of potentially undesirable correlation restrictions or has limited attention to models with only a few outcomes. This article presents a method for analyzing correlated counts that is appealing because it retains well-known marginal distributions for each response while simultaneously allowing for flexible correlations among the outcomes. The proposed framework extends the applicability of the method to settings with high-dimensional outcomes and provides an efficient simulation method to generate the correlation matrix in a single step. Another open problem that is tackled is that of model comparison. In particular, the article presents techniques for estimating marginal likelihoods and Bayes factors in copula models. The methodology is implemented in a study of the joint behavior of four categories of US technology patents. The results reveal that patent counts exhibit high levels of correlation among categories and that joint modeling is crucial for eliciting the interactions among these variables.
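
A minimal sketch of the underlying idea, known marginal count distributions joined by a copula, is given below using a Gaussian copula and Poisson marginals. The means, correlation and sample size are illustrative assumptions; the article's Bayesian estimation, marginal-likelihood computation and patent data are not reproduced.

```python
# Sketch only: correlated counts via a Gaussian copula with Poisson marginals.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
corr = np.array([[1.0, 0.7],
                 [0.7, 1.0]])                      # assumed copula correlation
means = [3.0, 8.0]                                 # assumed Poisson marginal means

# Correlated normals -> dependent uniforms -> Poisson counts with the desired marginals.
z = rng.multivariate_normal([0.0, 0.0], corr, size=10_000)
u = stats.norm.cdf(z)
counts = np.column_stack([stats.poisson.ppf(u[:, j], mu=means[j]) for j in range(2)])

print("empirical marginal means:", counts.mean(axis=0))
print("empirical count correlation:", round(np.corrcoef(counts.T)[0, 1], 3))
```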

Details

Bayesian Model Comparison
Type: Book
ISBN: 978-1-78441-185-5

Article
Publication date: 3 April 2017

Pawel D. Domanski and Mateusz Gintrowski

Abstract

Purpose

This paper aims to present the results of a comparison between different approaches to the prediction of electricity prices. It is well known that the properties of the data generation process may favour some modeling methods over others. Data originating in social or market processes are characterized by an unexpectedly wide realization space, resulting in long tails in the probability density function. Such data are not easily handled in time series prediction using standard approaches based on normal distribution assumptions. Electricity prices on the deregulated market fall into this category.

Design/methodology/approach

The paper presents alternative approaches, i.e. memory-based prediction and a fractal approach, compared with the established nonlinear method of neural networks. The interpretation of the results is supported by statistical data analysis and data conditioning. These algorithms have been applied to the problem of energy price prediction on the deregulated electricity market, with data from the Polish and Austrian energy stock exchanges.

Findings

The first outcome of the analysis is that there are several situations in time series prediction where the standard modeling approach, based on the assumption that each change is independent of the last and follows a random Gaussian bell pattern, may not hold true. In this paper, such a case was considered: price data from energy markets. Electricity price data are shaped by human behaviour. It is shown that the Cauchy probability distribution is more relevant to the properties of these data. The results show that alternative approaches may be used and that, for both datasets, the memory-based approach achieved the best prediction performance.
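
The comparison of distributional fits reported above can be illustrated with a short likelihood check: fit both a Gaussian and a Cauchy distribution to a heavy-tailed series of price changes and compare log-likelihoods. The synthetic series below is only a stand-in for the Polish and Austrian exchange data, which are not reproduced here.

```python
# Sketch only: Gaussian vs Cauchy fit to heavy-tailed price changes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
price_changes = stats.cauchy.rvs(loc=0.0, scale=1.5, size=2_000, random_state=rng)

norm_params = stats.norm.fit(price_changes)
cauchy_params = stats.cauchy.fit(price_changes)

ll_norm = stats.norm.logpdf(price_changes, *norm_params).sum()
ll_cauchy = stats.cauchy.logpdf(price_changes, *cauchy_params).sum()

# The higher log-likelihood indicates the better-fitting distribution for these data.
print(f"Gaussian log-likelihood: {ll_norm:.1f}")
print(f"Cauchy   log-likelihood: {ll_cauchy:.1f}")
```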

Research limitations/implications

“Personalization” of the model is a crucial aspect of the whole methodology. All available knowledge about the forecasted phenomenon should be incorporated into the model. In the case of memory-based modeling, this takes the form of a specific design of the history-searching routine that exploits understanding of the process features. Emphasis should shift toward methodology structure design and algorithm customization, and only then to parameter estimation. Such a modeling approach may be more descriptive for the user, enabling understanding of the process and further iterative improvement in a continuous striving for perfection.

Practical implications

Memory-based modeling can be applied in practice. These models have large potential that is worth exploiting. One disadvantage of this modeling approach is the large computational effort arising from the need to constantly evaluate large data sets. It was shown that parallel calculation on graphics processing units (GPUs) can improve this dramatically.

Social implications

The modeling of electricity prices has a significant impact on the daily operations of electricity traders and distributors. Appropriate modeling can improve performance by mitigating the risks associated with the process. Thus, end users should receive a higher quality of service, ultimately with lower prices and a minimized risk of energy loss incidents.

Originality/value

The use of alternative approaches, such as memory-based reasoning or fractals, is very rare in the field of electricity price forecasting. This work therefore provides a new impetus for further research, enabling the development of better solutions that incorporate all available process knowledge and customized hybrid algorithms.

Details

International Journal of Energy Sector Management, vol. 11 no. 1
Type: Research Article
ISSN: 1750-6220

Article
Publication date: 15 March 2021

Putta Hemalatha and Geetha Mary Amalanathan

Abstract

Purpose

Adequate resources for learning and training on the data are an important constraint on developing an efficient classifier with outstanding performance. The data usually follow a biased class distribution, that is, an unequal distribution of classes within a dataset. This issue is known as the imbalance problem and is one of the most common issues occurring in real-time applications. Learning from imbalanced datasets is a ubiquitous challenge in the field of data mining, as imbalanced data degrade classifier performance by producing inaccurate results.

Design/methodology/approach

In this work, a novel fuzzy-based Gaussian synthetic minority oversampling technique (FG-SMOTE) is proposed to process the imbalanced data. The mechanism of the Gaussian SMOTE technique is based on the nearest-neighbour concept to balance the ratio between the minority and majority class datasets. The ratio of the datasets belonging to the minority and majority classes is balanced using a fuzzy-based Levenshtein distance measure.
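
The sketch below shows only the plain Gaussian SMOTE idea that FG-SMOTE builds on: synthetic minority samples are placed along the line to a nearest minority neighbour and perturbed with Gaussian noise. The fuzzy Levenshtein-distance weighting of the proposed method is not reproduced, and all parameter values and data are illustrative assumptions.

```python
# Sketch only: Gaussian SMOTE-style oversampling of a minority class.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def gaussian_smote(X_minority, n_new, k=5, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = nn.kneighbors(X_minority)           # idx[:, 0] is the point itself
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_minority))        # a random minority sample
        j = idx[i, rng.integers(1, k + 1)]       # one of its true neighbours
        step = rng.random()                      # interpolation factor in [0, 1)
        base = X_minority[i] + step * (X_minority[j] - X_minority[i])
        synthetic.append(base + rng.normal(0.0, sigma, size=X_minority.shape[1]))
    return np.array(synthetic)

X_min = np.random.default_rng(1).standard_normal((20, 4))   # toy minority class
X_new = gaussian_smote(X_min, n_new=80)
print(X_new.shape)                                           # (80, 4) synthetic samples
```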

Findings

The performance and accuracy of the proposed algorithm are evaluated using a deep belief network classifier. The results show the efficiency of the fuzzy-based Gaussian SMOTE technique, which achieved an AUC of 93.7%, an F1 score of 94.2% and a geometric mean score of 93.6%, computed from the confusion matrix.

Research limitations/implications

The proposed research still leaves some challenges to be addressed, such as applying FG-SMOTE to multiclass imbalanced datasets and evaluating the dataset imbalance problem in a distributed environment.

Originality/value

The proposed algorithm fundamentally solves the data imbalance issues and challenges involved in handling the imbalanced data. FG-SMOTE has aided in balancing minority and majority class datasets.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 14 no. 2
Type: Research Article
ISSN: 1756-378X

Book part
Publication date: 1 December 2016

Jacob Dearmon and Tony E. Smith

Abstract

Statistical methods of spatial analysis are often successful at either prediction or explanation, but not necessarily both. In a recent paper, Dearmon and Smith (2016) showed that by combining Gaussian Process Regression (GPR) with Bayesian Model Averaging (BMA), a modeling framework could be developed in which both needs are addressed. In particular, the smoothness properties of GPR together with the robustness of BMA allow local spatial analyses of individual variable effects that yield remarkably stable results. However, this GPR-BMA approach is not without its limitations. In particular, the standard (isotropic) covariance kernel of GPR treats all explanatory variables in a symmetric way that limits the analysis of their individual effects. Here we extend this approach by introducing a mixture of kernels (both isotropic and anisotropic) which allow different length scales for each variable. To do so in a computationally efficient manner, we also explore a number of Bayes-factor approximations that avoid the need for costly reversible-jump Monte Carlo methods.

To demonstrate the effectiveness of this Variable Length Scale (VLS) model in terms of both predictions and local marginal analyses, we employ selected simulations to compare VLS with Geographically Weighted Regression (GWR), which is currently the most popular method for such spatial modeling. In addition, we employ the classical Boston Housing data to compare VLS not only with GWR but also with other well-known spatial regression models that have been applied to this same data. Our main results are to show that VLS not only compares favorably with spatial regression at the aggregate level but is also far more accurate than GWR at the local level.
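
The core ingredient of the variable length scale idea, a covariance kernel with a separate length scale per explanatory variable, can be sketched with an anisotropic (ARD) radial basis function kernel, as below. The kernel mixture, Bayesian model averaging and Bayes-factor approximations of the chapter are not reproduced; the data, bounds and noise level are illustrative assumptions.

```python
# Sketch only: GP regression with a separate length scale for each variable (ARD).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, size=(150, 3))
# Only the first two variables matter; the third is pure noise.
y = np.sin(6 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(150)

kernel = RBF(length_scale=[1.0, 1.0, 1.0],
             length_scale_bounds=(1e-2, 1e3)) + WhiteKernel(noise_level=0.05)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# Large fitted length scales flag variables with little local effect.
print("fitted per-variable length scales:", gpr.kernel_.k1.length_scale)
```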

Details

Spatial Econometrics: Qualitative and Limited Dependent Variables
Type: Book
ISBN: 978-1-78560-986-2

Article
Publication date: 20 July 2010

F.A. DiazDelaO and S. Adhikari

Abstract

Purpose

In the dynamical analysis of engineering systems, running a detailed high-resolution finite element model can be expensive even for obtaining the dynamic response at a few frequency points. To address this problem, this paper aims to investigate the possibility of representing the output of an expensive computer code as a Gaussian stochastic process.

Design/methodology/approach

The Gaussian process emulator method is discussed and then applied to both simulated and experimentally measured data from the frequency response of a cantilever plate excited by a harmonic force. The dynamic response over a frequency range is approximated using only a small number of response values, obtained both by running a finite element model at carefully selected frequency points and from experimental measurements. The results are then validated by applying adequacy diagnostics.
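
A minimal sketch of the emulator idea follows: a Gaussian process is trained on the response at a small number of frequency points and then predicts, with uncertainty, over the whole frequency range. The single-degree-of-freedom frequency response function below stands in for the expensive finite element or experimental data of the paper; its parameters and the kernel choice are assumptions.

```python
# Sketch only: Gaussian process emulation of a frequency response function (FRF).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_frf(freq):
    # Toy magnitude of a lightly damped single-degree-of-freedom FRF.
    wn, zeta = 50.0, 0.03
    return 1.0 / np.sqrt((wn**2 - freq**2) ** 2 + (2 * zeta * wn * freq) ** 2)

train_f = np.linspace(10, 90, 12)[:, None]           # a small number of "expensive runs"
train_y = np.log(expensive_frf(train_f.ravel()))     # log scale tames the resonance peak

kernel = ConstantKernel(1.0) * RBF(length_scale=10.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(train_f, train_y)

test_f = np.linspace(10, 90, 400)[:, None]
mean, std = gp.predict(test_f, return_std=True)      # emulated response with uncertainty
print("max predictive standard deviation (log scale):", round(std.max(), 3))
```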

Findings

It is shown that the Gaussian process emulator method can be an effective predictive tool for medium and high‐frequency vibration problems, whenever the data are expensive to obtain, either from a computer‐intensive code or a resource‐consuming experiment.

Originality/value

Although Gaussian process emulators have been used in other disciplines, the authors are not aware of them having been implemented for structural dynamic analysis, and the method has good potential for this area of engineering.

Article
Publication date: 9 February 2010

Ning Rong and Stefan Trück

Abstract

Purpose

The purpose of this paper is to provide an analysis of the dependence structure between returns from real estate investment trusts (REITs) and a stock market index. Further, the aim is to illustrate how copula approaches can be applied to model the complex dependence structure between the assets and for risk measurement of a portfolio containing investments in REIT and equity indices.

Design/methodology/approach

The usually suggested multivariate normal or variance-covariance approach is applied, as well as various copula models, in order to investigate the dependence structure between returns of Australian REITs and the Australian stock market. Different models, including the Gaussian, Student t, Clayton and Gumbel copulas, are estimated and goodness-of-fit tests are conducted. For the return series, both a Gaussian and a non-parametric estimate of the distribution are applied. A risk analysis is provided based on Monte Carlo simulations for the different models. The value-at-risk measure is also applied to quantify the risks of a portfolio combining investments in real estate and stock markets.
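
The simulation-based risk measurement described above can be sketched as follows, using a Gaussian copula with empirical marginals for brevity (the paper finds a Student t copula more suitable for these asset classes). The return series, portfolio weights and confidence level are illustrative assumptions.

```python
# Sketch only: copula-based Monte Carlo value-at-risk for a two-asset portfolio.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
# Stand-in return series for a REIT index and an equity index.
returns = rng.multivariate_normal([0.0003, 0.0004],
                                  [[1.5e-4, 0.9e-4],
                                   [0.9e-4, 1.2e-4]], size=1_000)

# 1) Pseudo-observations -> normal scores -> copula correlation estimate.
u = (stats.rankdata(returns, axis=0) - 0.5) / len(returns)
rho = np.corrcoef(stats.norm.ppf(u).T)

# 2) Simulate from the Gaussian copula and map back through empirical quantiles.
z = rng.multivariate_normal([0.0, 0.0], rho, size=50_000)
u_sim = stats.norm.cdf(z)
sim = np.column_stack([np.quantile(returns[:, j], u_sim[:, j]) for j in range(2)])

# 3) 99% value-at-risk of an equally weighted portfolio.
portfolio = sim @ np.array([0.5, 0.5])
var_99 = -np.quantile(portfolio, 0.01)
print(f"99% one-period VaR: {var_99:.4%}")
```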

Findings

The findings suggest that the multivariate normal model is not appropriate for measuring the complex dependence structure between the returns of the two asset classes. Instead, a model using non-parametric estimates for the return series in combination with a Student t copula is clearly more suitable. It further illustrates that the usually applied variance-covariance approach leads to a significant underestimation of the actual risk for a portfolio consisting of investments in REITs and equity indices. The nature of the risk is better captured by the suggested copula models.

Originality/value

To the authors' knowledge, this is one of the first studies to apply and test different copula models in real estate markets. The results help international investors and portfolio managers to deepen their understanding of the dependence structure between returns from real estate and equity markets. Additionally, the results should be helpful for implementing more adequate risk management for portfolios containing investments in both REITs and equity indices.

Details

Journal of Property Investment & Finance, vol. 28 no. 1
Type: Research Article
ISSN: 1463-578X
