Search results

1 – 10 of 637
Book part
Publication date: 24 March 2006

Valeriy V. Gavrishchaka

The increasing availability of financial data has opened new opportunities for quantitative modeling. It has also exposed limitations of existing frameworks, such as the low…

Abstract

The increasing availability of financial data has opened new opportunities for quantitative modeling. It has also exposed limitations of existing frameworks, such as the low accuracy of simplified analytical models and the insufficient interpretability and stability of adaptive data-driven algorithms. I make the case that boosting (a novel ensemble learning technique) can serve as a simple and robust framework for combining the best features of analytical and data-driven models. Boosting-based frameworks for typical financial and econometric applications are outlined. The implementation of a standard boosting procedure is illustrated on the problem of symbolic volatility forecasting for the IBM stock time series. It is shown that the boosted collection of generalized autoregressive conditional heteroskedasticity (GARCH)-type models is systematically more accurate than both the best single model in the collection and the widely used GARCH(1,1) model.
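
As a rough illustration of the idea, the Python sketch below combines one-step variance forecasts from a small collection of GARCH-type models, weighting each model by its Akaike weight. This is only a simplified stand-in for the boosting procedure described in the abstract (whose exact weighting scheme is not reproduced here), and it assumes the third-party `arch` package.

```python
# Illustrative sketch only: NOT the author's boosting procedure.
# Combines one-step variance forecasts from several GARCH(p, q) models,
# weighting each by its Akaike weight as a crude stand-in for boosting.
import numpy as np
from arch import arch_model  # assumes the `arch` package is installed

def combined_garch_forecast(returns, specs=((1, 1), (1, 2), (2, 1))):
    """Weighted one-step-ahead variance forecast from a GARCH collection.

    `returns` should be a series of (percent) returns.
    """
    variances, aics = [], []
    for p, q in specs:
        res = arch_model(returns, vol="Garch", p=p, q=q).fit(disp="off")
        variances.append(res.forecast(horizon=1).variance.values[-1, 0])
        aics.append(res.aic)
    aics = np.asarray(aics)
    # Akaike weights: lower AIC -> larger weight.
    w = np.exp(-0.5 * (aics - aics.min()))
    w /= w.sum()
    return float(np.dot(w, variances))
```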

Details

Econometric Analysis of Financial and Economic Time Series
Type: Book
ISBN: 978-1-84950-388-4

Article
Publication date: 14 December 2021

Deepak S. Uplaonkar, Virupakshappa and Nagabhushan Patil

The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver.

Abstract

Purpose

The purpose of this study is to develop a hybrid algorithm for segmenting tumors from ultrasound images of the liver.

Design/methodology/approach

After collecting the ultrasound images, the contrast-limited adaptive histogram equalization (CLAHE) approach is applied as a preprocessing step to enhance the visual quality of the images, which aids segmentation. Then, adaptively regularized kernel-based fuzzy C-means (ARKFCM) is used to segment the tumor from the enhanced image, together with a local ternary pattern combined with selective level-set approaches.
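
For readers unfamiliar with the preprocessing step, here is a minimal CLAHE sketch using OpenCV; the file name and parameter values are illustrative assumptions, not the authors' settings.

```python
# Minimal CLAHE preprocessing sketch (illustrative parameters only).
import cv2

# Hypothetical input file; CLAHE operates on a single-channel image.
img = cv2.imread("liver_ultrasound.png", cv2.IMREAD_GRAYSCALE)
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)  # contrast-enhanced image handed to segmentation
cv2.imwrite("liver_ultrasound_clahe.png", enhanced)
```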

Findings

The proposed segmentation algorithm precisely segments the tumor portions from the enhanced images at lower computational cost. It is compared with existing algorithms and ground-truth values in terms of Jaccard coefficient, dice coefficient, precision, Matthews correlation coefficient, F-score and accuracy. The experimental analysis shows that the proposed algorithm achieved 99.18% accuracy and a 92.17% F-score, which is better than the existing algorithms.

Practical implications

From the experimental analysis, the proposed ARKFCM with enhanced level-set algorithm obtained better performance in ultrasound liver tumor segmentation than the graph-based algorithm. In particular, the proposed algorithm showed a 3.11% improvement in dice coefficient compared to the graph-based algorithm.

Originality/value

The image preprocessing is carried out using the CLAHE algorithm. The preprocessed image is segmented by employing a selective level-set model and local ternary pattern in the ARKFCM algorithm. The proposed algorithm has the advantages of independence from clustering parameters, robustness in preserving image details and optimality in finding the threshold value, which effectively reduces the computational cost.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 3
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 15 November 2011

S. Bausson, V. Thomas, P.‐Y. Joubert, L. Blanc‐Féraud, J. Darbon and G. Aubert

The inverse problem in the eddy current (EC) imaging of metallic parts is an ill‐posed problem. The purpose of the paper is to compare the performances of regularized algorithms…

Abstract

Purpose

The inverse problem in the eddy current (EC) imaging of metallic parts is an ill‐posed problem. The purpose of the paper is to compare the performances of regularized algorithms to estimate the 3D geometry of a surface breaking defect.

Design/methodology/approach

The forward problem is solved using a mesh‐free semi‐analytical model, the distributed point source method, which allows EC data to be simulated according to the shape of the considered defect. The inverse problem is solved using two regularization methods, namely the Tikhonov (l2) and the 3D total variation (tv) methods, implemented with first‐ and second‐order algorithms. The inversion performances were evaluated in terms of both mean square error (MSE) and computation time, considering additive white and colored noise, standing respectively for acquisition errors and model errors.
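
As an illustration of the (l2) branch only, the sketch below solves a generic Tikhonov-regularized least-squares problem on a toy operator; the paper's distributed point source forward model and the (tv) method are not reproduced here.

```python
# Generic Tikhonov (l2) regularization sketch; toy operator, not the EC model.
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 via regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy usage with an underdetermined (hence ill-posed) random operator.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 80))
x_true = np.zeros(80)
x_true[10:15] = 1.0                 # a sparse "defect" profile (illustrative)
b = A @ x_true + 0.01 * rng.standard_normal(50)   # noisy simulated data
x_hat = tikhonov_solve(A, b, lam=1e-2)
```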

Findings

In the presence of colored noise, the authors found that the first‐ and second‐order methods provide approximately the same result according to the MSEs obtained while estimating the defect voxels. Nevertheless, in comparison with (l2), the (tv) regularization was shown to decrease the MSE by 10 voxels, at the cost of less than twice the computational effort.

Originality/value

In this paper, an easy-to-implement mesh‐free model, based on virtual defect current sources, was used to generate EC data relative to a defect positioned at the surface of a metallic part. A 3D total variation regularization approach was used in combination with the proposed model, which appears to be well suited to the reconstruction of volumetric defects.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 30 no. 6
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 11 January 2023

Ajit Kumar and A.K. Ghosh

The purpose of this study is to estimate aerodynamic parameters using regularized regression-based methods.

Abstract

Purpose

The purpose of this study is to estimate aerodynamic parameters using regularized regression-based methods.

Design/methodology/approach

The regularized regression methods used are LASSO, ridge and elastic net.
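
A generic sketch of such fits with scikit-learn is shown below; the regressors and coefficients are hypothetical placeholders, not the study's flight-test data.

```python
# Generic regularized-regression sketch; NOT the study's data or pipeline.
import numpy as np
from sklearn.linear_model import Lasso, Ridge, ElasticNet

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))   # e.g. angle of attack, elevator deflection, ...
theta = np.array([0.1, -2.5, 0.0, 0.8, 0.0, 0.3])   # "true" derivatives (made up)
y = X @ theta + 0.05 * rng.standard_normal(200)     # e.g. a moment coefficient

for model in (Lasso(alpha=0.01), Ridge(alpha=1.0),
              ElasticNet(alpha=0.01, l1_ratio=0.5)):
    model.fit(X, y)
    print(type(model).__name__, model.coef_.round(3))  # estimated parameters
```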

Findings

Regularized regression-based methods are found to be a viable option for aerodynamic parameter estimation.

Practical implications

The efficacy of the methods is examined on flight test data.

Originality/value

This study provides regularized regression-based methods for aerodynamic parameter estimation from flight test data.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 5
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 11 November 2021

Sandeep Kumar Hegde and Monica R. Mundada

Chronic diseases are among the most serious concerns and threats to public health across the globe. Diseases such as chronic diabetes mellitus (CDM), cardio…

Abstract

Purpose

Chronic diseases are among the most serious concerns and threats to public health across the globe. Diseases such as chronic diabetes mellitus (CDM), cardiovascular disease (CVD) and chronic kidney disease (CKD) are major chronic diseases responsible for millions of deaths. Each of these diseases is a risk factor for the other two. Therefore, noteworthy attention is being paid to reducing the risk of these diseases. A gigantic amount of medical data is generated in digital form from smart healthcare appliances in the current era. Although numerous machine learning (ML) algorithms have been proposed for the early prediction of chronic diseases, these models are neither generalized nor adaptive when imposed on new disease datasets. Hence, these algorithms have to process a huge amount of disease data iteratively until the model converges. This limitation may make ML models difficult to fit and lead to imprecise results. A single algorithm may not yield accurate results. Nonetheless, an ensemble of classifiers built from multiple models, working on a voting principle, has been successfully applied to solve many classification tasks. The purpose of this paper is to make early predictions of chronic diseases using the hybrid generative regression-based deep intelligence network (HGRDIN) model.

Design/methodology/approach

In the proposed approach, a generative regression (GR) model is used in combination with a deep neural network (DNN) for the early prediction of chronic disease. The GR model obtains prior knowledge about the labelled data by analyzing the correlation between features and class labels. Hence, the weight assignment process of the DNN is influenced by the relationships between attributes rather than by random assignment. The knowledge obtained through these processes is passed as input to the DNN for further prediction. Since the inference about the input data instances is drawn at the DNN through the GR model, the model is named the hybrid generative regression-based deep intelligence network (HGRDIN).

Findings

The credibility of the implemented approach is rigorously validated using parameters such as accuracy, precision, recall, F-score and area under the curve (AUC) score. During the training phase, the proposed algorithm is constantly regularized using the elastic net regularization technique and also tuned using hyperparameters such as momentum and learning rate to minimize the misprediction rate. The experimental results illustrate that the proposed approach predicted chronic disease with minimal error by avoiding the possible overfitting and local minima problems. The results obtained with the proposed approach are also compared with various traditional approaches.
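
The elastic net penalty mentioned here is a standard technique. The following PyTorch sketch shows how an L1 + L2 penalty with momentum-based SGD can be added to a generic training step; it is not the HGRDIN model itself, and the architecture and coefficients are assumptions.

```python
# Elastic-net (L1 + L2) regularization inside a generic DNN training step.
# Illustrative only: not the HGRDIN architecture or its tuned settings.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()
l1, l2 = 1e-5, 1e-4  # elastic-net mixing coefficients (assumed values)

def train_step(x, y):
    """One SGD step on features x and integer class labels y."""
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    for p in model.parameters():
        # Add the elastic-net penalty to the data loss.
        loss = loss + l1 * p.abs().sum() + l2 * p.pow(2).sum()
    loss.backward()
    opt.step()
    return loss.item()
```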

Research limitations/implications

Usually, diagnostic data are multi-dimensional in nature, and the performance of ML algorithms degrades due to overfitting and curse-of-dimensionality issues. The results obtained through the experiments achieved an average accuracy of 95%. Hence, further analysis can be made to improve predictive accuracy by overcoming the curse-of-dimensionality issues.

Practical implications

The proposed ML model can mimic the behavior of the doctor's brain. These algorithms have the capability to take over routine clinical tasks. The accurate results obtained through the innovative algorithms can free physicians from mundane care and practices so that they can focus more on complex issues.

Social implications

Utilizing the proposed predictive model at the decision-making level for the early prediction of disease would be a promising change for the healthcare sector. The global burden of chronic disease could be reduced substantially through these approaches.

Originality/value

In the proposed HGRDIN model, a transfer learning approach is used: the knowledge acquired through the GR process is applied to the DNN, which identifies the possible relationships between the dependent and independent feature variables by mapping the chronic data instances to their corresponding target classes before they are passed as input to the DNN. The experiments illustrated that the proposed approach obtained superior performance in terms of the various validation parameters compared with existing conventional techniques.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 15 no. 1
Type: Research Article
ISSN: 1756-378X

Keywords

Article
Publication date: 6 March 2017

Oleg M. Alifanov

The main purpose of this study, reflecting mainly the content of the author's plenary lecture, is to give a brief overview of several approaches developed by the author and his…

Abstract

Purpose

The main purpose of this study, reflecting mainly the content of the author's plenary lecture, is to give a brief overview of several approaches developed by the author and his colleagues for solving ill-posed inverse heat transfer problems (IHTPs), with their possible extension to a wider class of inverse problems of mathematical physics and, most importantly, to show the wide possibilities of this methodology through examples of aerospace applications. In this regard, this study can be seen as a continuation of the applications discussed in the lecture.

Design/methodology/approach

The application of the inverse method was pre-tested through experimental investigations on special test equipment in laboratory conditions. In these studies, the author used the solution of the nonlinear inverse problem in the conjugate (conductive and convective) statement. The corresponding iterative algorithm was developed and tested numerically and experimentally.

Findings

It can be stated that the theory and methodology of solving IHTPs combined with experimental simulation of thermal conditions is an effective tool for various fundamental and applied research and development in the field of heat and mass transfer.

Originality/value

With the help of the developed inverse-problem methods, an investigation was conducted of porous cooling with a gaseous coolant for heat protection of a re-entry vehicle in the natural environment of hypersonic flight. Moreover, the analysis showed that inverse methods can make a useful contribution to the study of heat transfer at the surface of a solid body under the influence of a hypersonic heterogeneous (dusty) gas stream and in many other aerospace applications.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 27 no. 3
Type: Research Article
ISSN: 0961-5539

Keywords

Open Access
Article
Publication date: 1 June 2021

Linda Ponta, Gloria Puliga and Raffaella Manzini

The measure of companies' Innovation Performance is fundamental for enhancing the value and decision-making processes of firms. The purpose of this paper is to present a new…

Abstract

Purpose

The measure of companies' Innovation Performance is fundamental for enhancing the value and decision-making processes of firms. The purpose of this paper is to present a new measure of Innovation Performance, called the Innovation Patent Index (IPI), which makes it possible to quantitatively summarize different aspects of firms' innovation.

Design/methodology/approach

In order to define the IPI, a secondary source, i.e. patent data, has been used. The five dimensions of the IPI, i.e. efficiency, time, diversification, quality and internationalization, have been defined both by analyzing the literature and by applying three different machine learning algorithms (regularized least squares, deep neural networks and decision trees), with patent forward citations considered as a proxy for innovation performance.
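
As a hedged illustration, the sketch below fits the three named model families to predict forward citations from hypothetical patent features; it is not the paper's pipeline, and the feature names and data are placeholders.

```python
# Sketch of the three model families named in the abstract, fitted to
# predict forward citations. Features and data are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge              # regularized least squares
from sklearn.tree import DecisionTreeRegressor      # decision tree
from sklearn.neural_network import MLPRegressor     # (small) neural network

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 5))  # e.g. family size, claims, grant lag, ...
y = np.maximum(0, X @ np.array([1.0, 0.5, 0.0, 2.0, -0.5])
               + rng.standard_normal(500))          # citation counts are >= 0

for m in (Ridge(alpha=1.0),
          DecisionTreeRegressor(max_depth=4),
          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)):
    m.fit(X, y)
    print(type(m).__name__, round(m.score(X, y), 3))  # in-sample R^2 (illustrative)
```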

Findings

Results show that the IPI is a very useful tool: simple to use and very prompt. In fact, it is possible to obtain important results without time-consuming analysis of primary sources. It is a tool that can be used by managers, businessmen, policymakers, organizations, patent experts and financiers to evaluate and plan future activities, to enhance innovation capability, to find financing and to support and improve innovation.

Research limitations/implications

Patent data are not widely used in all sectors. Moreover, the raw number of forward citations is not the only forward-looking indicator suggested by the literature.

Originality/value

The demand for a usable Innovation Performance tool, as well as the lack of tools able to grasp different aspects of innovation, highlights the need to develop new instruments. In fact, although previous studies provide several measures of Innovation Performance, these are often difficult for managers to use, do not capture different aspects of innovation and are not forward looking.

Details

Management Decision, vol. 59 no. 13
Type: Research Article
ISSN: 0025-1747

Keywords

Article
Publication date: 4 May 2012

Piotr Putek, Guillaume Crevecoeur, Marian Slodička, Roger van Keer, Ben Van de Wiele and Luc Dupré

The purpose of this paper is to solve an inverse problem of structure recognition arising in eddy current testing (ECT)-type NDT. For this purpose, the space mapping (SM…

Abstract

Purpose

The purpose of this paper is to solve an inverse problem of structure recognition arising in eddy current testing (ECT)-type NDT. For this purpose, the space mapping (SM) technique, with parameter extraction based on the Gauss‐Newton algorithm with Tikhonov regularization, is applied.

Design/methodology/approach

The aim is to have a computationally fast defect-recognition procedure, since the monitoring results in a large number of data points that need to be analyzed by a 3D eddy current model. In the SM optimization, the finite element method (FEM) is used as the fine model, while a model based on an integral method, the volume integral method (VIM), serves as the coarse model. This approach, an example of a two‐level optimization method, allows the optimization load to be shifted from a time-consuming, accurate model to a less precise but faster coarse surrogate.
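
The extraction step named above, a Gauss-Newton update with Tikhonov regularization, can be sketched generically as follows; the forward model f and Jacobian J are abstract stand-ins for the paper's VIM/FEM models, not its implementation.

```python
# Generic Tikhonov-regularized Gauss-Newton update (abstract forward model).
import numpy as np

def gauss_newton_tikhonov_step(x, f, J, y_meas, lam):
    """One update: x + (J^T J + lam I)^{-1} J^T (y_meas - f(x)).

    f(x) returns simulated data for parameters x; J(x) returns its Jacobian.
    """
    Jx = J(x)                       # Jacobian of the forward model at x
    r = y_meas - f(x)               # residual against measured data
    H = Jx.T @ Jx + lam * np.eye(x.size)   # regularized Gauss-Newton Hessian
    return x + np.linalg.solve(H, Jx.T @ r)
```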

Findings

The application of this method shortens the evaluation time required to provide proper parameter estimation of surface defects.

Research limitations/implications

In this work, only specific kinds of surface defects were considered. The reconstruction of arbitrarily shaped defects using real measurement data from an ECT system can therefore be treated in further research.

Originality/value

The paper investigated the eddy current inverse problem. The aggressive space mapping method requires a suitable coarse model; in this case, for the purpose of 3D defect reconstruction, the reduced VIM approach was applied. From a practical viewpoint, the authors demonstrated that the two‐level inversion procedure saves up to 50 percent of CPU time in comparison with optimization by means of the regularized Gauss‐Newton algorithm on the same FE model.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 31 no. 3
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 1 June 2005

B. Auchmann, S. Kurz, O. Rain and S. Russenschuck

To introduce a Whitney‐element based coupling of the Finite Element Method (FEM) and the Boundary Element Method (BEM); to discuss the algebraic properties of the resulting system…

Abstract

Purpose

To introduce a Whitney‐element based coupling of the Finite Element Method (FEM) and the Boundary Element Method (BEM); to discuss the algebraic properties of the resulting system and propose solver strategies.

Design/methodology/approach

The FEM is interpreted in the framework of the theory of discrete electromagnetism (DEM). The BEM formulation is given in a DEM‐compatible notation. This allows for a physical interpretation of the algebraic properties of the resulting BEM‐FEM system matrix. To this end, we give a concise introduction to the mathematical concepts of DEM.

Findings

Although the BEM‐FEM system matrix is not symmetric, its kernel is equivalent to the kernel of its transpose. This surprising finding allows for the use of two solution techniques: regularization or an adapted GMRES solver.
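
One way to read this finding: if ker(A) = ker(A^T), then the range of A is the orthogonal complement of that common kernel, so projecting the right-hand side onto the complement yields a consistent system that GMRES can solve. A toy numpy/scipy sketch of this idea (not the BEM-FEM system itself):

```python
# Toy demonstration: GMRES on a singular matrix whose kernel equals the
# kernel of its transpose, after projecting the RHS onto the range.
import numpy as np
from scipy.sparse.linalg import gmres

k = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)   # shared kernel vector of A and A^T
A = np.eye(3) - np.outer(k, k)                # singular: A @ k = A.T @ k = 0
b = np.array([1.0, 2.0, 3.0])                 # generic RHS, not in range(A)
b_proj = b - k * (k @ b)                      # remove the kernel component
x, info = gmres(A, b_proj, atol=1e-10)        # now consistent; GMRES converges
print(info, np.allclose(A @ x, b_proj, atol=1e-8))
```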

Research limitations/implications

The programming of the proposed techniques is a work in progress. The numerical results to support the presented theory are limited to a small number of test cases.

Practical implications

The paper will help to improve the understanding of the topological and geometrical implications in the algebraic structure of the BEM‐FEM coupling.

Originality/value

Several original concepts are presented: a new interpretation of the FEM boundary term leads to an intuitive understanding of the coupling of BEM and FEM. The adapted GMRES solver allows for an accurate solution of a singular, unsymmetric system with a right‐hand side that is not in the image of the matrix. The issue of a grid‐transfer matrix is briefly mentioned.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 24 no. 2
Type: Research Article
ISSN: 0332-1649

Keywords

Article
Publication date: 8 June 2020

Ming Li, Ying Li, YingCheng Xu and Li Wang

In community question answering (CQA), people who answer questions assume readers have mastered the content in the answers. Nevertheless, some readers cannot understand all…

Abstract

Purpose

In community question answering (CQA), people who answer questions assume readers have mastered the content in the answers. Nevertheless, some readers cannot understand all of the content. Thus, there is a need for further explanation of the concepts that appear in the answers. Moreover, the large number of question and answer (Q&A) documents makes manual retrieval difficult. This paper aims to alleviate these issues for CQA websites.

Design/methodology/approach

In the paper, an algorithm for recommending explanatory Q&A documents is proposed. Q&A documents are modeled with the biterm topic model (BTM) (Yan et al., 2013). Then, the growing neural gas (GNG) algorithm (Fritzke, 1995) is used to cluster Q&A documents. To train multiple classifiers, three features are extracted from the Q&A categories. Thereafter, an ensemble classification model is constructed to identify the explanatory relationships. Finally, the explanatory Q&A documents are recommended.
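
A hedged sketch of the ensemble-classification stage is given below, with TF-IDF and k-means as generic stand-ins for the paper's BTM topic model and GNG clustering; the documents and labels are hypothetical.

```python
# Ensemble-voting sketch for identifying explanatory Q&A documents.
# TF-IDF and KMeans below are stand-ins for BTM and GNG, respectively.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.tree import DecisionTreeClassifier

docs = [
    "what is gradient descent and how does it work",
    "explain the concept of overfitting in machine learning",
    "how do I install the library on windows",
    "why does my loop never terminate",
]
labels = [1, 1, 0, 0]  # 1 = explanatory Q&A document (hypothetical labels)

X = TfidfVectorizer().fit_transform(docs)                   # stand-in for BTM
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)   # stand-in for GNG

ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression()),
                ("nb", MultinomialNB()),
                ("dt", DecisionTreeClassifier(max_depth=3))],
    voting="hard",  # majority vote across the three base classifiers
)
ensemble.fit(X, labels)
print(ensemble.predict(X), clusters)
```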

Findings

The GNG algorithm shows good clustering performance. The ensemble classification model performs better than the individual classifiers. Both the effect and quality scores of the explanatory Q&A recommendations are high, indicating the practicality and good performance of the proposed recommendation algorithm.

Research limitations/implications

The proposed algorithm alleviates information overload in CQA from the new perspective of recommending explanatory knowledge, providing new insight into research on recommendation in CQA. In practice, CQA websites can use it to help retrieve Q&A documents and facilitate understanding of their contents. However, the algorithm performs general recommendation of Q&A documents and does not consider individual personalized characteristics; personalized recommendation will be evaluated in future work.

Originality/value

A novel explanatory Q&A recommendation algorithm is proposed for CQA to alleviate the burden of manual retrieval and Q&A overload. The novel GNG clustering algorithm and ensemble classification model provide a more accurate way to identify explanatory Q&A documents, and the method of ranking the explanatory Q&A documents improves the effectiveness and quality of the recommendation. The proposed algorithm improves the accuracy and efficiency of retrieving explanatory Q&A documents and assists users in grasping answers easily.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Keywords
