Search results

1 – 10 of over 1000
Article
Publication date: 30 October 2023

Qiangqiang Zhai, Zhao Liu, Zhouzhou Song and Ping Zhu

Abstract

Purpose

The Kriging surrogate model has demonstrated a powerful ability to handle a variety of engineering challenges by emulating time-consuming simulations. However, for problems with high-dimensional input variables, it may be difficult to obtain an accurate and efficient model because of the curse of dimensionality. To meet this challenge, an improved high-dimensional Kriging modeling method based on the maximal information coefficient (MIC) is developed in this work.

Design/methodology/approach

The hyperparameter domain is first derived, and a dataset of hyperparameters and corresponding likelihood values is collected by Latin hypercube sampling. MIC values are then calculated from this dataset and used as prior knowledge for optimizing the hyperparameters. Next, an auxiliary parameter is introduced to establish the relationship between the MIC values and the hyperparameters, and the hyperparameters are recovered by transforming the optimized auxiliary parameter. Finally, to further improve the modeling accuracy, a local optimization step is performed to discover more suitable hyperparameters.
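
As a hedged illustration of the sampling-plus-MIC step described above (the abstract itself contains no code), the sketch below assumes a Gaussian correlation kernel, SciPy's Latin hypercube sampler and the minepy package for MIC; every function name is illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.stats import qmc
from minepy import MINE  # assumption: minepy provides the MIC estimator

def concentrated_log_likelihood(theta, X, y):
    """Concentrated log-likelihood of an ordinary kriging model
    with a Gaussian correlation kernel (an assumed kernel choice)."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 * theta).sum(axis=2)
    R = np.exp(-d2) + 1e-10 * np.eye(n)  # small nugget for stability
    L = np.linalg.cholesky(R)
    ones = np.ones(n)
    Ri_y = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ri_1 = np.linalg.solve(L.T, np.linalg.solve(L, ones))
    beta = (ones @ Ri_y) / (ones @ Ri_1)
    res = y - beta
    sigma2 = res @ np.linalg.solve(L.T, np.linalg.solve(L, res)) / n
    return -0.5 * (n * np.log(sigma2) + 2.0 * np.log(np.diag(L)).sum())

def mic_prior(X, y, n_samples=100, lo=1e-3, hi=10.0):
    """LHS over the hyperparameter domain, then MIC of each theta_k
    against the likelihood values -- the 'prior knowledge' of the paper."""
    d = X.shape[1]
    sampler = qmc.LatinHypercube(d=d, seed=0)
    thetas = qmc.scale(sampler.random(n_samples), [lo] * d, [hi] * d)
    ll = np.array([concentrated_log_likelihood(t, X, y) for t in thetas])
    mine, mics = MINE(alpha=0.6, c=15), []
    for k in range(d):
        mine.compute_score(thetas[:, k], ll)
        mics.append(mine.mic())
    return np.asarray(mics)  # one MIC value per hyperparameter dimension
```

The returned MIC vector would then guide the auxiliary-parameter transformation and the final local search, which are not reproduced here.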

Findings

The proposed method is applied to five representative mathematical functions with dimensions ranging from 20 to 100 and to an engineering case with 30 design variables.

Originality/value

The results show that the proposed high-dimensional Kriging modeling method can obtain more accurate results than the other three methods, and it has an acceptable modeling efficiency. Moreover, the proposed method is also suitable for high-dimensional problems with limited sample points.

Details

Engineering Computations, vol. 40 no. 9/10
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 25 August 2023

Youwei He, Kuan Tan, Chunming Fu and Jinliang Luo

Abstract

Purpose

The modeling cost of the gradient-enhanced kriging (GEK) method is prohibitive for high-dimensional problems. This study aims to develop an efficient modeling strategy for the GEK method.

Design/methodology/approach

A two-step tuning strategy is proposed for the construction of the GEK model. First, an auxiliary kriging model is built efficiently. Then, the hyperparameter of the auxiliary kriging model serves as a good initial guess for that of the GEK model, and a local search is used to explore the hyperparameter space and guarantee the accuracy of the GEK model. In the construction of the auxiliary kriging, the maximal information coefficient is adopted to estimate the relative magnitudes of the hyperparameters, which transforms the high-dimensional maximum likelihood estimation problem into a one-dimensional optimization. The tuning of the auxiliary kriging thus becomes independent of the dimension, so the modeling efficiency can be improved significantly.
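
The two-step warm start lends itself to a compact sketch. The version below is only schematic: the auxiliary and GEK negative log-likelihoods are passed in as callables, and `mic_weights` stands for the MIC-estimated relative magnitudes; none of these names come from the paper.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def two_step_tuning(neg_ll_aux, neg_ll_gek, mic_weights, lo=1e-3, hi=10.0):
    """Step 1: reduce tuning to one scale c with theta = c * w (auxiliary
    kriging). Step 2: warm-started local refinement of the full GEK model."""
    w = np.asarray(mic_weights, float) / np.max(mic_weights)
    res1 = minimize_scalar(lambda c: neg_ll_aux(c * w),
                           bounds=(lo, hi), method="bounded")
    theta0 = res1.x * w  # dimension-independent 1-D search result
    res2 = minimize(neg_ll_gek, theta0, method="L-BFGS-B",
                    bounds=[(lo, hi)] * len(w))
    return res2.x  # tuned GEK hyperparameters
```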

Findings

The performance of the proposed method is studied on analytic problems ranging from 10D to 50D and on an 18D aerodynamic airfoil example, and it is compared with two efficient GEK modeling methods. The experiments show that the proposed model significantly improves modeling efficiency without sacrificing accuracy relative to the other efficient modeling methods.

Originality/value

This paper develops an efficient modeling strategy for GEK and demonstrates the effectiveness of the proposed method on high-dimensional problems.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 12
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 28 May 2021

Zhibin Xiong and Jun Huang

Abstract

Purpose

Ensemble models that combine multiple base classifiers have been widely used to improve prediction performance in credit risk evaluation. However, an arbitrary selection of base classifiers is problematic. The purpose of this paper is to develop a framework for selecting base classifiers to improve the overall classification performance of an ensemble model.

Design/methodology/approach

In this study, selecting base classifiers is treated as a feature selection problem in which the output of a base classifier is considered a feature. The proposed method, correlation-based classifier selection using the maximum information coefficient (MIC-CCS), selects the features (classifiers) by nonlinear optimization programming that seeks to optimize the trade-off between the accuracy and the diversity of the base classifiers, as measured by MIC.
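
As a rough sketch of the idea (not the paper's nonlinear program; a brute-force subset search is substituted here), MIC against the labels can stand in for accuracy and pairwise MIC between classifier outputs for redundancy. The minepy package is again an assumed dependency.

```python
import itertools
import numpy as np
from minepy import MINE  # assumption: minepy provides the MIC estimator

def mic(x, y):
    m = MINE(alpha=0.6, c=15)
    m.compute_score(np.asarray(x, float), np.asarray(y, float))
    return m.mic()

def select_classifiers(preds, labels, k):
    """preds: (n_classifiers, n_samples) base-classifier outputs.
    Returns the k-subset maximizing mean relevance minus mean redundancy."""
    relevance = [mic(p, labels) for p in preds]
    best, best_score = None, -np.inf
    for subset in itertools.combinations(range(len(preds)), k):
        redundancy = np.mean([mic(preds[i], preds[j])
                              for i, j in itertools.combinations(subset, 2)])
        score = np.mean([relevance[i] for i in subset]) - redundancy
        if score > best_score:
            best, best_score = subset, score
    return best
```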

Findings

The empirical results show that ensemble models perform better than stand-alone ones, whereas the ensemble model based on MIC-CCS outperforms the ensemble models with unselected base classifiers and other ensemble models based on traditional forward and backward selection methods. Additionally, the classification performance of the ensemble model in which correlation is measured with MIC is better than that measured with the Pearson correlation coefficient.

Research limitations/implications

The study provides an alternative solution for effectively selecting base classifiers that are significantly different, so that they provide complementary information; because the selected classifiers also have good predictive capabilities, the classification performance of the ensemble model is improved.

Originality/value

This paper introduces MIC into the correlation-based selection process to better capture the nonlinear and non-functional relationships in a complex credit data structure, and it constructs a novel nonlinear programming model for base classifier selection that has not been used in other studies.

Article
Publication date: 15 February 2021

Panos Fousekis and Vasilis Grigoriadis

Abstract

Purpose

This paper aims to identify and quantify directional predictability between returns and volume in major cryptocurrency markets.

Design/methodology/approach

The empirical analysis relies on the cross-quantilogram approach, which allows one to assess the temporal (lead-lag) association between two stationary time series at different parts of their joint distribution. The data are daily prices and trading volumes from four markets (Bitcoin, Ethereum, Ripple and Litecoin).
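
For readers unfamiliar with the statistic, a minimal sample cross-quantilogram (in the sense of Han, Linton, Oka and Whang, 2016) can be written in a few lines; the quantile levels and variable roles below are illustrative, not the paper's exact specification.

```python
import numpy as np

def cross_quantilogram(x, y, k, alpha_x=0.9, alpha_y=0.9):
    """Correlation between the quantile-hit processes of x_t and y_{t-k},
    e.g. x = returns, y = volume, to ask whether high volume leads returns."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    qx, qy = np.quantile(x, alpha_x), np.quantile(y, alpha_y)
    psi_x = (x[k:] < qx) - alpha_x           # hit process of x_t
    psi_y = (y[:len(y) - k] < qy) - alpha_y  # hit process of y_{t-k}
    return (psi_x * psi_y).sum() / np.sqrt(
        (psi_x ** 2).sum() * (psi_y ** 2).sum())
```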

Findings

Extreme returns, whether positive or negative, tend to lead high volume levels. Low levels of trading activity generally carry no information about future returns; high levels, however, tend to precede extreme positive returns.

Originality/value

This is the first work that uses the cross-quantilogram approach to assess the temporal association between returns and volume in cryptocurrency markets. The findings provide new insights into the informational efficiency of these markets and into traders' strategies.

Details

Studies in Economics and Finance, vol. 38 no. 4
Type: Research Article
ISSN: 1086-7376

Article
Publication date: 25 May 2018

Mindong Chen, Huijie Zhang, Liang Chen and Dongmei Fu

Abstract

Purpose

An electrochemical method based on open circuit potential (OCP) fluctuations is put forward. It can be used to optimize alloy compositions to improve the corrosion resistance of the rust layer.

Design/methodology/approach

The potential trends and potential fluctuations of carbon steels in seawater were separated by the Hodrick-Prescott filter. The Spearman correlation coefficient and the maximal information coefficient were used to explore the correlation between alloy compositions and potential fluctuations.
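
A hedged sketch of this decomposition-and-correlation pipeline, assuming statsmodels' Hodrick-Prescott filter, SciPy's Spearman test and minepy for MIC; the smoothing parameter and the choice of fluctuation statistic are assumptions, not values from the paper.

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter
from scipy.stats import spearmanr
from minepy import MINE  # assumption: minepy provides the MIC estimator

def ocp_fluctuations(ocp, lamb=1600):
    """Split an OCP time series into fluctuation (cycle) and trend."""
    cycle, trend = hpfilter(np.asarray(ocp, float), lamb=lamb)
    return cycle, trend

def composition_correlations(fluct_stat, alloy_content):
    """Relate one alloy element's content to a per-steel fluctuation
    statistic via Spearman's rho and MIC."""
    rho, p = spearmanr(alloy_content, fluct_stat)
    m = MINE(alpha=0.6, c=15)
    m.compute_score(np.asarray(alloy_content, float),
                    np.asarray(fluct_stat, float))
    return rho, p, m.mic()
```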

Findings

After long-term immersion, the potential fluctuation resistance (PFR) can be used to characterize the corrosion resistance of metals and their rust layers. In the 1,500 to 2,500 h exposure period, the Fe, C and S contents show strong negative correlations with PFR, whereas the P content shows a weak negative correlation. The Mn, Cu and Ti alloy contents help the rust layers of carbon steels attain higher PFRs. The elements associated with higher PFRs in this period have been confirmed to improve the corrosion resistance of the rust layer.

Originality/value

A new computing method for alloy composition optimization of carbon steels based on the OCP fluctuations was put forward. This method combines electrochemical monitoring with the long-term actual seawater environmental tests of various carbon steels.

Details

Anti-Corrosion Methods and Materials, vol. 65 no. 3
Type: Research Article
ISSN: 0003-5599

Article
Publication date: 28 July 2020

Sathyaraj R, Ramanathan L, Lavanya K, Balasubramanian V and Saira Banu J

Abstract

Purpose

Big data is growing so rapidly that conventional software tools face several problems in managing it. Moreover, the occurrence of imbalanced data in massive data sets is a major constraint for the research community.

Design/methodology/approach

The purpose of the paper is to introduce a big data classification technique using the MapReduce framework based on an optimization algorithm. The classification is enabled by the MapReduce framework, which utilizes the proposed optimization algorithm, named the chicken-based bacterial foraging (CBF) algorithm. The proposed algorithm is generated by integrating the bacterial foraging optimization (BFO) algorithm with the cat swarm optimization (CSO) algorithm. The model executes the process in two stages, namely, training and testing. In the training phase, big data produced from different distributed sources is processed in parallel by the mappers, which perform preprocessing and feature selection based on the proposed CBF algorithm. The preprocessing step eliminates redundant and inconsistent data, and the feature selection step extracts the significant features from the preprocessed data to improve classification accuracy. The selected features are fed into the reducers for classification using the deep belief network (DBN) classifier, which is trained with the CBF algorithm so that the data are classified into the various classes; at the end of the training process, the individual reducers output the trained models, allowing incremental data to be handled effectively. In the testing phase, the incremental data are split into subsets and fed into the different mappers, each of which holds a trained model from the training phase and uses it to classify its split. After classification, the outputs of the mappers are fused and fed into the reducer for the final classification. A structural sketch of this flow is given below.
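
The sketch shows only the train/test data flow in plain Python; the CBF optimizer and DBN classifier are replaced by generic callables, so it is the shape of the pipeline under stated assumptions, not the authors' MapReduce implementation.

```python
from typing import Callable, List, Sequence

def train_phase(partitions: Sequence, select_features: Callable,
                train_classifier: Callable) -> List:
    """Mapper stage: per-partition preprocessing/feature selection.
    Reducer stage: one trained model per reducer (models are assumed
    to expose a .predict method)."""
    mapped = [select_features(p) for p in partitions]
    return [train_classifier(m) for m in mapped]

def test_phase(new_partitions: Sequence, models: List,
               fuse: Callable) -> List:
    """Each mapper classifies its split with the trained models,
    then the reducer fuses the per-mapper outputs."""
    per_mapper = [[model.predict(p) for model in models]
                  for p in new_partitions]
    return [fuse(outputs) for outputs in per_mapper]
```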

Findings

The maximum accuracy and Jaccard coefficient are obtained on the epileptic seizure recognition database. The proposed CBF-DBN produces a maximal accuracy of 91.129%, whereas the accuracies of the existing neural network (NN), DBN and naive Bayes classifier with term frequency-inverse document frequency (NBC-TFIDF) are 82.894%, 86.184% and 86.512%, respectively. The proposed CBF-DBN likewise produces a maximal Jaccard coefficient of 88.928%, whereas the Jaccard coefficients of the existing NN, DBN and NBC-TFIDF are 75.891%, 79.850% and 81.103%, respectively.

Originality/value

In this paper, a big data classification method is proposed for categorizing massive data sets under the constraints of huge data volumes. The classification is performed on the MapReduce framework in training and testing phases, so that the data are handled in parallel. In the training phase, the big data are partitioned into subsets and fed into the mappers, where feature selection extracts the significant features; the reducers then classify the data with the DBN classifier, which is trained using the proposed CBF algorithm, and output the trained models. In the testing phase, the incremental data are split into subsets and fed into the mappers, which classify them using the trained models; the classified results from the mappers are then fused in the reducer for the final classification of the big data.

Details

Data Technologies and Applications, vol. 55 no. 3
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 2 May 2017

Shanshan Zhang and Ping He

Abstract

Purpose

This paper aims to investigate the investment strategy of a two-sided platform on reducing transaction costs of two user sides and to study the pricing problem of the platform.

Design/methodology/approach

Mathematical derivation is used to compute the optimal pricing and investment decisions of a two-sided platform. Numerical analysis is used to illustrate the findings.

Findings

It is found that the demand of one user side decreases in the maximal transaction cost reduction offered to that side but increases in the maximal transaction cost reduction offered to the other side. It is also found that a platform should never choose its investment such that the maximal transaction cost reductions for the two user sides are the same.

Research limitations/implications

Several limitations exist in this paper, most of which stem from its assumptions, and these limitations could be good directions for future research. For example, only one platform's decisions are considered, and competition between platforms is not taken into account; with competing platforms, the decisions of the users and the platform would be different.

Originality/value

From the transaction costs perspective, this paper finds that a platform should never choose the investment in such a way that the maximal transaction costs reductions of two user sides are the same. This conclusion has not been found in previous literature.

Details

Kybernetes, vol. 46 no. 5
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 16 August 2021

V. Vinolin and M. Sucharitha

Abstract

Purpose

With the advancements in photo-editing software, it is possible to generate fake images, degrading the trust in digital images. Forged images, which appear like authentic images, can be created without leaving any visual clues about the alteration. The image forensics field has introduced several forgery detection techniques that effectively distinguish fake images from original ones to restore the trust in digital images. Among the various kinds of forged images, spliced images involving human faces are especially harmful. Hence, there is a need for a forgery detection approach that detects spliced images.

Design/methodology/approach

This paper proposes a Taylor rider optimization algorithm-based deep convolutional neural network (Taylor-ROA-based DeepCNN) for detecting spliced images. Initially, the human faces in the spliced images are detected using the Viola-Jones algorithm, and the 3-dimensional (3D) shape of each face is established using a landmark-based 3D morphable model (L3DMM), which estimates the lighting coefficients. Then, distance measures, such as the Bhattacharyya, standardized Euclidean (Seuclidean), Euclidean, Hamming, Chebyshev and correlation distances, are computed from the lighting coefficients of the faces. These measures form the feature vector for the proposed Taylor-ROA-based DeepCNN, which identifies the spliced images.
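
The six distances can be assembled into a feature vector roughly as follows. The Bhattacharyya helper, the normalization it applies and the unit variance vector for the standardized Euclidean distance are assumptions for the sketch, not details from the paper.

```python
import numpy as np
from scipy.spatial import distance

def bhattacharyya(p, q):
    """Bhattacharyya distance between two coefficient vectors,
    normalized here to act like discrete distributions (an assumption)."""
    p = np.abs(p) / np.abs(p).sum()
    q = np.abs(q) / np.abs(q).sum()
    return -np.log(np.sqrt(p * q).sum())

def distance_features(u, v, V=None):
    """Six distance measures between two lighting-coefficient vectors."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    if V is None:
        V = np.ones_like(u)  # per-dimension variances for seuclidean
    return np.array([
        bhattacharyya(u, v),
        distance.seuclidean(u, v, V),
        distance.euclidean(u, v),
        distance.hamming(u, v),    # fraction of unequal coefficients
        distance.chebyshev(u, v),
        distance.correlation(u, v),
    ])
```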

Findings

Experimental analysis using the DSO-1, DSI-1, real and hybrid datasets reveals that the proposed approach acquired maximal accuracy, true positive rate (TPR) and true negative rate (TNR) of 99%, 98.88% and 96.03%, respectively, on the DSO-1 dataset. In terms of accuracy, the proposed method reached performance improvements of 24.49%, 8.92%, 6.72%, 4.17%, 0.25%, 0.13%, 0.06% and 0.06% over the existing methods of Kee and Farid, shape from shading (SFS), random guess, Bo Peng et al., neural network, FOA-SVNN, CNN-based MBK and Manoj Kumar et al., respectively.

Originality/value

The Taylor-ROA is developed by integrating the Taylor series into the rider optimization algorithm (ROA) for optimally tuning the DeepCNN.

Details

Data Technologies and Applications, vol. 56 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 1 December 2006

Qiuwang Wang, Feng Wu, Min Zeng, Laiqin Luo and Jiguo Sun

Abstract

Purpose

To find the optimal number of cooling channels for a rocket engine thrust chamber. It is found that the optimal channel number is 335, at which the cooling effect of the thrust chamber cooling channels is best; this result can be helpful in designing rocket engine thrust chambers.

Design/methodology/approach

The commercial computational fluid dynamics (CFD) software FLUENT with the standard k-ε turbulence model was used. The CFD method was validated by comparison with the available experimental data.

Findings

It was found that both the highest temperature and the maximal heat flux through the wall on the hot-gas side occur near the throat region at the symmetrical center of the cooling channel. Owing to the strong curvature of the cooling channel geometry, the secondary flow reaches its strongest level around the throat region. The typical values of the pressure drop and of the temperature difference between the inlet and exit of the cooling channel are 2.7 MPa and 67.38 K (standard case), respectively. In addition, an optimal number of channels exists, approximately 335, which yields the best heat transfer in the cooling channels with an acceptable pressure drop. As a whole, the present study gives useful information for the thermal design of liquid rocket engine thrust chambers.

Research limitations/implications

More detailed computation and optimization should be performed for the fluid flow and heat transfer of the cooling channel.

Practical implications

A very useful optimization of heat transfer and fluid flow in the cooling channels of a liquid rocket engine thrust chamber.

Originality/value

This paper presents an optimization of heat transfer and fluid flow in the cooling channels of a liquid rocket engine thrust chamber, identifying the channel count that yields the best heat transfer with an acceptable pressure drop. The study provides useful information for the thermal design of liquid rocket engine thrust chambers.

Details

Engineering Computations, vol. 23 no. 8
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 1 March 1974

D. RINE

Abstract

In this paper we exploit a new correlation coefficient ρ_X1 for pairs of discrete-valued lung function parameters X1, X2, based on the expected maximal probability of predicting an Xj value given any Xi value. Then, for X1, X2 representing two different lung function parameters, one can use ρ_X1, ρ_X2 to measure the strength of predicting Xi from Xj. E[max g(y|X)] is a reasonable replacement for the information entropy function when dealing with discrete-valued random variables X, Y. (In the Appendix and Program, MX = X1 and KY = X2.)
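
A small numeric illustration of the quantity E[max g(y|X)] computed from a joint contingency table; the normalization into the paper's ρ is not reproduced, and all names are illustrative.

```python
import numpy as np

def expected_max_prediction_prob(joint):
    """joint[i, j] = P(X = x_i, Y = y_j); returns
    E[max_y g(y|X)] = sum_x max_y P(x, y)."""
    joint = np.asarray(joint, float)
    joint = joint / joint.sum()
    return joint.max(axis=1).sum()

# Perfect dependence gives 1; independence collapses to max_y P(y).
print(expected_max_prediction_prob(np.eye(3) / 3))              # 1.0
print(expected_max_prediction_prob(np.outer([0.5, 0.3, 0.2],
                                            [0.6, 0.4])))       # 0.6
```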

Details

Kybernetes, vol. 3 no. 3
Type: Research Article
ISSN: 0368-492X
