Search results

1 – 10 of 48
Article
Publication date: 31 January 2023

Zhenjun Li and Chunyu Zhao

Abstract

Purpose

This paper aims to discuss the inverse problems that arise in various practical heat transfer processes. The purpose of this paper is to provide an identification method for predicting the internal boundary conditions for the thermal analysis of mechanical structures. A few examples of heat transfer systems are given to illustrate the applicability of the method and the challenges that must be addressed in solving the inverse problem.

Design/methodology/approach

In this paper, the thermal network method and the finite difference method are used to model the two-dimensional inverse heat conduction problem of the tube structure, and the heat balance equation is arranged into an explicit form for heat load prediction. To address the ill-conditioning of the matrix that arises when solving the inverse problem, a Tikhonov regularization parameter selection method based on an inverse computation-contrast-adjustment approach is proposed.
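
To make the regularization step concrete, the following is a minimal sketch, not the authors' code, of a Tikhonov-regularized solve for a linear inverse problem A q = T; the parameter lam stands in for whatever value a selection scheme such as the proposed computation-contrast-adjustment approach would return.

```python
# Illustrative sketch (not the authors' code): Tikhonov regularization of an
# ill-conditioned linear inverse problem A q = T, where q is the unknown heat
# load and T the measured temperatures.
import numpy as np

def tikhonov_solve(A, T, lam):
    """Solve min_q ||A q - T||^2 + lam^2 ||q||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ T)

# Tiny demo with a nearly singular sensitivity matrix.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 20), 8)        # ill-conditioned by design
q_true = rng.normal(size=8)
T = A @ q_true + 1e-3 * rng.normal(size=20)    # noisy "measurements"
q_est = tikhonov_solve(A, T, lam=1e-2)
```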

Findings

The applicability of the proposed method is illustrated by numerical examples with different dynamically varying heat source functions. It is shown that the method can predict dynamic heat sources of varying complexity.

Practical implications

The modeling calculation method described in this paper can be used to predict the boundary conditions for the inner wall of the heat transfer tube, where the temperature sensor cannot be placed.

Originality/value

This paper presents a general method for the direct prediction of heat sources or boundary conditions in mechanical structures. It can directly obtain the time-varying heat flux load and the temperature field of the machine structure.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 6
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 4 April 2024

Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu

Abstract

Purpose

This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.

Design/methodology/approach

The method involves an alternating two-step framework comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences, incorporating both global and local features, are coupled with deterministic annealing to address the non-convexity of the registration problem. For transformation updating, the expectation-maximization iteration scheme is introduced to iteratively refine the correspondence and transformation estimates until convergence.
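
As an illustration of the correspondence-estimation step, here is a minimal sketch (our own, not the paper's implementation) of GMM posterior computation with deterministic annealing; the paper replaces the plain Euclidean distance below with its integrated feature divergence.

```python
# Minimal sketch of the E-step in a GMM-based registration loop: soft
# correspondences between a moving set Y and a fixed set X, with deterministic
# annealing on the variance sigma2 to avoid premature convergence.
import numpy as np

def soft_correspondences(X, Y, sigma2):
    """Posterior P[m, n] that target point X[n] was generated by Y[m]."""
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # (M, N) squared dists
    P = np.exp(-d2 / (2.0 * sigma2))
    return P / (P.sum(axis=0, keepdims=True) + 1e-12)     # normalize over m

# Annealing loop: shrink sigma2 to sharpen correspondences; the M-step
# (re-estimating the non-rigid transformation from P) is omitted here.
X = np.random.rand(100, 2); Y = np.random.rand(80, 2)
for sigma2 in [0.5, 0.1, 0.02]:
    P = soft_correspondences(X, Y, sigma2)
```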

Findings

The experiments confirm that the proposed registration approach exhibits remarkable robustness to deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. The method is applied to the stabilizing and securing of intermodal containers loaded on ships. The results demonstrate that the proposed registration framework exhibits excellent adaptability to real-scan point clouds and achieves comparatively superior alignments in a shorter time.

Originality/value

The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.

Details

Robotic Intelligence and Automation, vol. 44 no. 2
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 19 December 2023

Jinchao Huang

Abstract

Purpose

Single-shot multi-category clothing recognition and retrieval play a crucial role in online searching and offline settlement scenarios. Existing clothing recognition methods based on RGBD clothing images often suffer from high-dimensional feature representations, leading to compromised performance and efficiency.

Design/methodology/approach

To address this issue, this paper proposes a novel method called Manifold Embedded Discriminative Feature Selection (MEDFS) to select global and local features, thereby reducing the dimensionality of the feature representation and improving performance. Specifically, by combining three global features and three local features, a low-dimensional embedding is constructed to capture the correlations between features and categories. The MEDFS method designs an optimization framework utilizing manifold mapping and sparse regularization to achieve feature selection. The optimization objective is solved using an alternating iterative strategy, ensuring convergence.
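
As a rough illustration of the sparsity idea only, the following hedged stand-in uses L1-regularized logistic regression to select features from stacked global and local descriptors; MEDFS itself couples a manifold-embedding term with the sparsity term and solves the objective by alternating iterations.

```python
# Hedged stand-in (not the MEDFS solver): an L1 penalty zeroes out weights of
# uninformative features, so the surviving coordinates act as the "selected"
# low-dimensional representation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))                 # 60 stacked global+local features
y = (X[:, :3].sum(axis=1) > 0).astype(int)     # labels depend on 3 features only

clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
selected = np.flatnonzero(np.abs(clf.coef_[0]) > 1e-6)   # surviving features
```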

Findings

Empirical studies conducted on a publicly available RGBD clothing image dataset demonstrate that the proposed MEDFS method achieves highly competitive clothing classification performance while maintaining efficiency in clothing recognition and retrieval.

Originality/value

This paper introduces a novel approach for multi-category clothing recognition and retrieval, incorporating the selection of global and local features. The proposed method holds potential for practical applications in real-world clothing scenarios.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 4 March 2024

Yongjiang Xue, Wei Wang and Qingzeng Song

Abstract

Purpose

The primary objective of this study is to tackle the enduring challenge of preserving feature integrity during the manipulation of geometric data in computer graphics. Our work aims to introduce and validate a variational sparse diffusion model that enhances the capability to maintain the definition of sharp features within meshes throughout complex processing tasks such as segmentation and repair.

Design/methodology/approach

We developed a variational sparse diffusion model that integrates a high-order L1 regularization framework with Dirichlet boundary constraints, specifically designed to preserve edge definition. This model employs an innovative vertex updating strategy that optimizes the quality of mesh repairs. We leverage the augmented Lagrangian method to address the computational challenges inherent in this approach, enabling effective management of the trade-off between diffusion strength and feature preservation. Our methodology involves a detailed analysis of segmentation and repair processes, focusing on maintaining the acuity of features on triangulated surfaces.
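
One plausible schematic form of such an objective, assumed for illustration and not necessarily the authors' exact functional, is a high-order L1 energy with Dirichlet data g on the boundary Γ, together with the augmented Lagrangian obtained from the splitting p = D²u:

```latex
% Schematic form only (assumed, not the paper's exact functional).
\min_{u}\; \|D^2 u\|_1 + \frac{\lambda}{2}\,\|u - f\|_2^2
\quad \text{s.t.}\quad u = g \ \text{on}\ \Gamma,
\qquad
\mathcal{L}_r(u, p; \mu) = \|p\|_1 + \frac{\lambda}{2}\,\|u - f\|_2^2
+ \langle \mu,\, D^2 u - p \rangle + \frac{r}{2}\,\|D^2 u - p\|_2^2 .
```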

Findings

Our findings indicate that the proposed variational sparse diffusion model significantly outperforms traditional smooth diffusion methods in preserving sharp features during mesh processing. The model ensures the delineation of clear boundaries in mesh segmentation and achieves high-fidelity restoration of deteriorated meshes in repair tasks. The innovative vertex updating strategy within the model contributes to enhanced mesh quality post-repair. Empirical evaluations demonstrate that our approach maintains the integrity of original, sharp features more effectively, especially in complex geometries with intricate detail.

Originality/value

The originality of this research lies in the novel application of a high-order L1 regularization framework to the field of mesh processing, a method not conventionally applied in this context. The value of our work is in providing a robust solution to the problem of feature degradation during the mesh manipulation process. Our model’s unique vertex updating strategy and the use of the augmented Lagrangian method for optimization are distinctive contributions that enhance the state-of-the-art in geometry processing. The empirical success of our model in preserving features during mesh segmentation and repair presents an advancement in computer graphics, offering practical benefits to both academic research and industry applications.

Details

Engineering Computations, vol. 41 no. 2
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 28 February 2023

Meltem Aksoy, Seda Yanık and Mehmet Fatih Amasyali

Abstract

Purpose

When a large number of project proposals are evaluated to allocate available funds, grouping them based on their similarities is beneficial. Current approaches to group proposals are primarily based on manual matching of similar topics, discipline areas and keywords declared by project applicants. When the number of proposals increases, this task becomes complex and requires excessive time. This paper aims to demonstrate how to effectively use the rich information in the titles and abstracts of Turkish project proposals to group them automatically.

Design/methodology/approach

This study proposes a model that effectively groups Turkish project proposals by combining word embedding, clustering and classification techniques. The proposed model uses FastText, BERT and term frequency/inverse document frequency (TF/IDF) word-embedding techniques to extract terms from the titles and abstracts of project proposals in Turkish. The extracted terms were grouped using both the clustering and classification techniques. Natural groups contained within the corpus were discovered using k-means, k-means++, k-medoids and agglomerative clustering algorithms. Additionally, this study employs classification approaches to predict the target class for each document in the corpus. To classify project proposals, various classifiers, including k-nearest neighbors (KNN), support vector machines (SVM), artificial neural networks (ANN), classification and regression trees (CART) and random forest (RF), are used. Empirical experiments were conducted to validate the effectiveness of the proposed method by using real data from the Istanbul Development Agency.
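
The TF/IDF branch of such a pipeline can be sketched in a few lines with scikit-learn (an illustration with placeholder texts, not the authors' code); FastText or BERT embeddings would replace the vectorizer output as inputs to the same clustering and classification steps.

```python
# Sketch: vectorize proposal titles+abstracts, discover natural groups with
# k-means, and fit a linear SVM for the supervised variant.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

docs = ["ornek proje ozeti bir", "ornek proje ozeti iki",
        "farkli konu onerisi", "farkli konu metni"]      # placeholder texts
labels = [0, 0, 1, 1]                                    # placeholder classes

X = TfidfVectorizer().fit_transform(docs)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
svm = LinearSVC().fit(X, labels)   # SVM-Linear scored best in the paper
```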

Findings

The results show that the generated word embeddings can effectively represent proposal texts as vectors, and can be used as inputs for clustering or classification algorithms. Using clustering algorithms, the document corpus is divided into five groups. In addition, the results demonstrate that the proposals can easily be categorized into predefined categories using classification algorithms. SVM-Linear achieved the highest prediction accuracy (89.2%) with the FastText word embedding method. A comparison of manual grouping with automatic classification and clustering results revealed that both classification and clustering techniques have a high success rate.

Research limitations/implications

The proposed model automatically benefits from the rich information in project proposals and significantly reduces numerous time-consuming tasks that managers must perform manually. Thus, it eliminates the drawbacks of the current manual methods and yields significantly more accurate results. In the future, additional experiments should be conducted to validate the proposed method using data from other funding organizations.

Originality/value

This study presents the application of word embedding methods to effectively use the rich information in the titles and abstracts of Turkish project proposals. Existing studies on the automatic grouping of proposals rely on traditional frequency-based word embedding methods for feature extraction to represent the proposals. Unlike previous research, this study employs two high-performing neural network-based textual feature extraction techniques to obtain terms representing the proposals: BERT as a contextual word embedding method and FastText as a static word embedding method. Moreover, to the best of our knowledge, no research has been conducted on the grouping of project proposals in Turkish.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 16 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 9 April 2024

Lu Wang, Jiahao Zheng, Jianrong Yao and Yuangao Chen

Abstract

Purpose

With the rapid growth of the domestic lending industry, assessing whether the borrower of each loan is at risk of default is a pressing issue for financial institutions. Although some existing models handle such problems well, they still fall short in several respects. The purpose of this paper is to improve the accuracy of credit assessment models.

Design/methodology/approach

In this paper, three stages are used to improve the classification performance of LSTM, so that financial institutions can more accurately identify borrowers at risk of default. The first stage uses the K-Means-SMOTE algorithm to eliminate the class imbalance. In the second stage, ResNet is used for feature extraction, followed by a two-layer LSTM, to strengthen the neural network's ability to mine and exploit deep information. Finally, model performance is improved by using the IDWPSO algorithm to tune the neural network.
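
A sketch of the first stage alone, with assumed synthetic data rather than the credit datasets used in the paper, using the K-Means-SMOTE implementation from the imbalanced-learn package:

```python
# Rebalance a skewed dataset with K-Means-SMOTE before the ResNet-LSTM stage.
# Requires the imbalanced-learn package.
import numpy as np
from imblearn.over_sampling import KMeansSMOTE

rng = np.random.default_rng(0)
X_major = rng.normal(0.0, 1.0, size=(300, 5))   # non-default borrowers
X_minor = rng.normal(3.0, 0.3, size=(20, 5))    # defaulters, tight cluster
X = np.vstack([X_major, X_minor])
y = np.array([0] * 300 + [1] * 20)

sm = KMeansSMOTE(cluster_balance_threshold=0.05, random_state=0)
X_res, y_res = sm.fit_resample(X, y)            # classes now balanced
```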

Findings

On two imbalanced datasets (with class ratios of 700:1 and 3:1, respectively), the multi-stage improved model was compared with ten other models using accuracy, precision, specificity, recall, G-measure, F-measure and the nonparametric Wilcoxon test. The multi-stage improved model demonstrated a significant advantage in evaluating the imbalanced credit datasets.

Originality/value

In this paper, the parameters of the ResNet-LSTM hybrid neural network, which can fully mine and utilize the deep information, are tuned by an innovative intelligent optimization algorithm to strengthen the classification performance of the model.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 30 May 2023

Everton Boos, Fermín S.V. Bazán and Vanda M. Luchesi

Abstract

Purpose

This paper aims to reconstruct the spatially varying orthotropic conductivity based on a two-dimensional inverse heat conduction problem described by a partial differential equation (PDE) model with mixed boundary conditions. The proposed discretization uses a highly accurate technique and allows simple implementation. Also, the authors solve the related inverse problem in such a way that smoothness is enforced on the iterates, showing promising results in synthetic examples and in real problems with a moving heat source.

Design/methodology/approach

The discretization procedure applied to the model for the direct problem uses a pseudospectral collocation strategy in the spatial variables and the Crank–Nicolson method for the time-dependent variable. The related inverse problem of recovering the conductivity from temperature measurements is then solved by a modified version of the Levenberg–Marquardt method (LMM) that uses singular scaling matrices. Problems where data availability is limited are also considered, motivated by a face milling operation problem. Numerical examples are presented to indicate the accuracy and efficiency of the proposed method.
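
The core LMM update with a scaling matrix L, which this paper allows to be singular, can be sketched as follows (an assumed minimal form, not the authors' implementation); classic LM corresponds to L = I.

```python
# One Levenberg-Marquardt step with scaling matrix L:
#   p <- p - (J^T J + mu L^T L)^{-1} J^T r
# A singular L damps only some modes of the update.
import numpy as np

def lm_step(residual, jacobian, p, mu, L):
    r, J = residual(p), jacobian(p)
    H = J.T @ J + mu * (L.T @ L)
    return p - np.linalg.solve(H, J.T @ r)

# Toy linear least-squares problem: r(p) = M p - d.
M = np.array([[1.0, 1.0], [1.0, 1.001], [0.0, 1.0]])
d = np.array([2.0, 2.0, 1.0])
L = np.diag([1.0, 0.0])            # singular scaling: regularize p[0] only
p = np.zeros(2)
for _ in range(10):
    p = lm_step(lambda q: M @ q - d, lambda q: M, p, mu=1e-2, L=L)
```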

Findings

The paper presents a discretization of the PDE model that aims at simple implementation and good numerical performance. The modified version of the LMM, which introduces singular scaling matrices, is shown to recover the sought quantities accurately within a small number of iterations. Numerical results showed a good fit between exact and approximate solutions for synthetic noisy data and quite acceptable inverse solutions when experimental data were inverted.

Originality/value

The paper is significant because of the pseudospectral approach, known for its high precision and ease of implementation, and the use of singular regularization matrices in the LMM iterations, unlike classic implementations of the method, which positively impacts the reconstruction process.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 33 no. 8
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 18 January 2024

Jing Tang, Yida Guo and Yilin Han

Abstract

Purpose

Coal is a critical global energy source, and fluctuations in its price significantly impact related enterprises' profitability. This study aims to develop a robust model for predicting the coal price index to enhance coal purchase strategies for coal-consuming enterprises and provide crucial information for global carbon emission reduction.

Design/methodology/approach

The proposed coal price forecasting system combines data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. It addresses the challenge of merging low-resolution and high-resolution data by adaptively combining both types of data and filling in missing gaps, using interpolation for internal missing data and self-supervision for initial/terminal missing data. The system employs self-supervised learning to complete the filling of complex missing data.
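
The split between interpolated and learned gap filling can be illustrated with pandas (our sketch, not the released system): internal gaps have anchors on both sides and are interpolated directly, while leading/trailing gaps are left for the self-supervised filler.

```python
# Internal gaps are interpolated; boundary gaps stay NaN because linear
# interpolation has no anchor on one side there.
import numpy as np
import pandas as pd

s = pd.Series([np.nan, 100.0, np.nan, 104.0, 106.0, np.nan],
              index=pd.date_range("2023-01-01", periods=6, freq="D"))

internal_filled = s.interpolate(limit_area="inside")  # fills only the middle gap
# -> first and last entries remain NaN for the self-supervised model to predict
```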

Findings

The ensemble model, which combines long short-term memory, XGBoost and support vector regression, demonstrated the best prediction performance among the tested models. It exhibited superior accuracy and stability across multiple indices in two datasets, namely the Bohai-Rim steam-coal price index and coal daily settlement price.

Originality/value

The proposed coal price forecasting system stands out as it integrates data decomposition, semi-supervised feature engineering, ensemble learning and deep learning. Moreover, the system pioneers the use of self-supervised learning for filling in complex missing data, contributing to its originality and effectiveness.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 7 November 2023

Christian Nnaemeka Egwim, Hafiz Alaka, Youlu Pan, Habeeb Balogun, Saheed Ajayi, Abdul Hye and Oluwapelumi Oluwaseun Egunjobi

Abstract

Purpose

The study aims to develop a multilayer, highly effective ensemble-of-ensembles predictive model (stacking ensemble) using several hyperparameter-optimized ensemble machine learning (ML) methods (bagging and boosting ensembles), trained with high-volume data points retrieved from Internet of Things (IoT) emission sensors and time-corresponding meteorology and traffic data.

Design/methodology/approach

First, the study tested the big data hypothesis by developing sample ensemble predictive models on different data sample sizes and comparing their results. Second, it developed a standalone model and several bagging and boosting ensemble models and compared their results. Finally, it used the best-performing bagging and boosting predictive models as input estimators to develop a novel multilayer, highly effective stacking ensemble predictive model.
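
The stacking stage can be sketched with scikit-learn's StackingRegressor using stand-in bagging and boosting base learners (illustrative only; the study hyperparameter-optimizes each base ensemble first, a step omitted here):

```python
# Bagging and boosting base learners stacked under a meta-learner to predict
# a continuous target such as PM2.5 concentration.
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.linear_model import RidgeCV

X, y = make_regression(n_samples=300, n_features=10, random_state=0)
stack = StackingRegressor(
    estimators=[("bagging", RandomForestRegressor(random_state=0)),
                ("boosting", GradientBoostingRegressor(random_state=0))],
    final_estimator=RidgeCV(),
)
stack.fit(X, y)
```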

Findings

The results proved data size to be one of the main determinants of ensemble ML predictive power. Second, they showed that, compared with using a single algorithm, the cumulative result from ensemble ML algorithms is almost always better in terms of prediction accuracy. Finally, they proved the stacking ensemble to be a better model for predicting PM2.5 concentration levels than the bagging and boosting ensemble models.

Research limitations/implications

A limitation of this study is the trade-off between the performance of this novel model and the computational time required to train it. Whether this gap can be closed remains an open research question, and future research should attempt to close it. Future studies can also integrate this novel model into a personal air quality messaging system to inform the public of pollution levels and improve public access to air quality forecasts.

Practical implications

The outcome of this study will help the public proactively identify highly polluted areas, thus potentially reducing pollution-associated/triggered COVID-19 (and other lung disease) deaths, complications and transmission by encouraging avoidance behavior, and will support informed lockdown decisions by government bodies when integrated into an air pollution monitoring system.

Originality/value

This study fills a gap in the literature by providing a justification for selecting appropriate ensemble ML algorithms for PM2.5 concentration level predictive modeling. Second, it contributes to the big data hypothesis theory, which suggests that data size is one of the most important factors in ML predictive capability. Third, it supports the premise that when using ensemble ML algorithms, the cumulative output is almost always better in terms of prediction accuracy than that of a single algorithm. Finally, it develops a novel multilayer, high-performance, hyperparameter-optimized ensemble-of-ensembles predictive model that can accurately predict PM2.5 concentration levels with improved model interpretability and enhanced generalizability, and provides a novel databank of historic pollution data from IoT emission sensors that can be purchased for research, consultancy and policymaking.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531

Article
Publication date: 13 June 2023

G. Deepa, A.J. Niranjana and A.S. Balu

Abstract

Purpose

This study aims at proposing a hybrid model for early cost prediction of a construction project. Early cost prediction is the basic approach to procuring a project within a predefined budget; however, most projects routinely face cost overruns. Furthermore, conventional manual cost computing techniques are laborious, time-consuming and error-prone. To deal with such challenges, soft computing techniques such as artificial neural networks (ANNs), fuzzy logic and genetic algorithms are applied in construction management. Each technique has its own constraints, not only in terms of efficiency but also in terms of feasibility, practicability, reliability and environmental impact. However, an appropriate combination of these techniques improves the model owing to their complementary strengths.

Design/methodology/approach

This paper proposes a hybrid model by combining machine learning (ML) techniques with ANN to accurately predict the cost of pile foundations. The parameters contributing toward the cost of pile foundations were collected from five different projects in India. Out of 180 collected data entries, 176 entries were finally used after data cleaning. About 70% of the final data were used for building the model and the remaining 30% were used for validation.
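
The training protocol described above can be sketched as follows, with synthetic stand-in data in place of the pile-foundation cost drivers collected from the five projects:

```python
# 70/30 split, neural-network regressor, fit assessed on the 30% hold-out;
# the feature matrix here is a synthetic stand-in for the 176 cleaned entries.
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

X, y = make_regression(n_samples=176, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                   random_state=0).fit(X_tr, y_tr)
r2 = ann.score(X_te, y_te)   # validation fit on the 30% hold-out
```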

Findings

The proposed model is capable of predicting the pile foundation costs with an accuracy of 97.42%.

Originality/value

Although various cost estimation techniques are available, appropriate use and combination of various ML techniques aid in improving the prediction accuracy. The proposed model will be a value addition to cost estimation of pile foundations.

Details

Journal of Engineering, Design and Technology, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1726-0531
