Search results

1 – 10 of 609
Article
Publication date: 15 February 2023

Tiago F.A.C. Sigahi and Laerte Idal Sznelwar

Abstract

Purpose

The purpose of this paper is twofold: (1) to map and analyze existing complexity typologies and (2) to develop a framework for characterizing complexity-based approaches.

Design/methodology/approach

This study was conducted in three stages: (1) initial identification of typologies related to complexity following a structured procedure based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol; (2) backward and forward review to identify additional relevant typologies; and (3) content analysis of the selected typologies, categorization and framework development.

Findings

Based on 17 selected typologies, a comprehensive overview of complexity studies is provided. Each typology is described considering key concepts, contributions, and convergences and differences between them. The epistemological, theoretical and methodological diversity of complexity studies was explored, allowing the identification of the main schools of thought and authors. A framework for characterizing complexity-based approaches was proposed, including the following perspectives: ontology of complexity, epistemology of complexity, purpose and object of interest, methodology and methods, and theoretical pillars.

Originality/value

This study examines the main typologies of complexity from an integrated and multidisciplinary perspective and, based on that, proposes a novel framework for understanding and characterizing complexity-based approaches.

Details

Kybernetes, vol. 53 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we have proposed a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome time complexity and the curse of dimensionality.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
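
To make the feature-selection idea concrete, below is a minimal, self-contained sketch of GWO-style binary feature selection wrapped around an MLP. It assumes a standard grey wolf position update with a sigmoid transfer to binary masks and scikit-learn's MLPClassifier; the synthetic dataset, pack size and fitness penalty are illustrative stand-ins, not the paper's GWOFS implementation.

```python
# Minimal GWO-style binary feature selection feeding an MLP (illustrative).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           random_state=0)          # stand-in for a defect dataset

def fitness(mask):
    """CV accuracy of an MLP on the selected features, lightly penalized
    by the fraction of features kept. Higher is better."""
    if mask.sum() == 0:
        return -np.inf
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean() \
        - 0.01 * mask.mean()

n_wolves, n_iter, dim = 6, 10, X.shape[1]
pos = rng.random((n_wolves, dim))                   # continuous positions in [0, 1]
masks = (pos > 0.5).astype(int)
scores = np.array([fitness(m) for m in masks])

for t in range(n_iter):
    a = 2 - 2 * t / n_iter                          # exploration coefficient decays to 0
    alpha, beta, delta = pos[np.argsort(scores)[::-1][:3]]  # three best wolves lead
    for i in range(n_wolves):
        new = np.zeros(dim)
        for leader in (alpha, beta, delta):         # pull toward each leader
            r1, r2 = rng.random(dim), rng.random(dim)
            A, C = 2 * a * r1 - a, 2 * r2
            new += leader - A * np.abs(C * leader - pos[i])
        pos[i] = np.clip(new / 3, 0, 1)
        # Sigmoid transfer turns a continuous position into a binary mask
        mask = (1 / (1 + np.exp(-10 * (pos[i] - 0.5))) > rng.random(dim)).astype(int)
        s = fitness(mask)
        if s > scores[i]:                           # greedy acceptance
            masks[i], scores[i] = mask, s

best = masks[np.argmax(scores)]
print("selected features:", np.flatnonzero(best), "fitness:", scores.max())
```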

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 26 February 2024

Chong Wu, Xiaofang Chen and Yongjie Jiang

Abstract

Purpose

While the Chinese securities market is booming, the phenomenon of listed companies falling into financial distress is also emerging, which affects the operation and development of enterprises and also jeopardizes the interests of investors. Therefore, it is important to understand how to accurately and reasonably predict the financial distress of enterprises.

Design/methodology/approach

In the present study, ensemble feature selection (EFS) and improved stacking were used for financial distress prediction (FDP). Mutual information, analysis of variance (ANOVA), random forest (RF), genetic algorithms and recursive feature elimination (RFE) were chosen for EFS to select features. Since there may be missing information when feeding the results of the base learners directly into the meta-learner, the features with high importance were fed into the meta-learner together. A screening layer was added to select the meta-learner with better performance. Finally, hyperparameter tuning was performed to find optimal settings for each learner.
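
As an illustration of how such an ensemble of selectors can be combined, the sketch below takes a majority vote across several of the listed selector families and feeds the surviving features to a stacking classifier. scikit-learn's passthrough option stands in loosely for the paper's high-importance feature pass-through; the screening layer, genetic-algorithm selector and tuning step are omitted, and all dataset and parameter choices are illustrative.

```python
# Ensemble feature selection by majority vote, then a stacking classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import RFE, SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=40, n_informative=10,
                           random_state=0)      # stand-in for financial indicators
k, n_feat = 15, X.shape[1]

# One vote per selector for each feature it retains
votes = np.zeros(n_feat, dtype=int)
for sel in (SelectKBest(mutual_info_classif, k=k),   # mutual information
            SelectKBest(f_classif, k=k),             # ANOVA F-test
            RFE(LogisticRegression(max_iter=1000), n_features_to_select=k)):
    votes += sel.fit(X, y).get_support().astype(int)
rf_imp = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_
votes[np.argsort(rf_imp)[::-1][:k]] += 1             # random forest importance

selected = votes >= 2                                 # majority vote across selectors
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("lr", LogisticRegression(max_iter=1000))],
    final_estimator=LogisticRegression(max_iter=1000),
    passthrough=True)   # meta-learner also sees the selected features themselves
print("accuracy:", stack.fit(X_tr, y_tr).score(X_te, y_te))
```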

Findings

An empirical study was conducted with a sample of A-share listed companies in China. The F1-score of the model constructed using the features screened by EFS reached 84.55%, representing an improvement of 4.37% compared to the original features. To verify the effectiveness of improved stacking, benchmark model comparison experiments were conducted. Compared to the original stacking model, the accuracy of the improved stacking model was improved by 0.44%, and the F1-score was improved by 0.51%. In addition, the improved stacking model had the highest area under the curve (AUC) value (0.905) among all the compared models.

Originality/value

Compared to previous models, the proposed FDP model has better performance, thus bridging the research gap of feature selection. The present study provides new ideas for stacking improvement research and a reference for subsequent research in this field.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 29 March 2024

Pratheek Suresh and Balaji Chakravarthy

Abstract

Purpose

As data centres grow in size and complexity, traditional air-cooling methods are becoming less effective and more expensive. Immersion cooling, where servers are submerged in a dielectric fluid, has emerged as a promising alternative. Ensuring reliable operations in data centre applications requires the development of an effective control framework for immersion cooling systems, which necessitates the prediction of server temperature. While deep learning-based temperature prediction models have shown effectiveness, further enhancement is needed to improve their prediction accuracy. This study aims to develop a temperature prediction model using Long Short-Term Memory (LSTM) Networks based on recursive encoder-decoder architecture.

Design/methodology/approach

This paper explores the use of deep learning algorithms to predict the temperature of a heater in a two-phase immersion-cooled system using NOVEC 7100. The performances of the recursive long short-term memory encoder-decoder (R-LSTM-ED), recursive convolutional neural network-LSTM (R-CNN-LSTM) and R-LSTM approaches are compared using mean absolute error, root mean square error, mean absolute percentage error and the coefficient of determination (R2) as performance metrics. The impact of window size, sampling period and noise within the training data on the performance of the model is investigated.
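
A minimal PyTorch sketch of a recursive LSTM encoder-decoder of this kind is shown below: the encoder summarizes a window of sensor history and the decoder then feeds each prediction back in as the next input. The layer sizes, the 60-sample window and the forecast horizon are illustrative assumptions rather than the paper's exact architecture.

```python
# Recursive LSTM encoder-decoder for multistep temperature forecasting (sketch).
import torch
import torch.nn as nn

class LSTMEncoderDecoder(nn.Module):
    def __init__(self, n_features=4, hidden=64, horizon=30):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        # x: (batch, window, n_features) of past sensor readings
        _, state = self.encoder(x)                  # summarize the history
        step = x[:, -1:, :1]                        # seed with the last temperature
        outputs = []
        for _ in range(self.horizon):               # recursive decoding: each
            out, state = self.decoder(step, state)   # prediction feeds the next
            step = self.head(out)
            outputs.append(step)
        return torch.cat(outputs, dim=1).squeeze(-1)

model = LSTMEncoderDecoder()
x = torch.randn(8, 60, 4)                           # 8 windows of 60 timesteps
print(model(x).shape)                               # torch.Size([8, 30])
```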

Findings

The R-LSTM-ED consistently outperforms the R-LSTM model by 6%, 15.8% and 12.5%, and the R-CNN-LSTM model by 4%, 11% and 12.3%, across the forecast ranges of 10, 30 and 60 s, respectively, averaged over all the workloads considered in the study. Based on the study, the optimum sampling period is 2 s and the optimum window size is 60 s. The performance of the model deteriorates significantly as the noise level reaches 10%.

Research limitations/implications

The proposed models are currently trained on data collected from an experimental setup simulating data centre loads. Future research should seek to extend the applicability of the models by incorporating time series data from immersion-cooled servers.

Originality/value

The proposed multivariate-recursive-prediction models are trained and tested using real data centre workload traces applied to the immersion-cooled system developed in the laboratory.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 29 February 2024

Zhen Chen, Jing Liu, Chao Ma, Huawei Wu and Zhi Li

Abstract

Purpose

The purpose of this study is to propose a precise and standardized strategy for numerically simulating vehicle aerodynamics.

Design/methodology/approach

Error sources in computational fluid dynamics were analyzed. Additionally, controllable empirical and discretization errors, which significantly influence the calculated results, are expounded upon. Considering the airflow mechanism around a vehicle, the computational efficiency and accuracy of each solution strategy were compared and analyzed through numerous computational cases. Finally, the most suitable numerical strategy, including the turbulence model, simplified vehicle model, calculation domain, boundary conditions, grids and discretization scheme, was identified. Two simplified vehicle models were introduced, and relevant wind tunnel tests were performed to validate the selected strategy.
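
One standard way to quantify the discretization errors discussed here is Richardson extrapolation combined with Roache's grid convergence index (GCI). The sketch below applies that textbook procedure to made-up drag coefficients from three systematically refined grids; it illustrates the general technique and is not taken from the paper.

```python
# Richardson extrapolation and Roache's GCI on three grid solutions
# (illustrative values, refinement ratio r = 2).
import math

def gci(f_coarse, f_medium, f_fine, r=2.0, safety=1.25):
    """Observed order of accuracy, Richardson-extrapolated value and
    fine-grid grid convergence index for a constant refinement ratio r."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)
    gci_fine = safety * abs((f_fine - f_medium) / f_fine) / (r**p - 1)
    return p, f_exact, gci_fine

# Made-up drag coefficients from coarse, medium and fine grids
p, cd_exact, g = gci(0.3310, 0.3205, 0.3172)
print(f"observed order ~ {p:.2f}, extrapolated Cd ~ {cd_exact:.4f}, "
      f"fine-grid GCI ~ {100 * g:.2f}%")
```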

Findings

Errors in vehicle computational aerodynamics mainly stem from the unreasonable simplification of the vehicle model, calculation domain, definite solution conditions, grid strategy and discretization schemes. Using the proposed standardized numerical strategy, the simulated steady and transient aerodynamic characteristics agreed well with the experimental results.

Originality/value

Building upon the modified low-Reynolds-number k-ε model and the Scale-Adaptive Simulation model, to the best of the authors’ knowledge, a precise and standardized numerical simulation strategy for vehicle aerodynamics is proposed for the first time, which can be integrated into vehicle research and design.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 5
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 11 March 2024

Jianjun Yao and Yingzhao Li

Abstract

Purpose

Weak repeatability is observed in handcrafted keypoints, leading to tracking failures in visual simultaneous localization and mapping (SLAM) systems under challenging scenarios such as illumination change, rapid rotation and large angle-of-view variation. In contrast, learning-based keypoints exhibit higher repeatability but entail considerable computational costs. This paper proposes an innovative keypoint extraction algorithm that strikes a balance between precision and efficiency, aiming at accurate, robust and versatile visual localization in highly complex scenes.

Design/methodology/approach

SiLK-SLAM initially refines the cutting-edge learning-based extractor, SiLK, and introduces an innovative postprocessing algorithm for keypoint homogenization and operational efficiency. Furthermore, SiLK-SLAM devises a reliable relocalization strategy called PCPnP, leveraging progressive and consistent sampling, thereby bolstering its robustness.
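
One common way to homogenize keypoints, sketched below on synthetic detections, is grid bucketing: keep only the strongest detector response per image cell so keypoints spread evenly across the frame. This is a generic stand-in rather than SiLK-SLAM's actual postprocessing algorithm, and the cell size and array names are assumptions.

```python
# Keypoint homogenization by grid bucketing (illustrative).
import numpy as np

def homogenize(xy, scores, image_shape, cell=32):
    """xy: (N, 2) pixel coordinates; scores: (N,) detector responses.
    Returns indices of the best keypoint in each occupied grid cell."""
    h, w = image_shape
    n_cols = (w + cell - 1) // cell
    cells = (xy[:, 1] // cell).astype(int) * n_cols \
          + (xy[:, 0] // cell).astype(int)           # flat cell id per keypoint
    order = np.argsort(-scores)                      # strongest first
    keep, seen = [], set()
    for i in order:
        if cells[i] not in seen:                     # first hit in a cell wins
            seen.add(cells[i])
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(0)
xy = rng.uniform(0, [640, 480], size=(2000, 2))      # fake detections
scores = rng.random(2000)
print(len(homogenize(xy, scores, (480, 640))), "keypoints kept")
```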

Findings

Empirical evaluations conducted on the TUM, KITTI and EuRoC data sets substantiate SiLK-SLAM’s superior localization accuracy compared to ORB-SLAM3 and other methods. Compared to ORB-SLAM3, SiLK-SLAM improves localization accuracy by up to 70.99%, 87.20% and 85.27% across the three data sets. The relocalization experiments demonstrate SiLK-SLAM’s capability to produce precise and repeatable keypoints, showcasing its robustness in challenging environments.

Originality/value

SiLK-SLAM achieves high localization accuracy and resilience in demanding scenarios, which is crucial for enhancing the autonomy of robots navigating intricate environments. Code is available at https://github.com/Pepper-FlavoredChewingGum/SiLK-SLAM.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 September 2023

Jianxiang Qiu, Jialiang Xie, Dongxiao Zhang and Ruping Zhang

Abstract

Purpose

Twin support vector machine (TSVM) is an effective machine learning technique. However, the TSVM model does not consider the influence of different data samples on the optimal hyperplane, which results in its sensitivity to noise. To solve this problem, this study proposes a twin support vector machine model based on fuzzy systems (FSTSVM).

Design/methodology/approach

This study designs an effective fuzzy membership assignment strategy based on fuzzy systems. It describes the relationship between the three inputs and the fuzzy membership of a sample by defining fuzzy inference rules and then outputs the membership of each sample. Combining this strategy with TSVM, the FSTSVM is proposed. Moreover, to speed up model training, this study employs a coordinate descent strategy with shrinking by active set. To evaluate the performance of FSTSVM, this study conducts experiments on artificial data sets and UCI data sets.
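
Since the paper's three fuzzy inputs and inference rules are not spelled out in the abstract, the sketch below falls back on the classic distance-to-class-center heuristic from fuzzy SVMs to show what a membership assignment looks like: samples far from their class center, which are likely noise, receive small memberships and hence small weights in training.

```python
# Classic distance-based fuzzy membership assignment (illustrative stand-in
# for the paper's rule-based fuzzy system).
import numpy as np

def fuzzy_memberships(X, y, delta=1e-3):
    """Membership in (0, 1]: near the class center -> ~1, outliers -> ~0."""
    s = np.empty(len(y))
    for label in np.unique(y):
        idx = np.flatnonzero(y == label)
        center = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - center, axis=1)
        r = d.max() + 1e-12                        # class radius
        s[idx] = 1.0 - d / (r + delta)             # linear decay with distance
    return s

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.repeat([0, 1], 50)
s = fuzzy_memberships(X, y)
print(s.min(), s.max())   # weights for the slack terms of the two TSVM subproblems
```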

Findings

The experimental results affirm the effectiveness of FSTSVM in addressing binary classification problems with noise, demonstrating its superior robustness and generalization performance compared to existing learning models. This can be attributed to the proposed fuzzy membership assignment strategy based on fuzzy systems, which effectively mitigates the adverse effects of noise.

Originality/value

This study designs a fuzzy membership assignment strategy based on fuzzy systems that effectively reduces the negative impact caused by noise and then proposes the noise-robust FSTSVM model. Moreover, the model employs a coordinate descent strategy with shrinking by active set to accelerate the training speed of the model.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 22 March 2024

Yahao Wang, Zhen Li, Yanghong Li and Erbao Dong

Abstract

Purpose

In response to the challenge of reduced efficiency or failure of robot motion planning algorithms when faced with end-effector constraints, this study aims to propose a new constraint method to improve the performance of the sampling-based planner.

Design/methodology/approach

In this work, a constraint method (TC method) based on the idea of cross-sampling is proposed. This method uses the tangent space in the workspace to approximate the constrained manifold pattern and projects the entire sampling process into the workspace for constraint correction. It thereby avoids the extensive computation of repeatedly inverting the Jacobian matrix in the configuration space and retains the sampling properties of the sampling-based algorithm.
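
The sketch below illustrates the general tangent-space idea on a toy workspace constraint (keeping the end-effector on the plane z = 0.5): a random sample is projected onto the constraint's tangent space at a point already on the manifold, then corrected back onto the manifold with a few Gauss-Newton steps. The constraint, names and step counts are illustrative assumptions, not the paper's TC formulation.

```python
# Tangent-space projection and manifold correction for a toy constraint.
import numpy as np

def F(p):                      # constraint residual: on-manifold iff F(p) = 0
    return np.array([p[2] - 0.5])

def J(p):                      # constraint Jacobian dF/dp
    return np.array([[0.0, 0.0, 1.0]])

def tangent_project(p_anchor, sample):
    """Remove the sample's component normal to the manifold at p_anchor."""
    Jm = J(p_anchor)
    # Orthogonal projector onto the tangent space: I - J^T (J J^T)^{-1} J
    P = np.eye(3) - Jm.T @ np.linalg.solve(Jm @ Jm.T, Jm)
    return p_anchor + P @ (sample - p_anchor)

def correct(p, tol=1e-8, iters=10):
    """Gauss-Newton correction back onto F(p) = 0."""
    for _ in range(iters):
        r = F(p)
        if np.linalg.norm(r) < tol:
            break
        Jm = J(p)
        p = p - Jm.T @ np.linalg.solve(Jm @ Jm.T, r)
    return p

anchor = np.array([0.0, 0.0, 0.5])            # a point already on the manifold
sample = np.random.default_rng(0).uniform(-1, 1, 3)
p = correct(tangent_project(anchor, sample))
print(p, F(p))                                # residual ~ 0
```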

Findings

Simulation results demonstrate that the performance of the planner using the TC method under end-effector constraints surpasses that of other methods. Physical experiments further confirm that the TC-Planner does not cause excessive constraint errors that might lead to task failure. Moreover, field tests conducted on robots underscore the effectiveness and strong performance of the TC-Planner, thereby advancing the autonomy of robots in power-line connection tasks.

Originality/value

This paper proposes a new constraint method combined with the rapidly-exploring random trees (RRT) algorithm to generate collision-free trajectories that satisfy the constraints for a high-dimensional robotic system under end-effector constraints. In a series of simulation and experimental tests, the planner using the TC method performs efficiently under end-effector constraints. Tests on a power distribution live-line operation robot also show that the TC method can greatly aid the robot in completing operation tasks with end-effector constraints. This helps robots perform tasks with complex end-effector constraints, such as grinding and welding, more efficiently and autonomously.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 April 2024

Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu

Abstract

Purpose

This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.

Design/methodology/approach

The method involves an alternating two-step framework comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences, incorporating both global and local features, are coupled with deterministic annealing to address the non-convexity of the registration problem. For transformation updating, the expectation-maximization iteration scheme is introduced to iteratively refine correspondence and transformation estimates until convergence.
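
For intuition, the sketch below implements the correspondence step of a generic Gaussian-mixture registration (in the style of coherent point drift) with deterministic annealing on the kernel variance. The paper's integrated feature divergence would replace the plain Euclidean distance used here, and all parameter values are illustrative.

```python
# Correspondence posteriors for GMM-based registration with annealing (sketch).
import numpy as np

def posteriors(X, Y, sigma2, w=0.1):
    """P[m, n] = probability that model point Y[m] corresponds to X[n].
    X: (N, D) scene points, Y: (M, D) model points, w: outlier weight."""
    M, N, D = len(Y), len(X), X.shape[1]
    d2 = ((Y[:, None, :] - X[None, :, :]) ** 2).sum(-1)      # (M, N) squared dists
    K = np.exp(-d2 / (2 * sigma2))
    denom = K.sum(0) + (2 * np.pi * sigma2) ** (D / 2) * w * M / ((1 - w) * N)
    return K / denom                                          # column-normalized

rng = np.random.default_rng(0)
Y = rng.random((30, 2))
X = Y + rng.normal(0, 0.01, Y.shape)       # scene = slightly perturbed model
for sigma2 in (0.5, 0.1, 0.02):            # annealing: sharpen correspondences
    P = posteriors(X, Y, sigma2)
print(P.argmax(0)[:10])                    # hard matches after annealing
```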

Findings

The experiments confirm that the proposed registration approach exhibits remarkable robustness to deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. The method is applied to stabilizing and securing intermodal containers loaded on ships. The results demonstrate that the proposed registration framework exhibits excellent adaptability to real-scan point clouds and achieves comparatively superior alignments in a shorter time.

Originality/value

The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.

Details

Robotic Intelligence and Automation, vol. 44 no. 2
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 8 June 2021

Naga Swetha R, Vimal K. Shrivastava and K. Parvathi

Abstract

Purpose

The mortality rate due to skin cancers has been increasing over the past decades. Early detection and treatment of skin cancers can save lives. However, due to the visual resemblance between normal skin and lesions and the blurred borders of lesions, skin cancer diagnosis has become a challenging task even for skilled dermatologists. Hence, the purpose of this study is to present an image-based automatic approach for multiclass skin lesion classification and to compare the performance of various models.

Design/methodology/approach

In this paper, the authors have presented a multiclass skin lesion classification approach based on transfer learning of deep convolutional neural networks. The following pre-trained models were used and their performance on skin cancer classification compared: VGG16, VGG19, ResNet50, ResNet101, ResNet152, Xception and MobileNet.
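
A minimal sketch of one such transfer-learning setup is shown below, using MobileNet from tf.keras as a frozen ImageNet backbone with a new seven-class head. The input size, optimizer, dropout rate and freezing policy are illustrative assumptions, not the paper's reported configuration.

```python
# Transfer learning with a frozen MobileNet backbone for 7 lesion classes.
import tensorflow as tf

base = tf.keras.applications.MobileNet(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                       # keep ImageNet features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(7, activation="softmax"),  # HAM10000's 7 lesion classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.SparseTopKCategoricalAccuracy(
                           k=2, name="top2"),
                       tf.keras.metrics.SparseTopKCategoricalAccuracy(
                           k=3, name="top3")])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # dermoscopy batches
```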

Findings

The experiments have been performed on the HAM10000 dataset, which contains 10,015 dermoscopic images of seven skin lesion classes. A categorical accuracy of 83.69%, a Top-2 accuracy of 91.48% and a Top-3 accuracy of 96.19% have been obtained.

Originality/value

Early detection and treatment of skin cancer can save millions of lives. This work demonstrates that transfer learning can be an effective way to classify skin cancer images, providing adequate performance with less computational complexity.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427
