Search results

1 – 10 of 26
Open Access
Article
Publication date: 29 January 2024

Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin

Abstract

Purpose

Surface roughness has a serious impact on the fatigue strength, wear resistance and service life of mechanical products, and capturing the evolution of surface quality through theoretical modeling takes considerable effort. To predict the surface roughness in milling, this paper aims to construct a neural network based on deep learning and data augmentation.

Design/methodology/approach

This study proposes a method consisting of three steps. Firstly, a machine tool multisource data acquisition platform is established, which combines sensor monitoring with machine tool communication to collect processing signals. Secondly, feature parameters are extracted to reduce interference and improve the model's generalization ability. Thirdly, for different objectives, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.

Findings

The adaptive synthetic sampling (ADASYN) algorithm improves the classification accuracy of the DBN from 80.67% to 94.23%. After the DBN parameters are optimized by Tent-SSA, roughness prediction accuracy improves further still. For the classification model, prediction accuracy rises by another 5.77% on top of the ADASYN optimization. For the regression models, different objective functions, such as root-mean-square error (RMSE) or maximum absolute error (MaxAE), can be set according to production requirements, and the error is reduced by more than 40% compared to the original model.
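
For readers unfamiliar with ADASYN, the minimal sketch below shows how adaptive synthetic oversampling is typically applied before training a classifier. It uses the imbalanced-learn ADASYN class; the feature matrix, the roughness-class labels and the scikit-learn MLP standing in for the paper's DBN are all illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: ADASYN oversampling before roughness-class prediction.
# The MLPClassifier is only a stand-in for the paper's deep belief network.
import numpy as np
from imblearn.over_sampling import ADASYN          # adaptive synthetic sampling
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Placeholder monitoring features and imbalanced roughness-class labels.
X = rng.normal(size=(300, 12))
y = np.array([0] * 240 + [1] * 40 + [2] * 20)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance the training set only; the test set keeps its natural distribution.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_tr, y_tr)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_bal, y_bal)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```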

Originality/value

A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on the acquisition of environmental variables and enhances the model's applicability. Furthermore, alongside the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve its optimization performance.

Details

Journal of Intelligent Manufacturing and Special Equipment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2633-6596

Article
Publication date: 28 November 2023

Huan Wang, Daao Wang, Peng Wang and Zhigeng Fang

Abstract

Purpose

The purpose of this research is to provide a theoretical framework for complex equipment quality risk evaluation. The primary aim of the framework is to enhance the ability to identify risks and improve risk control efficiency during the development phase.

Design/methodology/approach

A novel framework for quality risk evaluation in complex equipment is proposed, which integrates probabilistic hesitant fuzzy set-quality function deployment (PHFS-QFD) and grey clustering. PHFS-QFD is applied to identify the quality risk factors, and grey clustering is used to evaluate quality risks when quality information is scarce during the development stage. The unfolding function of QFD is applied to simplify complex evaluation problems.
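
As a rough illustration of the grey clustering step, the sketch below scores risk factors against grey classes using triangular whitenization weight functions, a common form of grey fixed-weight clustering. The class boundaries, index weights and sample scores are invented for illustration and are not taken from the paper.

```python
# Hedged sketch of grey fixed-weight clustering with triangular
# whitenization weight functions (illustrative parameters only).
import numpy as np

def triangular(x, lo, peak, hi):
    """Degree of membership of x in a grey class peaking at `peak`."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

# Three grey classes per index: low, medium, high risk (assumed boundaries on [0, 10]).
classes = [(-2.0, 1.0, 4.0), (2.0, 5.0, 8.0), (6.0, 9.0, 12.0)]
index_weights = np.array([0.5, 0.3, 0.2])      # assumed importance of each quality index

# Each row: one risk factor scored on three indices (placeholder expert scores).
scores = np.array([[2.0, 3.5, 1.0],
                   [6.5, 7.0, 8.0],
                   [4.5, 5.0, 6.0]])

for i, row in enumerate(scores):
    # Clustering coefficient of factor i for each grey class.
    sigma = [sum(w * triangular(x, *cls) for w, x in zip(index_weights, row))
             for cls in classes]
    print(f"factor {i}: coefficients {np.round(sigma, 3)} -> class {int(np.argmax(sigma))}")
```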

Findings

The methodology presents an innovative approach to quality risk evaluation for complex equipment development. The case analysis demonstrates that this method can efficiently evaluate the quality risks for aircraft development and systematically trace back the risk factors through hierarchical relationships. In comparison to traditional failure mode and effects analysis methods for quality risk assessment, this approach exhibits superior effectiveness and reliability in managing quality risks for complex equipment development.

Originality/value

This study contributes to the field by introducing a novel theoretical framework that combines PHFS-QFD and grey clustering. The integration of these approaches significantly improves the quality risk evaluation process for complex equipment development, overcoming challenges related to data scarcity and simplifying the assessment of intricate systems.

Details

Grey Systems: Theory and Application, vol. 14 no. 1
Type: Research Article
ISSN: 2043-9377

Book part
Publication date: 5 April 2024

Zhichao Wang and Valentin Zelenyuk

Abstract

Estimation of (in)efficiency has become a popular practice, with applications in virtually every sector of the economy over the last few decades. Many different models have been deployed for such endeavors, with Stochastic Frontier Analysis (SFA) models dominating the econometric literature. Among the most popular variants of SFA are Aigner, Lovell, and Schmidt (1977), which launched the literature, and Kumbhakar, Ghosh, and McGuckin (1991), which pioneered the branch that accounts for the (in)efficiency term via so-called environmental variables, or determinants of inefficiency. Focusing on these two prominent approaches in SFA, the goal of this chapter is to understand the production inefficiency of public hospitals in Queensland. In doing so, a recognized yet often overlooked phenomenon emerges: dramatically different results (and consequently very different policy implications) can be derived from different models, even within one paradigm of SFA models. This emphasizes the importance of exploring many alternative models, and scrutinizing their assumptions, before drawing policy implications, especially when such implications may substantially affect people’s lives, as is the case in the hospital sector.
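
For orientation, the two SFA specifications cited here are commonly written as below; the normal/half-normal and truncated-normal forms are the textbook defaults and are shown as an assumption rather than the chapter's exact estimation setup.

```latex
% Aigner, Lovell and Schmidt (1977): composed-error frontier
\[
  y_i = x_i'\beta + v_i - u_i, \qquad
  v_i \sim N(0,\sigma_v^2), \qquad
  u_i \sim N^{+}(0,\sigma_u^2).
\]
% Kumbhakar, Ghosh and McGuckin (1991): environmental variables z_i
% (determinants of inefficiency) shift the inefficiency term
\[
  y_i = x_i'\beta + v_i - u_i, \qquad
  u_i \sim N^{+}(z_i'\delta,\ \sigma_u^2),
\]
% so inefficiency and its determinants are estimated jointly.
```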

Article
Publication date: 29 January 2024

Juan Manuel Aristizábal, Edwin Tarapuez and Carlos Alberto Astudillo

Abstract

Purpose

This study aims to analyze the entrepreneurial intention (EI) of Colombian researchers using machine learning (ML) techniques, considering their academic activity, contexts and social norms (SN).

Design/methodology/approach

Unsupervised classification techniques were applied, including principal component analysis and hierarchical clustering with the Ward method, together with a logistic model to evaluate the resulting classification. This was done to group researchers according to their characteristics and EI.
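
As a rough sketch of this pipeline, the snippet below chains principal component analysis, Ward-linkage hierarchical clustering and a logistic model with scikit-learn; the survey matrix and EI labels are random placeholders, not the authors' data.

```python
# Hedged sketch: PCA -> Ward hierarchical clustering -> logistic check of the clusters.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import AgglomerativeClustering
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 15))            # placeholder researcher characteristics
ei = rng.integers(0, 2, size=200)         # placeholder entrepreneurial-intention flag

# 1) Reduce the survey variables to a few principal components.
Z = PCA(n_components=4).fit_transform(StandardScaler().fit_transform(X))

# 2) Group researchers with Ward-linkage hierarchical clustering.
groups = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(Z)

# 3) Evaluate the grouping: can cluster membership predict EI at all?
onehot = np.eye(3)[groups]                # one-hot cluster membership
logit = LogisticRegression(max_iter=1000).fit(onehot, ei)
print("in-sample accuracy of the cluster-based logistic model:", logit.score(onehot, ei))
```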

Findings

The methodology used allowed the identification of three groups of academics with distinct characteristics, of which two showed a high presence of EI. The results indicate that EI is influenced by the connection with the private sector (consulting, intellectual property and applied research) and by the lack of institutional support from universities. Regarding SN, only the preference for entrepreneurial activity over being an employee and the social appreciation of entrepreneurial dedication were identified as predictors of EI.

Originality/value

The use of ML techniques to study the EI of researchers is uncommon. This study highlights the ability of the methodology used to identify differences between two groups of academics with similar characteristics but different levels of EI. One group was identified that, despite rejecting values associated with entrepreneurs, has a high predisposition to develop a career as an entrepreneur. This provides valuable information for designing policies that promote EI among Colombian researchers.

Details

Journal of Entrepreneurship in Emerging Economies, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2053-4604

Article
Publication date: 4 April 2024

Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu

Abstract

Purpose

This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.

Design/methodology/approach

The method involves an alternating two-step framework comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences, covering both global and local features, are coupled with deterministic annealing to address the non-convexity of the registration problem. For transformation updating, an expectation-maximization iteration scheme is introduced to iteratively refine correspondence and transformation estimates until convergence.
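
The correspondence step can be pictured with a minimal Gaussian-mixture E-step under deterministic annealing. In the sketch below a plain spatial squared distance stands in for the paper's integrated feature divergence, and the annealing schedule and outlier weight are illustrative assumptions.

```python
# Hedged sketch: annealed GMM posteriors (E-step) for point correspondences.
# Spatial squared distance stands in for the paper's integrated feature divergence.
import numpy as np

def correspondence_posteriors(source, target, temperature, outlier_weight=0.1):
    """P[m, n] = posterior that target point n is generated by source component m."""
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)   # (m, n) distances
    affinity = np.exp(-d2 / (2.0 * temperature))
    # A uniform outlier component keeps the posteriors well defined for stray points.
    denom = affinity.sum(axis=0, keepdims=True) + outlier_weight
    return affinity / denom

rng = np.random.default_rng(0)
src = rng.normal(size=(50, 2))
tgt = src + rng.normal(scale=0.05, size=(50, 2))        # slightly deformed copy

# Deterministic annealing: start hot (soft, near-uniform assignments)
# and cool down so the correspondences gradually sharpen.
for t in (1.0, 0.3, 0.1, 0.03):
    P = correspondence_posteriors(src, tgt, temperature=t)
    print(f"temperature {t:>4}: mean max-posterior {P.max(axis=0).mean():.3f}")
```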

Findings

The experiments confirm that the proposed registration approach exhibits remarkable robustness to deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. The method is applied to the stabilizing and securing of intermodal containers loaded on ships. The results demonstrate that the proposed registration framework exhibits excellent adaptability to real-scan point clouds and achieves comparatively superior alignments in a shorter time.

Originality/value

The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Book part
Publication date: 23 April 2024

Emerson Norabuena-Figueroa, Roger Rurush-Asencio, K. P. Jaheer Mukthar, Jose Sifuentes-Stratti and Elia Ramírez-Asís

Abstract

The development of information technologies has led to a considerable transformation of human resource management, from conventional personnel management to its modern form. Data mining technology, which has been widely used in many applications, including web-based ones, includes clustering algorithms as a key component. Web intelligence is a recent academic field that calls for sophisticated analytics and machine learning techniques to facilitate information discovery, particularly on the web. Human resource data gathered from the web are typically enormous, highly complex, dynamic, and unstructured, and traditional clustering methods are ineffective on such data and need to be upgraded. To address this difficulty, standard clustering algorithms are enhanced and extended with optimization capabilities drawn from swarm intelligence, a subset of nature-inspired computing. We collect the initial raw human resource data and preprocess them through data cleaning, data normalization and data integration. The proposed K-C-means-data driven cuckoo bat optimization algorithm (KCM-DCBOA) is used for clustering of the human resource data. Feature extraction is done using principal component analysis (PCA), and classification of the human resource data is done using support vector machine (SVM). Other approaches from the literature were contrasted with the suggested approach. According to the experimental findings, the suggested technique has extremely promising features in terms of clustering quality and execution time.
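
A hedged sketch of the overall pipeline is given below: scaling, clustering, PCA feature extraction and SVM classification with scikit-learn. Ordinary k-means stands in for the chapter's KCM-DCBOA, the placeholder data are random, and feeding the cluster labels to the later stages is one possible interpretation of the described workflow.

```python
# Hedged sketch of the pipeline: scale -> cluster -> PCA -> SVM.
# Ordinary KMeans replaces the chapter's cuckoo-bat-optimized clustering (KCM-DCBOA).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 20))                 # placeholder HR records (numeric features)
y = rng.integers(0, 2, size=400)               # placeholder class labels

X_scaled = StandardScaler().fit_transform(X)   # stand-in for cleaning/normalization

# Clustering step: one-hot cluster labels become extra engineered features.
clusters = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_scaled)
X_aug = np.column_stack([X_scaled, np.eye(4)[clusters]])

# PCA feature extraction, then SVM classification.
features = PCA(n_components=8).fit_transform(X_aug)
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), features, y, cv=5).mean())
```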

Details

Technological Innovations for Business, Education and Sustainability
Type: Book
ISBN: 978-1-83753-106-6

Article
Publication date: 3 August 2023

Yandong Hou, Zhengbo Wu, Xinghua Ren, Kaiwen Liu and Zhengquan Chen

Abstract

Purpose

High-resolution remote sensing images possess a wealth of semantic information. However, these images often contain objects of different sizes and distributions, which makes the semantic segmentation task challenging. In this paper, a bidirectional feature fusion network (BFFNet) is designed to address this challenge, aiming to improve the accurate recognition of surface objects so that special features can be classified effectively.

Design/methodology/approach

There are two crucial elements in BFFNet. Firstly, the mean-weighted module (MWM) is used to obtain the key features in the main network. Secondly, the proposed polarization-enhanced branch network performs feature extraction simultaneously with the main network to obtain different feature information. The authors then fuse these two features in both directions while applying a cross-entropy loss function to monitor the network training process. Finally, BFFNet is validated on two publicly available datasets, Potsdam and Vaihingen.
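
The fusion idea can be pictured with a small PyTorch module: channel weights derived from the mean of each feature map modulate the other stream, so the two features are fused in both directions. This is a loose interpretation of the abstract, not the authors' BFFNet; the weighting form and layer sizes are assumptions.

```python
# Hedged sketch: channel mean-weighting and bidirectional fusion of two feature maps.
# A loose interpretation of the abstract, not the authors' BFFNet architecture.
import torch
import torch.nn as nn

class MeanWeightedFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # Assumed form of a mean-weighted module: channel weights from the spatial mean.
        self.weight_main = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())
        self.weight_branch = nn.Sequential(nn.Linear(channels, channels), nn.Sigmoid())

    def forward(self, main: torch.Tensor, branch: torch.Tensor) -> torch.Tensor:
        # main, branch: (batch, channels, height, width) feature maps of equal shape.
        w_main = self.weight_main(main.mean(dim=(2, 3)))[:, :, None, None]
        w_branch = self.weight_branch(branch.mean(dim=(2, 3)))[:, :, None, None]
        # Bidirectional fusion: each stream is modulated by the other's channel weights.
        return main * w_branch + branch * w_main

fusion = MeanWeightedFusion(channels=64)
out = fusion(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)   # torch.Size([2, 64, 32, 32])
```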

Findings

In this paper, a quantitative analysis is used to show that the proposed network outperforms other mainstream segmentation networks by 2–6% on the two datasets. Complete ablation experiments are also conducted to demonstrate the effectiveness of the elements in the network. In summary, BFFNet has proven effective in achieving accurate identification of small objects and in reducing the effect of shadows on the segmentation process.

Originality/value

The originality of the paper is the proposal of a BFFNet based on multi-scale and multi-attention strategies to improve the ability to accurately segment high-resolution and complex remote sensing images, especially for small objects and shadow-obscured objects.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 17 no. 1
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 22 March 2024

Mohd Mustaqeem, Suhel Mustajab and Mahfooz Alam

Abstract

Purpose

Software defect prediction (SDP) is a critical aspect of software quality assurance, aiming to identify and manage potential defects in software systems. In this paper, we propose a novel hybrid approach that combines Gray Wolf Optimization with Feature Selection (GWOFS) and a multilayer perceptron (MLP) for SDP. The GWOFS-MLP hybrid model is designed to optimize feature selection, ultimately enhancing the accuracy and efficiency of SDP. Gray Wolf Optimization, inspired by the social hierarchy and hunting behavior of gray wolves, is employed to select a subset of relevant features from an extensive pool of potential predictors. This study investigates the key challenges that traditional SDP approaches encounter and proposes promising solutions to overcome the time complexity and curse-of-dimensionality problems.

Design/methodology/approach

The integration of GWOFS and MLP results in a robust hybrid model that can adapt to diverse software datasets. This feature selection process harnesses the cooperative hunting behavior of wolves, allowing for the exploration of critical feature combinations. The selected features are then fed into an MLP, a powerful artificial neural network (ANN) known for its capability to learn intricate patterns within software metrics. MLP serves as the predictive engine, utilizing the curated feature set to model and classify software defects accurately.
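
A compressed, hedged sketch of this idea is shown below: a small grey-wolf-style search over binary feature masks, with each candidate subset scored by a cross-validated MLP. The encoding, cooling schedule, fitness function and dataset are illustrative choices, not the authors' GWOFS-MLP settings.

```python
# Hedged sketch: grey-wolf-style search over feature subsets, scored by an MLP.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)

def subset_score(mask):
    """Cross-validated accuracy of an MLP on the selected feature subset."""
    if mask.sum() == 0:
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask.astype(bool)], y, cv=3).mean()

n_wolves, n_iters, dim = 6, 8, X.shape[1]
positions = rng.random((n_wolves, dim))                # continuous wolf positions in [0, 1]

for it in range(n_iters):
    scores = np.array([subset_score((p > 0.5).astype(int)) for p in positions])
    leaders = positions[np.argsort(scores)[::-1][:3]]  # alpha, beta, delta wolves
    a = 2.0 * (1 - it / n_iters)                       # exploration factor shrinks toward 0
    candidates = []
    for leader in leaders:
        A = a * (2 * rng.random((n_wolves, dim)) - 1)
        C = 2 * rng.random((n_wolves, dim))
        candidates.append(leader - A * np.abs(C * leader - positions))
    positions = np.clip(np.mean(candidates, axis=0), 0.0, 1.0)

final_masks = [(p > 0.5).astype(int) for p in positions]
best = final_masks[int(np.argmax([subset_score(m) for m in final_masks]))]
print("selected feature indices:", np.flatnonzero(best))
```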

Findings

The performance evaluation of the GWOFS-MLP hybrid model on a real-world software defect dataset demonstrates its effectiveness. The model achieves a remarkable training accuracy of 97.69% and a testing accuracy of 97.99%. Additionally, the receiver operating characteristic area under the curve (ROC-AUC) score of 0.89 highlights the model’s ability to discriminate between defective and defect-free software components.

Originality/value

Experimental implementations using machine learning-based techniques with feature reduction are conducted to validate the proposed solutions. The goal is to enhance SDP’s accuracy, relevance and efficiency, ultimately improving software quality assurance processes. The confusion matrix further illustrates the model’s performance, with only a small number of false positives and false negatives.

Details

International Journal of Intelligent Computing and Cybernetics, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 20 February 2024

Saba Sareminia, Zahra Ghayoumian and Fatemeh Haghighat

Abstract

Purpose

The textile industry holds immense significance in the economy of any nation, particularly in the production of synthetic yarn and fabrics. Consequently, obtaining high-quality products at a reduced cost has become a significant concern for countries. The primary objective of this research is to leverage data mining and data intelligence techniques to enhance and refine the production of texturized yarn by developing an intelligent operating guide that enables production process parameters to be adjusted according to the specifications of the raw materials.

Design/methodology/approach

This research undertook a systematic literature review to explore the various factors that influence yarn quality. Data mining techniques, including deep learning, K-nearest neighbor (KNN), decision tree, Naïve Bayes, support vector machine and VOTE, were employed to identify the most crucial factors. Subsequently, an executive, dynamic guide was developed using data intelligence tools such as Power BI (Business Intelligence). The proposed model was then applied to the production process of a textile company in Iran from 2020 to 2021.
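
As a hedged illustration of the model-comparison step, the snippet below trains several of the listed learners and a soft-voting ensemble with scikit-learn; the synthetic dataset stands in for the company's process parameters and quality target, which are not reproduced here.

```python
# Hedged sketch: comparing KNN, decision tree, naive Bayes, SVM and a voting ensemble.
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

# Placeholder stand-in for process parameters (draw ratio, D/Y, temperature, ...)
# and a binarized quality target (acceptable tenacity/elongation or not).
X, y = make_classification(n_samples=500, n_features=10, n_informative=5, random_state=0)

models = {
    "knn": KNeighborsClassifier(),
    "tree": DecisionTreeClassifier(random_state=0),
    "nb": GaussianNB(),
    "svm": SVC(probability=True, random_state=0),
}
models["vote"] = VotingClassifier(list(models.items()), voting="soft")

for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean().round(3))
```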

Findings

The results of this research highlight that the production process parameters exert a more significant influence on texturized yarn quality than the characteristics of raw materials. The executive production guide was designed by selecting the optimal combination of production process parameters, namely draw ratio, D/Y and primary temperature, with the incorporation of limiting indexes derived from the raw material characteristics to predict tenacity and elongation.

Originality/value

This paper contributes by introducing a novel method for creating a dynamic guide. An intelligent and dynamic guide for tenacity and elongation in texturized yarn production was proposed, boasting an approximate accuracy rate of 80%. This developed guide is dynamic and seamlessly integrated with the production database. It undergoes regular updates every three months, incorporating the selected features of the process and raw materials, their respective thresholds, and the predicted levels of elongation and tenacity.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 1 February 2024

Ismael Gómez-Talal, Lydia González-Serrano, José Luis Rojo-Álvarez and Pilar Talón-Ballestero

Abstract

Purpose

This study aims to address the global food waste problem in restaurants by analyzing customer sales information provided by restaurant tickets to gain valuable insights into directing sales of perishable products and optimizing product purchases according to customer demand.

Design/methodology/approach

A system based on unsupervised machine learning (ML) data models was created to provide a simple and interpretable management tool. This system performs analysis based on two elements: first, it consolidates and visualizes mutual and nontrivial relationships between information features extracted from tickets using multicomponent analysis, bootstrap resampling and ML domain description. Second, it presents statistically relevant relationships in color-coded tables that provide food waste-related recommendations to restaurant managers.
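
One way to picture the ticket analysis is a bootstrap check of product-month associations: resample the tickets, recompute a product's share of sales in a given month and compare the resulting interval with the overall baseline. The sketch below uses pandas on an invented toy ticket table; the column names, products and threshold logic are assumptions, not the authors' system.

```python
# Hedged sketch: bootstrap check of whether a product sells unusually often in a month.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Toy ticket lines: one row per product sold, with the month of the ticket.
tickets = pd.DataFrame({
    "product": rng.choice(["paella", "gazpacho", "tarta"], size=2000, p=[0.5, 0.3, 0.2]),
    "month": rng.choice(["june", "july", "august"], size=2000),
})
# Make one relationship real for the demo: extra gazpacho sales in August.
tickets = pd.concat([tickets, pd.DataFrame({"product": ["gazpacho"] * 150,
                                            "month": ["august"] * 150})], ignore_index=True)

def share(df, product, month):
    """Share of a product among all lines sold in the given month."""
    in_month = df[df["month"] == month]
    return (in_month["product"] == product).mean()

baseline = (tickets["product"] == "gazpacho").mean()
boot = [share(tickets.sample(frac=1.0, replace=True, random_state=i), "gazpacho", "august")
        for i in range(200)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"august share of gazpacho: [{lo:.3f}, {hi:.3f}] vs overall {baseline:.3f}")
```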

Findings

The study identified relationships between products and customer sales in specific months. Other ticket elements were also related, such as products with days, hours or functional areas, and products with other products (cross-selling). Big data (BD) technology helped analyze restaurant tickets and obtain information on product sales behavior.

Research limitations/implications

This study addresses food waste in restaurants using BD and unsupervised ML models. Despite limitations in ticket information and lack of product detail, it opens up research opportunities in relationship analysis, cross-selling, productivity and deep learning applications.

Originality/value

The value and originality of this work lie in the application of BD and unsupervised ML technologies to analyze restaurant tickets and obtain information on product sales behavior. Better sales projection can adjust product purchases to customer demand, reducing food waste and optimizing profits.

1 – 10 of 26