Search results

1 – 10 of over 2000
Article
Publication date: 25 December 2023

Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime…

Abstract

Purpose

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.

Design/methodology/approach

A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
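
As a rough sketch of the feature-extraction and classification steps described above, the snippet below computes per-level DWT statistics from synthetic pressure signals and feeds them to a classifier. Scikit-learn has no native KFDA, so it is approximated here by kernel PCA followed by linear discriminant analysis; the wavelet, decomposition level and data are placeholders rather than the paper's settings.

```python
# Sketch of DWT-based feature extraction with a KFDA-like classifier.
# KFDA is approximated by KernelPCA + LDA; data are synthetic placeholders.
import numpy as np
import pywt
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features (min, max, mean, std) of each DWT decomposition level."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats.extend([c.min(), c.max(), c.mean(), c.std()])
    return np.array(feats)

# X_raw: (n_signals, n_samples) dynamic pressure signals; y: flow-regime labels (hypothetical)
X_raw = np.random.randn(60, 1024)
y = np.repeat([0, 1, 2], 20)

X = np.vstack([dwt_features(s) for s in X_raw])
clf = make_pipeline(KernelPCA(n_components=2, kernel="rbf"),
                    LinearDiscriminantAnalysis())
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```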

Findings

The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.

Practical implications

This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.

Originality/value

This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 8
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 29 September 2021

Swetha Parvatha Reddy Chandrasekhara, Mohan G. Kabadi and Srivinay

This study has mainly aimed to compare and contrast two completely different image processing algorithms that are very adaptive for detecting prostate cancer using wearable…

Abstract

Purpose

This study has mainly aimed to compare and contrast two completely different image processing algorithms that are highly adaptive for detecting prostate cancer using wearable Internet of Things (IoT) devices. Cancer is still considered one of the most dreaded diseases and has continued to afflict mankind over the past few decades. According to the Indian Council of Medical Research, India alone registers about 11.5 lakh cancer-related cases every year, and close to 8 lakh people die of cancer-related issues each year. Earlier, the incidence of prostate cancer was commonly seen in men aged above 60 years, but a recent study has revealed that this type of cancer has been on the rise even in men between 35 and 60 years of age. These findings make it even more necessary to prioritize research on diagnosing prostate cancer at an early stage, so that patients can be cured and lead a normal life.

Design/methodology/approach

The research focuses on two types of feature extraction algorithms, namely, scale invariant feature transform (SIFT) and gray level co-occurrence matrix (GLCM), which are commonly used in medical image processing, in an attempt to identify and close the gap in the detection of prostate cancer in medical IoT. The features obtained by these two strategies are then classified separately using a machine learning-based classification model, the multi-class support vector machine (SVM). Owing to their better tissue discrimination and contrast resolution, magnetic resonance imaging images have been considered for this study. The classification results obtained for both the SIFT and GLCM methods are then compared to check which feature extraction strategy provides the most accurate results for diagnosing prostate cancer.
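
A minimal sketch of the GLCM branch of this pipeline is given below, assuming grayscale MR slices as 2-D uint8 arrays and scikit-image ≥ 0.19 (graycomatrix/graycoprops); the SIFT branch and the authors' exact training protocol are not reproduced.

```python
# GLCM texture features fed to a multi-class SVM; images and labels are synthetic stand-ins.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(image, distances=(1,), angles=(0, np.pi / 4, np.pi / 2)):
    """Texture descriptors from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(image, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# images: 2-D uint8 arrays, labels: class indices (hypothetical data)
images = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(60)]
labels = np.repeat([0, 1, 2], 20)

X = np.vstack([glcm_features(img) for img in images])
svm = SVC(kernel="rbf", decision_function_shape="ovr")  # one-vs-rest multi-class SVM
svm.fit(X, labels)
```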

Findings

The potential of both models has been evaluated in terms of three aspects, namely, accuracy, sensitivity and specificity. Each model’s results were checked against varied ratios of training and test data. It was found that the SIFT-multiclass SVM model achieved the highest performance, with 99.9451% accuracy, 100% sensitivity and 99% specificity at a 40:60 ratio of training to testing data.

Originality/value

The SIFT-multi SVM versus GLCM-multi SVM comparison has been introduced for the first time to determine the best model for the accurate diagnosis of prostate cancer. The performance of the classification for each of the feature extraction strategies is enumerated in terms of accuracy, sensitivity and specificity.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 21 August 2023

Zengxin Kang, Jing Cui and Zhongyi Chu

Accurate segmentation of artificial assembly action is the basis of autonomous industrial assembly robots. This paper aims to study the precise segmentation method of manual…

Abstract

Purpose

Accurate segmentation of artificial assembly action is the basis of autonomous industrial assembly robots. This paper aims to study the precise segmentation method of manual assembly action.

Design/methodology/approach

In this paper, a temporal-spatial-contact features segmentation system (TSCFSS) for manual assembly action recognition and segmentation is proposed. The system consists of three stages: spatial feature extraction, contact force feature extraction and action segmentation in the temporal dimension. In the spatial feature extraction stage, a vectors assembly graph (VAG) is proposed to precisely describe the motion state of the objects and the relative positions between objects in an RGB-D video frame. Graph networks are then used to extract the spatial features from the VAG. In the contact feature extraction stage, a sliding window is used to extract the contact force features between hands and tools/parts corresponding to each video frame. Finally, in the action segmentation stage, the spatial and contact features are concatenated as the input of temporal convolution networks for action recognition and segmentation. The experiments have been conducted on a new manual assembly data set containing RGB-D video and contact force data.
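
The sketch below illustrates only the final temporal stage, under stated assumptions: per-frame spatial and contact-force features are taken as already extracted and are concatenated before a dilated temporal-convolution head produces per-frame action logits. Layer sizes are illustrative, not the TSCFSS configuration.

```python
# Dilated temporal-convolution head over concatenated spatial + contact features.
import torch
import torch.nn as nn

class TemporalConvSegmenter(nn.Module):
    def __init__(self, feat_dim, n_actions, hidden=64, n_layers=4):
        super().__init__()
        layers, in_ch = [], feat_dim
        for i in range(n_layers):
            d = 2 ** i  # dilation widens the temporal receptive field
            layers += [nn.Conv1d(in_ch, hidden, kernel_size=3, padding=d, dilation=d),
                       nn.ReLU()]
            in_ch = hidden
        self.backbone = nn.Sequential(*layers)
        self.head = nn.Conv1d(hidden, n_actions, kernel_size=1)  # per-frame logits

    def forward(self, spatial_feats, contact_feats):
        # both inputs: (batch, time, dim); concatenate along the feature axis
        x = torch.cat([spatial_feats, contact_feats], dim=-1).transpose(1, 2)
        return self.head(self.backbone(x)).transpose(1, 2)  # (batch, time, n_actions)

model = TemporalConvSegmenter(feat_dim=96 + 32, n_actions=11)  # 11 assembly actions
logits = model(torch.randn(2, 200, 96), torch.randn(2, 200, 32))
```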

Findings

In the experiments, the TSCFSS was used to recognize 11 kinds of assembly actions in demonstrations and outperformed the comparative action identification methods.

Originality/value

A novel system for the precise segmentation of manual assembly actions, which fuses temporal features, spatial features and contact force features, has been proposed. The VAG, a symbolic knowledge representation for describing the assembly scene state, is proposed, making action segmentation more convenient. A data set with RGB-D video and contact force is specifically tailored for researching manual assembly actions.

Details

Robotic Intelligence and Automation, vol. 43 no. 5
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 30 April 2024

Baoxu Tu, Yuanfei Zhang, Kang Min, Fenglei Ni and Minghe Jin

This paper aims to estimate contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image. The authors used three feature extraction…

Abstract

Purpose

This paper aims to estimate the contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image.

Design/methodology/approach

The authors used three feature extraction methods: handcrafted features, convolutional features and autoencoder features. Subsequently, these features were mapped to contact locations through a contact location regression network. Finally, the network performance was evaluated using spherical fittings of three different radii to further determine the optimal feature extraction method.
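
As an illustration of the convolutional variant, the sketch below maps a tactile image to a contact location with a small CNN, a batch-normalization layer after feature extraction (mirroring the observation in the findings) and a regression head; the architecture and input size are assumptions, not the authors' network.

```python
# Toy contact-location regression from a tactile "image".
import torch
import torch.nn as nn

class ContactLocationNet(nn.Module):
    def __init__(self, out_dim=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.bn = nn.BatchNorm1d(32 * 4 * 4)   # normalization after feature extraction
        self.regressor = nn.Linear(32 * 4 * 4, out_dim)

    def forward(self, tactile_image):
        f = self.features(tactile_image).flatten(1)
        return self.regressor(self.bn(f))       # predicted (x, y, z) contact location

net = ContactLocationNet()
pred = net(torch.randn(8, 1, 16, 16))  # batch of 8 tactile frames (hypothetical array size)
```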

Findings

This research indicates that data collected by probes can be used for contact localization. Introducing a batch normalization layer after the feature extraction stage significantly enhances the model’s generalization performance. Through qualitative and quantitative analyses, the authors conclude that convolutional methods can more accurately estimate contact locations.

Originality/value

The paper provides both qualitative and quantitative analyses of the performance of three contact localization methods across different datasets. To address the challenge of obtaining accurate contact locations in quantitative analysis, an indirect measurement metric is proposed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 31 July 2023

Xinzhi Cao, Yinsai Guo, Wenbin Yang, Xiangfeng Luo and Shaorong Xie

Unsupervised domain adaptation object detection not only mitigates the poor model performance resulting from the domain gap, but also has the ability to apply knowledge trained on a…

Abstract

Purpose

Unsupervised domain adaptation object detection not only mitigates the poor model performance resulting from the domain gap, but also makes it possible to apply knowledge trained on one domain to a distinct domain. However, aligning the whole feature map may confuse object and background information, making it challenging to extract discriminative features. This paper aims to propose an improved approach, called intrinsic feature extraction domain adaptation (IFEDA), to extract discriminative features effectively.

Design/methodology/approach

IFEDA consists of the intrinsic feature extraction (IFE) module and an object consistency constraint (OCC). The IFE module, designed at the instance level, mainly addresses the difficulty of extracting discriminative object features; specifically, more attention can be paid to the discriminative regions of the objects. Meanwhile, the OCC is deployed to determine whether category predictions in the target domain correspond with those in the source domain.
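
The snippet below is only a loose illustration of what a category-consistency constraint between domains could look like: per-instance class distributions from the source and target branches are pushed toward agreement via a symmetric KL divergence. IFEDA's actual OCC formulation may differ.

```python
# Illustrative symmetric-KL consistency loss between source and target class predictions.
import torch
import torch.nn.functional as F

def object_consistency_loss(source_logits, target_logits):
    """Symmetric KL divergence between per-instance class distributions."""
    p = F.log_softmax(source_logits, dim=-1)
    q = F.log_softmax(target_logits, dim=-1)
    kl_pq = F.kl_div(q, p.exp(), reduction="batchmean")  # KL(source || target)
    kl_qp = F.kl_div(p, q.exp(), reduction="batchmean")  # KL(target || source)
    return 0.5 * (kl_pq + kl_qp)

loss = object_consistency_loss(torch.randn(32, 9), torch.randn(32, 9))  # 9 classes, hypothetical
```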

Findings

Experimental results demonstrate the validity of the approach, which achieves good outcomes on challenging data sets.

Research limitations/implications

A limitation of this research is that only one target domain is applied; model generalization may be affected when data sets are insufficient or unseen domains appear.

Originality/value

This paper addresses the issue of critical information defects by tackling the difficulty of extracting discriminative features, and the categories in both domains are compelled to be consistent for better object detection.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 21 May 2024

Joseph Vivek, Naveen Venkatesh S., Tapan K. Mahanta, Sugumaran V., M. Amarnath, Sangharatna M. Ramteke and Max Marian

This study aims to explore the integration of machine learning (ML) in tribology to optimize lubrication interval decisions, aiming to enhance equipment lifespan and operational…

Abstract

Purpose

This study aims to explore the integration of machine learning (ML) in tribology to optimize lubrication interval decisions, with the goal of enhancing equipment lifespan and operational efficiency through wear image analysis.

Design/methodology/approach

Using a data set of scanning electron microscopy images from an internal combustion engine, the authors used AlexNet as the feature extraction algorithm and the J48 decision tree algorithm for feature selection, and compared 15 ML classifiers from the lazy-, Bayes- and tree-based families.
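
A hedged sketch of this pipeline is shown below: a pretrained AlexNet (torchvision ≥ 0.13 assumed) supplies 4096-dimensional deep features on which a k-nearest-neighbor classifier can be fitted; the J48-based feature selection and the other classifier families compared in the paper are omitted, and the image-loading helpers are hypothetical.

```python
# AlexNet deep features + k-nearest-neighbor classification (image loading left out).
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.neighbors import KNeighborsClassifier

alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
alexnet.classifier = alexnet.classifier[:-1]   # drop the final class layer -> 4096-d features
alexnet.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def extract_features(pil_images):
    """Stack RGB PIL images and return AlexNet penultimate-layer features."""
    with torch.no_grad():
        batch = torch.stack([preprocess(img) for img in pil_images])
        return alexnet(batch).numpy()

# train_images / train_labels would be loaded elsewhere (hypothetical names):
# X_train = extract_features(train_images)
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, train_labels)
```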

Findings

Of the analyzed ML classifiers, the instance-based k-nearest neighbor emerged as the optimal algorithm, with a 95% classification accuracy on the testing data. This surpassed the accuracy of individually trained convolutional neural networks (CNNs) and closely approached that of ensemble deep learning (DL) techniques.

Originality/value

The proposed approach simplifies the process, enhances efficiency and improves interpretability compared to more complex CNNs and ensemble DL techniques.

Details

Industrial Lubrication and Tribology, vol. 76 no. 5
Type: Research Article
ISSN: 0036-8792

Article
Publication date: 10 April 2024

Qihua Ma, Qilin Li, Wenchao Wang and Meng Zhu

This study aims to achieve superior localization and mapping performance in point cloud degradation scenarios through the effective removal of dynamic obstacles. With the…

Abstract

Purpose

This study aims to achieve superior localization and mapping performance in point cloud degradation scenarios through the effective removal of dynamic obstacles. With the continuous development of various technologies for autonomous vehicles, LiDAR-based simultaneous localization and mapping (SLAM) systems are becoming increasingly important. However, in SLAM systems, effectively addressing the challenges of point cloud degradation scenarios is essential for accurate localization and mapping, with dynamic obstacle removal being a key component.

Design/methodology/approach

This paper proposes a method that combines adaptive feature extraction and loop closure detection algorithms to address this challenge. In the SLAM system, the ground point cloud and the non-ground point cloud are separated to reduce the impact of noise. Based on the cylindrical projection image of the point cloud, intensity features are adaptively extracted, the degradation direction is determined by a degradation factor, and the intensity features are matched against the map to correct the degraded pose. Moreover, through the difference in grid distribution of the point clouds between two frames in the loop-closure process, dynamic point clouds are identified and removed, and the map is updated.
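
The cylindrical projection underlying the intensity-feature step can be sketched as below, assuming an (N, 4) array of x, y, z and intensity values; the image resolution, field of view and the adaptive extraction / degradation-factor logic are simplifications rather than the paper's implementation.

```python
# Project a LiDAR point cloud onto a cylindrical intensity image.
import numpy as np

def cylindrical_projection(points, h=32, w=1024, fov_up=15.0, fov_down=-15.0):
    x, y, z, intensity = points.T
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-9
    yaw = np.arctan2(y, x)                                   # azimuth angle
    pitch = np.arcsin(z / r)                                 # elevation angle
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w    # column index
    fov = np.radians(fov_up - fov_down)
    v = ((np.radians(fov_up) - pitch) / fov * h).clip(0, h - 1).astype(int)  # row index
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = intensity                                    # per-pixel intensity image
    return img

intensity_image = cylindrical_projection(np.random.rand(5000, 4))  # synthetic points
```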

Findings

Experimental results show that the method has good performance. The absolute displacement accuracy of the laser odometer is improved by 27.1%, the relative displacement accuracy is improved by 33.5% and the relative angle accuracy is improved by 23.8% after using the adaptive intensity feature extraction method. The position error is reduced by 30% after removing the dynamic target.

Originality/value

Compared with the LiDAR odometry and mapping algorithm, the method has greater robustness and accuracy in mapping and localization.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 September 2024

Zhanglin Peng, Tianci Yin, Xuhui Zhu, Xiaonong Lu and Xiaoyu Li

To predict the price of battery-grade lithium carbonate accurately and provide proper guidance to investors, a method called MFTBGAM is proposed in this study. This method…

Abstract

Purpose

To predict the price of battery-grade lithium carbonate accurately and provide proper guidance to investors, a method called MFTBGAM is proposed in this study. This method integrates textual and numerical information using TCN-BiGRU–Attention.

Design/methodology/approach

The Word2Vec model is initially employed to process the gathered textual data concerning battery-grade lithium carbonate. Subsequently, a dual-channel text-numerical extraction model, integrating TCN and BiGRU, is constructed to extract textual and numerical features separately. Following this, the attention mechanism is applied to extract fused features from the textual and numerical data. Finally, the market price prediction results for battery-grade lithium carbonate are computed and output by the fully connected layer.
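
A highly simplified sketch of the dual-channel idea is given below: a temporal-convolution branch for embedded text, a BiGRU branch for the numerical series, attention-weighted fusion and a fully connected output. Dimensions and the attention form are assumptions and do not reproduce MFTBGAM.

```python
# Toy "text + numerical" fusion model: TCN branch, BiGRU branch, attention pooling.
import torch
import torch.nn as nn

class TextNumFusion(nn.Module):
    def __init__(self, emb_dim=100, num_dim=8, hidden=64):
        super().__init__()
        self.text_tcn = nn.Sequential(
            nn.Conv1d(emb_dim, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=2, dilation=2), nn.ReLU())
        self.num_gru = nn.GRU(num_dim, hidden // 2, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(hidden, 1)
        self.out = nn.Linear(hidden, 1)             # predicted price

    def forward(self, text_emb, num_seq):
        # text_emb: (B, T1, emb_dim) Word2Vec vectors; num_seq: (B, T2, num_dim)
        t = self.text_tcn(text_emb.transpose(1, 2)).transpose(1, 2)
        n, _ = self.num_gru(num_seq)
        feats = torch.cat([t, n], dim=1)            # fuse the two channels along time
        w = torch.softmax(self.attn(feats), dim=1)  # attention weights over all steps
        fused = (w * feats).sum(dim=1)
        return self.out(fused)

model = TextNumFusion()
price = model(torch.randn(4, 50, 100), torch.randn(4, 30, 8))  # placeholder inputs
```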

Findings

Experiments in this study are carried out using datasets consisting of news and investor commentary. The findings reveal that the MFTBGAM model exhibits superior performance compared to alternative models, showing its efficacy in precisely forecasting the future market price of battery-grade lithium carbonate.

Research limitations/implications

The dataset analyzed in this study spans from 2020 to 2023, and thus, the forecast results are specifically relevant to this timeframe. Altering the sample data would necessitate repetition of the experimental process, resulting in different outcomes. Furthermore, recognizing that raw data might include noise and irrelevant information, future endeavors will explore efficient data preprocessing techniques to mitigate such issues, thereby enhancing the model’s predictive capabilities in long-term forecasting tasks.

Social implications

The price prediction model serves as a valuable tool for investors in the battery-grade lithium carbonate industry, facilitating informed investment decisions. By using the results of price prediction, investors can discern opportune moments for investment. Moreover, this study utilizes two distinct types of text information – news and investor comments – as independent sources of textual data input. This approach provides investors with a more precise and comprehensive understanding of market dynamics.

Originality/value

We propose a novel price prediction method based on TCN-BiGRU Attention for “text-numerical” information fusion. We separately use two types of textual information, news and investor comments, for prediction to enhance the model's effectiveness and generalization ability. Additionally, we utilize news datasets including both titles and content to improve the accuracy of battery-grade lithium carbonate market price predictions.

Details

Kybernetes, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 23 January 2024

Wang Zhang, Lizhe Fan, Yanbin Guo, Weihua Liu and Chao Ding

The purpose of this study is to establish a method for accurately extracting torch and seam features. This will improve the quality of narrow gap welding. An adaptive deflection…

Abstract

Purpose

The purpose of this study is to establish a method for accurately extracting torch and seam features, which will improve the quality of narrow gap welding. An adaptive deflection correction system based on passive light vision sensors was designed using the Halcon software from MVTec (Germany) as a platform.

Design/methodology/approach

This paper proposes an adaptive correction system for welding guns and seams, divided into image calibration and feature extraction. In the image calibration stage, the field-of-view distortion caused by the camera position is corrected using image calibration techniques. In the feature extraction stage, clear features of the weld gun and weld seam are accurately extracted after processing with algorithms such as impact filtering, subpixel contours (XLD), Gaussian Laplacian and sense regions. The gun and weld seam centers are accurately fitted using least squares. After the deviation values are calculated, the error values are monitored and error correction is achieved through programmable logic controller (PLC) control. Finally, experimental verification and analysis of the tracking errors are carried out.
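
The deviation step can be illustrated numerically as below: torch and seam centerlines are fitted by least squares, and the lateral offset at the working row is the quantity monitored for PLC correction. The pixel coordinates and the mm-per-pixel scale are invented for illustration and are not taken from the paper.

```python
# Least-squares centerline fitting and lateral deviation in millimetres.
import numpy as np

def fit_centerline(points):
    """Least-squares line x = a*y + b through extracted center points (x, y)."""
    x, y = points[:, 0], points[:, 1]
    a, b = np.polyfit(y, x, 1)
    return a, b

torch_pts = np.array([[120.2, 10], [120.4, 60], [120.1, 110]], dtype=float)  # hypothetical
seam_pts = np.array([[123.0, 10], [123.4, 60], [123.8, 110]], dtype=float)   # hypothetical

a_t, b_t = fit_centerline(torch_pts)
a_s, b_s = fit_centerline(seam_pts)

y_work = 110.0                                   # image row of the welding point
deviation_px = (a_s * y_work + b_s) - (a_t * y_work + b_t)
deviation_mm = deviation_px * 0.05               # assumed camera scale: 0.05 mm per pixel
print(f"lateral deviation: {deviation_mm:.3f} mm")  # sent to the PLC if above tolerance
```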

Findings

The results show that the system deals effectively with camera aberrations. Weld gun features can be effectively and accurately identified, and the difference between a scratch and a weld is reliably distinguished. The system accurately detects the center features of the torch and weld and controls the correction error to within 0.3 mm.

Originality/value

An adaptive correction system based on a passive light vision sensor is designed, which corrects the field-of-view distortion caused by the camera’s position deviation. Differences between scratch and weld features are distinguished, and image features are effectively extracted. The final system weld error is controlled to within 0.3 mm.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 July 2024

B. Maheswari and Rajganesh Nagarajan

A new Chatbot system is implemented to provide both voice-based and text-based communication to address student queries without any delay. Initially, the input texts are…

Abstract

Purpose

A new Chatbot system is implemented to provide both voice-based and text-based communication to address student queries without any delay. Initially, the input texts are gathered from the chat, and the gathered text is fed to pre-processing techniques such as tokenization, stemming of words and removal of stop words. Then, the pre-processed data are given to natural language processing (NLP) for feature extraction, where XLNet and Bidirectional Encoder Representations from Transformers (BERT) are utilized to extract the features. From these extracted features, target-based fused feature pools are obtained. Then, intent detection is carried out to extract the answers related to the user queries via Enhanced 1D-Convolutional Neural Networks with Long Short-Term Memory (E1DCNN-LSTM), where the parameters are optimized using the Position Averaging of Binary Emperor Penguin Optimizer with Colony Predation Algorithm (PA-BEPOCPA). Finally, the answers are extracted based on the detected intent and delivered through the particular student’s teaching materials, such as video, image or text. The implementation results are analyzed against different recently developed Chatbot detection models to validate the effectiveness of the newly developed model.

Design/methodology/approach

A smart model for NLP is developed to help education-related institutions by providing an easy way of interaction between students and teachers, with highly accurate responses to a given query. This research work aims to design a new educational Chatbot to assist the teaching-learning process with NLP. The input data are gathered from the user through chats and given to the pre-processing stage, where tokenization, stemming of words and removal of stop words are applied. The output of the pre-processing stage is given to the feature extraction phase, where XLNet and BERT are used. In this feature extraction, the optimal features are extracted using the hybrid PA-BEPOCPA to maximize the correlation coefficient. The features from XLNet and the features from BERT are given to a target-based fused feature pool to produce optimal features. Here, the best features are optimally selected using the developed PA-BEPOCPA to maximize the correlation among coefficients. The output of the selected features is given to the E1DCNN-LSTM for implementation of the educational Chatbot with high accuracy and precision.
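
The pre-processing and BERT feature-extraction front end might look like the sketch below, assuming the NLTK "punkt" and "stopwords" resources and the Hugging Face transformers package are available; the XLNet branch, the PA-BEPOCPA selection and the E1DCNN-LSTM intent classifier are not reproduced.

```python
# Query pre-processing (tokenize, remove stop words, stem) and BERT feature extraction.
import torch
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from transformers import BertModel, BertTokenizer

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))

def preprocess(query):
    tokens = word_tokenize(query.lower())                      # tokenization
    return [stemmer.stem(t) for t in tokens
            if t.isalpha() and t not in stop_words]            # stop-word removal + stemming

bert_tok = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def bert_features(text):
    inputs = bert_tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return bert(**inputs).last_hidden_state.mean(dim=1)    # one feature vector per query

features = bert_features(" ".join(preprocess("When is the deadline for the lab report?")))
```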

Findings

The investigation results show that the implemented model achieves a maximum accuracy that is 57% higher than bidirectional long short-term memory (BiLSTM), 58% higher than a one-dimensional convolutional neural network (1DCNN), 59% higher than LSTM and 62% higher than the ensemble model for the given dataset.

Originality/value

The prediction accuracy was high in this proposed deep learning-based educational Chatbot system when compared with various baseline works.
