Search results

1 – 10 of over 2000
Article
Publication date: 25 December 2023

Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime…

Abstract

Purpose

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.

Design/methodology/approach

A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
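
As a concrete illustration of the DWT step, the following minimal Python sketch (not the authors' code) extracts level-wise statistics from a pressure signal with PyWavelets; the wavelet, decomposition depth and chosen statistics are assumptions.

```python
# Minimal sketch, assuming a 1-D pressure signal sampled from the simulations;
# the 'db4' wavelet, the 4-level decomposition and the statistics are illustrative.
import numpy as np
import pywt

def dwt_features(signal, wavelet="db4", level=4):
    """Return min/max/mean/std of every DWT decomposition level as one feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA_n, cD_n, ..., cD_1]
    feats = []
    for c in coeffs:
        feats.extend([c.min(), c.max(), c.mean(), c.std()])
    return np.array(feats)

# Synthetic stand-in for a simulated pressure trace (2 s at 1 kHz)
rng = np.random.default_rng(0)
print(dwt_features(rng.normal(size=2000)).shape)  # (20,) -> fed to the KFDA classifier
```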

Findings

The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map that facilitates the visualization of the features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of the DWT are the best flow-distinguishing features.

Practical implications

This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.

Originality/value

This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 29 September 2021

Swetha Parvatha Reddy Chandrasekhara, Mohan G. Kabadi and Srivinay

This study mainly aims to compare and contrast two very different image processing algorithms that are well suited to detecting prostate cancer using wearable…

Abstract

Purpose

This study mainly aims to compare and contrast two very different image processing algorithms that are well suited to detecting prostate cancer using wearable Internet of Things (IoT) devices. Cancer remains one of the most dreaded diseases and has plagued mankind for the past few decades. According to the Indian Council of Medical Research, India alone registers about 11.5 lakh cancer-related cases every year, and close to 8 lakh people die of cancer-related causes each year. Earlier, the incidence of prostate cancer was commonly seen in men aged above 60 years, but recent studies have revealed that this type of cancer is on the rise even in men between 35 and 60 years of age. These findings make it all the more necessary to prioritize research on diagnosing prostate cancer at an early stage, so that patients can be cured and lead normal lives.

Design/methodology/approach

The research focuses on two feature extraction algorithms commonly used in medical image processing, namely, the scale-invariant feature transform (SIFT) and the gray-level co-occurrence matrix (GLCM), in an attempt to identify and close the gap in the detection of prostate cancer in medical IoT. The features obtained by these two strategies are then classified separately using a machine learning-based classification model, the multi-class support vector machine (SVM). Owing to their better tissue discrimination and contrast resolution, magnetic resonance imaging images have been considered for this study. The classification results obtained for both the SIFT and GLCM methods are then compared to determine which feature extraction strategy provides the most accurate results for diagnosing prostate cancer.
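
The two feature-extraction routes might be sketched as follows; this is a hedged illustration in which OpenCV's SIFT and scikit-image's GLCM utilities stand in for the algorithms described, and the descriptor pooling, GLCM parameters and SVM kernel are assumptions rather than the authors' settings.

```python
# Hedged sketch of the two feature routes; OpenCV SIFT and scikit-image GLCM stand in
# for the described algorithms, and pooling/GLCM parameters/kernel are assumptions.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def sift_descriptor(gray_img):
    """Pool SIFT descriptors of a grayscale image into one fixed-length vector."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray_img, None)
    if desc is None:                       # no keypoints detected
        desc = np.zeros((1, 128), dtype=np.float32)
    return desc.mean(axis=0)

def glcm_descriptor(gray_img):
    """Four common GLCM texture properties for a uint8 grayscale image."""
    glcm = graycomatrix(gray_img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

# Either route feeds a multi-class SVM, e.g.:
# clf = SVC(kernel="rbf").fit([sift_descriptor(im) for im in train_images], train_labels)
```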

Findings

The potential of both models has been evaluated in terms of three aspects, namely, accuracy, sensitivity and specificity. Each model's results were checked against varied training/test data-set splits. It was found that the SIFT-multi-class SVM model achieved the highest performance of 99.9451% accuracy, 100% sensitivity and 99% specificity at a 40:60 training-to-testing split.

Originality/value

The SIFT-multi-class SVM versus GLCM-multi-class SVM comparison has been introduced for the first time to identify the best model for the accurate diagnosis of prostate cancer. The classification performance of each feature extraction strategy is reported in terms of accuracy, sensitivity and specificity.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 21 August 2023

Zengxin Kang, Jing Cui and Zhongyi Chu

Accurate segmentation of manual assembly actions is the basis of autonomous industrial assembly robots. This paper aims to study a precise segmentation method for manual…

Abstract

Purpose

Accurate segmentation of manual assembly actions is the basis of autonomous industrial assembly robots. This paper aims to study a precise segmentation method for manual assembly actions.

Design/methodology/approach

In this paper, a temporal-spatial-contact features segmentation system (TSCFSS) for manual assembly action recognition and segmentation is proposed. The system consists of three stages: spatial feature extraction, contact force feature extraction and action segmentation in the temporal dimension. In the spatial feature extraction stage, a vectors assembly graph (VAG) is proposed to precisely describe the motion state of the objects and the relative positions between objects in an RGB-D video frame; graph networks are then used to extract spatial features from the VAG. In the contact feature extraction stage, a sliding window is used to extract the contact force features between hands and tools/parts corresponding to each video frame. Finally, in the action segmentation stage, the spatial and contact features are concatenated as the input to temporal convolution networks for action recognition and segmentation. The experiments have been conducted on a new manual assembly data set containing RGB-D video and contact force data.
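
A hedged sketch of the final stage, assuming per-frame spatial and contact-force feature vectors are already available; the network widths and dilation pattern below are illustrative, not the authors' TSCFSS architecture.

```python
# Hedged sketch of the fusion + temporal-convolution stage; layer sizes and the
# dilation pattern are illustrative, not the authors' TSCFSS architecture.
import torch
import torch.nn as nn

class TemporalSegmenter(nn.Module):
    def __init__(self, feat_dim, n_actions, hidden=64):
        super().__init__()
        self.tcn = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
        )
        self.head = nn.Conv1d(hidden, n_actions, kernel_size=1)

    def forward(self, spatial, contact):
        # spatial: (B, T, Ds) graph-network features; contact: (B, T, Dc) force features
        x = torch.cat([spatial, contact], dim=-1).transpose(1, 2)   # (B, Ds+Dc, T)
        return self.head(self.tcn(x)).transpose(1, 2)               # (B, T, n_actions)

# Example with assumed dimensions: 128-D spatial + 6-D force features, 11 action classes
# logits = TemporalSegmenter(feat_dim=134, n_actions=11)(spatial_feats, force_feats)
```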

Findings

In the experiments, the TSCFSS is used to recognize 11 kinds of assembly actions in demonstrations and outperforms the other comparative action identification methods.

Originality/value

A novel system for precise segmentation of manual assembly actions, which fuses temporal, spatial and contact force features, has been proposed. The VAG, a symbolic knowledge representation for describing the assembly scene state, is proposed, making action segmentation more convenient. A data set with RGB-D video and contact force is specifically tailored for research on manual assembly actions.

Details

Robotic Intelligence and Automation, vol. 43 no. 5
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 31 July 2023

Xinzhi Cao, Yinsai Guo, Wenbin Yang, Xiangfeng Luo and Shaorong Xie

Unsupervised domain adaptation object detection not only mitigates the poor model performance resulting from the domain gap, but also makes it possible to apply knowledge trained on a…

Abstract

Purpose

Unsupervised domain adaptation object detection not only mitigates the poor model performance resulting from the domain gap, but also makes it possible to apply knowledge trained on a given domain to a distinct domain. However, aligning whole-image features may confuse object and background information, making it challenging to extract discriminative features. This paper aims to propose an improved approach, called intrinsic feature extraction domain adaptation (IFEDA), to extract discriminative features effectively.

Design/methodology/approach

IFEDA consists of an intrinsic feature extraction (IFE) module and an object consistency constraint (OCC). The IFE module, designed at the instance level, mainly addresses the difficulty of extracting discriminative object features; specifically, more attention can be paid to the discriminative regions of objects. Meanwhile, the OCC is deployed to determine whether the category predictions in the target domain correspond with those in the source domain.
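
One plausible reading of the OCC is an agreement loss between per-instance class predictions in the two domains; the sketch below uses a symmetric KL divergence as an assumed stand-in, not IFEDA's published formulation.

```python
# Hedged sketch: one way to read the OCC is a per-instance agreement loss between
# source- and target-domain class predictions (symmetric KL); purely illustrative.
import torch
import torch.nn.functional as F

def object_consistency_loss(src_logits, tgt_logits):
    """src_logits, tgt_logits: (N, C) class logits for matched object proposals."""
    p = F.log_softmax(src_logits, dim=-1)
    q = F.log_softmax(tgt_logits, dim=-1)
    return 0.5 * (F.kl_div(p, q.exp(), reduction="batchmean")
                  + F.kl_div(q, p.exp(), reduction="batchmean"))
```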

Findings

Experimental results demonstrate the validity of the approach, which achieves good outcomes on challenging data sets.

Research limitations/implications

A limitation of this research is that only one target domain is considered; the model's generalization ability may change when data sets are insufficient or unseen domains appear.

Originality/value

This paper addresses the issue of critical information defects by tackling the difficulty of extracting discriminative features, and the categories in both domains are compelled to be consistent for better object detection.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 10 April 2024

Qihua Ma, Qilin Li, Wenchao Wang and Meng Zhu

This study aims to achieve superior localization and mapping performance in point cloud degradation scenarios through the effective removal of dynamic obstacles. With the…

Abstract

Purpose

This study aims to achieve superior localization and mapping performance in point cloud degradation scenarios through the effective removal of dynamic obstacles. With the continuous development of various technologies for autonomous vehicles, the LiDAR-based simultaneous localization and mapping (SLAM) system is becoming increasingly important. However, in SLAM systems, effectively addressing the challenges of point cloud degradation scenarios is essential for accurate localization and mapping, with dynamic obstacle removal being a key component.

Design/methodology/approach

This paper proposes a method that combines adaptive feature extraction and loop closure detection algorithms to address this challenge. In the SLAM system, the ground and non-ground point clouds are separated to reduce the impact of noise. Based on the cylindrical projection image of the point cloud, intensity features are adaptively extracted, the degradation direction is determined by a degradation factor, and the intensity features are matched against the map to correct the degraded pose. Moreover, dynamic point clouds are identified and removed, and the map is updated, based on the difference in grid distribution of the point clouds between the two frames of the loop-closure process.
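
The cylindrical projection step can be sketched in a few lines of numpy; the image resolution and vertical field of view below are assumed values, not the paper's.

```python
# Minimal numpy sketch of the cylindrical projection; resolution and vertical field of
# view are assumed values, not those of the paper.
import numpy as np

def cylindrical_intensity_image(points, h=64, w=1024, fov_up=2.0, fov_down=-24.8):
    """points: (N, 4) array of x, y, z, intensity for one LiDAR scan."""
    x, y, z, inten = points.T
    r = np.linalg.norm(points[:, :3], axis=1) + 1e-9
    yaw = np.arctan2(y, x)                                  # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                                # elevation angle
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    fov = np.radians(fov_up) - np.radians(fov_down)
    v = ((np.radians(fov_up) - pitch) / fov * h).clip(0, h - 1).astype(int)
    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = inten                                       # last-hit intensity per cell
    return img
```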

Findings

Experimental results show that the method has good performance. The absolute displacement accuracy of the laser odometer is improved by 27.1%, the relative displacement accuracy is improved by 33.5% and the relative angle accuracy is improved by 23.8% after using the adaptive intensity feature extraction method. The position error is reduced by 30% after removing the dynamic target.

Originality/value

Compared with the LiDAR odometry and mapping algorithm, the method has greater robustness and accuracy in mapping and localization.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 January 2024

Wang Zhang, Lizhe Fan, Yanbin Guo, Weihua Liu and Chao Ding

The purpose of this study is to establish a method for accurately extracting torch and seam features. This will improve the quality of narrow gap welding. An adaptive deflection…

Abstract

Purpose

The purpose of this study is to establish a method for accurately extracting torch and seam features, which will improve the quality of narrow-gap welding. An adaptive deflection correction system based on passive light vision sensors was designed using the HALCON software from MVTec (Germany) as a platform.

Design/methodology/approach

This paper proposes an adaptive correction system for welding guns and seams that is divided into image calibration and feature extraction. In the image calibration stage, the field-of-view distortion caused by the camera's position is resolved using image calibration techniques. In the feature extraction stage, clear features of the weld gun and weld seam are accurately extracted after processing with algorithms such as impact filtering, subpixel contour extraction (XLD), Laplacian of Gaussian and sense-region operations. The gun and weld seam centers are then accurately fitted using least squares. After the deviation values are calculated, the error values are monitored and error correction is achieved through programmable logic controller (PLC) control. Finally, experimental verification and analysis of the tracking errors are carried out.
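
The deviation computation itself is simple; the following numpy analog (not the Halcon implementation) fits the seam centerline by least squares and measures the torch center's lateral offset from it.

```python
# Hedged numpy analog (not the Halcon implementation) of the deviation step: fit the
# seam centerline by least squares and measure the torch center's lateral offset.
import numpy as np

def fit_seam_line(seam_points):
    """Least-squares line y = a*x + b through (N, 2) seam-center points."""
    a, b = np.polyfit(seam_points[:, 0], seam_points[:, 1], deg=1)
    return a, b

def lateral_deviation(torch_center, a, b):
    """Perpendicular distance from the torch center to the seam line (calibrated units)."""
    x0, y0 = torch_center
    return abs(a * x0 - y0 + b) / np.hypot(a, 1.0)

# If the deviation exceeds a tolerance (0.3 mm is the figure reported in the Findings),
# a correction command would be sent to the PLC.
```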

Findings

The results show that the system performs well in dealing with camera aberrations. Weld gun features can be effectively and accurately identified, and scratches are reliably distinguished from welds. The system accurately detects the center features of the torch and weld and controls the correction error to within 0.3 mm.

Originality/value

An adaptive correction system based on a passive light vision sensor is designed that corrects the field-of-view distortion caused by the camera's position deviation. Differences between scratch and weld features are distinguished, and image features are effectively extracted. The final system weld error is controlled to within 0.3 mm.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 September 2022

Ziming Zeng, Tingting Li, Jingjing Sun, Shouqiang Sun and Yu Zhang

The proliferation of bots in social networks has profoundly affected the interactions of legitimate users. Detecting and rejecting these unwelcome bots has become part of the…

Abstract

Purpose

The proliferation of bots in social networks has profoundly affected the interactions of legitimate users. Detecting and rejecting these unwelcome bots has become part of the collective Internet agenda. Unfortunately, as bot creators use more sophisticated approaches to avoid being discovered, it has become increasingly difficult to distinguish social bots from legitimate users. Therefore, this paper proposes a novel social bot detection mechanism to adapt to new and different kinds of bots.

Design/methodology/approach

This paper proposes a research framework to enhance the generalization of social bot detection along two dimensions: feature extraction and detection approaches. First, 36 features are extracted from four views for social bot detection. Then, the paper analyzes the contribution of each feature for different kinds of social bots, and the features with stronger generalization are identified. Finally, outlier detection approaches are introduced to enhance detection of ever-changing social bots.
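
The outlier-detection idea can be sketched with scikit-learn's IsolationForest as a stand-in detector (the paper's specific detectors may differ); the 36-dimensional feature vectors and the synthetic data below are placeholders.

```python
# Hedged sketch of the outlier-detection idea with IsolationForest as a stand-in
# detector; the 36-D feature vectors and synthetic data are placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_legit = rng.normal(0.0, 1.0, size=(1000, 36))               # features of legitimate accounts
X_unknown = np.vstack([rng.normal(0.0, 1.0, size=(50, 36)),   # legitimate-like
                       rng.normal(4.0, 1.0, size=(50, 36))])  # bot-like outliers

detector = IsolationForest(contamination="auto", random_state=0).fit(X_legit)
pred = detector.predict(X_unknown)                            # +1 = inlier, -1 = outlier (bot)
print(int((pred == -1).sum()), "accounts flagged as bots")
```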

Findings

The experimental results show that the more important features can be more effectively generalized to different social bot detection tasks. Compared with the traditional binary-class classifier, the proposed outlier detection approaches can better adapt to the ever-changing social bots with a performance of 89.23 per cent measured using the F1 score.

Originality/value

Based on the visual interpretation of the feature contribution, the features with stronger generalization in different detection tasks are found. The outlier detection approaches are first introduced to enhance the detection of ever-changing social bots.

Details

Data Technologies and Applications, vol. 57 no. 2
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 13 July 2023

Luya Yang, Xinbo Huang, Yucheng Ren, Qi Han and Yanchen Huang

In the process of continuous casting and rolling of steel plate, due to the influence of rolling equipment and process, there are scratches, inclusions, patches, scabs and pitted…

Abstract

Purpose

In the continuous casting and rolling of steel plate, the rolling equipment and process give rise to scratches, inclusions, patches, scabs and pitted surfaces on the plate, which not only affect its corrosion resistance, wear resistance and fatigue strength but may also cause production accidents. Therefore, the detection of steel plate surface defects must be strengthened to ensure production quality and the smooth progress of industrial construction.

Design/methodology/approach

(1) A steel plate surface defect detection technology based on small data sets is proposed, which can detect multiple surface defects and fills the gap in scab defect detection.

(2) A detection system based on intelligent recognition technology is built: the steel plate images are collected by the front-end monitoring device, transmitted to the back-end monitoring center and processed by embedded intelligent algorithms.

(3) To reduce the impact of external light on the image, an improved Multi-Scale Retinex (MSR) enhancement algorithm based on adaptive weight calculation is proposed, which lays the foundation for subsequent object segmentation and feature extraction (a minimal sketch follows this list).

(4) According to factors such as the cause and shape of each defect, texture and shape features are combined to classify the different defects on the steel plate surface. The defect classification model is constructed, and the classification results are recorded and stored, which has application value in the field of steel plate surface defect detection.

(5) The practicability and effectiveness of the proposed method are verified by comparison with other methods, and field running tests are conducted at the equipment commissioning field of China Heavy Machinery Institute.
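
A compact sketch of the Multi-Scale Retinex enhancement named in step (3) follows; the paper's adaptive weight calculation is approximated with fixed equal weights, so this shows plain MSR rather than the improved algorithm.

```python
# Compact sketch of plain Multi-Scale Retinex; the paper's adaptive weights are
# approximated here with fixed equal weights, so this is illustrative only.
import cv2
import numpy as np

def msr(gray, sigmas=(15, 80, 250)):
    """gray: uint8 grayscale steel-plate image; returns an enhanced uint8 image."""
    img = gray.astype(np.float32) + 1.0
    out = np.zeros_like(img)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), sigma)       # illumination estimate per scale
        out += (np.log(img) - np.log(blur + 1.0)) / len(sigmas)
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```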

Findings

When applied to the small data set, the precision of the proposed method is 94.5% and the detection time is 23.7 ms. For comparison with deep learning technology, after expanding the image data set, the precision and detection time are 94.8% and 24.2 ms, respectively. The proposed method is superior to other traditional image processing and deep learning methods, and the field recognition precision is 91.7%.

Originality/value

In brief, steel plate surface defect detection technology based on computer vision is effective, but previous attempts and methods were not comprehensive, and their accuracy and detection speed needed improvement. Therefore, a more practical and comprehensive technology is developed in this paper; its main contributions are the five points summarized under Design/methodology/approach above.

Details

Engineering Computations, vol. 40 no. 6
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 10 March 2022

Jayaram Boga and Dhilip Kumar V.

To achieve a profitable human activity recognition (HAR) method, this paper solves the HAR problem under a wireless body area network (WBAN) using a developed ensemble learning…

Abstract

Purpose

The purpose of this study is to solve the human activity recognition (HAR) problem under a wireless body area network (WBAN) using a developed ensemble learning approach, in order to achieve a profitable HAR method. Three data sets are used for HAR in WBAN, namely, human activity recognition using smartphones, wireless sensor data mining and Kaggle. The proposed model undergoes four phases, namely, pre-processing, feature extraction, feature selection and classification. Here, the data are preprocessed by artifact removal and median filtering techniques. Then, the features are extracted by techniques such as t-distributed stochastic neighbor embedding, the short-time Fourier transform and statistical approaches. Weighted optimal feature selection is the next step, selecting the important features based on the data variance of each class; this new feature selection is achieved by hybrid coyote Jaya optimization (HCJO). Finally, a meta-heuristic-based ensemble learning approach is used as the recognition approach with three classifiers, namely, support vector machine (SVM), deep neural network (DNN) and fuzzy classifiers. Experimental analysis is performed.
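
The feature-extraction phase can be sketched as follows (the HCJO selection and the SVM/DNN/fuzzy ensemble are omitted); the sampling rate, window length and chosen statistics are assumptions.

```python
# Hedged sketch of the feature-extraction phase only; the sampling rate, window length
# and chosen statistics are assumptions (HCJO selection and the ensemble are omitted).
import numpy as np
from scipy.signal import stft

def window_features(sig, fs=50):
    """sig: 1-D sensor signal for one window (assumed >= 64 samples)."""
    stats = [sig.mean(), sig.std(), sig.min(), sig.max(), np.median(sig)]
    f, _, Z = stft(sig, fs=fs, nperseg=64)
    spec = np.abs(Z)
    stats += [spec.mean(), spec.max(), f[spec.mean(axis=1).argmax()]]  # dominant frequency
    return np.array(stats)
```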

Design/methodology/approach

The proposed HCJO algorithm was developed to optimize the fuzzy membership function, the SVM iteration limit and the DNN hidden neuron count, in order to obtain superior classification outcomes and enhance the performance of the ensemble classification.

Findings

The accuracy of the enhanced HAR model was considerably higher than that of conventional models: 6.66% higher than the fuzzy classifier, 4.34% higher than DNN, 4.34% higher than SVM, 7.86% higher than the ensemble and 6.66% higher than the improved sea lion optimization algorithm-based attention pyramid convolutional neural network (AP-CNN).

Originality/value

The suggested HAR model for WBAN using the HCJO algorithm is accurate and improves the effectiveness of recognition.

Details

International Journal of Pervasive Computing and Communications, vol. 19 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 22 January 2024

Jun Liu, Junyuan Dong, Mingming Hu and Xu Lu

Existing simultaneous localization and mapping (SLAM) algorithms are relatively well developed. However, in complex dynamic environments, the movement of dynamic…

Abstract

Purpose

Existing simultaneous localization and mapping (SLAM) algorithms are relatively well developed. However, in complex dynamic environments, the movement of dynamic points on dynamic objects in the image during mapping can affect the system's observations, and there will thus be biases and errors in pose estimation and in the creation of map points. The aim of this paper is to achieve higher accuracy than traditional SLAM algorithms through semantic approaches.

Design/methodology/approach

In this paper, semantic segmentation of dynamic objects is realized with a U-Net semantic segmentation network. Motion consistency detection is then performed with a motion detection method to determine whether the segmented objects are actually moving in the current scene, and a motion compensation method is combined with this to eliminate dynamic points and compensate the current local image, making the system robust.
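
A minimal sketch of how a segmentation mask might be used to discard feature points on potentially dynamic objects before pose estimation; this is an interpretation for illustration, not the paper's implementation.

```python
# Minimal sketch (an interpretation, not the paper's code): drop feature points that
# fall inside the predicted dynamic-object mask before pose estimation.
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_mask):
    """keypoints: (N, 2) pixel coords (x, y); dynamic_mask: HxW bool, True = dynamic."""
    xs = keypoints[:, 0].round().astype(int).clip(0, dynamic_mask.shape[1] - 1)
    ys = keypoints[:, 1].round().astype(int).clip(0, dynamic_mask.shape[0] - 1)
    return keypoints[~dynamic_mask[ys, xs]]
```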

Findings

Experiments comparing the effects of dynamic point detection and outlier removal are conducted on the Technische Universität München dynamic data set, and the results show that the absolute trajectory accuracy of the proposed method is significantly improved compared with ORB-SLAM3 and DS-SLAM.

Originality/value

In the semantic segmentation network part of this paper, the segmentation mask is combined with dynamic point detection, elimination and compensation, which reduces the influence of dynamic objects and thus effectively improves localization accuracy in dynamic environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

1 – 10 of over 2000