Search results

1 – 10 of 112
Article
Publication date: 23 January 2024

Wang Zhang, Lizhe Fan, Yanbin Guo, Weihua Liu and Chao Ding

The purpose of this study is to establish a method for accurately extracting torch and seam features. This will improve the quality of narrow gap welding. An adaptive deflection…

Abstract

Purpose

The purpose of this study is to establish a method for accurately extracting torch and seam features, thereby improving the quality of narrow gap welding. An adaptive deflection correction system based on passive light vision sensors was designed using the Halcon software from MVTec (Germany) as a platform.

Design/methodology/approach

This paper proposes an adaptive correction system for the welding gun and weld seam, divided into two stages: image calibration and feature extraction. In the image calibration stage, the field-of-view distortion caused by the camera’s position is corrected using image calibration techniques. In the feature extraction stage, clear features of the weld gun and weld seam are accurately extracted using algorithms such as impact filtering, subpixel (XLD) contour extraction, the Laplacian of Gaussian and sense regions. The centers of the weld gun and weld seam are accurately fitted using least squares. After the deviation values are calculated, the error values are monitored and error correction is performed under programmable logic controller (PLC) control. Finally, experimental verification and analysis of the tracking errors are carried out.
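
Since the abstract only names the least-squares center fitting and deviation calculation, a minimal sketch of that step may help. The example below is not the authors' Halcon implementation; the gun and seam contour points and the millimetre-per-pixel scale are hypothetical placeholders.

```python
# Illustrative sketch only: assumes two hypothetical point sets already
# extracted from the image and an assumed pixel-to-millimetre scale.
import numpy as np

def fit_center_line(points: np.ndarray) -> tuple[float, float]:
    """Least-squares fit x = a*y + b through (x, y) contour points."""
    y, x = points[:, 1], points[:, 0]
    a, b = np.polyfit(y, x, deg=1)
    return a, b

def lateral_deviation(gun_points: np.ndarray, seam_points: np.ndarray,
                      y_eval: float, mm_per_pixel: float) -> float:
    """Horizontal offset (mm) between the fitted gun and seam center lines
    at image row y_eval; a PLC would act on this value."""
    ag, bg = fit_center_line(gun_points)
    as_, bs = fit_center_line(seam_points)
    dx_pixels = (ag * y_eval + bg) - (as_ * y_eval + bs)
    return dx_pixels * mm_per_pixel

# Example with synthetic points (replace with features from the vision system)
rng = np.random.default_rng(0)
seam = np.column_stack([200 + 0.02 * np.arange(100) + rng.normal(0, 0.3, 100),
                        np.arange(100)])
gun = seam + np.array([1.5, 0.0])          # torch offset of 1.5 px to the right
print(lateral_deviation(gun, seam, y_eval=50, mm_per_pixel=0.05))  # ~0.075 mm
```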

Findings

The results show that the system handles camera aberrations well. Weld gun features can be effectively and accurately identified, and scratches are reliably distinguished from weld seams. The system accurately detects the center features of the torch and weld seam and controls the correction error to within 0.3 mm.

Originality/value

An adaptive correction system based on a passive light vision sensor is designed, which corrects the field-of-view distortion caused by the camera’s position deviation. Differences in features between scratches and welds are distinguished, and image features are effectively extracted. The final system weld error is controlled to within 0.3 mm.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and…

Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper reviews the state of research on LiDAR-based SLAM for robotic mapping and presents a literature survey from the perspective of various LiDAR types and configurations.

Findings

This paper conducted a comprehensive literature review of LiDAR-based SLAM systems based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be major future trends.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 16 April 2024

Shilong Zhang, Changyong Liu, Kailun Feng, Chunlai Xia, Yuyin Wang and Qinghe Wang

The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction…

Abstract

Purpose

The swivel construction method is a specially designed process used to build bridges that cross rivers, valleys, railroads and other obstacles. To carry out this construction method safely, real-time monitoring of the bridge rotation process is required to ensure a smooth swivel operation without collisions. However, the traditional means of monitoring using Electronic Total Station tools cannot realize real-time monitoring, and monitoring using motion sensors or GPS is cumbersome to use.

Design/methodology/approach

This study proposes a monitoring method based on a series of computer vision (CV) technologies, which can monitor the rotation angle, velocity and inclination angle of the swivel construction in real time. First, three proposed CV algorithms were developed in a laboratory environment, and experimental tests were carried out on a bridge scale model to select the best-performing algorithms for rotation, velocity and inclination monitoring, respectively, as the final monitoring method. Then, the selected method was implemented to monitor an actual bridge during its swivel construction to verify its applicability.
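
As a rough illustration of this kind of CV monitoring (the abstract does not detail the selected algorithms), the sketch below assumes a tracked marker point on the rotating span and a known pivot location, both hypothetical, and derives the rotation angle and angular velocity from them.

```python
# Minimal sketch, not the paper's specific algorithms: marker and pivot
# coordinates are assumed to come from an upstream tracking step.
import numpy as np

def rotation_angle(marker_xy: np.ndarray, pivot_xy: np.ndarray) -> float:
    """Angle (degrees) of the pivot->marker vector measured from the x-axis."""
    dx, dy = marker_xy - pivot_xy
    return np.degrees(np.arctan2(dy, dx))

def angular_velocity(angles_deg: np.ndarray, frame_dt: float) -> np.ndarray:
    """Frame-to-frame angular velocity (degrees per second)."""
    return np.diff(angles_deg) / frame_dt

# Marker positions tracked over 4 frames at 10 fps (synthetic data)
pivot = np.array([960.0, 540.0])
markers = np.array([[1160.0, 540.0], [1159.8, 548.0],
                    [1159.2, 556.0], [1158.2, 564.0]])
angles = np.array([rotation_angle(m, pivot) for m in markers])
print(angles)                                 # slowly increasing rotation angle
print(angular_velocity(angles, frame_dt=0.1))
```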

Findings

In the laboratory study, the monitoring data measured with the selected algorithms were compared with those measured by an Electronic Total Station; the errors in rotation angle, velocity and inclination angle were 0.040%, 0.040% and −0.454%, respectively, validating the accuracy of the proposed method. In the pilot application, the method was shown to be feasible on an actual bridge under real construction conditions.

Originality/value

The optimal algorithms for monitoring bridge swivel construction are identified in a well-controlled laboratory, and the proposed method is verified on an actual project. The proposed CV method is complementary to the use of Electronic Total Station tools, motion sensors and GPS for safety monitoring of the swivel construction of bridges, and it requires no data-driven model training. Its principal advantages are that it provides real-time monitoring and is easy to deploy in real construction applications.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 19 March 2024

Cemalettin Akdoğan, Tolga Özer and Yüksel Oğuz

Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of…

Abstract

Purpose

Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of agricultural products, and pesticides can be used to improve the productivity of agricultural land. This study aims to make the spraying of cherry trees more effective and efficient with the designed artificial intelligence (AI)-based agricultural unmanned aerial vehicle (UAV).

Design/methodology/approach

Two approaches were adopted for the AI-based detection of cherry trees. In Approach 1, YOLOv5, YOLOv7 and YOLOv8 models were trained for 70, 100 and 150 epochs. In Approach 2, a new method is proposed to improve the performance metrics obtained in Approach 1: Gaussian, wavelet transform (WT) and histogram equalization (HE) preprocessing techniques were applied to the generated data set. The best-performing models from Approach 1 and Approach 2 were used in a real-time test application with the developed agricultural UAV.
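
The three preprocessing techniques named in Approach 2 can be sketched as follows. This is only an illustrative pipeline with assumed parameters (kernel size, wavelet, threshold, image path), not the authors' exact configuration.

```python
# Hedged illustration of the Approach 2 preprocessing (HE, Gaussian, WT);
# the parameter values below are assumptions.
import cv2
import numpy as np
import pywt

def preprocess(bgr: np.ndarray) -> np.ndarray:
    # Histogram equalization on the luminance channel only
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    eq = cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

    # Gaussian smoothing to suppress sensor noise
    blurred = cv2.GaussianBlur(eq, (5, 5), 0)

    # Single-level wavelet shrinkage on the grayscale image
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY).astype(np.float32)
    cA, (cH, cV, cD) = pywt.dwt2(gray, "haar")
    cH, cV, cD = (pywt.threshold(c, value=10.0, mode="soft") for c in (cH, cV, cD))
    denoised = pywt.idwt2((cA, (cH, cV, cD)), "haar")
    return np.clip(denoised, 0, 255).astype(np.uint8)

# img = cv2.imread("cherry_tree.jpg")            # hypothetical dataset image
# cv2.imwrite("cherry_tree_prep.png", preprocess(img))
```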

Findings

In Approach 1, the best F1 score was 98%, obtained at 100 epochs with the YOLOv5s model. In Approach 2, the best F1 score and mAP values were 98.6% and 98.9%, obtained at 150 epochs with the YOLOv5m model, an improvement of 0.6% in F1 score. In real-time tests, the AI-based spraying drone system detected and sprayed cherry trees with an accuracy of 66% in Approach 1 and 77% in Approach 2. It was revealed that the use of pesticides could be reduced by 53% and the energy consumption of the spraying system by 47%.

Originality/value

An original data set was created by designing an agricultural drone to detect and spray cherry trees using AI. YOLOv5, YOLOv7 and YOLOv8 models were used to detect and classify cherry trees. The results of the performance metrics of the models are compared. In Approach 2, a method including HE, Gaussian and WT is proposed, and the performance metrics are improved. The effect of the proposed method in a real-time experimental application is thoroughly analyzed.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 31 August 2023

Hongwei Zhang, Shihao Wang, Hongmin Mi, Shuai Lu, Le Yao and Zhiqiang Ge

The defect detection problem of color-patterned fabric is still a huge challenge due to the lack of manual defect labeling samples. Recently, many fabric defect detection…

Abstract

Purpose

The defect detection problem of color-patterned fabric remains a huge challenge because of the lack of manually labeled defect samples. Recently, many fabric defect detection algorithms based on feature engineering and deep learning have been proposed, but these methods suffer from over-detection or missed detection because they cannot adapt to the complex patterns of color-patterned fabrics. The purpose of this paper is to propose a defect detection framework based on unsupervised adversarial learning for image reconstruction to solve the above problems.

Design/methodology/approach

The proposed framework consists of three parts: a generator, a discriminator and an image postprocessing module. The generator is able to extract the features of the image and then reconstruct the image. The discriminator can supervise the generator to repair defects in the samples to improve the quality of image reconstruction. The multidifference image postprocessing module is used to obtain the final detection results of color-patterned fabric defects.
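
A minimal sketch of the reconstruction-plus-difference idea is shown below; it assumes a trained generator is already available and reduces the paper's multidifference postprocessing to a single absolute-difference map with assumed blur and threshold values.

```python
# Sketch only: the generator call and the thresholds are placeholders,
# not the framework described in the paper.
import cv2
import numpy as np

def defect_map(original: np.ndarray, reconstructed: np.ndarray,
               blur_ksize: int = 5, thresh: int = 40) -> np.ndarray:
    """Return a binary mask marking pixels the generator failed to reproduce."""
    diff = cv2.absdiff(original, reconstructed)
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(gray, (blur_ksize, blur_ksize), 0)
    _, mask = cv2.threshold(smoothed, thresh, 255, cv2.THRESH_BINARY)
    return mask

# fabric = cv2.imread("sample.png")              # defective fabric image
# repaired = generator(fabric)                   # hypothetical trained generator
# cv2.imwrite("defects.png", defect_map(fabric, repaired))
```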

Findings

The proposed framework is compared with state-of-the-art methods on the public dataset YDFID-1 (Yarn-Dyed Fabric Image Dataset, version 1) and is also validated on several classes of the MVTec AD dataset. The experimental results for various patterns/classes of YDFID-1 and MVTec AD demonstrate the effectiveness and superiority of this method for fabric defect detection.

Originality/value

This work provides an automatic defect detection solution that is convenient for engineering application in the inspection process of the color-patterned fabric manufacturing industry. A public dataset is also provided for academia.

Details

International Journal of Clothing Science and Technology, vol. 35 no. 6
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 12 September 2023

Yang Zhou, Long Wang, Yongbin Lai and Xiaolong Wang

The coupling process between the loading mechanism and the tank car mouth is a crucial step in the tank car loading process. The purpose of this paper is to design a method to…

Abstract

Purpose

The coupling process between the loading mechanism and the tank car mouth is a crucial step in the tank car loading process. The purpose of this paper is to design a method to accurately measure the pose of the tank car mouth.

Design/methodology/approach

The collected image is first subjected to a gray enhancement operation, and the black parts of the image are extracted using Otsu’s threshold segmentation and morphological processing. The edge pixels are then filtered to remove outliers and noise, and the remaining effective points are used to fit the contour information of the tank car mouth. Using the successfully extracted contour information, the pose information of the tank car mouth in the camera coordinate system is obtained by establishing a binocular projection elliptical cone model, and the pixel position of the real circle center is obtained through the projection section. Finally, the binocular triangulation method is used to determine the position information of the tank car mouth in space.
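
The segmentation and triangulation steps can be sketched with standard OpenCV calls. This is not the authors' implementation: the projection matrices of a calibrated stereo rig and the input images are assumed, and the ellipse is fitted directly to the largest contour rather than to filtered edge pixels.

```python
# Rough sketch of Otsu thresholding, morphology, ellipse fitting and
# binocular triangulation; P_left / P_right are assumed calibration outputs.
import cv2
import numpy as np

def mouth_ellipse(gray: np.ndarray):
    """Segment the dark tank car mouth and fit an ellipse to its outer contour."""
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    cleaned = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    contours, _ = cv2.findContours(cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    largest = max(contours, key=cv2.contourArea)
    return cv2.fitEllipse(largest)            # ((cx, cy), (major, minor), angle)

def mouth_position(center_left, center_right, P_left, P_right) -> np.ndarray:
    """Triangulate the 3D position of the mouth center from both views."""
    pl = np.asarray(center_left, dtype=np.float64).reshape(2, 1)
    pr = np.asarray(center_right, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P_left, P_right, pl, pr)
    return (X_h[:3] / X_h[3]).ravel()          # homogeneous -> metric coordinates
```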

Findings

Experimental results have shown that this method for measuring the position and orientation of the tank car mouth is highly accurate and can meet the requirements for industrial loading accuracy.

Originality/value

A method for extracting the contours of various types of complex tank car mouths is proposed. This method can accurately extract the contour of the tank car mouth even when the contour is occluded or disturbed. Based on the binocular elliptic cone model and perspective projection theory, an innovative method for measuring the pose of the tank car mouth is proposed, and the pose ambiguity is resolved according to the spatial characteristics of the tank car mouth itself. This provides a new approach to the automatic loading of ash tank cars.

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 29 September 2021

Swetha Parvatha Reddy Chandrasekhara, Mohan G. Kabadi and Srivinay

This study has mainly aimed to compare and contrast two completely different image processing algorithms that are very adaptive for detecting prostate cancer using wearable…

Abstract

Purpose

This study mainly aims to compare and contrast two completely different image processing algorithms that are well suited to detecting prostate cancer using wearable Internet of Things (IoT) devices. Cancer is still considered one of the most dreaded diseases of modern times and has continued to afflict mankind over the past few decades. According to the Indian Council of Medical Research, India alone registers about 11.5 lakh (1.15 million) cancer-related cases every year, and close to 8 lakh (800,000) people die of cancer-related issues each year. Earlier, the incidence of prostate cancer was commonly seen in men aged above 60 years, but a recent study has revealed that this type of cancer is on the rise even in men between 35 and 60 years of age. These findings make it even more necessary to prioritize research on diagnosing prostate cancer at an early stage, so that patients can be cured and lead a normal life.

Design/methodology/approach

The research focuses on two feature extraction algorithms commonly used in medical image processing, namely the scale-invariant feature transform (SIFT) and the gray-level co-occurrence matrix (GLCM), in an attempt to identify and close the gap in prostate cancer detection within medical IoT. The results obtained by these two strategies are then classified separately using a machine learning-based classification model, a multi-class support vector machine (SVM). Owing to their better tissue discrimination and contrast resolution, magnetic resonance imaging images were considered for this study. The classification results obtained for both the SIFT and GLCM methods are then compared to determine which feature extraction strategy provides the most accurate results for diagnosing prostate cancer.
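
A hedged sketch of the two feature pipelines and the multi-class SVM is given below; the descriptor length, GLCM settings and commented-out training data are placeholders rather than the study's actual configuration.

```python
# Illustrative SIFT and GLCM feature extractors feeding a multi-class SVM;
# parameter choices here are assumptions, not the paper's settings.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def sift_feature(gray: np.ndarray, keep: int = 64) -> np.ndarray:
    """Fixed-length descriptor built from the strongest SIFT keypoints."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:
        desc = np.zeros((1, 128), dtype=np.float32)
    desc = desc[:keep]
    padded = np.zeros((keep, 128), dtype=np.float32)
    padded[:desc.shape[0]] = desc
    return padded.ravel()

def glcm_feature(gray: np.ndarray) -> np.ndarray:
    """Haralick-style statistics from a gray-level co-occurrence matrix."""
    glcm = graycomatrix(gray, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# X = np.array([sift_feature(img) for img in mri_slices])   # or glcm_feature
# clf = SVC(kernel="rbf", decision_function_shape="ovo").fit(X, y)
```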

Findings

The potential of both models has been evaluated in terms of three aspects: accuracy, sensitivity and specificity. Each model’s results were checked against varied splits of the training and test data sets. The SIFT-multiclass SVM model achieved the highest performance, with 99.9451% accuracy, 100% sensitivity and 99% specificity at a 40:60 training-to-testing split.

Originality/value

A comparison of the SIFT-based and GLCM-based multiclass SVM models is introduced for the first time to identify the best model for the accurate diagnosis of prostate cancer. The classification performance of each feature extraction strategy is reported in terms of accuracy, sensitivity and specificity.

Details

International Journal of Pervasive Computing and Communications, vol. 20 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 25 January 2023

Hui Xu, Junjie Zhang, Hui Sun, Miao Qi and Jun Kong

Attention is one of the most important factors to affect the academic performance of students. Effectively analyzing students' attention in class can promote teachers' precise…

Abstract

Purpose

Attention is one of the most important factors affecting the academic performance of students. Effectively analyzing students' attention in class can promote teachers' precise teaching and students' personalized learning. To intelligently analyze students' attention in the classroom from the first-person perspective, this paper proposes a fusion model based on gaze tracking and object detection. In particular, the proposed attention analysis model does not depend on any smart equipment.

Design/methodology/approach

Given a first-person-view video of students' learning, the authors first estimate the gazing point using a deep space–time neural network. Second, a single-shot multi-box detector and a fast segmentation convolutional neural network are comparatively adopted to accurately detect the objects in the video. Third, the gazing objects are predicted by combining the results of gazing point estimation and object detection. Finally, the personalized attention of students is analyzed based on the predicted gazing objects and measurable eye movement criteria.
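
The fusion step, deciding which detected object the estimated gaze point falls on, can be illustrated with a short sketch; the gaze estimator and object detector themselves are deep networks in the paper and are assumed to exist upstream, and the class names below are hypothetical.

```python
# Minimal sketch of gaze/detection fusion: pick the detected box that
# contains the estimated gaze point (smallest box wins on overlap).
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    x1: float
    y1: float
    x2: float
    y2: float

def gazed_object(gaze_xy: tuple[float, float],
                 detections: list[Detection]) -> str | None:
    """Return the label of the smallest detected box containing the gaze point."""
    gx, gy = gaze_xy
    hits = [d for d in detections
            if d.x1 <= gx <= d.x2 and d.y1 <= gy <= d.y2]
    if not hits:
        return None
    best = min(hits, key=lambda d: (d.x2 - d.x1) * (d.y2 - d.y1))
    return best.label

dets = [Detection("blackboard", 0, 0, 1280, 400),
        Detection("notebook", 500, 600, 800, 720)]
print(gazed_object((650, 650), dets))   # -> "notebook"
```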

Findings

A large number of experiments are carried out on a public database and on a new dataset built in a real classroom. The experimental results show that the proposed model not only accurately tracks the students' gazing trajectory and effectively analyzes the attention fluctuations of individual students and of the class as a whole but also provides a valuable reference for evaluating students' learning process.

Originality/value

The contributions of this paper can be summarized as follows. The analysis of students' attention plays an important role in improving teaching quality and student achievement, yet there is little research on how to analyze students' attention automatically and intelligently. To address this gap, this paper focuses on analyzing students' attention through gaze tracking and object detection in classroom teaching, which is significant for practical application in the field of education. The authors propose an effective intelligent fusion model based on deep neural networks, consisting mainly of a gazing point module and an object detection module, to analyze students' attention in classroom teaching without relying on any smart wearable device. An attention mechanism is introduced into the gazing point module to improve the performance of gazing point detection, and comparison experiments on the public dataset show that this module achieves better performance. The eye movement criteria are associated with visual gaze to obtain quantifiable, objective data for students' attention analysis, which can provide a valuable basis for evaluating students' learning process, give both parents and teachers useful information about students' learning and support the development of individualized teaching. Finally, the authors built a new database containing first-person-view videos of 11 subjects in a real classroom and use it to evaluate the effectiveness and feasibility of the proposed model.

Details

Data Technologies and Applications, vol. 57 no. 5
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 14 November 2023

Khaled Hallak, Fulbert Baudoin, Virginie Griseri, Florian Bugarin, Stephane Segonds, Severine Le Roy and Gilbert Teyssedre

The purpose of this paper is to optimize and improve a bipolar charge transport (BCT) model used to simulate charge dynamics in insulating polymer materials, specifically…

Abstract

Purpose

The purpose of this paper is to optimize and improve a bipolar charge transport (BCT) model used to simulate charge dynamics in insulating polymer materials, specifically low-density polyethylene (LDPE).

Design/methodology/approach

An optimization algorithm is applied to optimize the BCT model by comparing the model outputs with experimental data obtained using two kinds of measurements: space charge distribution using the pulsed electroacoustic (PEA) method and current measurements in nonstationary conditions.
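
Parameter fitting of this kind can be sketched with a trust-region-reflective least-squares solver, the algorithm named later in this abstract. The forward model below is a simple double-exponential stand-in, not the bipolar charge transport model itself, and the parameter values and bounds are assumptions.

```python
# Sketch: fit placeholder model parameters to synthetic "measurements"
# with SciPy's trust-region-reflective solver (method="trf").
import numpy as np
from scipy.optimize import least_squares

def bct_model(params: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Placeholder forward model: a double-exponential current transient."""
    a1, tau1, a2, tau2 = params
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def residuals(params, t, measured):
    return bct_model(params, t) - measured

t = np.linspace(1.0, 3600.0, 200)                       # seconds
measured = bct_model([1e-9, 300.0, 5e-10, 2500.0], t)   # synthetic "experiment"
fit = least_squares(residuals, x0=[5e-10, 100.0, 1e-10, 1000.0],
                    bounds=([0, 1, 0, 1], [1e-6, 1e4, 1e-6, 1e5]),
                    method="trf", args=(t, measured))
print(fit.x)                                            # recovered parameters
```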

Findings

The study provides an optimal set of parameters that offers a good correlation between model outputs and several experiments conducted under varying applied fields. The study evaluates the quantity of charges remaining inside the dielectric even after 24 h of short circuit. Moreover, the effects of increasing the electric field on charge trapping and detrapping rates are addressed.

Research limitations/implications

This study only examined experiments with different applied electric fields, and thus the obtained parameters may not suit the experimental outputs if the experimental temperature varies. Further improvement may be achieved by introducing additional experiments or another source of measurements.

Originality/value

This work provides a unique set of optimal parameters that best match both current and charge density measurements for a BCT model in LDPE and demonstrates the use of the trust-region-reflective algorithm for parameter optimization. The study also attempts to evaluate the equations used to describe the charge trapping and detrapping phenomena, providing a deeper understanding of the physics behind the model.

Details

COMPEL - The international journal for computation and mathematics in electrical and electronic engineering, vol. 42 no. 6
Type: Research Article
ISSN: 0332-1649

Article
Publication date: 2 January 2024

Fernando Peña, José Carlos Rico, Pablo Zapico, Gonzalo Valiño and Sabino Mateos

The purpose of this paper is to provide a new procedure for in-plane compensation of geometric errors that often appear in the layers deposited by an additive manufacturing (AM…

Abstract

Purpose

The purpose of this paper is to provide a new procedure for in-plane compensation of geometric errors that often appear in the layers deposited by an additive manufacturing (AM) process when building a part, regardless of the complexity of the layer geometry.

Design/methodology/approach

The procedure is based on comparing the real layer contours to the nominal ones extracted from the STL model of the part. Considering alignment and form deviations, the compensation algorithm generates new compensated contours that match the nominal ones as closely as possible. To assess the effectiveness of the compensation, two case studies were analysed. In the first case, the parts were not manufactured; instead, the distortions were simulated using a predictive model. In the second case, the test part was actually manufactured, and the distortions were measured on a coordinate measuring machine.
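
The core compensation idea, pre-distorting the nominal contour against the observed deviation, can be sketched as follows. This is a minimal sketch: the alignment step, form-deviation handling and compensation factor used by the authors are simplified to a per-point mirror correction on already-corresponding points.

```python
# Minimal sketch of per-point mirror compensation, assuming nominal and
# measured contours are aligned and sampled at corresponding points.
import numpy as np

def compensate_contour(nominal: np.ndarray, measured: np.ndarray,
                       factor: float = 1.0) -> np.ndarray:
    """Shift each nominal point against the observed deviation so that the
    next build reproduces the nominal geometry more closely."""
    deviation = measured - nominal          # (N, 2) in-plane error vectors
    return nominal - factor * deviation     # pre-distorted (compensated) contour

# Nominal circle of radius 10 mm and a layer that came out 0.2 mm oversized
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
nominal = 10.0 * np.column_stack([np.cos(theta), np.sin(theta)])
measured = 10.2 * np.column_stack([np.cos(theta), np.sin(theta)])
compensated = compensate_contour(nominal, measured)
print(np.linalg.norm(compensated[0]))       # ~9.8 mm: shrunk to offset growth
```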

Findings

The geometric deviations detected in both case studies, as evaluated by various quality indicators, reduced significantly after applying the compensation procedure, meaning that the compensated and nominal contours were better matched both in shape and size.

Research limitations/implications

Although large contours showed deviations close to zero, dimensional overcompensation was observed for small contours. The compensation procedure could be enhanced if the applied compensation factor took into account the contour size of the analysed layer and other geometric parameters that could have an influence.

Originality/value

The presented method of compensation is applicable to layers of any shape obtained in any AM process.

Details

Rapid Prototyping Journal, vol. 30 no. 3
Type: Research Article
ISSN: 1355-2546
