Search results

1–10 of over 154,000
Article
Publication date: 11 July 2023

Chuyu Tang, Genliang Chen, Hao Wang and Yangfan Yu

Hull block assembly is a vital task in ship construction. It is necessary to obtain the actual poses of the assembly features to guide further block alignment. Traditional methods…

Abstract

Purpose

Hull block assembly is a vital task in ship construction. It is necessary to obtain the actual poses of the assembly features to guide further block alignment. Traditional methods use single-point measurement, which is time-consuming and may lead to loss of key information. Thus, large-scale scanning is introduced for data acquisition, and this paper aims to provide a precise and robust method for retrieving poses based on point set registration.

Design/methodology/approach

The main problem of point registration is to find the correct transformation between the model and the scene. In this paper, a voting framework based on a new point pair feature is used to calculate the transformation. First, a special edge indicator for multiplate objects is proposed to determine the edges. Subsequently, pair features carrying an edge description are computed for every point. Finally, a voting scheme based on agglomerative clustering is implemented to determine the optimal transformation.
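
The pair feature at the heart of this framework can be sketched in a few lines. Below is the classic point pair feature (pair distance plus three angles among the normals and the connecting vector), extended with a binary edge flag standing in for the paper's edge indicator; the function names, the edge flag and the quantization steps are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def angle(a, b):
    # Angle between two vectors, clipped for numerical safety, in [0, pi]
    c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(c, -1.0, 1.0))

def point_pair_feature(p1, n1, p2, n2, edge1=False, edge2=False):
    # Classic four-component pair feature: distance plus the three angles
    # among the normals and the connecting vector, extended here with a
    # hypothetical binary edge indicator per point.
    d = p2 - p1
    return (np.linalg.norm(d), angle(n1, d), angle(n2, d),
            angle(n1, n2), int(edge1), int(edge2))

def quantize(f, d_step=0.05, a_step=np.pi / 12):
    # Discretize so that similar pairs fall into the same voting bin
    dist, a1, a2, a3, e1, e2 = f
    return (int(dist / d_step), int(a1 / a_step), int(a2 / a_step),
            int(a3 / a_step), e1, e2)
```

In a voting scheme of this kind, model pairs are hashed by their quantized feature offline; at runtime, scene pairs vote for the poses stored under the same key, and the clustered vote maxima give the transformation.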

Findings

The proposed method not only improves registration efficiency but also maintains high accuracy compared to several commonly used approaches. In particular, for objects composed of plates, the results of pose estimation are more promising because of the compact pair feature. The multiple ship longitudinal localization experiment validates the effectiveness in real scan applications.

Originality/value

The proposed edge description performs a better detection for the edges of multiplate objects. The pair feature incorporating the edge indicator is more discriminative than the original template, resulting in better robustness to outliers, noise and occlusions.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 18 October 2018

Lijun Ding, Shuguang Dai and Pingan Mu

Measurement uncertainty calculation is an important and complicated problem in digitised components inspection. In such inspections, a coordinate measuring machine (CMM) and laser…

Abstract

Purpose

Measurement uncertainty calculation is an important and complicated problem in digitised components inspection. In such inspections, a coordinate measuring machine (CMM) and laser scanner are usually used to get the surface point clouds of the component in different postures. Then, the point clouds are registered to construct fully connected point clouds of the component’s surfaces. However, in most cases, the measurement uncertainty is difficult to estimate after the scanned point cloud has been registered. This paper aims to propose a simplified method for calculating the uncertainty of point cloud measurements based on spatial feature registration.

Design/methodology/approach

In the proposed method, algorithmic models are used to calculate the point cloud measurement uncertainty based on noncontact measurements of the planes, lines and points of the component and spatial feature registration.
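
The paper's algorithmic models are not spelled out in the abstract, so as an illustrative alternative, one way to see how sensor noise on registration features propagates to a measured point is a Monte Carlo sketch: perturb the features, re-solve the rigid transform each trial, and measure the spread at a query point. The feature layout, noise model and function names below are all assumptions.

```python
import numpy as np

def kabsch(A, B):
    # Least-squares rigid transform (R, t) mapping rows of A onto rows of B
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # reflection guard
    D = np.diag([1.0] * (A.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T
    return R, cb - R @ ca

def registration_uncertainty(feats, sigma, query, trials=500, seed=0):
    # Monte Carlo spread of a query point after feature-based registration,
    # assuming i.i.d. Gaussian noise of std `sigma` on each measured feature
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(trials):
        noisy = feats + rng.normal(0.0, sigma, feats.shape)
        R, t = kabsch(feats, noisy)
        pts.append(R @ query + t)
    return np.std(np.array(pts), axis=0)           # per-axis spread
```

The per-axis standard deviation returned here plays the role of an uncertainty estimate for the registered point cloud at the query location.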

Findings

The measurement uncertainty based on spatial feature registration is related to the mutual position of the registration features and the number of sensor commutations during the scanning process, but not to the spatial distribution of the measured feature. The results of the experiments conducted verify the efficacy of the proposed method.

Originality/value

The proposed method provides an efficient algorithm for calculating the measurement uncertainty of registration point clouds based on part features, and therefore has important theoretical and practical significance in digitised components inspection.

Details

Sensor Review, vol. 39 no. 1
Type: Research Article
ISSN: 0260-2288

Open Access
Article
Publication date: 5 June 2020

Zijun Jiang, Zhigang Xu, Yunchao Li, Haigen Min and Jingmei Zhou

Precise vehicle localization is a basic and critical technique for various intelligent transportation system (ITS) applications. It also needs to adapt to the complex road…

Abstract

Purpose

Precise vehicle localization is a basic and critical technique for various intelligent transportation system (ITS) applications. It also needs to adapt to complex road environments in real time. The global positioning system and the strap-down inertial navigation system are two common techniques in the field of vehicle localization. However, the localization accuracy, reliability and real-time performance of these two techniques cannot satisfy the requirements of some critical ITS applications such as collision avoidance, vision enhancement and automatic parking. Aiming at the problems above, this paper aims to propose a precise vehicle ego-localization method based on image matching.

Design/methodology/approach

This study comprised three steps. Step 1, feature point extraction: after the image was acquired, local features in the pavement images were extracted using an improved speeded-up robust features (SURF) algorithm. Step 2, mismatch elimination: a random sample consensus (RANSAC) algorithm was used to eliminate mismatched points in the road images and make the matched point pairs more robust. Step 3, feature point matching and trajectory generation.
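
The RANSAC stage of Step 2 can be sketched for 2D image points, assuming a rigid (rotation plus translation) motion model between consecutive pavement frames; the paper's exact model, thresholds and iteration counts are not given in the abstract, so the values below are illustrative.

```python
import numpy as np

def rigid_2d(src, dst):
    # Exact 2D rotation + translation from two point correspondences
    v1, v2 = src[1] - src[0], dst[1] - dst[0]
    a = np.arctan2(v2[1], v2[0]) - np.arctan2(v1[1], v1[0])
    R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
    return R, dst[0] - R @ src[0]

def ransac_filter(src, dst, iters=200, tol=0.05, seed=0):
    # Keep the putative matches consistent with the best rigid model found
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i, j = rng.choice(len(src), size=2, replace=False)
        R, t = rigid_2d(src[[i, j]], dst[[i, j]])
        inliers = np.linalg.norm((src @ R.T + t) - dst, axis=1) < tol
        if inliers.sum() > best.sum():
            best = inliers
    return best
```

The surviving inlier set is what a pipeline like this would pass on to Step 3 for offset estimation and trajectory generation.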

Findings

Through the matching and validation of the extracted local feature points, the relative translation and rotation offsets between two consecutive pavement images were calculated; eventually, the trajectory of the vehicle was generated.

Originality/value

The experimental results show that the studied algorithm achieves decimeter-level accuracy and fully meets the demand of lane-level positioning in some critical ITS applications.

Details

Journal of Intelligent and Connected Vehicles, vol. 3 no. 2
Type: Research Article
ISSN: 2399-9802

Article
Publication date: 21 August 2023

Minghao Wang, Ming Cong, Yu Du, Dong Liu and Xiaojing Tian

The purpose of this study is to solve the problem of an unknown initial position in a multi-robot raster map fusion. The method includes two-dimensional (2D) raster maps and…

Abstract

Purpose

The purpose of this study is to solve the problem of an unknown initial position in multi-robot raster map fusion. The method covers both two-dimensional (2D) raster maps and three-dimensional (3D) point cloud maps.

Design/methodology/approach

A fusion method using multiple algorithms was proposed. For 2D raster maps, this method uses accelerated robust feature detection to extract feature points from multiple raster maps; the feature points are then matched using a two-step algorithm of minimum Euclidean distance and adjacent feature relation. Finally, the random sample consensus (RANSAC) algorithm was used for redundant feature fusion. On the basis of 2D raster map fusion, coordinate alignment is used for 3D point cloud map fusion.
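
The two-step matcher can be sketched as follows: nearest neighbour in descriptor space, then a simple stand-in for the "adjacent feature relation" check in which a candidate match survives only if its distance to at least one other matched feature is preserved across the two maps. The descriptor layout, tolerance and names are assumptions, not the paper's algorithm.

```python
import numpy as np

def two_step_match(desc_a, desc_b, pts_a, pts_b, rel_tol=0.2):
    # Step 1: nearest neighbour in descriptor space
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn = d.argmin(axis=1)                   # candidate match for each A-feature
    # Step 2: keep a match only if some inter-feature distance is preserved
    keep = np.zeros(len(pts_a), dtype=bool)
    for i in range(len(pts_a)):
        for j in range(len(pts_a)):
            if i == j:
                continue
            da = np.linalg.norm(pts_a[i] - pts_a[j])
            db = np.linalg.norm(pts_b[nn[i]] - pts_b[nn[j]])
            if da > 0 and abs(da - db) / da < rel_tol:
                keep[i] = True              # at least one neighbour agrees
                break
    return nn, keep
```

Because raster maps built by different robots differ only by a rigid placement, inter-feature distances are preserved for correct matches, which is what the second step exploits.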

Findings

To verify the effectiveness of the algorithm, the segmentation mapping method (2D raster map) and the actual robot mapping method (2D raster map and 3D point cloud map) were used for experimental verification. The experiments demonstrated the stability and reliability of the proposed algorithm.

Originality/value

This algorithm uses a new visual method with coordinate alignment to process the raster map, which can effectively solve the problem of the demand for the initial relative position of robots in traditional methods and be more adaptable to the fusion of 3D maps. In addition, the original data of the map can come from different types of robots, which greatly improves the universality of the algorithm.

Details

Robotic Intelligence and Automation, vol. 43 no. 5
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 7 April 2023

Sixing Liu, Yan Chai, Rui Yuan and Hong Miao

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments…

Abstract

Purpose

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small and yields few data features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a tightly coupled 3D laser map building method that incorporates visual information, using laser point cloud information and image information to complement each other to improve the overall performance of the algorithm.

Design/methodology/approach

The visual feature points are first matched at the front end of the method, and the mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain its depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused in the laser point cloud information to establish a threshold to construct a loopback framework to further reduce the cumulative drift error of the system over time.
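
The bidirectional idea in the front end can be illustrated with its simplest ingredient, a mutual nearest-neighbour cross-check on descriptors; the paper's bidirectional RANSAC adds geometric verification on top, which this sketch omits.

```python
import numpy as np

def cross_check_matches(desc_a, desc_b):
    # Keep (i, j) only when i's best match in B is j AND j's best match
    # in A is i; a geometric RANSAC verification would normally follow.
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a2b = d.argmin(axis=1)     # best B-index for each A-descriptor
    b2a = d.argmin(axis=0)     # best A-index for each B-descriptor
    return [(i, j) for i, j in enumerate(a2b) if b2a[j] == i]
```

Running the check in both directions discards one-sided matches cheaply before the more expensive pose estimation stage.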

Findings

Experiments on publicly available data sets show that the proposed method matches the real trajectory well. For various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method was also verified in a real environment using an autonomous walking acquisition platform; the system loaded with the method can run well for a long time and adapts to multiple scene environments.

Originality/value

A multi-sensor data tight coupling method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loopback framework to reduce error accumulation. The experimental validation results show that the accuracy and robustness of a single-sensor SLAM algorithm can be improved.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 June 2009

Atsushi Shimada, Madoka Kanouchi, Daisaku Arita and Rin‐Ichiro Taniguchi

The purpose of this paper is to present an approach to improve the accuracy of estimating feature points of human body on a vision‐based motion capture system (MCS) by using the…

Abstract

Purpose

The purpose of this paper is to present an approach for improving the accuracy of estimating feature points of the human body in a vision-based motion capture system (MCS) by using the variable-density self-organizing map (VDSOM).

Design/methodology/approach

The VDSOM is a kind of self-organizing map (SOM) that can learn training samples incrementally. The authors let the VDSOM learn 3D feature points of the human body whenever the MCS estimates them correctly. When one or more 3D feature points cannot be estimated correctly, the VDSOM serves a second purpose: like any SOM, it can recall part of a learned weight vector, and this ability is used to recall correct patterns and replace the incorrect feature points with the recalled values.
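
The recall mechanism is generic to SOMs and can be sketched with an ordinary (fixed-density) map: find the best-matching unit using only the observed components, then fill the missing components from that unit's weight vector. The training loop, unit count and data below are illustrative assumptions, not the VDSOM itself.

```python
import numpy as np

def train_som(samples, n_units=20, epochs=100, lr=0.3, seed=0):
    # Tiny 1-D SOM: units initialized from samples, Gaussian index neighbourhood
    rng = np.random.default_rng(seed)
    w = samples[rng.choice(len(samples), n_units, replace=True)].astype(float)
    for _ in range(epochs):
        for x in samples:
            bmu = np.argmin(np.linalg.norm(w - x, axis=1))
            for u in range(n_units):
                h = np.exp(-((u - bmu) ** 2) / 2.0)   # neighbourhood weight
                w[u] += lr * h * (x - w[u])
        lr *= 0.95                                    # decaying learning rate
    return w

def recall_missing(w, x, known):
    # Best-matching unit on the known components only, then fill the rest
    bmu = np.argmin(np.linalg.norm(w[:, known] - x[known], axis=1))
    filled = x.copy()
    unknown = [i for i in range(len(x)) if i not in known]
    filled[unknown] = w[bmu, unknown]
    return filled
```

Replacing a mis-tracked feature point with the recalled component is exactly the complementation step the abstract describes, here reduced to two short functions.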

Findings

Experimental results show that the approach estimates human posture more robustly than the other methods.

Originality/value

The proposed approach is interesting for the collaboration between an MCS and incremental learning.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 2 no. 2
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 4 April 2024

Chuyu Tang, Hao Wang, Genliang Chen and Shaoqiu Xu

This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior…

Abstract

Purpose

This paper aims to propose a robust method for non-rigid point set registration, using the Gaussian mixture model and accommodating non-rigid transformations. The posterior probabilities of the mixture model are determined through the proposed integrated feature divergence.

Design/methodology/approach

The method involves an alternating two-step framework, comprising correspondence estimation and subsequent transformation updating. For correspondence estimation, integrated feature divergences including both global and local features, are coupled with deterministic annealing to address the non-convexity problem of registration. For transformation updating, the expectation-maximization iteration scheme is introduced to iteratively refine correspondence and transformation estimation until convergence.
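
The alternation of correspondence estimation and transformation updating can be illustrated with a deliberately simplified version: isotropic Gaussian components centred on one point set, soft correspondences in the E-step, and an M-step that updates only a translation. The paper solves a full non-rigid transformation and replaces plain Euclidean distance with its integrated feature divergence; shrinking the bandwidth below mimics deterministic annealing.

```python
import numpy as np

def em_register_translation(X, Y, iters=50, sigma=1.0, anneal=0.93):
    # X: data points; Y: GMM centroids to be moved. Only a translation t
    # is estimated here, a toy stand-in for the paper's non-rigid update.
    t = np.zeros(X.shape[1])
    for _ in range(iters):
        Yt = Y + t
        diff = X[:, None, :] - Yt[None, :, :]
        # E-step: soft correspondence posteriors per data point
        P = np.exp(-(diff ** 2).sum(axis=2) / (2 * sigma ** 2))
        P = P / (P.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: closed-form translation update (weighted mean residual)
        t = t + (P[:, :, None] * diff).sum(axis=(0, 1)) / P.sum()
        sigma = max(sigma * anneal, 0.05)   # deterministic annealing schedule
    return t
```

Annealing keeps early posteriors diffuse (avoiding poor local minima, the non-convexity issue the abstract mentions) and sharpens them as the alignment improves.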

Findings

The experiments confirm that the proposed registration approach exhibits remarkable robustness to deformation, noise, outliers and occlusion for both 2D and 3D point clouds. Furthermore, the proposed method outperforms existing analogous algorithms in terms of time complexity. An application to stabilizing and securing intermodal containers loaded on ships is presented. The results demonstrate that the proposed registration framework exhibits excellent adaptability to real-scan point clouds and achieves comparatively superior alignments in a shorter time.

Originality/value

The integrated feature divergence, involving both global and local information of points, is proven to be an effective indicator for measuring the reliability of point correspondences. This inclusion prevents premature convergence, resulting in more robust registration results for our proposed method. Simultaneously, the total operating time is reduced due to a lower number of iterations.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 8 December 2023

Han Sun, Song Tang, Xiaozhi Qi, Zhiyuan Ma and Jianxin Gao

This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose…

Abstract

Purpose

This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose estimation accuracy and improve the overall system performance in outdoor environments.

Design/methodology/approach

Distinct from traditional approaches, MCFilter emphasizes enhancing point cloud data quality at the pixel level. This framework hinges on two primary elements. First, the D-Tracker, a tracking algorithm, is grounded on multiresolution three-dimensional (3D) descriptors and adeptly maintains a balance between precision and efficiency. Second, the R-Filter introduces a pixel-level attribute named motion-correlation, which effectively identifies and removes dynamic points. Furthermore, designed as a modular component, MCFilter ensures seamless integration into existing LiDAR SLAM systems.
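
The motion-correlation attribute itself is the paper's contribution and is not reproduced here; as a rough stand-in, the underlying intuition, that a tracked point is dynamic when it moves inconsistently with the estimated ego-motion, can be sketched as:

```python
import numpy as np

def flag_dynamic_points(prev_pts, curr_pts, R, t, thresh=0.2):
    # Label a tracked point dynamic when its current position deviates from
    # the ego-motion-compensated prediction by more than `thresh` metres.
    # (Illustrative stand-in for the paper's pixel-level motion-correlation.)
    predicted = prev_pts @ R.T + t          # where static points should land
    residual = np.linalg.norm(curr_pts - predicted, axis=1)
    return residual > thresh
```

Removing the flagged points before scan matching is what lets a filter of this kind improve pose estimation accuracy in dynamic outdoor scenes.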

Findings

Based on rigorous testing with public data sets and under real-world conditions, MCFilter increased average accuracy by 12.39% and reduced processing time by 24.18%. These outcomes emphasize the method’s effectiveness in refining the performance of current LiDAR SLAM systems.

Originality/value

In this study, the authors present a novel 3D descriptor tracker designed for consistent feature point matching across successive frames. The authors also propose an innovative attribute to detect and eliminate noise points. Experimental results demonstrate that integrating this method into existing LiDAR SLAM systems yields state-of-the-art performance.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 5 November 2019

Zhenbin Jiang, Juan Guo and Xinyu Zhang

A common pipeline of apparel design and simulation is adjusting 2D apparel patterns, putting them onto a virtual human model and performing 3D physically based simulation…

Abstract

Purpose

A common pipeline of apparel design and simulation is adjusting 2D apparel patterns, putting them onto a virtual human model and performing 3D physically based simulation. However, manually adjusting 2D apparel patterns and performing simulations require repetitive adjustments and trials in order to achieve satisfactory results. To support future made-to-fit apparel design and manufacturing, efficient tools for fast custom design purposes are desired. The purpose of this paper is to propose a method to automatically adjust 2D apparel patterns and rapidly generate a custom apparel style for a given human model.

Design/methodology/approach

The authors first pre-define a set of constraints using feature points, feature lines and ease allowance for existing apparels and human models. The authors formulate the apparel fitting to a human model, as a process of optimization using these predefined constraints. Then, the authors iteratively solve the problem by minimizing the total fitting metric.
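
A toy version of minimizing the total fitting metric over corresponding feature points can be written in closed form, assuming the metric is a sum of squared distances and the pattern adjustment is restricted to a uniform scale and a translation; the paper's constraints (ease allowance, feature lines) are richer and solved iteratively.

```python
import numpy as np

def fit_translation_scale(pattern, target):
    # Closed-form minimiser of sum ||s * p_i + t - q_i||^2 over scale s
    # and translation t, obtained by centring both point sets
    pc, qc = pattern.mean(0), target.mean(0)
    p0, q0 = pattern - pc, target - qc
    s = (p0 * q0).sum() / (p0 * p0).sum()   # optimal uniform scale
    t = qc - s * pc                          # optimal translation
    return s, t
```

In an iterative pipeline like the paper's, a solve of this kind would be one inner step, repeated as constraints and correspondences are updated.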

Findings

The authors observed that through reusing existing apparel styles, the process of designing apparels can be greatly simplified. The authors used a new fitting function to measure the geometric fitting of corresponding feature points/lines between apparels and a human model. Then, the optimized 2D patterns are automatically obtained by minimizing the matching function. The authors’ experiments show that the authors’ approach can increase the reusability of existing apparel styles and improve apparel design efficiency.

Research limitations/implications

There are some limitations. First, in order to achieve interactive performance, the authors’ current 3D simulation does not detect collisions within or between adjacent apparel surfaces. Second, the authors did not consider multiple-layer apparels. It is non-trivial to define ease allowance between multiple layers.

Originality/value

The authors use a set of constraints such as ease allowance, feature points, feature lines, etc. for existing apparels and human models. The authors define a few new fitting functions using these pre-specified constraints. During physics-driven simulation, the authors iteratively minimize these fitting functions.

Details

International Journal of Clothing Science and Technology, vol. 32 no. 2
Type: Research Article
ISSN: 0955-6222

Article
Publication date: 25 October 2022

Chen Chen, Tingyang Chen, Zhenhua Cai, Chunnian Zeng and Xiaoyue Jin

The traditional vision system cannot automatically adjust the feature point extraction method according to the type of welding seam. In addition, the robot cannot self-correct the…

Abstract

Purpose

The traditional vision system cannot automatically adjust the feature point extraction method according to the type of welding seam. In addition, the robot cannot self-correct the laying position error or machining error. To solve this problem, this paper aims to propose a hierarchical visual model to achieve automatic arc welding guidance.

Design/methodology/approach

The hierarchical visual model proposed in this paper is divided into two layers: a welding seam classification layer and a feature point extraction layer. In the welding seam classification layer, the SegNet network model is trained to identify the welding seam type, and the prediction mask is obtained to segment the corresponding point clouds. In the feature point extraction layer, the scanning path is determined from the point cloud obtained from the upper layer to correct the laying position error, and the feature point extraction method is automatically selected according to the type of welding seam to correct the machining error. Furthermore, a specific feature point extraction method is proposed for each type of welding seam. The proposed visual model is experimentally validated, and the feature point extraction results as well as the seam tracking error are analyzed.

Findings

The experimental results show that the algorithm accomplishes welding seam classification, feature point extraction and seam tracking with high precision. The prediction mask accuracy is above 90% for the three types of welding seam. The proposed feature point extraction method achieves sub-pixel extraction for each seam type. Across the three types of welding seam, the maximum seam tracking error is 0.33–0.41 mm, and the average seam tracking error is 0.11–0.22 mm.

Originality/value

The main innovation of this paper is that a hierarchical visual model for robotic arc welding is proposed, which is suitable for various types of welding seam. The proposed visual model well achieves welding seam classification, feature point extraction and error correction, which improves the automation level of robot welding.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X
