Search results

1 – 10 of 70
Article
Publication date: 15 December 2022

Jiaxiang Hu, Xiaojun Shi, Chunyun Ma, Xin Yao and Yingxin Wang

Abstract

Purpose

The purpose of this paper is to propose a multi-feature, multi-metric and multi-loop tightly coupled LiDAR-visual-inertial odometry, M3LVI, for high-accuracy and robust state estimation and mapping.

Design/methodology/approach

M3LVI is built atop a factor graph and composed of two subsystems, a LiDAR-inertial system (LIS) and a visual-inertial system (VIS). LIS implements multi-feature extraction on the point cloud, and multi-metric transformation estimation is then performed to realize LiDAR odometry. LiDAR-enhanced images and IMU pre-integration are used in VIS to realize visual odometry, providing a reliable initial guess for the LIS matching module. Place recognition is performed by a dual loop module combining Bag of Words and LiDAR-Iris to correct accumulated drift. M3LVI also functions properly when one of the subsystems fails, which greatly increases robustness in degraded environments.
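
The factor-graph optimization at the core of such systems can be illustrated with a minimal sketch. The 1D pose model, factor weights and numbers below are illustrative only, not from the paper; real systems optimize 6-DoF poses with libraries such as GTSAM or g2o:

```python
import numpy as np

def solve_pose_graph(odom, loops, sigma_odom=0.1, sigma_loop=0.01):
    """Weighted least-squares solve of a 1D pose graph.

    odom  : relative measurements z_i between pose i and pose i+1
    loops : (i, j, z) loop-closure measurements between poses i and j
    Pose 0 is fixed at the origin via a strong prior factor.
    """
    n = len(odom) + 1
    rows, rhs, weights = [], [], []
    # Prior factor on pose 0.
    e = np.zeros(n); e[0] = 1.0
    rows.append(e); rhs.append(0.0); weights.append(1e6)
    # Odometry factors: x[i+1] - x[i] = z
    for i, z in enumerate(odom):
        e = np.zeros(n); e[i] = -1.0; e[i + 1] = 1.0
        rows.append(e); rhs.append(z); weights.append(1.0 / sigma_odom)
    # Loop-closure factors: x[j] - x[i] = z
    for i, j, z in loops:
        e = np.zeros(n); e[i] = -1.0; e[j] = 1.0
        rows.append(e); rhs.append(z); weights.append(1.0 / sigma_loop)
    A = np.array(rows) * np.array(weights)[:, None]
    b = np.array(rhs) * np.array(weights)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Drifting odometry claims the robot moved 4 x 1.0 m, but a tight loop
# closure observes that pose 4 is back at pose 0; the solver spreads
# the correction across all odometry factors.
poses = solve_pose_graph(odom=[1.0, 1.0, 1.0, 1.0], loops=[(0, 4, 0.0)])
```

The same mechanism explains why a dual loop module (BoW plus LiDAR-Iris) matters: each detected loop adds a strongly weighted constraint that pulls accumulated drift out of the whole trajectory.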

Findings

Quantitative experiments were conducted on the KITTI data set and a campus data set to evaluate M3LVI. The results show that the algorithm achieves higher pose estimation accuracy than existing methods.

Practical implications

The proposed method can greatly improve the positioning and mapping accuracy of automated guided vehicles (AGVs) and has an important impact on AGV material distribution, one of the most important applications of industrial robots.

Originality/value

M3LVI divides the original point cloud into six types, uses multi-metric transformation estimation to estimate the robot's state and adopts a factor graph optimization model to refine the state estimate, which improves the accuracy of pose estimation. When one subsystem fails, the other can complete the positioning task independently, which greatly increases robustness in degraded environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 June 2021

Ruihao Lin, Junzhe Xu and Jianhua Zhang

Abstract

Purpose

Large-scale and precise three-dimensional (3D) maps play an important role in autonomous driving and robot positioning. However, it is difficult to obtain accurate poses for mapping. On the one hand, global positioning system (GPS) data are not always reliable owing to multipath effects and poor satellite visibility in many urban environments. On the other hand, LiDAR-based odometry accumulates errors. This paper aims to propose a novel simultaneous localization and mapping (SLAM) system to obtain large-scale and precise 3D maps.

Design/methodology/approach

The proposed SLAM system optimally integrates GPS data and a LiDAR odometry. Two core algorithms are developed. To verify the reliability of the GPS data effectively, the VGL (Verify GPS data with LiDAR data) algorithm is proposed, which uses the points from the LiDAR. To obtain accurate poses in GPS-denied areas, this paper proposes the EG-LOAM algorithm, a LiDAR odometry with a local optimization strategy that eliminates accumulative errors by means of reliable GPS data.
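
The core idea of cross-validating GPS against LiDAR odometry can be sketched as a simple consistency check. This is only a rough stand-in for VGL (which operates on the LiDAR points themselves); the threshold and data are illustrative:

```python
import numpy as np

def verify_gps(gps_xy, lidar_xy, thresh=0.5):
    """Flag GPS fixes whose frame-to-frame displacement disagrees
    with the LiDAR odometry by more than `thresh` metres.

    gps_xy, lidar_xy : (N, 2) planar positions per frame, expressed
    in a common frame. Returns a boolean mask, True = reliable.
    A multipath jump makes the GPS step vector diverge from the
    odometry step vector even when both trajectories drift slowly.
    """
    d_gps = np.diff(gps_xy, axis=0)
    d_lidar = np.diff(lidar_xy, axis=0)
    err = np.linalg.norm(d_gps - d_lidar, axis=1)
    mask = np.ones(len(gps_xy), dtype=bool)
    # A fix is suspect if the step into it disagrees with the odometry.
    mask[1:] = err <= thresh
    return mask

gps = np.array([[0, 0], [1, 0], [9, 0], [3, 0]], dtype=float)  # frame 2: multipath jump
odo = np.array([[0, 0], [1, 0], [2, 0], [3, 0]], dtype=float)
reliable = verify_gps(gps, odo)
```

Fixes that pass such a check can then anchor the odometry (as EG-LOAM does with reliable GPS data), while rejected fixes are ignored.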

Findings

On the KITTI data set and a customized outdoor data set, the system is able to generate high-precision 3D maps both in GPS-denied areas and in areas covered by GPS. Meanwhile, the VGL algorithm is shown to verify the reliability of the GPS data with confidence, and EG-LOAM outperforms the state-of-the-art baselines.

Originality/value

A novel SLAM system is proposed to obtain large-scale and precise 3D maps. To improve the robustness of the system, the VGL algorithm and EG-LOAM are designed. The whole system, as well as the two algorithms, performs satisfactorily in experiments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 December 2023

Han Sun, Song Tang, Xiaozhi Qi, Zhiyuan Ma and Jianxin Gao

Abstract

Purpose

This study aims to introduce a novel noise filter module designed for LiDAR simultaneous localization and mapping (SLAM) systems. The primary objective is to enhance pose estimation accuracy and improve the overall system performance in outdoor environments.

Design/methodology/approach

Distinct from traditional approaches, MCFilter emphasizes enhancing point cloud data quality at the pixel level. This framework hinges on two primary elements. First, the D-Tracker, a tracking algorithm, is grounded on multiresolution three-dimensional (3D) descriptors and adeptly maintains a balance between precision and efficiency. Second, the R-Filter introduces a pixel-level attribute named motion-correlation, which effectively identifies and removes dynamic points. Furthermore, designed as a modular component, MCFilter ensures seamless integration into existing LiDAR SLAM systems.
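
The general idea behind dynamic-point removal can be sketched as a residual test against the estimated ego-motion. Note this is a generic illustration, not the paper's pixel-level motion-correlation attribute; the threshold and points are invented:

```python
import numpy as np

def filter_dynamic_points(pts_prev, pts_curr, R, t, thresh=0.2):
    """Remove tracked points whose motion is not explained by ego-motion.

    pts_prev, pts_curr : (N, 3) arrays of the same tracked points in
    two consecutive LiDAR frames. (R, t) is the estimated ego-motion
    from the previous to the current frame. A static point satisfies
    p_curr ~= R @ p_prev + t; a large residual marks a dynamic point.
    `thresh` (metres) is illustrative.
    """
    predicted = pts_prev @ R.T + t
    residual = np.linalg.norm(pts_curr - predicted, axis=1)
    static_mask = residual <= thresh
    return pts_curr[static_mask], static_mask

# Identity ego-motion; the last point moved 1 m on its own (dynamic).
prev = np.array([[5.0, 0, 0], [0, 5.0, 0], [2.0, 2.0, 0]])
curr = np.array([[5.0, 0, 0], [0, 5.0, 0], [3.0, 2.0, 0]])
static, mask = filter_dynamic_points(prev, curr, np.eye(3), np.zeros(3))
```

A tracker such as the D-Tracker supplies the frame-to-frame correspondences that make this residual computable per point.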

Findings

In rigorous testing on public data sets and under real-world conditions, MCFilter increased average accuracy by 12.39% and reduced processing time by 24.18%. These outcomes confirm the method’s effectiveness in refining the performance of current LiDAR SLAM systems.

Originality/value

In this study, the authors present a novel 3D descriptor tracker designed for consistent feature point matching across successive frames. The authors also propose an innovative attribute to detect and eliminate noise points. Experimental results demonstrate that integrating this method into existing LiDAR SLAM systems yields state-of-the-art performance.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 9 July 2024

Zengrui Zheng, Kainan Su, Shifeng Lin, Zhiquan Fu and Chenguang Yang

Abstract

Purpose

Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information from multiple modalities to address these limitations has emerged as a key research focus. This study aims to provide a comprehensive review of the development of vision-based SLAM (including visual SLAM) for navigation and pose estimation, with a specific focus on techniques for integrating multiple modalities.

Design/methodology/approach

This paper first introduces the mathematical models and framework development of visual SLAM. It then presents various methods for improving accuracy in visual SLAM by fusing different spatial and semantic features, examines research advancements in vision-based SLAM with respect to multi-sensor fusion in both loosely coupled and tightly coupled approaches and, finally, analyzes the limitations of current vision-based SLAM and offers predictions for future advancements.

Findings

The combination of vision-based SLAM and deep learning has significant potential for development. There are advantages and disadvantages to both loosely coupled and tightly coupled approaches in multi-sensor fusion, and the most suitable algorithm should be chosen based on the specific application scenario. In the future, vision-based SLAM is evolving toward better addressing challenges such as resource-limited platforms and long-term mapping.

Originality/value

This review introduces the development of vision-based SLAM and focuses on the advancements in multimodal fusion. It allows readers to quickly understand the progress and current status of research in this field.

Details

Robotic Intelligence and Automation, vol. 44 no. 4
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 7 April 2023

Sixing Liu, Yan Chai, Rui Yuan and Hong Miao

Abstract

Purpose

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the LiDAR sensing range is small and yields few data features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately with a single sensor. This paper aims to propose a tightly coupled 3D laser map-building method that incorporates visual information, using laser point cloud and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach

The visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain depth for these points, and both types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solved with a heuristic simulated annealing algorithm. Finally, a visual bag-of-words model is fused with the laser point cloud information, and a threshold is established to construct a loop closure framework that further reduces the system's cumulative drift over time.
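
The "bidirectional" part of such match filtering can be sketched as a mutual nearest-neighbour cross-check. This illustrates only the cross-check stage under invented descriptors; the full method additionally runs RANSAC on the surviving pairs:

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Keep only descriptor matches that agree in both directions.

    desc_a, desc_b : (Na, D) and (Nb, D) descriptor arrays. A pair
    (i, j) survives only if j is i's nearest neighbour in desc_b AND
    i is j's nearest neighbour in desc_a, which discards one-sided
    (likely spurious) matches before any geometric verification.
    """
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = np.argmin(dists, axis=1)   # best j for each i
    b_to_a = np.argmin(dists, axis=0)   # best i for each j
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([[0.1, 0.0], [0.9, 0.1]])
pairs = mutual_matches(a, b)  # a's third point has no partner in b
```

OpenCV exposes the same idea as the `crossCheck` option of its brute-force matcher; pairs that survive both this check and RANSAC feed the bundle adjustment.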

Findings

Experiments on publicly available data sets show that the proposed method matches the real trajectory well. For various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method was also verified in a real environment on an autonomous walking acquisition platform; a system loaded with the method can run well for a long time and adapts to multiple scene types.

Originality/value

A multi-sensor tight coupling method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop closure framework to reduce error accumulation. The experimental validation shows that the accuracy and robustness of single-sensor SLAM algorithms can be improved.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 October 2021

Zhe Liu, Zhijian Qiao, Chuanzhe Suo, Yingtian Liu and Kefan Jin

Abstract

Purpose

This paper aims to study the localization problem for autonomous industrial vehicles in complex industrial environments. Aiming at practical applications, the goal is to build a map-less localization system that can be used in the presence of dynamic obstacles and short-term and long-term environment changes.

Design/methodology/approach

The proposed system contains four main modules: long-term place graph updating, global localization and re-localization, location tracking and pose registration. The first two modules fully exploit deep-learning-based 3D point cloud learning techniques to achieve the map-less global localization task in large-scale environments. The location tracking module implements a particle filter framework with a newly designed perception model to track the vehicle location during movement. Finally, the pose registration module uses visual information to exclude the influence of dynamic obstacles and short-term changes and further introduces a point cloud registration network to estimate the accurate vehicle pose.
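
A particle filter location tracker of the kind described can be sketched in a few lines. The Gaussian perception model below is a generic stand-in for the paper's learned one, and all noise values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, control, measurement,
                         motion_noise=0.05, meas_noise=0.2):
    """One predict/update/resample cycle of a planar particle filter.

    particles : (N, 2) hypothesised positions; control : commanded
    (dx, dy) motion; measurement : observed position for this frame.
    """
    # Predict: apply the motion command with additive noise.
    particles = particles + control + rng.normal(0, motion_noise, particles.shape)
    # Update: weight each particle by its measurement likelihood.
    d = np.linalg.norm(particles - measurement, axis=1)
    weights = weights * np.exp(-0.5 * (d / meas_noise) ** 2)
    weights = weights / weights.sum()
    # Resample proportionally to weight to fight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.uniform(-1, 1, size=(500, 2))
weights = np.full(500, 1.0 / 500)
true = np.zeros(2)
for _ in range(10):  # vehicle drives +0.1 m in x each step
    true = true + np.array([0.1, 0.0])
    meas = true + rng.normal(0, 0.05, 2)
    particles, weights = particle_filter_step(
        particles, weights, np.array([0.1, 0.0]), meas)
estimate = particles.mean(axis=0)
```

The cloud, initially spread over the whole square, concentrates around the true position after a few cycles; swapping in a stronger perception model changes only the weighting step.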

Findings

Comprehensive experiments in real industrial environments demonstrate the effectiveness, robustness and practical applicability of the map-less localization approach.

Practical implications

This paper provides comprehensive experiments in real industrial environments.

Originality/value

The system can be used in practical automated industrial vehicles for long-term localization tasks. Dynamic objects, short-/long-term environment changes and the hardware limitations of industrial vehicles are all considered in the system design. This work therefore takes a significant step toward real implementations of autonomous localization in practical industrial scenarios.

Article
Publication date: 1 May 2019

Haoyao Chen, Hailin Huang, Ye Qin, Yanjie Li and Yunhui Liu

Abstract

Purpose

Multi-robot laser-based simultaneous localization and mapping (SLAM) in large-scale environments is an essential but challenging issue in mobile robotics, especially in situations wherein no prior knowledge is available between robots. Moreover, the cumulative errors of every individual robot exert a serious negative effect on loop detection and map fusion. To address these problems, this paper aims to propose an efficient approach that combines laser and vision measurements.

Design/methodology/approach

A multi-robot visual-laser SLAM system is developed to realize robust and efficient SLAM in large-scale environments; both vision and laser loop detections are integrated to detect loops robustly. A method based on oriented FAST and rotated BRIEF (ORB) feature detection and bag of words (BoW) is developed to ensure the robustness and computational effectiveness of the multi-robot SLAM system. A robust and efficient graph fusion algorithm is proposed to merge pose graphs from different robots.
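
BoW-based loop detection of the kind described reduces, at its core, to scoring visual-word histograms against a keyframe database. A minimal sketch with invented histograms and thresholds (real systems use TF-IDF-weighted vocabularies such as DBoW2):

```python
import numpy as np

def bow_loop_candidates(histograms, query, thresh=0.95, exclude_recent=50):
    """Score a query BoW histogram against the keyframe database.

    histograms : (K, V) visual-word counts per past keyframe;
    query : (V,) counts for the current frame. Cosine similarity at
    or above `thresh` proposes a loop candidate; the most recent
    keyframes are excluded so a frame does not match its own
    neighbourhood. Threshold values are illustrative.
    """
    db = histograms[: max(0, len(histograms) - exclude_recent)]
    if len(db) == 0:
        return []
    sims = (db @ query) / (np.linalg.norm(db, axis=1) * np.linalg.norm(query) + 1e-12)
    return [int(i) for i in np.nonzero(sims >= thresh)[0]]

rng = np.random.default_rng(1)
db = rng.random((100, 256))                      # 100 keyframes, 256 words
query = db[3] + rng.normal(0, 0.01, 256)         # a revisit of keyframe 3
hits = bow_loop_candidates(db, query, exclude_recent=20)
```

In a multi-robot setting the same scoring runs across robots' databases; candidates then pass geometric verification (e.g. RANSAC) before entering graph fusion, which is why outlier-loop rejection matters downstream.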

Findings

The proposed method can detect loops more quickly and accurately than the laser-only SLAM, and it can fuse the submaps of each single robot to promote the efficiency, accuracy and robustness of the system.

Originality/value

Compared with state-of-the-art multi-robot SLAM approaches, this paper proposes a novel and more sophisticated approach. The vision-based and laser-based loops are integrated to realize robust loop detection. ORB features and BoW technologies are further utilized to attain real-time performance. Finally, random sample consensus and least-squares methodologies are used to remove outlier loops among robots.

Details

Assembly Automation, vol. 39 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He

Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper surveys the research state of LiDAR-based SLAM for robotic mapping, presenting a literature review from the perspective of various LiDAR types and configurations.

Findings

This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 March 2024

Ruoxing Wang, Shoukun Wang, Junfeng Xue, Zhihua Chen and Jinge Si

Abstract

Purpose

This paper aims to investigate an autonomous obstacle-surmounting method based on a hybrid gait for the problem of a six-wheel-legged robot autonomously crossing low-height obstacles. The autonomy of obstacle surmounting is reflected in obstacle recognition based on multi-frame point cloud fusion.

Design/methodology/approach

In this paper, first, to address the problem that the LiDAR on the robot cannot scan the point cloud of low-height obstacles, the LiDAR is driven to rotate by a 2D turntable to obtain the point cloud of low-height obstacles beneath the robot. The tightly coupled LiDAR inertial odometry via smoothing and mapping (LIO-SAM) algorithm, a fast ground segmentation algorithm and a Euclidean clustering algorithm are used to recognize the point cloud of low-height obstacles and extract obstacle information. Then, combined with the structural characteristics of the robot, obstacle-surmounting action planning is carried out for two types of obstacle scenes. A segmented approach is used for action planning: gait units are designed to describe each segment of the action, and a gait matrix describes the overall action. The paper also analyzes the stability and surmounting capability of the robot's key poses and determines the values of the surmounting control variables.

Findings

The experimental verification is carried out on the robot laboratory platform (BIT-6NAZA). The obstacle recognition method can accurately detect low-height obstacles. The robot can maintain a smooth posture to cross low-height obstacles, which verifies the feasibility of the adaptive obstacle-surmounting method.

Originality/value

The study can provide a theoretical and engineering foundation for the environmental perception of unmanned platforms. It provides environmental information to support follow-up work, for example, obstacle-surmounting planning.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 2 February 2024

Bushi Chen, Xunyu Zhong, Han Xie, Pengfei Peng, Huosheng Hu, Xungao Zhong and Qiang Liu

Abstract

Purpose

Autonomous mobile robots (AMRs) play a crucial role in industrial and service fields. The paper aims to build a LiDAR-based simultaneous localization and mapping (SLAM) system used by AMRs to overcome challenges in dynamic and changing environments.

Design/methodology/approach

This research introduces SLAM-RAMU, a lifelong SLAM system that addresses these challenges by providing precise and consistent relocalization and autonomous map updating (RAMU). During the mapping process, local odometry is obtained using iterative error state Kalman filtering, while back-end loop detection and global pose graph optimization are used for accurate trajectory correction. In addition, a fast point cloud segmentation module is incorporated to robustly distinguish between floor, walls and roof in the environment. The segmented point clouds are then used to generate a 2.5D grid map, with particular emphasis on floor detection to filter the prior map and eliminate dynamic artifacts. In the positioning process, an initial pose alignment method is designed, which combines 2D branch-and-bound search with 3D iterative closest point registration. This method ensures high accuracy even in scenes with similar characteristics. Subsequently, scan-to-map registration is performed using the segmented point cloud on the prior map. The system also includes a map updating module that takes into account historical point cloud segmentation results. It selectively incorporates or excludes new point cloud data to ensure consistent reflection of the real environment in the map.
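
The coarse stage of such an initial pose alignment can be sketched as a search over 2D pose candidates scored against an occupancy grid. The exhaustive search below is a simplified stand-in for branch and bound, which prunes the same search space with upper bounds; the map, resolution and search window are illustrative, and the real system refines the winner with 3D ICP:

```python
import numpy as np

def coarse_align(scan, occupied, search=1.0, step=0.25, yaw_step=np.deg2rad(15)):
    """Exhaustive coarse search for an initial 2D pose.

    scan : (N, 2) points in the sensor frame; occupied : set of
    (ix, iy) occupied grid cells at `step` resolution. Each candidate
    (dx, dy, yaw) is scored by how many transformed scan points land
    on occupied cells; the best-scoring pose seeds fine registration.
    """
    best, best_score = None, -1
    offsets = np.arange(-search, search + 1e-9, step)
    for yaw in np.arange(-np.pi, np.pi, yaw_step):
        c, s = np.cos(yaw), np.sin(yaw)
        rotated = scan @ np.array([[c, -s], [s, c]]).T
        for dx in offsets:
            for dy in offsets:
                cells = np.floor((rotated + [dx, dy]) / step).astype(int)
                score = sum((ix, iy) in occupied for ix, iy in map(tuple, cells))
                if score > best_score:
                    best, best_score = (dx, dy, yaw), score
    return best, best_score

# Map: a straight wall; the scan is the same wall shifted by 0.5 m in y.
wall = np.array([[x, 0.1] for x in np.arange(0.1, 5.0, 0.25)])
occupied = {tuple(np.floor(p / 0.25).astype(int)) for p in wall}
scan = wall - np.array([0.0, 0.5])
pose, score = coarse_align(scan, occupied)
```

Scoring against a prior map that has been filtered for dynamic artifacts (as the segmentation module does) is what keeps this search from locking onto stale geometry.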

Findings

The performance of the SLAM-RAMU system was evaluated in real-world environments and compared against state-of-the-art (SOTA) methods. The results demonstrate that SLAM-RAMU achieves higher mapping quality and relocalization accuracy and exhibits robustness against dynamic obstacles and environmental changes.

Originality/value

Compared to other SOTA methods in simulation and real environments, SLAM-RAMU showed higher mapping quality, faster initial aligning speed and higher repeated localization accuracy.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X
