Abstract
Purpose
The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems.
Design/methodology/approach
The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. Firstly, features are detected with the FAST extractor. Secondly, the detected features are roughly matched using the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance. Finally, wrongly matched feature pairs are removed with the RANSAC method to reduce the interference of erroneous matches.
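The ratio test and RANSAC rejection described above can be sketched in a few lines. The following NumPy snippet is a minimal illustration, not the authors' implementation: it assumes a translation-only motion model for the RANSAC step, whereas a real binocular odometry pipeline would fit an epipolar or rigid-body model.

```python
import numpy as np

def ratio_test(dists_nn, dists_2nn, ratio=0.8):
    """Keep a match only if its nearest-neighbor distance is clearly
    smaller than the second-nearest distance (Lowe-style ratio test)."""
    return dists_nn < ratio * dists_2nn

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Estimate a 2D translation between matched points with RANSAC,
    returning the translation and an inlier mask."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    best_t = np.zeros(2)
    for _ in range(iters):
        i = rng.integers(len(src))           # minimal sample: one pair
        t = dst[i] - src[i]                  # candidate translation
        resid = np.linalg.norm(dst - (src + t), axis=1)
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    if best_inliers.any():
        # refit on all inliers for a least-squares estimate
        best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

Feeding in three consistent pairs and one gross mismatch, the mismatch is flagged as an outlier and the translation is recovered from the inliers alone.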
Findings
The performance of the new algorithm has been examined on actual experimental data. The results show that this novel binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces the positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm are also verified in this paper.
Practical implications
This paper presents an improved binocular visual odometry algorithm which has been tested by real data. This algorithm can be used for outdoor vehicle navigation.
Originality/value
A binocular visual odometry algorithm based on the FAST extractor and RANSAC methods is proposed to improve positioning accuracy and robustness. Experimental results have verified the effectiveness of the proposed visual odometry algorithm.
Dimitrios Chrysostomou, Khaled Goher, Giovanni Muscato, Mohammad Osman Tokhi and Gurvinder S. Virk
Abstract
Purpose
This paper aims to quickly obtain an accurate and complete dense three-dimensional map of indoor environment with lower cost, which can be directly used in navigation.
Design/methodology/approach
This paper proposes an improved ORB-SLAM2 dense map optimization algorithm. The algorithm consists of three parts: ORB feature extraction based on improved FAST-12, feature point extraction with progressive sample consensus (PROSAC) and the dense ORB-SLAM2 algorithm for mapping. Here, the dense ORB-SLAM2 algorithm adds a LoopClose optimization thread and a thread for constructing the dense point cloud map and the octree map. A dense map is computationally expensive and occupies a large amount of memory, so the proposed method improves efficiency: voxel filtering reduces memory use while preserving the density of the map, and the map is then stored in octree format to reduce memory further.
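The voxel filtering step mentioned above can be sketched as follows. This is a simplified NumPy stand-in (real systems typically use a point cloud library's voxel grid filter): every point is binned into a cubic voxel and each occupied voxel is replaced by the centroid of its points, cutting memory while preserving the map's density.

```python
import numpy as np

def voxel_filter(points, voxel_size):
    """Downsample an (N, 3) point cloud by averaging all points that
    fall into the same cubic voxel of edge length voxel_size."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    # group points by voxel key and average each group
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    inv = inv.reshape(-1)
    sums = np.zeros((len(counts), 3))
    np.add.at(sums, inv, points)
    return sums / counts[:, None]
```

Two points sharing a voxel collapse into one centroid, so dense regions shrink dramatically while isolated points are kept.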
Findings
The improved ORB-SLAM2 algorithm is compared with the original ORB-SLAM2 algorithm, and the experimental results show that the map through improved ORB-SLAM2 can be directly used in navigation process with higher accuracy, shorter tracking time and smaller memory.
Originality/value
The improved ORB-SLAM2 algorithm can obtain a dense environment map, which ensures the integrity of data. The comparisons of FAST-12 and improved FAST-12, RANSAC and PROSAC prove that the improved FAST-12 and PROSAC both make the feature point extraction process faster and more accurate. Voxel filter helps to take small storage memory and low computation cost, and octree map construction on the dense map can be directly used in navigation.
Cailing Wang, Chunxia Zhao and Jingyu Yang
Abstract
Purpose
Positioning is a key task in most field robotics applications but can be very challenging in GPS‐denied or high‐slip environments. The purpose of this paper is to describe a visual odometry strategy using only one camera in country roads.
Design/methodology/approach
This monocular odometry system takes as input only the images provided by a single camera mounted on the roof of the vehicle, and the framework is composed of three main parts: image motion estimation, ego-motion computation and visual odometry. The image motion is estimated from a hyper-complex wavelet phase-derived optical flow field. The ego-motion of the vehicle is computed by a blocked RANdom SAmple Consensus (RANSAC) algorithm and a maximum likelihood estimator based on a 4-degrees-of-freedom motion model. These instantaneous ego-motion measurements are then used to update the vehicle trajectory according to a dead-reckoning model and an unscented Kalman filter.
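The dead-reckoning update at the end of this pipeline can be illustrated with a toy 4-degrees-of-freedom pose accumulator. This sketch assumes the state is (x, y, z, yaw) and each ego-motion measurement is a body-frame increment; the paper's actual filter additionally maintains uncertainty via an unscented Kalman filter, which is omitted here.

```python
import math

def dead_reckon(pose, ego_motion):
    """Accumulate an instantaneous ego-motion measurement
    (dx, dy, dz, dyaw in the vehicle frame) onto a 4-DoF pose
    (x, y, z, yaw in the world frame)."""
    x, y, z, yaw = pose
    dx, dy, dz, dyaw = ego_motion
    # rotate the body-frame translation into the world frame
    x += dx * math.cos(yaw) - dy * math.sin(yaw)
    y += dx * math.sin(yaw) + dy * math.cos(yaw)
    z += dz
    yaw = (yaw + dyaw) % (2 * math.pi)
    return (x, y, z, yaw)
```

Driving a 1 m forward step followed by a 90° turn four times traces a square and returns the pose to its start, a quick sanity check on the model.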
Findings
The authors' proposed framework and algorithms are validated on videos from a real automotive platform. Furthermore, the recovered trajectory is superimposed onto a digital map, and the localization results from this method are compared to the ground truth measured with a GPS/INS joint system. These experimental results indicate that the framework and the algorithms are effective.
Originality/value
The effective framework and algorithms for visual odometry using only one camera in country roads are introduced in this paper.
Sixing Liu, Yan Chai, Rui Yuan and Hong Miao
Abstract
Purpose
Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is limited and yields few data features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately with a single sensor. This paper aims to propose a tightly coupled three-dimensional laser mapping method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.
Design/methodology/approach
The visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using the bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain their depth information, while the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solution using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information to establish a threshold and construct a loop-closure framework that further reduces the cumulative drift error of the system over time.
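The bidirectional consistency idea behind this mismatch removal can be illustrated with a mutual nearest-neighbor cross-check: a pair survives only if each point is the other's nearest neighbor in both match directions. This is a simplified stand-in for the paper's bidirectional RANSAC, which additionally verifies pairs against a geometric model in both directions.

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Cross-check matching: keep pair (i, j) only when j is the nearest
    neighbour of i in B *and* i is the nearest neighbour of j in A."""
    # full pairwise distance matrix between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    a_to_b = d.argmin(axis=1)   # best match in B for each row of A
    b_to_a = d.argmin(axis=0)   # best match in A for each row of B
    return [(i, int(j)) for i, j in enumerate(a_to_b) if b_to_a[j] == i]
```

With two genuinely corresponding pairs and one stray point, only the mutually consistent pairs survive; a one-directional matcher would have forced the stray point into a spurious match.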
Findings
Experiments on publicly available data sets show that the proposed method in this paper can match its real trajectory well. For various scenes, the map can be constructed by using the complementary laser and vision sensors, with high accuracy and robustness. At the same time, the method is verified in a real environment using an autonomous walking acquisition platform, and the system loaded with the method can run well for a long time and take into account the environmental adaptability of multiple scenes.
Originality/value
A multi-sensor tightly coupled method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, Oriented FAST and Rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of the single-sensor SLAM algorithm can be improved.
Yong Qin and Haidong Yu
Abstract
Purpose
This paper aims to provide a better understanding of the challenges and potential solutions in Visual Simultaneous Localization and Mapping (SLAM), laying the foundation for its applications in autonomous navigation, intelligent driving and other related domains.
Design/methodology/approach
In analyzing the latest research, the review presents representative achievements, including methods to enhance efficiency, robustness and accuracy. Additionally, the review provides insights into the future development direction of Visual SLAM, emphasizing the importance of improving system robustness when dealing with dynamic environments. The research methodology of this review involves a literature review and data set analysis, enabling a comprehensive understanding of the current status and prospects in the field of Visual SLAM.
Findings
This review aims to comprehensively evaluate the latest advances and challenges in the field of Visual SLAM. By collecting and analyzing relevant research papers and classic data sets, it reveals the current issues faced by Visual SLAM in complex environments and proposes potential solutions. The review begins by introducing the fundamental principles and application areas of Visual SLAM, followed by an in-depth discussion of the challenges encountered when dealing with dynamic objects and complex environments. To enhance the performance of SLAM algorithms, researchers have made progress by integrating different sensor modalities, improving feature extraction and incorporating deep learning techniques, driving advancements in the field.
Originality/value
To the best of the authors’ knowledge, the originality of this review lies in its in-depth analysis of current research hotspots and predictions for future development, providing valuable references for researchers in this field.
Yanwu Zhai, Haibo Feng, Haitao Zhou, Songyuan Zhang and Yili Fu
Abstract
Purpose
This paper aims to propose a method to solve the problem of localization and mapping of a two-wheeled inverted pendulum (TWIP) robot on the ground using a stereo–inertial measurement unit (IMU) system. The method reparametrizes the pose according to the motion characteristics of TWIP and considers the impact of uneven ground on vision and the IMU, making it more adaptable to the real environment.
Design/methodology/approach
When TWIP moves, it is constrained by the ground and swings back and forth to maintain balance. Therefore, the authors parameterize the robot pose as an SE(2) pose plus pitch according to the motion characteristics of TWIP. However, the authors do not omit disturbances in other directions but model their errors, which are integrated into the visual constraints and IMU pre-integration constraints as an error term. Finally, the authors analyze the influence of the error term on the vision and IMU constraints during the optimization process. Compared to traditional algorithms, this algorithm is simpler and adapts better to the real environment.
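The reduced parameterization can be made concrete by lifting (x, y, yaw, pitch) to a full SE(3) transform. In this sketch the ground contact is assumed to pin z and roll to zero, so only four numbers are optimized instead of six; the paper's disturbance error terms on the remaining directions are not modeled here.

```python
import numpy as np

def se2_pitch_to_se3(x, y, yaw, pitch):
    """Lift the reduced (x, y, yaw, pitch) pose to a 4x4 SE(3) matrix,
    with z and roll constrained to zero by the ground contact."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry       # yaw about z, then pitch about body y
    T[:3, 3] = [x, y, 0.0]    # ground contact pins z to zero
    return T
```

The lifted matrix is always a valid rigid transform (orthonormal rotation block), which is what lets the reduced state plug into standard visual and IMU residuals.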
Findings
The results of indoor and outdoor experiments show that, for the TWIP robot, the method has better positioning accuracy and robustness compared with the state-of-the-art.
Originality/value
The algorithm in this paper is proposed for the localization and mapping of a TWIP robot. Different from the traditional positioning method on SE(3), this paper parameterizes the robot pose as an SE(2) pose plus pitch according to the motion of TWIP, and the motion disturbances in other directions are integrated into the visual constraints and IMU pre-integration constraints as error terms. This simplifies the optimization parameters, adapts better to the real environment and improves positioning accuracy.
Yanwu Zhai, Haibo Feng and Yili Fu
Abstract
Purpose
This paper aims to present a pipeline to progressively deal with the online external parameter calibration and estimator initialization of the Stereo-inertial measurement unit (IMU) system, which does not require any prior information and is suitable for system initialization in a variety of environments.
Design/methodology/approach
Before calibration and initialization, a modified stereo tracking method is adopted to obtain a motion pose, which provides prerequisites for the next three steps. Firstly, the authors align the pose obtained with the IMU measurements and linearly calculate the rough external parameters and gravity vector to provide initial values for the next optimization. Secondly, the authors fix the pose obtained by the vision and restore the external and inertial parameters of the system by optimizing the pre-integration of the IMU. Thirdly, the result of the previous step is used to perform visual-inertial joint optimization to further refine the external and inertial parameters.
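The first, linear step of such an initialization can be illustrated with a toy gravity estimate: given tracked positions over a short window, fit p(t) = p0 + v0·t + ½·g·t² by least squares and read off g. This is a deliberately simplified stand-in for the paper's linear alignment, which also recovers extrinsic parameters and biases.

```python
import numpy as np

def estimate_gravity(times, positions):
    """Least-squares fit of p(t) = p0 + v0*t + 0.5*g*t^2 to an
    (N,) time array and (N, 3) position array; returns g."""
    A = np.stack([np.ones_like(times), times, 0.5 * times**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, positions, rcond=None)
    return coeffs[2]          # rows of coeffs: p0, v0, g
```

On noise-free synthetic data the fit is exact, which makes it a convenient seed for the subsequent nonlinear refinement the authors describe.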
Findings
The results of public data set experiments and actual experiments show that this method has better accuracy and robustness compared with the state-of-the-art.
Originality/value
This method improves the accuracy of external parameter calibration and initialization and prevents the system from falling into a local minimum. Different from the traditional approach of solving the inertial navigation parameters separately, in this paper all inertial navigation parameters are solved at one time, with the result of each step used as the seed for the next optimization so that the external and inertial parameters are gradually solved from coarse to fine. This avoids falling into a local minimum, reduces the number of iterations during optimization and improves the efficiency of the system.
Guoqing Li, Yunhai Geng and Wenzheng Zhang
Abstract
Purpose
This paper aims to introduce an efficient active-simultaneous localization and mapping (SLAM) approach for rover navigation. Future planetary rover exploration missions require the rover to localize itself automatically with high accuracy.
Design/methodology/approach
A three-dimensional (3D) feature detection method is first proposed to extract salient features from the observed point cloud. The salient features are then employed as candidate destinations for re-visiting under the SLAM structure. Finally, a path planning algorithm is integrated with SLAM, wherein the path length and map utility are leveraged to reduce the growth rate of state estimation uncertainty.
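The trade-off between map utility and path length can be sketched as a simple scoring rule over revisit candidates. The function below is illustrative only, not the paper's exact objective: each candidate is a ((x, y), utility) pair and alpha weights distance against utility.

```python
def select_revisit_target(candidates, robot_xy, alpha=1.0):
    """Pick the salient-feature candidate maximizing
    map utility minus an alpha-weighted path-length penalty."""
    def score(c):
        (x, y), utility = c
        dist = ((x - robot_xy[0])**2 + (y - robot_xy[1])**2) ** 0.5
        return utility - alpha * dist
    return max(candidates, key=score)
```

A high-utility but distant landmark can thus lose to a slightly less informative one nearby, which is exactly how the planner curbs the growth of state estimation uncertainty per unit of travel.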
Findings
The proposed approach is able to extract distinguishable 3D landmarks for feature re-visiting, and can be naturally integrated with any SLAM algorithms in an efficient manner to improve the navigation accuracy.
Originality/value
This paper proposes a novel active-SLAM structure for planetary rover exploration missions. The salient feature extraction method and the active revisit path planning method are validated to improve the accuracy of pose estimation.
Yi Jiang, Ting Wang, Shiliang Shao and Lebing Wang
Abstract
Purpose
In large-scale environments and unstructured scenarios, the accuracy and robustness of traditional light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) algorithms are reduced, and the algorithms might even fail completely. To overcome these problems, this study aims to propose a 3D LiDAR SLAM method for ground-based mobile robots that fuses 3D LiDAR with an inertial measurement unit (IMU) to establish an environment map and achieve real-time localization.
Design/methodology/approach
First, we use a normal distributions transform (NDT) algorithm based on a local map with a corresponding motion prediction model for point cloud registration in the front-end. Next, point cloud features are tightly coupled with IMU angle constraints, ground constraints and gravity constraints for graph-based optimization in the back-end. Subsequently, the cumulative error is reduced by adding loop closure detection.
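The core of the NDT front-end is scoring how well a scan fits per-voxel Gaussians built from a reference cloud. The sketch below is a simplified illustration of that score (no pose optimization loop, and a small diagonal regularizer on each covariance is an assumption of this sketch, not taken from the paper).

```python
import numpy as np

def ndt_score(ref, query, voxel=1.0, eps=1e-2):
    """Sum of Gaussian likelihood terms for 'query' points under
    per-voxel normal distributions fitted to 'ref' points."""
    cells = {}
    for p in ref:
        cells.setdefault(tuple(np.floor(p / voxel).astype(int)), []).append(p)
    stats = {}
    for k, pts in cells.items():
        if len(pts) >= 3:                   # need points for a covariance
            pts = np.asarray(pts)
            mu = pts.mean(axis=0)
            cov = np.cov(pts.T) + eps * np.eye(3)   # regularized
            stats[k] = (mu, np.linalg.inv(cov))
    total = 0.0
    for q in query:
        k = tuple(np.floor(q / voxel).astype(int))
        if k in stats:
            mu, icov = stats[k]
            d = q - mu
            total += np.exp(-0.5 * d @ icov @ d)
    return total
```

Registration then amounts to searching for the transform of the query scan that maximizes this score; a well-aligned scan scores higher than a shifted one, and a grossly displaced scan falls into empty voxels and scores zero.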
Findings
The algorithm is tested using a public data set containing indoor and outdoor scenarios. The results confirm that the proposed algorithm has high accuracy and robustness.
Originality/value
To improve the accuracy and robustness of SLAM, the method proposed in this paper introduces the NDT algorithm in the front-end and designs ground constraints and gravity constraints in the back-end. The proposed method performs satisfactorily when applied to ground-based mobile robots in experiments in complex environments.