Search results

1 – 10 of 13
Article
Publication date: 19 June 2017

Qian Sun, Ming Diao, Yibing Li and Ya Zhang

Abstract

Purpose

The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems.

Design/methodology/approach

The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. First, features are detected with the FAST extractor. Second, the detected features are coarsely matched using the distance ratio of the nearest neighbor to the second-nearest neighbor. Finally, wrongly matched feature pairs are removed with the RANSAC method to reduce the interference of erroneous matches.
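The matching pipeline described above (distance-ratio pre-filter, then RANSAC outlier rejection) can be sketched in a few lines. This is a toy illustration, not the paper's code: a translation-only motion model stands in for the full stereo geometry, and all function names and parameters are assumptions.

```python
import numpy as np

def ratio_test(d1, d2, ratio=0.8):
    """Keep a match only if the nearest-neighbour distance d1 is clearly
    smaller than the second-nearest d2 (distance-ratio test)."""
    return d1 < ratio * d2

def ransac_translation(src, dst, iters=200, tol=1.0, seed=0):
    """Fit a 2D translation between matched point sets while rejecting
    wrongly matched pairs. Returns (translation, inlier mask)."""
    rng = np.random.default_rng(seed)
    best_t, best_in = None, np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))              # minimal sample: one pair
        t = dst[i] - src[i]
        inliers = np.linalg.norm(src + t - dst, axis=1) < tol
        if inliers.sum() > best_in.sum():
            best_t, best_in = t, inliers
    # refine the model on the inlier consensus set
    best_t = (dst[best_in] - src[best_in]).mean(axis=0)
    return best_t, best_in
```

With a handful of grossly wrong matches mixed in, the consensus step recovers the true translation and flags the bad pairs.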

Findings

The performance of this new algorithm was examined on actual experimental data. The results show that this novel binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm are also verified.

Practical implications

This paper presents an improved binocular visual odometry algorithm which has been tested by real data. This algorithm can be used for outdoor vehicle navigation.

Originality/value

A binocular visual odometry algorithm based on the FAST extractor and RANSAC is proposed to improve positioning accuracy and robustness. Experimental results verify the effectiveness of the proposed visual odometry algorithm.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 8 July 2022

Lin Zhang and Yingjie Zhang

Abstract

Purpose

This paper aims to quickly obtain an accurate and complete dense three-dimensional map of indoor environment with lower cost, which can be directly used in navigation.

Design/methodology/approach

This paper proposes an improved ORB-SLAM2 dense map optimization algorithm. The algorithm consists of three parts: ORB feature extraction based on improved FAST-12, feature point extraction with progressive sample consensus (PROSAC) and the dense ORB-SLAM2 algorithm for mapping. Here, the dense ORB-SLAM2 algorithm adds a LoopClose optimization thread and threads for dense point cloud map and octree map construction. A dense map is computationally expensive and occupies a large amount of memory, so the proposed method improves efficiency: voxel filtering reduces memory while preserving the density of the map, and the map is then stored in octree format to reduce memory further.
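The voxel filtering step mentioned above can be illustrated with a minimal centroid-based downsampler. This is a generic NumPy sketch with illustrative names, not the paper's implementation; production pipelines typically rely on PCL's VoxelGrid filter and the OctoMap library for the octree storage.

```python
import numpy as np

def voxel_filter(points, voxel=0.05):
    """Downsample a point cloud by replacing all points falling in each
    voxel-sized cube with their centroid: the map stays dense where the
    data is, but memory drops roughly to one point per occupied voxel."""
    keys = np.floor(points / voxel).astype(np.int64)
    # group points by voxel index
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv)
    out = np.zeros((inv.max() + 1, 3))
    for d in range(3):
        out[:, d] = np.bincount(inv, weights=points[:, d]) / counts
    return out
```

A 0.5 m voxel over a 1 m cube of points, for instance, collapses everything into at most eight centroids.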

Findings

The improved ORB-SLAM2 algorithm is compared with the original ORB-SLAM2 algorithm, and the experimental results show that the map produced by the improved ORB-SLAM2 can be used directly in navigation, with higher accuracy, shorter tracking time and a smaller memory footprint.

Originality/value

The improved ORB-SLAM2 algorithm can obtain a dense environment map, which ensures the integrity of the data. Comparisons of FAST-12 with improved FAST-12, and of RANSAC with PROSAC, show that the improved FAST-12 and PROSAC both make feature point extraction faster and more accurate. The voxel filter keeps storage and computation costs low, and the octree map built from the dense map can be used directly in navigation.

Details

Assembly Automation, vol. 42 no. 4
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 23 August 2011

Cailing Wang, Chunxia Zhao and Jingyu Yang

Abstract

Purpose

Positioning is a key task in most field robotics applications but can be very challenging in GPS‐denied or high‐slip environments. The purpose of this paper is to describe a visual odometry strategy using only one camera on country roads.

Design/methodology/approach

This monocular odometry system uses as input only the images provided by a single camera mounted on the roof of the vehicle, and the framework is composed of three main parts: image motion estimation, ego‐motion computation and visual odometry. Image motion is estimated from a hyper‐complex wavelet phase‐derived optical flow field. The ego‐motion of the vehicle is computed by a blocked RANdom SAmple Consensus algorithm and a maximum likelihood estimator based on a 4‐degrees‐of‐freedom motion model. These instantaneous ego‐motion measurements are then used to update the vehicle trajectory according to a dead‐reckoning model and an unscented Kalman filter.
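The final dead-reckoning update can be illustrated with a planar toy model. The paper's 4-DoF motion model and unscented Kalman filter are replaced here by a bare midpoint integration, so this is an assumption-laden sketch of the propagation step only, not the described system.

```python
import numpy as np

def dead_reckon(pose, d_forward, d_yaw):
    """Propagate a planar (x, y, heading) pose with one instantaneous
    ego-motion measurement: forward displacement and yaw increment.
    Motion is applied along the average heading of the step."""
    x, y, th = pose
    th_mid = th + 0.5 * d_yaw          # midpoint heading of this step
    return (x + d_forward * np.cos(th_mid),
            y + d_forward * np.sin(th_mid),
            th + d_yaw)
```

In the full system each such prediction would be fused with the next ego-motion measurement by the filter rather than trusted outright.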

Findings

The authors' proposed framework and algorithms are validated on videos from a real automotive platform. Furthermore, the recovered trajectory is superimposed onto a digital map, and the localization results from this method are compared to the ground truth measured with a GPS/INS joint system. These experimental results indicate that the framework and the algorithms are effective.

Originality/value

The effective framework and algorithms for visual odometry using only one camera in country roads are introduced in this paper.

Details

Industrial Robot: An International Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 8 February 2022

Yanwu Zhai, Haibo Feng, Haitao Zhou, Songyuan Zhang and Yili Fu

Abstract

Purpose

This paper aims to propose a method to solve the problem of localization and mapping of a two-wheeled inverted pendulum (TWIP) robot on the ground using a stereo–inertial measurement unit (IMU) system. The method reparametrizes the pose according to the motion characteristics of the TWIP and considers the impact of uneven ground on vision and the IMU, making it more adaptable to the real environment.

Design/methodology/approach

When the TWIP moves, it is constrained by the ground and swings back and forth to maintain balance. Therefore, the authors parameterize the robot pose as an SE(2) pose plus pitch according to the motion characteristics of the TWIP. Disturbances in the other directions are not omitted, however, but are error-modeled and integrated into the visual constraints and IMU pre-integration constraints as error terms. Finally, the authors analyze the influence of the error terms on the vision and IMU constraints during optimization. Compared with traditional algorithms, this algorithm is simpler and adapts better to the real environment.
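The reduced parametrization (SE(2) pose plus pitch) can be expanded back into a full rigid-body transform as below. This sketch assumes zero roll and zero height, which is only the nominal ground-plane case the paper perturbs with error terms; the function name and rotation order are illustrative.

```python
import numpy as np

def se2_pitch_to_se3(x, y, yaw, pitch):
    """Expand the reduced (SE(2) + pitch) pose of a ground-constrained
    robot into a 4x4 homogeneous transform: yaw about z, then pitch
    about y, with roll and z fixed to zero (ground-plane assumption)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry
    T[:3, 3] = [x, y, 0.0]
    return T
```

Only four numbers are optimized instead of six, which is what makes the reduced parametrization cheaper than a full SE(3) pose.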

Findings

The results of indoor and outdoor experiments show that, for the TWIP robot, the method has better positioning accuracy and robustness compared with the state-of-the-art.

Originality/value

The algorithm in this paper is proposed for the localization and mapping of a TWIP robot. Unlike traditional positioning methods on SE(3), this paper parameterizes the robot pose as an SE(2) pose plus pitch according to the motion of the TWIP, and motion disturbances in the other directions are integrated into the visual constraints and IMU pre-integration constraints as error terms. This simplifies the optimization parameters, adapts better to the real environment and improves positioning accuracy.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 2 December 2021

Yanwu Zhai, Haibo Feng and Yili Fu

Abstract

Purpose

This paper aims to present a pipeline that progressively handles the online external parameter calibration and estimator initialization of a stereo–inertial measurement unit (IMU) system; it requires no prior information and is suitable for system initialization in a variety of environments.

Design/methodology/approach

Before calibration and initialization, a modified stereo tracking method is adopted to obtain a motion pose, which provides the prerequisites for the next three steps. First, the authors align the obtained pose with the IMU measurements and linearly calculate rough external parameters and the gravity vector to provide initial values for the subsequent optimization. Second, the authors fix the pose obtained from vision and recover the external and inertial parameters of the system by optimizing the IMU pre-integration. Third, the result of the previous step is used in a visual–inertial joint optimization to further refine the external and inertial parameters.
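The first, linear step can be illustrated in miniature: if each pre-integrated velocity change is approximately gravity times its time interval, the gravity vector has a closed-form least-squares seed. This toy sketch ignores accelerometer bias, rotation and scale, all of which the full pipeline handles; the function name is an assumption.

```python
import numpy as np

def init_gravity(dv_world, dts):
    """Closed-form least-squares seed for the gravity vector: with the
    model dv_i = g * dt_i, minimizing sum ||dv_i - g*dt_i||^2 gives
    g = sum(dv_i * dt_i) / sum(dt_i^2), solved per axis."""
    dv_world = np.asarray(dv_world, dtype=float)
    dts = np.asarray(dts, dtype=float)
    return (dv_world * dts[:, None]).sum(axis=0) / (dts ** 2).sum()
```

Such a linear seed is exactly the kind of coarse value the later pre-integration and joint optimizations refine.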

Findings

The results of public data set experiments and real-world experiments show that this method has better accuracy and robustness than the state-of-the-art.

Originality/value

This method improves the accuracy of external parameter calibration and initialization and prevents the system from falling into a local minimum. Unlike the traditional approach of solving the inertial navigation parameters separately, in this paper all inertial navigation parameters are solved at once, the result of each step seeds the next optimization, and the external and inertial parameters are solved progressively from coarse to fine. This avoids local minima, reduces the number of iterations during optimization and improves the efficiency of the system.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 13 November 2018

Guoqing Li, Yunhai Geng and Wenzheng Zhang

Abstract

Purpose

This paper aims to introduce an efficient active simultaneous localization and mapping (SLAM) approach for rover navigation; future planetary rover exploration missions will require the rover to localize itself automatically and with high accuracy.

Design/methodology/approach

A three-dimensional (3D) feature detection method is first proposed to extract salient features from the observed point cloud. The salient features are then employed as candidate destinations for revisiting within the SLAM structure, followed by a path planning algorithm integrated with SLAM, wherein path length and map utility are leveraged to reduce the growth rate of the state estimation uncertainty.
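The trade-off between path length and map utility when picking a revisit destination can be sketched as a simple weighted score. The additive scoring form and the weight below are assumptions chosen for illustration, not the paper's actual objective function.

```python
def choose_revisit(candidates, weight=0.5):
    """Pick the revisit destination maximizing (map utility) minus a
    weighted path-length cost. `candidates` is a list of
    (name, path_length, utility) tuples."""
    best = max(candidates, key=lambda c: c[2] - weight * c[1])
    return best[0]
```

Raising the weight favors nearby landmarks; lowering it favors informative but distant ones, which is the lever an active-SLAM planner tunes.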

Findings

The proposed approach is able to extract distinguishable 3D landmarks for feature revisiting and can be integrated naturally and efficiently with any SLAM algorithm to improve navigation accuracy.

Originality/value

This paper proposes a novel active-SLAM structure for planetary rover exploration missions; the salient feature extraction method and the active revisit path planning method are validated to improve the accuracy of pose estimation.

Details

Aircraft Engineering and Aerospace Technology, vol. 91 no. 1
Type: Research Article
ISSN: 1748-8842

Keywords

Article
Publication date: 24 August 2022

Yi Jiang, Ting Wang, Shiliang Shao and Lebing Wang

Abstract

Purpose

In large-scale environments and unstructured scenarios, the accuracy and robustness of traditional light detection and ranging (LiDAR) simultaneous localization and mapping (SLAM) algorithms are reduced, and the algorithms may even fail completely. To overcome these problems, this study proposes a 3D LiDAR SLAM method for ground-based mobile robots, which fuses 3D LiDAR with an inertial measurement unit (IMU) to build an environment map and achieve real-time localization.

Design/methodology/approach

First, the authors use a normal distributions transform (NDT) algorithm based on a local map, with a corresponding motion prediction model, for point cloud registration in the front-end. Next, point cloud features are tightly coupled with IMU angle constraints, ground constraints and gravity constraints for graph-based optimization in the back-end. Finally, the cumulative error is reduced by adding loop closure detection.
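The NDT idea in the front-end, representing the reference cloud as per-voxel Gaussians and scoring a scan against them, can be sketched as follows. A real front-end would optimize the transform that maximizes this score under the motion prediction; the cell size, regularization and function names here are illustrative assumptions.

```python
import numpy as np

def ndt_cells(ref, cell=1.0):
    """Build per-cell Gaussian statistics (mean, covariance) of a
    reference cloud: the map representation NDT registers against."""
    cells = {}
    keys = np.floor(ref / cell).astype(np.int64)
    for k, p in zip(map(tuple, keys), ref):
        cells.setdefault(k, []).append(p)
    out = {}
    for k, pts in cells.items():
        pts = np.array(pts)
        if len(pts) >= 3:                      # need a usable covariance
            out[k] = (pts.mean(0), np.cov(pts.T) + 1e-6 * np.eye(3))
    return out

def ndt_score(cells, scan, cell=1.0):
    """Sum of Gaussian likelihood terms for a scan; higher means the
    scan agrees better with the reference distributions."""
    s = 0.0
    for p in scan:
        k = tuple(np.floor(p / cell).astype(np.int64))
        if k in cells:
            mu, cov = cells[k]
            d = p - mu
            s += np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)
    return s
```

A scan drawn from the reference cloud scores strictly higher than the same scan displaced out of the mapped cells, which is the signal the registration exploits.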

Findings

The algorithm is tested using a public data set containing indoor and outdoor scenarios. The results confirm that the proposed algorithm has high accuracy and robustness.

Originality/value

To improve the accuracy and robustness of SLAM, the method proposed in this paper introduces the NDT algorithm in the front-end and designs ground constraints and gravity constraints in the back-end. The proposed method performs satisfactorily when applied to ground-based mobile robots in experiments in complex environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 10 July 2007

Pingyuan Cui and Fuzhan Yue

Abstract

Purpose

To provide an autonomous navigation system that endows lunar rovers with increased autonomy, both for the achievement of scientific exploration goals and for safe navigation.

Design/methodology/approach

First, the algorithm and technique for initial position determination of lunar rovers are introduced. Then, a matched‐features set is built through multiple steps of image processing, such as feature detection, feature tracking and feature matching. Based on an analysis of the image processing error, a two‐stage estimation algorithm is used to estimate the motion: robust linear motion estimation is executed to estimate the motion initially and reject outliers, and Levenberg‐Marquardt non‐linear estimation is used to estimate the motion precisely. Next, a weighted ZSSD algorithm is presented to estimate the image disparities by analyzing the traditional ZSSD. Finally, a virtual simulation system is constructed using the Open Inventor development tool; this simulation system can provide stereo images for simulations of stereo vision and motion estimation techniques. Simulation results are provided, and future research work is addressed at the end.
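The ZSSD disparity step can be illustrated with a minimal block matcher: subtract each window's mean before comparing, then slide along the epipolar row. The uniform weights and window parameters below are assumptions for illustration, not the weighted ZSSD the paper derives.

```python
import numpy as np

def zssd(a, b, w=None):
    """Zero-mean sum of squared differences between two equal-size
    patches; optional weights can emphasize central pixels."""
    if w is None:
        w = np.ones_like(a, dtype=float)
    da, db = a - a.mean(), b - b.mean()
    return float((w * (da - db) ** 2).sum())

def disparity(left, right, x, y, half=2, max_d=10):
    """Disparity of left-image pixel (x, y): slide a window along the
    same row of the right image and keep the ZSSD-minimizing shift."""
    ref = left[y - half:y + half + 1, x - half:x + half + 1]
    best_d, best_c = 0, np.inf
    for d in range(max_d + 1):
        if x - d - half < 0:                   # window would leave image
            break
        cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
        c = zssd(ref, cand)
        if c < best_c:
            best_d, best_c = d, c
    return best_d
```

The zero-mean step is what makes the cost robust to the brightness offsets between the two cameras that plain SSD would penalize.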

Findings

An autonomous navigation system is built based on stereo vision, and the motion estimation and disparity estimation algorithms are developed.

Research limitations/implications

Field tests will be carried out in the near future to validate the autonomous navigation algorithm presented in this paper.

Practical implications

This paper is a very useful source of information for graduate students and a technical reference for researchers who work on lunar rovers.

Originality/value

In this paper, stereo vision‐based autonomous navigation techniques for lunar rovers are discussed, an autonomous navigation scheme based on stereo vision is presented, and the validity of all the algorithms involved is confirmed by simulations.

Details

Aircraft Engineering and Aerospace Technology, vol. 79 no. 4
Type: Research Article
ISSN: 0002-2667

Keywords

Article
Publication date: 19 October 2010

Helge Wurdemann, Vahid Aminzadeh, Jian S. Dai, John Reed and Graham Purnell

Abstract

Purpose

This paper aims to introduce and identify a new 3D handling operation (bin picking) for natural discrete food products using food categorisation.

Design/methodology/approach

The research presents a new food categorisation and the relation between food ordering processes and food categories. Bin picking in the food industry needs more flexible vision software than in the manufacturing industry in order to reduce the degree of disarray of food products and transfer them into an ordered structure.

Findings

It has been shown that there are still manually operated ordering processes in the food industry, such as bin picking; what is needed are new image processing approaches, such as active shape models (ASMs), developed to recognise the highly varying shapes of food products.

Research limitations/implications

This research was aimed at identifying a new ordering process and proving a new principle; for practical implementation, this bin picking solution needs to be developed and tested further.

Originality/value

Identifying new ordering processes via food categorisation is unique and applying ASMs to bin picking opens a new industrial sector (food industry) for 3D handling.

Details

Industrial Robot: An International Journal, vol. 37 no. 6
Type: Research Article
ISSN: 0143-991X

Keywords
