
Computationally efficient algorithm for vision-based simultaneous localization and mapping of mobile robots

Chen-Chien Hsu (Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan)
Cheng-Kai Yang (Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan)
Yi-Hsing Chien (Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan)
Yin-Tien Wang (Department of Mechanical and Electro-Mechanical Engineering, Tamkang University, New Taipei City, Taiwan)
Wei-Yen Wang (Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan)
Chiang-Heng Chien (Department of Electrical Engineering, National Taiwan Normal University, Taipei, Taiwan)

Engineering Computations

ISSN: 0264-4401

Article publication date: 12 June 2017

Abstract

Purpose

FastSLAM is a popular method for solving the simultaneous localization and mapping (SLAM) problem. However, as the number of landmarks in real environments grows, each measurement must be compared with every existing landmark in every particle, and execution becomes too slow for real-time navigation. This paper therefore aims to improve the computational efficiency and estimation accuracy of conventional SLAM algorithms.
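To picture the bottleneck described above, the following Python sketch shows per-particle maximum-likelihood data association in a FastSLAM-style filter: every measurement is scored against every landmark in every particle. The Particle/Landmark structures, the range-bearing model and the simplified innovation covariance (the landmark covariance is assumed to be already expressed in measurement space) are illustrative assumptions, not code or equations from the paper.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Landmark:
    mean: np.ndarray          # estimated 2-D landmark position
    cov: np.ndarray           # 2x2 covariance of that estimate

@dataclass
class Particle:
    pose: np.ndarray                               # [x, y, heading]
    landmarks: list = field(default_factory=list)

def range_bearing(pose, lm_mean):
    """Predicted range-bearing observation of a landmark from a robot pose."""
    dx, dy = lm_mean - pose[:2]
    return np.array([np.hypot(dx, dy), np.arctan2(dy, dx) - pose[2]])

def naive_association(particles, z, R):
    """Baseline FastSLAM-style data association: the measurement z is scored
    against every landmark of every particle, i.e. O(M*N) likelihood
    evaluations for M particles and N landmarks."""
    matches = []
    for p in particles:
        best_w, best_idx = 0.0, None
        for i, lm in enumerate(p.landmarks):
            v = z - range_bearing(p.pose, lm.mean)        # innovation
            v[1] = (v[1] + np.pi) % (2 * np.pi) - np.pi   # wrap bearing to [-pi, pi)
            S = lm.cov + R                                # simplified innovation covariance
            w = np.exp(-0.5 * v @ np.linalg.solve(S, v)) / \
                np.sqrt(np.linalg.det(2 * np.pi * S))     # Gaussian likelihood
            if w > best_w:
                best_w, best_idx = w, i
        matches.append((best_idx, best_w))  # ML landmark match for this particle
    return matches
```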

Design/methodology/approach

To address this problem, this paper presents a computationally efficient SLAM (CESLAM) algorithm in which odometer information is used to update the robot's pose in each particle. When a measurement has maximum likelihood with a known landmark in a particle, the particle state is updated before the landmark estimate is updated.
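The update ordering described above can be summarised in a minimal Python sketch. The injected helpers (motion_model, find_ml_landmark, refine_pose, ekf_update, init_landmark) and the particle's weight attribute are placeholders standing in for the authors' equations, which are not given in the abstract.

```python
def ceslam_step(particle, odometry, z, R,
                motion_model, find_ml_landmark, refine_pose,
                ekf_update, init_landmark):
    """One per-particle CESLAM-style update in the order the abstract describes:
      1. propagate the pose with odometer information,
      2. find the landmark with maximum measurement likelihood,
      3. update the particle (pose) state with that measurement first,
      4. then update the landmark estimate (e.g. an EKF step).
    All helper callables are assumptions, not the authors' exact formulation."""
    particle.pose = motion_model(particle.pose, odometry)   # step 1: odometry
    idx, likelihood = find_ml_landmark(particle, z, R)      # step 2: ML association
    if idx is None:
        init_landmark(particle, z, R)                       # previously unseen landmark
        return particle
    lm = particle.landmarks[idx]
    particle.pose = refine_pose(particle.pose, lm, z, R)    # step 3: pose before map
    ekf_update(lm, particle.pose, z, R)                     # step 4: landmark update
    particle.weight *= likelihood                           # importance weight
    return particle
```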

Findings

Simulation results show that the proposed CESLAM overcomes the heavy computational burden while improving the accuracy of localization and map building. To evaluate the method in practice, a Pioneer 3-DX robot equipped with a Kinect sensor is used to develop an RGB-D-based computationally efficient visual SLAM (CEVSLAM) built on Speeded-Up Robust Features (SURF). Experimental results confirm that the proposed CEVSLAM system successfully estimates the robot pose and builds the map with satisfactory accuracy.
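As a rough illustration of such a SURF-based RGB-D front end, the sketch below detects SURF features in an RGB frame, back-projects them to 3-D camera coordinates with the registered depth image, and matches descriptors with a Lowe ratio test. It assumes opencv-contrib-python built with the nonfree modules and uses placeholder Kinect-style intrinsics; it is not the authors' implementation.

```python
import cv2
import numpy as np

# Kinect-style pinhole intrinsics (placeholder values, not from the paper)
FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5

def surf_landmarks(rgb, depth_m, hessian=400):
    """Detect SURF keypoints in an RGB frame and back-project them to 3-D
    camera coordinates using the registered depth image (in metres)."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian)
    kps, desc = surf.detectAndCompute(gray, None)
    keep, points = [], []
    for i, kp in enumerate(kps):
        u, v = int(round(kp.pt[0])), int(round(kp.pt[1]))
        z = float(depth_m[v, u])
        if z > 0.0:                          # skip pixels with no depth return
            keep.append(i)
            points.append(((u - CX) * z / FX, (v - CY) * z / FY, z))
    kps = [kps[i] for i in keep]
    desc = desc[keep] if desc is not None else None
    return kps, desc, np.asarray(points)

def match_surf(desc_a, desc_b, ratio=0.7):
    """Match SURF descriptors with a Lowe ratio test on L2 distances."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(desc_a, desc_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return good
```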

Originality/value

The proposed CESLAM algorithm eliminates the time-consuming, unnecessary comparisons performed by existing FastSLAM algorithms. Simulations show that CESLAM greatly improves the accuracy of robot pose and landmark estimation. Combining CESLAM with SURF, the authors establish a CEVSLAM that significantly improves estimation accuracy and computational efficiency. Practical experiments with a Kinect visual sensor show that the variance and average error of the proposed CEVSLAM are smaller than those of other visual SLAM algorithms.

Acknowledgements

This research was partially supported by the "Aim for the Top University Project" and "Center of Learning Technology for Chinese" of National Taiwan Normal University (NTNU), sponsored by the Ministry of Education, Taiwan, R.O.C., and by the "International Research-Intensive Center of Excellence Program" of NTNU and the Ministry of Science and Technology, Taiwan, under Grant Nos. MOST 104-2911-I-003-301, MOST 104-2221-E-003-026 and MOST 104-2221-E-003-024.

Citation

Hsu, C.-C., Yang, C.-K., Chien, Y.-H., Wang, Y.-T., Wang, W.-Y. and Chien, C.-H. (2017), "Computationally efficient algorithm for vision-based simultaneous localization and mapping of mobile robots", Engineering Computations, Vol. 34 No. 4, pp. 1217-1239. https://doi.org/10.1108/EC-05-2015-0123

Publisher

Emerald Publishing Limited

Copyright © 2017, Emerald Publishing Limited
