Search results
1 – 4 of 4
Michał R. Nowicki, Dominik Belter, Aleksander Kostusiak, Petr Cížek, Jan Faigl and Piotr Skrzypczyński
Abstract
Purpose
This paper aims to evaluate four different simultaneous localization and mapping (SLAM) systems in the context of localization of multi-legged walking robots equipped with compact RGB-D sensors. This paper identifies problems related to in-motion data acquisition in a legged robot and evaluates the particular building blocks and concepts applied in contemporary SLAM systems against these problems. The SLAM systems are evaluated on two independent experimental set-ups, applying a well-established methodology and performance metrics.
Design/methodology/approach
Four feature-based SLAM architectures are evaluated with respect to their suitability for localization of multi-legged walking robots. The evaluation methodology is based on the computation of the absolute trajectory error (ATE) and relative pose error (RPE), which are performance metrics well-established in the robotics community. Four sequences of RGB-D frames acquired in two independent experiments using two different six-legged walking robots are used in the evaluation process.
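As a rough illustration of the two metrics, the sketch below computes translational ATE (RMSE of position residuals after a least-squares rigid alignment, the Horn/Umeyama method) and a translation-only simplification of RPE over a fixed frame offset. This is a minimal NumPy sketch of the general idea, not the evaluation code used in the paper; all function names are illustrative:

```python
import numpy as np

def align_umeyama(est, gt):
    """Least-squares rigid alignment (rotation + translation) of the
    estimated positions onto ground truth (Horn/Umeyama method)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    H = (est - mu_e).T @ (gt - mu_g)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ S @ U.T
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error: RMSE of position residuals after alignment."""
    R, t = align_umeyama(est, gt)
    residuals = gt - (est @ R.T + t)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

def rpe_rmse(est, gt, delta=1):
    """Relative pose error, translational part only, over a fixed frame offset."""
    d_est = est[delta:] - est[:-delta]
    d_gt = gt[delta:] - gt[:-delta]
    err = np.linalg.norm(d_est - d_gt, axis=1)
    return float(np.sqrt((err ** 2).mean()))
```

The full RPE as defined in the benchmarking literature operates on relative SE(3) transforms and also reports a rotational component; the translation-only form above conveys the idea of local, drift-insensitive error.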
Findings
The experiments revealed that the predominant problems of legged robots as platforms for SLAM are abrupt and unpredictable sensor motions, as well as oscillations and vibrations, which corrupt the images captured in motion. The tested adaptive gait allowed the evaluated SLAM systems to reconstruct proper trajectories. The bundle adjustment-based SLAM systems produced the best results, thanks to the use of a map, which makes it possible to establish a large number of constraints on the estimated trajectory.
Research limitations/implications
The evaluation was performed using indoor mockups of terrain. Experiments in more natural and challenging environments are envisioned as part of future research.
Practical implications
The lack of accurate self-localization methods is considered one of the most important limitations of walking robots. Thus, the evaluation of state-of-the-art SLAM methods on legged platforms may be useful for all researchers working on walking robots’ autonomy and their use in various applications, such as search, security, agriculture and mining.
Originality/value
The main contribution lies in the integration of state-of-the-art SLAM methods on walking robots and their thorough experimental evaluation using a well-established methodology. Moreover, a SLAM system designed specifically for RGB-D sensors and real-world applications is presented in detail.
Abstract
Purpose
The purpose of this paper is to describe a novel application of the well‐established 2D laser scan‐matching technique for self‐localization of a walking robot. The techniques described in this paper enable a walking robot with a 2D laser scanner to obtain precise maps of man‐made environments, which can be useful in search and reconnaissance missions, e.g. in warehouses, production plants, and other industrial areas.
Design/methodology/approach
The presented system combines two scan‐matching algorithms (PSM and PLICP) to deal with low‐quality range data from a compact laser scanner and to provide robust self‐localization in various types of man‐made environments. Data from proprioceptive sensors and simplifying assumptions holding in man‐made environments are exploited to compensate for the varying attitude of the walking robot, particularly in uneven terrain.
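The attitude-compensation step can be illustrated with a short sketch: the scanner's planar measurements are lifted to 3D, rotated by the roll and pitch angles obtained from proprioceptive sensing, and projected back onto a horizontal plane before scan matching. This is a minimal NumPy sketch of the general idea, not the paper's implementation; the function name and conventions are assumptions:

```python
import numpy as np

def compensate_attitude(ranges, bearings, roll, pitch):
    """Project a tilted planar laser scan onto the horizontal plane,
    given roll/pitch angles (rad) from proprioceptive sensors."""
    # Points in the scanner frame: the scan plane is the local xy plane.
    pts = np.stack([ranges * np.cos(bearings),
                    ranges * np.sin(bearings),
                    np.zeros_like(ranges)], axis=1)
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll about x
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch about y
    world = pts @ (Ry @ Rx).T      # rotate into a gravity-aligned frame
    return world[:, :2]            # horizontal projection for 2D matching
```

With zero roll and pitch this reduces to the ordinary polar-to-Cartesian conversion; on uneven terrain the projection shortens ranges measured through a tilted scan plane, which is exactly the distortion the paper reports as harmful when neglected.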
Findings
The experimental results suggest that neglecting either the poor initial pose guess obtained from the legged odometry, or the varying attitude angles of a walking robot's body, may lead to unacceptable results in self‐localization and scan‐based mapping. It is also demonstrated that using the PSM algorithm to compute the initial pose estimate for the more precise PLICP scan‐matching algorithm improves the results of self‐localization.
Research limitations/implications
So far, the presented self‐localization system has been tested only in limited‐scale indoor experiments. Experiments with more extended and realistic scenarios are scheduled as further work.
Practical implications
Applying the techniques described in this paper, the author was able to obtain the robot pose and precise maps of man‐made environments, which can be useful in USAR and reconnaissance missions, as well as in warehouses, production plants, and other industrial areas.
Originality/value
The scan‐matching algorithms used in the presented research are not new; the contribution lies in combining them to overcome issues specific to a small‐size legged platform, using only common, affordable hardware.
Dominik Belter and Piotr Skrzypczynski
Abstract
Purpose
The purpose of this paper is to describe a novel application of the recently introduced concept from computer vision to self‐localization of a walking robot in unstructured environments. The technique described in this paper enables a walking robot with a monocular vision system (single camera) to obtain precise estimates of its pose with regard to the six degrees of freedom. This capability is essential in search and rescue missions in collapsed buildings, polluted industrial plants, etc.
Design/methodology/approach
The Parallel Tracking and Mapping (PTAM) algorithm and an Inertial Measurement Unit (IMU) are used to determine the 6‐d.o.f. pose of a walking robot. Bundle‐adjustment‐based tracking and structure reconstruction are applied to obtain precise camera poses from the monocular vision data. The inclination of the robot's platform is determined using the IMU. The self‐localization system is used together with an RRT‐based motion planner, which allows the robot to walk autonomously on rough, previously unknown terrain. The presented system operates on‐line on a real hexapod robot. The efficiency and precision of the proposed solution are demonstrated by experimental data.
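One common way to combine bundle-adjustment-based camera tracking with an IMU is to keep the yaw estimated by the vision system (which gravity sensing cannot observe) while taking roll and pitch from the gravity-referenced, drift-free IMU. The sketch below illustrates this idea under a ZYX (yaw-pitch-roll) Euler convention; it is a simplified illustration, not the exact fusion used on the robot, and all names are assumptions:

```python
import numpy as np

def fuse_orientation(R_vision, imu_roll, imu_pitch):
    """Rebuild a rotation matrix that keeps the vision-estimated yaw
    but takes roll and pitch (rad) from the IMU. ZYX Euler convention."""
    # Yaw extracted from the vision rotation matrix (ZYX convention).
    yaw = np.arctan2(R_vision[1, 0], R_vision[0, 0])
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(imu_pitch), np.sin(imu_pitch)
    cr, sr = np.cos(imu_roll), np.sin(imu_roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw (vision)
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch (IMU)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll (IMU)
    return Rz @ Ry @ Rx
```

The resulting rotation, combined with the PTAM translation, gives a full 6-d.o.f. pose whose inclination angles do not drift, which is what an elevation-mapping or motion-planning module downstream would consume.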
Findings
The PTAM‐based self‐localization system enables the robot to walk autonomously on rough terrain. The software operates on‐line and can be implemented on the robot's on‐board PC. Results of the experiments show that the position error is small enough to allow robust elevation mapping using the laser scanner. Despite unavoidable foot slippage, the walking robot using PTAM for self‐localization can precisely estimate its position and successfully recover from motion execution errors.
Research limitations/implications
So far, the presented self‐localization system has been tested only in limited‐scale indoor experiments. Experiments with more realistic outdoor scenarios are scheduled as further work.
Practical implications
Precise self‐localization may be one of the most important factors enabling the use of walking robots in practical USAR missions. The results of research on precise self‐localization in 6‐d.o.f. may also be useful for autonomous robots in other application areas: construction, agriculture, military.
Originality/value
The vision‐based self‐localization algorithm used in the presented research is not new; the contribution lies in its integration on a walking robot and its experimental evaluation in the demanding problem of precise self‐localization in rough terrain.