Search results

1 – 10 of 472
Article
Publication date: 13 November 2018

Guoqing Li, Yunhai Geng and Wenzheng Zhang

Abstract

Purpose

This paper aims to introduce an efficient active simultaneous localization and mapping (SLAM) approach for rover navigation, as future planetary rover exploration missions will require the rover to localize itself automatically with high accuracy.

Design/methodology/approach

A three-dimensional (3D) feature detection method is first proposed to extract salient features from the observed point cloud. The salient features are then employed as candidate destinations for re-visiting within the SLAM structure, followed by a path planning algorithm integrated with SLAM, wherein path length and map utility are leveraged to reduce the growth rate of state estimation uncertainty.
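The trade-off described above, weighing path length against map utility when selecting a revisit destination, can be sketched as a simple scoring function. This is an illustrative reconstruction, not the paper's formulation; the weighting parameter and the normalization are assumptions.

```python
import numpy as np

def score_revisit_candidates(candidates, rover_pos, utilities, alpha=0.5):
    """Rank candidate revisit destinations by trading travel cost
    (path length) against expected map utility. `alpha` and the
    normalization scheme are illustrative assumptions."""
    dists = np.linalg.norm(candidates - rover_pos, axis=1)
    d_norm = dists / (dists.max() + 1e-9)             # comparable scales
    u_norm = utilities / (utilities.max() + 1e-9)
    scores = alpha * d_norm - (1.0 - alpha) * u_norm  # lower is better
    return np.argsort(scores)

cands = np.array([[1.0, 0.0], [10.0, 0.0], [2.0, 2.0]])
utils = np.array([0.9, 0.8, 0.1])
order = score_revisit_candidates(cands, np.array([0.0, 0.0]), utils)
# The nearby, highly informative landmark ranks first.
```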

Findings

The proposed approach is able to extract distinguishable 3D landmarks for feature re-visiting and can be naturally integrated with any SLAM algorithm in an efficient manner to improve navigation accuracy.

Originality/value

This paper proposes a novel active-SLAM structure for planetary rover exploration missions; the salient feature extraction method and the active revisit path planning method are validated to improve the accuracy of pose estimation.

Details

Aircraft Engineering and Aerospace Technology, vol. 91 no. 1
Type: Research Article
ISSN: 1748-8842

Article
Publication date: 1 February 2021

Shuhuan Wen, Xiaohan Lv, Hak Keung Lam, Shaokang Fan, Xiao Yuan and Ming Chen

Abstract

Purpose

This paper aims to use the Monodepth method to improve the speed of obstacle identification and proposes a Probability Dueling DQN algorithm to optimize the path of the agent, which reaches the destination more quickly than with the Dueling DQN algorithm. The path planning algorithm based on Probability Dueling DQN is then combined with FastSLAM to accomplish autonomous navigation and map the environment.
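The Dueling DQN architecture that the proposed Probability Dueling DQN builds on splits the network into a state-value head and an advantage head. A minimal sketch of the standard aggregation step follows; the paper's probabilistic extension is not reproduced here.

```python
import numpy as np

def dueling_q_values(value, advantages):
    """Standard Dueling DQN aggregation: combine a state-value head V(s)
    and an advantage head A(s, a) into Q(s, a). Subtracting the mean
    advantage keeps the V/A decomposition identifiable."""
    advantages = np.asarray(advantages, dtype=float)
    return value + advantages - advantages.mean()

q = dueling_q_values(2.0, [1.0, -1.0, 0.0])   # -> [3., 1., 2.]
```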

Design/methodology/approach

This paper proposes an active simultaneous localization and mapping (SLAM) framework for autonomous navigation under an indoor environment with static and dynamic obstacles. It integrates a path planning algorithm with visual SLAM to decrease navigation uncertainty and build an environment map.

Findings

The results show that the proposed method outperforms the existing Dueling DQN algorithm in reducing navigation uncertainty in real-world indoor environments containing static and dynamic obstacles of varying number and shape.

Originality/value

This paper proposes a novel active SLAM framework composed of Probability Dueling DQN, an improved path planning algorithm based on Dueling DQN, and FastSLAM. The framework is used with the Monodepth depth-image prediction method, whose faster prediction speed enables autonomous navigation in indoor environments with static and dynamic obstacles of varying number and shape.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 May 2019

Haoyao Chen, Hailin Huang, Ye Qin, Yanjie Li and Yunhui Liu

Abstract

Purpose

Multi-robot laser-based simultaneous localization and mapping (SLAM) in large-scale environments is an essential but challenging issue in mobile robotics, especially in situations wherein no prior knowledge is available between robots. Moreover, the cumulative errors of every individual robot exert a serious negative effect on loop detection and map fusion. To address these problems, this paper aims to propose an efficient approach that combines laser and vision measurements.

Design/methodology/approach

A multi-robot visual-laser SLAM system is developed to realize robust and efficient SLAM in large-scale environments; both vision and laser loop detections are integrated to detect robust loops. A method based on oriented FAST and rotated BRIEF (ORB) feature detection and bag of words (BoW) is developed to ensure the robustness and computational effectiveness of the multi-robot SLAM system. A robust and efficient graph fusion algorithm is proposed to merge pose graphs from different robots.
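The BoW-based loop detection step can be illustrated with a minimal sketch: each keyframe is summarized as a histogram of visual words, and a loop candidate is accepted when its similarity to a past keyframe exceeds a threshold. Production systems typically use tf-idf weighted vocabularies (e.g. DBoW2); the raw-count cosine similarity and the threshold value here are simplifying assumptions.

```python
import numpy as np

def bow_similarity(hist_a, hist_b):
    """Cosine similarity between two bag-of-words histograms of visual
    words (raw counts; real systems usually apply tf-idf weighting)."""
    a, b = np.asarray(hist_a, float), np.asarray(hist_b, float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

def detect_loop(query, database, threshold=0.8):
    """Return the index of the best-matching past keyframe, or -1 if no
    candidate exceeds the similarity threshold."""
    sims = [bow_similarity(query, h) for h in database]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else -1

db = [np.array([5.0, 0.0, 1.0]), np.array([0.0, 4.0, 4.0])]
match = detect_loop(np.array([5.0, 0.0, 1.0]), db)      # identical histogram
no_match = detect_loop(np.array([0.0, 0.0, 9.0]), db)   # dissimilar query
```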

Findings

The proposed method can detect loops more quickly and accurately than the laser-only SLAM, and it can fuse the submaps of each single robot to promote the efficiency, accuracy and robustness of the system.

Originality/value

Compared with state-of-the-art multi-robot SLAM approaches, this paper proposes a novel and more sophisticated approach. The vision-based and laser-based loops are integrated to realize robust loop detection. The ORB features and BoW technologies are further utilized to achieve real-time performance. Finally, random sample consensus and least-squares methodologies are used to remove outlier loops among robots.

Details

Assembly Automation, vol. 39 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 18 October 2019

Shuhuan Wen, Xueheng Hu, Zhen Li, Hak Keung Lam, Fuchun Sun and Bin Fang

Abstract

Purpose

This paper aims to propose a novel active SLAM framework that realizes obstacle avoidance and autonomous navigation in indoor environments.

Design/methodology/approach

The improved fuzzy optimized Q-Learning (FOQL) algorithm is used to solve the robot's obstacle avoidance problem in the environment. To reduce the motion deviation of the robot, a fractional-order controller is designed. The localization of the robot is based on the FastSLAM algorithm.
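FOQL builds on the classical tabular Q-learning update. A minimal sketch of that core rule follows; the fuzzy optimization layer itself is not reproduced here.

```python
import numpy as np

def q_update(Q, s, a, reward, s_next, alpha=0.1, gamma=0.9):
    """One classical tabular Q-learning update: move Q[s, a] toward the
    temporal-difference target reward + gamma * max_a' Q[s', a']."""
    td_target = reward + gamma * Q[s_next].max()
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

Q = np.zeros((2, 2))
Q = q_update(Q, s=0, a=1, reward=1.0, s_next=1)   # Q[0, 1] becomes 0.1
```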

Findings

Simulation results of obstacle avoidance using the traditional Q-learning algorithm, the optimized Q-learning algorithm and the FOQL algorithm are compared. The simulation results show that the improved FOQL algorithm has a faster learning speed than the other two algorithms. To verify the simulation results, the FOQL algorithm is implemented on a NAO robot, and the experimental results demonstrate that the improved fuzzy optimized Q-Learning obstacle avoidance algorithm is feasible and effective.

Originality/value

The improved FOQL algorithm is used to solve the robot's obstacle avoidance problem, and a fractional-order controller is designed to reduce its motion deviation. The algorithm is implemented on a NAO robot, and the experimental results demonstrate that the improved fuzzy optimized Q-Learning obstacle avoidance algorithm is feasible and effective.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 August 2020

Mehmet Caner Akay and Hakan Temeltaş

Abstract

Purpose

Heterogeneous teams consisting of unmanned ground vehicles and unmanned aerial vehicles are being used for different types of missions such as surveillance, tracking and exploration. Exploration missions with heterogeneous robot teams (HeRTs) should acquire a common map for understanding the surroundings better. The purpose of this paper is to provide a unique approach with cooperative use of agents that provides a well-detailed observation over the environment where challenging details and complex structures are involved. Also, this method is suitable for real-time applications and autonomous path planning for exploration.

Design/methodology/approach

Lidar odometry and mapping, together with various similarity metrics such as Shannon entropy, Kullback–Leibler divergence, Jeffrey divergence, K divergence, Topsoe divergence, Jensen–Shannon divergence and Jensen divergence, is used to construct a common height map of the environment. Furthermore, the authors present a layering method that provides more accuracy and a better understanding of the common map.
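As a concrete illustration of how such metrics compare two local maps, the Jensen–Shannon divergence between two height histograms can be computed as follows. Treating each local map as a normalized histogram is a simplifying assumption, not the paper's exact pipeline.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Kullback-Leibler divergence with light smoothing to avoid log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: a symmetric, bounded variant of KL,
    one of the seven metrics the paper compares."""
    m = (np.asarray(p, float) + np.asarray(q, float)) / 2.0
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)
```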

Findings

In the experiments, the authors observed features located beneath and above trees and roofed areas without any need for a global positioning system (GPS) signal. Additionally, the determined similarity metric and the layering method yield a more effective common map that enables trajectory planning for both vehicles.

Originality/value

In this study, the authors present a unique solution that implements various entropy-based similarity metrics to construct common maps of the environment with HeRTs. Shannon entropy-based similarity metrics are suitable for creating common maps, as Shannon entropy is the only one that holds the chain rule of conditional probability precisely. Seven distinct similarity metrics are compared, and the most effective one is chosen to obtain a more comprehensive and valid common map. Moreover, unlike previous studies in the literature, the layering method is used to compute the similarities of each local map obtained by a HeRT. This method also improves the accuracy of the merged common map, since each robot's field of view prevents identical observations of features such as roofed areas or trees. This novel approach can also be used in GPS-denied and closed environments. The results are verified with experiments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 27 May 2020

Quentin Kevin Gautier, Thomas G. Garrison, Ferrill Rushton, Nicholas Bouck, Eric Lo, Peter Tueller, Curt Schurgers and Ryan Kastner

Abstract

Purpose

Digital documentation techniques of tunneling excavations at archaeological sites are becoming more common. These methods, such as photogrammetry and LiDAR (Light Detection and Ranging), are able to create precise three-dimensional models of excavations to complement traditional forms of documentation with millimeter to centimeter accuracy. However, these techniques require either expensive pieces of equipment or a long processing time that can be prohibitive during short field seasons in remote areas. This article aims to determine the effectiveness of various low-cost sensors and real-time algorithms to create digital scans of archaeological excavations.

Design/methodology/approach

The authors used a class of algorithms called SLAM (Simultaneous Localization and Mapping) along with depth-sensing cameras. While these algorithms have largely improved over recent years, the accuracy of the results still depends on the scanning conditions. The authors developed a prototype of a scanning device and collected 3D data at a Maya archaeological site and refined the instrument in a system of natural caves. This article presents an analysis of the resulting 3D models to determine the effectiveness of the various sensors and algorithms employed.

Findings

While not as accurate as commercial LiDAR systems, the prototype presented, employing a time-of-flight depth sensor and using a feature-based SLAM algorithm, is a rapid and effective way to document archaeological contexts at a fraction of the cost.

Practical implications

The proposed system is easy to deploy, provides real-time results and would be particularly useful in salvage operations as well as in high-risk areas where cultural heritage is threatened.

Originality/value

This article compares many different low-cost scanning solutions for underground excavations, along with presenting a prototype that can be easily replicated for documentation purposes.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. 10 no. 4
Type: Research Article
ISSN: 2044-1266

Article
Publication date: 26 April 2013

Stefan Winkvist, Emma Rushforth and Ken Young

Abstract

Purpose

The purpose of this paper is to present a novel approach to the design of an autonomous Unmanned Aerial Vehicle (UAV) to aid with the internal inspection and classification of tall or large structures. The paper focuses mainly on the challenge of robustly determining the position and velocity of the UAV in three-dimensional space using on-board Simultaneous Localisation and Mapping (SLAM). Although capable of autonomous flight, the UAV is primarily intended for semi-autonomous operation, where the operator instructs the UAV where to go. However, if communications with the ground station are lost, it can backtrack along its path until communications are re-established.

Design/methodology/approach

A UAV has been designed and built using primarily commercial‐off‐the‐shelf components. Software has been developed to allow the UAV to operate autonomously, using solely the on‐board computer and sensors. It is currently undergoing extensive flight tests to determine the performance and limitations of the system as a whole.

Findings

Initial test flights have proven the presented approach and the resulting real-time SLAM algorithms to function robustly in a range of large internal environments. The paper also briefly discusses the approaches used by similar projects and the challenges faced.

Originality/value

The proposed novel algorithms allow for on‐board, real‐time, three‐dimensional SLAM in unknown and unstructured environments on a computationally constrained UAV.

Article
Publication date: 3 December 2018

Babing Ji and Qixin Cao

Abstract

Purpose

This paper aims to propose a new solution for real-time 3D perception with a monocular camera. Most industrial robot solutions use active sensors to acquire 3D structure information, which limits their applications to indoor scenarios. Using only a monocular camera, some state-of-the-art methods provide up-to-scale 3D structure information, but the scale of the corresponding objects remains uncertain.

Design/methodology/approach

First, high-accuracy, scale-informed camera poses and a sparse 3D map are provided by leveraging ORB-SLAM and a marker. Second, for each frame captured by the camera, a specially designed depth estimation pipeline computes the corresponding 3D structure, called a depth map, in real time. Finally, the depth map is integrated into a volumetric scene model. A feedback module has been designed for users to visualize the intermediate scene surface in real time.
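The volumetric integration step commonly uses a weighted running average over truncated signed distance function (TSDF) voxels. A minimal sketch, assuming the standard TSDF update rather than the authors' exact implementation:

```python
import numpy as np

def fuse_tsdf(tsdf, weights, new_tsdf, new_weight=1.0, max_weight=64.0):
    """Weighted running-average TSDF update: each voxel's signed-distance
    value is averaged with the new observation, weighted by how often the
    voxel has been seen; stored weights are clamped so old observations
    do not dominate forever."""
    fused = (tsdf * weights + new_tsdf * new_weight) / (weights + new_weight)
    w_new = np.minimum(weights + new_weight, max_weight)
    return fused, w_new

fused, w = fuse_tsdf(np.array([0.5, -0.2]), np.array([1.0, 3.0]),
                     np.array([0.3, -0.4]))   # -> [0.4, -0.25], weights [2, 4]
```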

Findings

The system provides robust tracking performance and compelling results. The implementation runs at nearly 25 Hz on a mainstream laptop using parallel computation techniques.

Originality/value

A new solution for 3D perception using a monocular camera is proposed by leveraging the ORB-SLAM system. The results are visually comparable to those of active-sensor systems such as ElasticFusion in small scenes. The system is both efficient and easy to implement, and the algorithms and specific configurations involved are introduced in detail.

Details

Industrial Robot: An International Journal, vol. 45 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 April 2014

Annette Mossel, Michael Leichtfried, Christoph Kaltenriner and Hannes Kaufmann

Abstract

Purpose

The authors present a low-cost unmanned aerial vehicle (UAV) for autonomous flight and navigation in GPS-denied environments using an off-the-shelf smartphone as its core on-board processing unit. The approach is thereby independent of additional ground hardware, and the UAV core unit can easily be replaced with more powerful hardware, which simplifies setup updates as well as maintenance. The paper aims to discuss these issues.

Design/methodology/approach

The UAV is able to map, locate and navigate in an unknown indoor environment fusing vision-based tracking with inertial and attitude measurements. The authors choose an algorithmic approach for mapping and localization that does not require GPS coverage of the target area; therefore autonomous indoor navigation is made possible.

Findings

The authors demonstrate the UAV's capabilities of mapping, localization and navigation in an unknown 2D marker environment. The promising results enable future research on 3D self-localization and dense mapping using mobile hardware as the only on-board processing unit.

Research limitations/implications

The proposed autonomous flight processing pipeline robustly tracks and maps planar markers that need to be distributed throughout the tracking volume.

Practical implications

Due to the cost-effective platform and the flexibility of the software architecture, the approach can play an important role in areas with poor infrastructure (e.g. developing countries) to autonomously perform tasks for search and rescue, inspection and measurements.

Originality/value

The authors provide a low-cost off-the-shelf flight platform that only requires a commercially available mobile device as core processing unit for autonomous flight in GPS-denied areas.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 26 April 2013

Dominik Belter and Piotr Skrzypczynski

Abstract

Purpose

The purpose of this paper is to describe a novel application of the recently introduced concept from computer vision to self‐localization of a walking robot in unstructured environments. The technique described in this paper enables a walking robot with a monocular vision system (single camera) to obtain precise estimates of its pose with regard to the six degrees of freedom. This capability is essential in search and rescue missions in collapsed buildings, polluted industrial plants, etc.

Design/methodology/approach

The Parallel Tracking and Mapping (PTAM) algorithm and an Inertial Measurement Unit (IMU) are used to determine the 6-d.o.f. pose of a walking robot. Bundle-adjustment-based tracking and structure reconstruction are applied to obtain precise camera poses from the monocular vision data. The inclination of the robot's platform is determined using the IMU. The self-localization system is used together with an RRT-based motion planner, which allows the robot to walk autonomously on rough, previously unknown terrain. The presented system operates on-line on a real hexapod robot. The efficiency and precision of the proposed solution are demonstrated by experimental data.
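The bundle-adjustment-based tracking in PTAM minimizes reprojection error, the image-space distance between a projected map point and its observed pixel. A minimal sketch of that residual, with an assumed pinhole intrinsic matrix:

```python
import numpy as np

def reprojection_error(point_cam, pixel, K):
    """Pinhole reprojection error: project a map point (in the camera
    frame) through the intrinsic matrix K and measure the image-space
    distance to the observed pixel. Bundle adjustment minimizes the sum
    of squares of these residuals over many points and keyframes."""
    proj = K @ point_cam
    proj = proj[:2] / proj[2]          # perspective division
    return float(np.linalg.norm(proj - pixel))

K = np.array([[500.0, 0.0, 320.0],     # assumed focal lengths and
              [0.0, 500.0, 240.0],     # principal point, for illustration
              [0.0, 0.0, 1.0]])
err = reprojection_error(np.array([0.0, 0.0, 2.0]),
                         np.array([320.0, 240.0]), K)   # on-axis point: 0.0
```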

Findings

The PTAM-based self-localization system enables the robot to walk autonomously on rough terrain. The software operates on-line and can be implemented on the robot's on-board PC. Results of the experiments show that the position error is small enough to allow robust elevation mapping using the laser scanner. In spite of unavoidable foot slippage, the walking robot using PTAM for self-localization can precisely estimate its position and successfully recover from motion execution errors.

Research limitations/implications

So far, the presented self-localization system has been tested only in limited-scale indoor experiments. Experiments with more realistic outdoor scenarios are scheduled as further work.

Practical implications

Precise self-localization may be one of the most important factors enabling the use of walking robots in practical USAR missions. The results of research on precise self-localization in 6-d.o.f. may also be useful for autonomous robots in other application areas: construction, agriculture and the military.

Originality/value

The vision‐based self‐localization algorithm used in the presented research is not new, but the contribution lies in its implementation/integration on a walking robot, and experimental evaluation in the demanding problem of precise self‐localization in rough terrain.

Details

Industrial Robot: An International Journal, vol. 40 no. 3
Type: Research Article
ISSN: 0143-991X
