Search results

21 – 30 of 150
Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He

Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper focuses on the research state of LiDAR-based SLAM for robotic mapping and surveys the literature from the perspective of various LiDAR types and configurations.

Findings

This paper conducted a comprehensive literature review of LiDAR-based SLAM systems across three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be important future trends.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 August 2020

Mehmet Caner Akay and Hakan Temeltaş

Abstract

Purpose

Heterogeneous teams consisting of unmanned ground vehicles and unmanned aerial vehicles are being used for different types of missions such as surveillance, tracking and exploration. Exploration missions with heterogeneous robot teams (HeRTs) should acquire a common map for understanding the surroundings better. The purpose of this paper is to provide a unique approach in which agents are used cooperatively to obtain well-detailed observations of environments involving challenging details and complex structures. The method is also suitable for real-time applications and autonomous path planning for exploration.

Design/methodology/approach

Lidar odometry and mapping and various similarity metrics such as Shannon entropy, Kullback–Leibler divergence, Jeffrey divergence, K divergence, Topsoe divergence, Jensen–Shannon divergence and Jensen divergence are used to construct a common height map of the environment. Furthermore, the authors present a layering method that provides more accuracy and a better understanding of the common map.
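
For illustration, here is a minimal sketch of how such entropy-based similarity metrics could be computed between two local height maps treated as probability distributions; the normalization step and the example maps are assumptions for the sketch, not the paper's implementation:

```python
# Sketch: entropy-based similarity metrics between two local height maps,
# each flattened into a normalized histogram. Illustrative only; the
# paper's exact discretization and layering procedure are not reproduced.
import numpy as np

def normalize(h, eps=1e-12):
    """Flatten a local height map into a probability distribution."""
    p = np.asarray(h, dtype=float).ravel() + eps
    return p / p.sum()

def shannon_entropy(p):
    return -np.sum(p * np.log2(p))

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q)."""
    return np.sum(p * np.log2(p / q))

def jeffrey_divergence(p, q):
    """Symmetrized KL divergence (Jeffrey divergence)."""
    return kl_divergence(p, q) + kl_divergence(q, p)

def jensen_shannon(p, q):
    """Jensen-Shannon divergence: symmetric and bounded."""
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Example: compare two hypothetical 2x2 local height maps.
p = normalize([[0.2, 1.1], [0.9, 0.0]])
q = normalize([[0.3, 1.0], [1.0, 0.1]])
print(shannon_entropy(p), jensen_shannon(p, q))  # small JSD -> similar maps
```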

Findings

In summary, with the experiments, the authors observed features located beneath trees or roofed areas, as well as above them, without any need for a global positioning system signal. Additionally, a more effective common map that enables planning trajectories for both vehicles is obtained with the chosen similarity metric and the layering method.

Originality/value

In this study, the authors present a unique solution that implements various entropy-based similarity metrics with the aim of constructing common maps of the environment with HeRTs. To create common maps, Shannon entropy–based similarity metrics can be used, as Shannon entropy is the only one that satisfies the chain rule of conditional probability exactly. Seven distinct similarity metrics are compared, and the most effective one is chosen for obtaining a more comprehensive and valid common map. Moreover, differently from other studies in the literature, the layering method is used to compute the similarities of each local map obtained by a HeRT. This method also preserves the accuracy of the merged common map, as the robots' differing fields of view prevent identical observations of features such as roofed areas or trees. This novel approach can also be used in global positioning system-denied and closed environments. The results are verified with experiments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 June 2023

Qamar Ul Islam, Haidi Ibrahim, Pan Kok Chin, Kevin Lim and Mohd Zaid Abdullah

Abstract

Purpose

Many popular simultaneous localization and mapping (SLAM) techniques have low accuracy, especially when localizing in environments containing dynamically moving objects, since their presence can cause inaccurate data associations. To address this issue, the proposed FADM-SLAM system aims to improve the accuracy of SLAM techniques in environments containing dynamically moving objects. It uses a pipeline of feature-based approaches accompanied by sparse optical flow and multi-view geometry as constraints to achieve this goal.

Design/methodology/approach

FADM-SLAM, which works with monocular, stereo and RGB-D sensors, combines an instance segmentation network incorporating an intelligent motion detection strategy (iM) with an optical flow technique to improve location accuracy. The proposed FADM-SLAM system comprises four principal modules: the optical flow mask and iM, ego-motion estimation, dynamic point detection and the feature-based extraction framework.

Findings

Experimental results using the publicly available RGBD-Bonn data set indicate that FADM-SLAM outperforms established visual SLAM systems in highly dynamic conditions.

Originality/value

In summary, the first module generates indications of dynamic objects by using optical flow and iM with geometry-wise segmentation, which the second module then uses to compute an initial pose estimate. The third module first searches for dynamic feature points in the environment and then eliminates them from further processing; an algorithm based on epipolar constraints is implemented to do this. In this way, only the static feature points are retained, and these are fed to the fourth module for extracting important features.
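
As a sketch of the epipolar-constraint idea described above: matched feature points whose distance to the corresponding epipolar line exceeds a threshold violate the static-scene assumption and can be discarded as dynamic. The fundamental matrix, the N x 2 point arrays and the pixel threshold below are assumptions for illustration, not FADM-SLAM's code:

```python
# Sketch: rejecting dynamic feature points with an epipolar constraint.
import numpy as np

def epipolar_distances(F, pts1, pts2):
    """Distance of each matched point in image 2 to its epipolar line."""
    ones = np.ones((len(pts1), 1))
    x1 = np.hstack([pts1, ones])           # N x 3 homogeneous points
    x2 = np.hstack([pts2, ones])
    lines = x1 @ F.T                       # epipolar lines l2 = F @ x1
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den

def keep_static(F, pts1, pts2, thresh_px=1.0):
    """Retain only matches consistent with static-scene epipolar geometry."""
    mask = epipolar_distances(F, pts1, pts2) < thresh_px
    return pts1[mask], pts2[mask]
```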

Details

Robotic Intelligence and Automation, vol. 43 no. 3
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 17 October 2016

Xianglong Kong, Wenqi Wu, Lilian Zhang, Xiaofeng He and Yujie Wang

Abstract

Purpose

This paper aims to present a method for improving the performance of the visual-inertial navigation system (VINS) by using a bio-inspired polarized light compass.

Design/methodology/approach

The measurement model of each sensor module is derived, and a robust stochastic cloning extended Kalman filter (RSC-EKF) is implemented for data fusion. This fusion framework can not only handle multiple relative and absolute measurements but can also deal with outliers and sensor outages in each measurement module.
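
One common way to make a Kalman filter robust to outliers, in the spirit described here, is to gate each measurement on its innovation before fusing it. The sketch below is a generic chi-square-gated EKF update under assumed state and measurement models, not the paper's RSC-EKF:

```python
# Sketch: an EKF measurement update that skips outlier measurements.
import numpy as np

def robust_ekf_update(x, P, z, h, H, R, gate=7.815):
    """EKF update with a chi-square innovation gate
    (7.815 is the 95% quantile of chi2 with 3 degrees of freedom)."""
    y = z - h(x)                         # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    if y @ np.linalg.solve(S, y) > gate:
        return x, P                      # outlier: skip this measurement
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```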

Findings

The paper tests the approach on data sets acquired by a land vehicle moving in different environments and compares its performance against other methods. The results demonstrate the effectiveness of the proposed method for reducing the error growth of the VINS in the long run.

Originality/value

The main contribution of this paper lies in the design and implementation of the RSC-EKF for incorporating the homemade polarized light compass into the visual-inertial navigation pipeline. The real-world tests in different environments demonstrate the effectiveness and feasibility of the proposed approach.

Details

Industrial Robot: An International Journal, vol. 43 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 June 2015

Boxin Zhao, Olaf Hellwich, Tianjiang Hu, Dianle Zhou, Yifeng Niu and Lincheng Shen

Abstract

Purpose

This study aims to investigate if smartphone sensors can be used in an unmanned aerial vehicle (UAV) localization system. With the development of technology, smartphones have been tentatively used in micro-UAVs due to their light weight, low cost and flexibility. In this study, a Samsung Galaxy S3 smartphone is selected as an on-board sensor platform for UAV localization in Global Positioning System (GPS)-denied environments, and two main issues are investigated: Are the phone sensors appropriate for UAV localization? If yes, what are the boundary conditions of employing them?

Design/methodology/approach

Efficient accuracy estimation methodologies for the phone sensors are proposed without using any expensive instruments. Using these methods, one can estimate a phone's sensor accuracy at any time without special instruments. Then, a visual-inertial odometry scheme is introduced to evaluate the phone sensor-based path estimation performance.
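
As a toy illustration of instrument-free sensor characterization: with the phone at rest, the mean of each inertial channel estimates its constant bias and the standard deviation estimates its noise level. This simple procedure is an assumption for the sketch; the paper's methodology is more elaborate:

```python
# Sketch: crude bias/noise estimation for phone IMU sensors from a
# stationary recording, with no external instruments.
import numpy as np

def stationary_bias_and_noise(samples):
    """samples: N x 3 gyroscope (rad/s) or accelerometer (m/s^2)
    readings recorded while the phone lies still."""
    samples = np.asarray(samples, dtype=float)
    bias = samples.mean(axis=0)        # constant offset per axis
    noise_std = samples.std(axis=0)    # white-noise level per axis
    return bias, noise_std

# Example with synthetic gyro data: true bias of 0.01 rad/s on the x axis.
rng = np.random.default_rng(0)
gyro = rng.normal([0.01, 0.0, 0.0], 0.002, size=(2000, 3))
print(stationary_bias_and_noise(gyro))
```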

Findings

Boundary conditions for using a smartphone in a UAV navigation system are found. Both indoor and outdoor localization experiments are carried out, and the experimental results validate the effectiveness of the boundary conditions and the corresponding implemented scheme.

Originality/value

With the phone as a payload, UAVs can be realized at smaller scale and lower cost, which will allow them to be used widely in the field of industrial robots.

Details

Industrial Robot: An International Journal, vol. 42 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 September 2023

Minghao Wang, Ming Cong, Dong Liu, Yu Du, Xiaojing Tian and Bing Li

Abstract

Purpose

The purpose of this study is to design a robot odometry method based on three-dimensional (3D) laser point cloud data, inertial measurement unit (IMU) data and real-time kinematic (RTK) data for underground environments with degraded spatial features and gravity fluctuations. This method improves mapping accuracy in two types of underground space: multi-layer spaces and large-scale scenarios.

Design/methodology/approach

An IMU–Laser–RTK fusion mapping algorithm based on an iterative Kalman filter was proposed, and the observation equation and Jacobian matrix were derived. To address the problem of inaccurate gravity estimation, the optimization of gravity is transformed into an optimization on SO(3), which avoids over-parameterization of the gravity vector.
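
A common way to realize the SO(3) gravity trick mentioned above is to keep the gravity vector at fixed norm and update it by rotating it with a small SO(3) increment, so only two effective degrees of freedom are optimized. The sketch below is a generic retraction of this kind, not the paper's derivation:

```python
# Sketch: fixed-norm gravity updated through an SO(3) rotation, avoiding
# the over-parameterization of optimizing a free 3-vector.
import numpy as np

G_NORM = 9.81  # fixed gravity magnitude (m/s^2)

def skew(w):
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    theta = np.linalg.norm(w)
    if theta < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / theta)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K

def update_gravity(g, delta):
    """Rotate the fixed-norm gravity vector by a small SO(3) increment;
    components of delta along g leave g unchanged, so 2 effective DOF."""
    g_new = exp_so3(delta) @ g
    return G_NORM * g_new / np.linalg.norm(g_new)

g = np.array([0.0, 0.0, -G_NORM])
print(update_gravity(g, np.array([0.01, -0.02, 0.0])))
```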

Findings

Compared with optimization-based methods, the computational cost is reduced. Without relying on a wheel odometer, simultaneous robot localization and 3D environment modeling in multi-layer spaces are realized. The performance of the proposed algorithm is tested and compared in two types of underground space, and its robustness and accuracy in multi-layer spaces and large-scale scenarios are verified. The results show that the root mean square error of the proposed algorithm is 0.061 m, achieving higher accuracy than the compared algorithms.

Originality/value

For scenarios with large loops and low feature scale, this algorithm can better complete loop closure and self-positioning, with a root mean square error that improves on other methods by more than a factor of two. The method proposed in this paper can complete autonomous positioning of the robot in underground spaces with hierarchical feature degradation and, at the same time, construct an accurate 3D map for subsequent research.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 September 2021

Yi Zhang and Rui Huang

Abstract

Purpose

With the booming development of computer, optical and sensing technologies and cybernetics, technical research in unmanned vehicles has advanced to a new era. This trend arouses great interest in simultaneous localization and mapping (SLAM). In particular, light detection and ranging (Lidar)-based SLAM systems have high measuring accuracy and insensitivity to illumination conditions, and have been widely used in industry. However, SLAM has some intractable problems, including degradation in less structured or uncontrived environments. To solve this problem, this paper aims to propose an adaptive scheme with a dynamic threshold to mitigate degradation.

Design/methodology/approach

An adaptive strategy with a dynamic threshold module is proposed to overcome degradation of the point cloud. Besides, a distortion correction process is presented in the local map to reduce the impact of noise in the iterative optimization process. This solution ensures adaptability to environmental changes.
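
Degradation in scan matching is often detected from the spectrum of the registration Hessian: when its smallest eigenvalue collapses, the environment under-constrains the pose. The sketch below pairs that check with a simple adaptive threshold; the adaptation rule is an assumption for illustration, not the paper's dynamic-threshold design:

```python
# Sketch: eigenvalue-based degeneracy detection with an adaptive threshold.
import numpy as np

class DegeneracyDetector:
    """Flags degraded registration when the smallest eigenvalue of the
    Gauss-Newton Hessian drops below an adaptive (dynamic) threshold."""

    def __init__(self, ratio=0.3, window=50):
        self.history = []
        self.ratio = ratio
        self.window = window

    def is_degenerate(self, JtJ):
        """JtJ: 6x6 approximate Hessian of the scan-matching problem."""
        lam_min = np.linalg.eigvalsh(JtJ)[0]  # eigenvalues ascending
        self.history = (self.history + [lam_min])[-self.window:]
        threshold = self.ratio * np.median(self.history)
        return lam_min < threshold
```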

Findings

Experimental results on both a public data set and field tests demonstrated that the algorithm is robust and self-adaptive, achieving higher localization accuracy and lower mapping error compared with existing methods.

Originality/value

Unlike other popular algorithms, this method does not rely on multi-sensor fusion to improve localization accuracy. Instead, the pure Lidar-based method with the dynamic threshold and distortion correction modules improves the accuracy and robustness of the localization results.

Details

Sensor Review, vol. 41 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 October 2018

Qifeng Yang, Daokui Qu, Fang Xu, Fengshan Zou, Guojian He and Mingze Sun

Abstract

Purpose

This paper aims to propose a series of approaches to solve the problem of the mobile robot motion control and autonomous navigation in large-scale outdoor GPS-denied environments.

Design/methodology/approach

Based on a model of a mobile robot with two driving wheels, a controller is designed and tested in obstacle-cluttered scenes in this paper. By using a priori “topology-geometry” map constructed from odometer data and an online matching algorithm for 3D-laser scanning points, a novel approach to outdoor localization with a 3D-laser scanner is proposed to solve the problem of poor localization accuracy in GPS-denied environments. A path planning strategy based on geometric feature analysis and a priority evaluation algorithm is also adopted to ensure the safety and reliability of the mobile robot’s autonomous navigation and control.
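
For context, a robot with two driving wheels is usually controlled through the unicycle model: a goal point is turned into linear and angular velocity commands, which map linearly to wheel speeds. The gains and wheel base below are assumptions for a minimal sketch, not the paper's controller:

```python
# Sketch: goal-seeking control for a differential-drive (two driving
# wheels) mobile robot under the unicycle model.
import math

def diff_drive_control(x, y, theta, gx, gy, k_v=0.5, k_w=1.5):
    """Return (v, w) commands steering a robot at pose (x, y, theta)
    toward the goal point (gx, gy)."""
    dx, dy = gx - x, gy - y
    rho = math.hypot(dx, dy)                              # distance to goal
    alpha = math.atan2(dy, dx) - theta                    # heading error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return k_v * rho, k_w * alpha

def to_wheel_speeds(v, w, wheel_base=0.4):
    """Map body velocities to left/right wheel linear speeds."""
    return v - 0.5 * wheel_base * w, v + 0.5 * wheel_base * w
```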

Findings

A series of experiments are conducted with a self-designed mobile robot platform in large-scale outdoor environments, and the experimental results show the validity and effectiveness of the proposed approach.

Originality/value

The problem of motion control for a differential-drive mobile robot is investigated first. At the same time, a novel approach to outdoor localization with a 3D-laser scanner is proposed to solve the problem of poor localization accuracy in GPS-denied environments, and a path planning strategy based on geometric feature analysis and a priority evaluation algorithm is adopted to ensure the safety and reliability of the mobile robot’s autonomous navigation and control.

Details

Assembly Automation, vol. 39 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 8 February 2022

Yanwu Zhai, Haibo Feng, Haitao Zhou, Songyuan Zhang and Yili Fu

Abstract

Purpose

This paper aims to propose a method to solve the problem of localization and mapping of a two-wheeled inverted pendulum (TWIP) robot on the ground using a stereo–inertial measurement unit (IMU) system. This method reparametrizes the pose according to the motion characteristics of TWIP and considers the impact of uneven ground on vision and IMU, which makes it more adaptable to the real environment.

Design/methodology/approach

When TWIP moves, it is constrained by the ground and swings back and forth to maintain balance. Therefore, the authors parameterize the robot pose as an SE(2) pose plus pitch according to the motion characteristics of TWIP. However, the authors do not omit disturbances in the other directions but model them as error terms, which are integrated into the visual constraints and IMU pre-integration constraints. Finally, the authors analyze the influence of the error terms on the vision and IMU constraints during the optimization process. Compared with traditional algorithms, this algorithm is simpler and better adapts to the real environment.
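
To make the reduced parameterization concrete, the sketch below expands an SE(2) pose plus pitch into a full 4x4 transform, with roll and height entering only as small error terms. The composition order of the rotations is an assumption for illustration, not the paper's convention:

```python
# Sketch: SE(2)-plus-pitch pose parameterization for a TWIP robot.
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def twip_pose_to_se3(x, y, yaw, pitch, roll_err=0.0, z_err=0.0):
    """Expand the reduced TWIP parameterization (SE(2) pose plus pitch)
    to a full transform; roll and z appear only as small error terms."""
    T = np.eye(4)
    T[:3, :3] = rot_z(yaw) @ rot_y(pitch) @ rot_x(roll_err)
    T[:3, 3] = [x, y, z_err]
    return T
```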

Findings

The results of indoor and outdoor experiments show that, for the TWIP robot, the method has better positioning accuracy and robustness compared with the state-of-the-art.

Originality/value

The algorithm in this paper is proposed for the localization and mapping of a TWIP robot. Different from the traditional positioning method on SE(3), this paper parameterizes the robot pose as an SE(2) pose plus pitch according to the motion of TWIP, and the motion disturbances in the other directions are integrated into the visual constraints and IMU pre-integration constraints as error terms. This simplifies the optimization parameters, better adapts to the real environment and improves the accuracy of positioning.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 June 2020

Zhe Wang, Xisheng Li, Xiaojuan Zhang, Yanru Bai and Chengcai Zheng

Abstract

Purpose

The purpose of this study is to use visual and inertial sensors to achieve real-time location. How to provide an accurate location has become a popular research topic in the field of indoor navigation. Although the complementarity of vision and inertia has been widely applied in indoor navigation, many problems remain, such as inertial sensor deviation calibration, unsynchronized visual and inertial data acquisition and large amounts of stored data.

Design/methodology/approach

First, this study demonstrates that the vanishing point (VP) evaluation function improves the precision of extraction, and the nearest ground corner point (NGCP) of the adjacent frame is estimated by pre-integrating the inertial sensor. The Sequential Similarity Detection Algorithm (SSDA) and Random Sample Consensus (RANSAC) algorithms are adopted to accurately match the adjacent NGCPs in the estimated region of interest. Second, the visual pose model is established by using the camera’s own parameters, the VP and the NGCP, and the inertial pose model is established by pre-integration. Third, the location is calculated by fusing the visual and inertial models.
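
The VP idea can be illustrated with a simple evaluation function: candidate intersections of detected line pairs are scored by how many lines pass close to them, and the best-supported candidate is kept. The homogeneous-line representation and the pixel threshold below are assumptions for the sketch, not the paper's evaluation function:

```python
# Sketch: picking a vanishing point as the line intersection supported by
# the most detected lines, a crude stand-in for a VP evaluation function.
import itertools
import numpy as np

def point_line_distance(vp, line):
    """Distance from a homogeneous point to a homogeneous image line."""
    return abs(line @ vp) / (np.hypot(line[0], line[1]) * abs(vp[2]) + 1e-12)

def best_vanishing_point(lines, thresh_px=2.0):
    """lines: iterable of homogeneous line coefficients (a, b, c)."""
    lines = [np.asarray(l, dtype=float) for l in lines]
    best_vp, best_score = None, -1
    for l1, l2 in itertools.combinations(lines, 2):
        vp = np.cross(l1, l2)     # candidate intersection point
        if abs(vp[2]) < 1e-9:     # the two lines are parallel in the image
            continue
        score = sum(point_line_distance(vp, l) < thresh_px for l in lines)
        if score > best_score:
            best_vp, best_score = vp / vp[2], score
    return best_vp, best_score
```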

Findings

In this paper, a novel method is proposed to fuse visual and inertial sensors to localize in indoor environments. The authors describe the building of an embedded hardware platform and compare the result with a mature method and POSAV310.

Originality/value

This paper proposes a VP evaluation function that is used to extract the optimal VP from the intersections of multiple parallel lines. To improve the extraction speed for adjacent frames, the authors propose fusing the NGCP of the current frame with the calibrated pre-integration to estimate the NGCP of the next frame. The visual pose model is established using the VP and the NGCP, together with the calibration of the inertial sensor. This theory provides linear processing equations for the gyroscope and accelerometer through the visual and inertial pose models.
