Search results

1 – 10 of 21
Article
Publication date: 28 May 2021

Guangbing Zhou, Jing Luo, Shugong Xu, Shunqing Zhang, Shige Meng and Kui Xiang


Abstract

Purpose

Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation depends on a single sensor to perform autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach

Data from multiple sensors, i.e. an inertial measurement unit, an odometer and a laser radar, are collected. An extended Kalman filter (EKF) is then used to fuse these data, so that the mobile robot can perform autonomous localization in complex indoor environments according to the proposed EKF-based MDF method.
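
The abstract does not spell out the filter equations, so the following is only a minimal sketch of how an EKF can fuse odometry (from the IMU/odometer side) with an absolute 2D position fix such as one obtained from laser scan matching. The state vector, motion and measurement models, and noise values are illustrative assumptions, not the authors' formulation.

```python
# Minimal EKF sketch: fuse odometry with an absolute 2D position fix.
# State, models and noise values are illustrative assumptions.
import numpy as np

def ekf_predict(x, P, v, w, dt, Q):
    """Propagate state [x, y, theta] with a unicycle odometry model."""
    theta = x[2]
    x_pred = x + np.array([v * dt * np.cos(theta),
                           v * dt * np.sin(theta),
                           w * dt])
    F = np.array([[1, 0, -v * dt * np.sin(theta)],
                  [0, 1,  v * dt * np.cos(theta)],
                  [0, 0,  1]])
    return x_pred, F @ P @ F.T + Q

def ekf_update(x, P, z, R):
    """Correct the prediction with a position measurement z = [x, y]."""
    H = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0]])
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    return x + K @ y, (np.eye(3) - K @ H) @ P

# Example: one predict/update cycle
x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_predict(x, P, v=0.5, w=0.1, dt=0.1, Q=np.diag([1e-3, 1e-3, 1e-4]))
x, P = ekf_update(x, P, z=np.array([0.06, 0.01]), R=np.diag([0.02, 0.02]))
```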

Findings

The proposed method has been experimentally verified in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness during navigation.

Originality/value

Indoor localization precision depends largely on the data collected from multiple sensors. The proposed method fuses these data effectively and can guide the mobile robot to perform autonomous navigation (AN) in indoor environments. The results of this paper can therefore be used for AN in complex and unknown indoor environments.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 17 August 2012

Samuel B. Lazarus, Antonios Tsourdos, Brian A. White, Peter Silson, Al Savvaris, Camille‐Alain Rabbath and Nicolas Lèchevin


Abstract

Purpose

This paper aims to describe a recently proposed algorithm for terrain-based cooperative UAV mapping of unknown complex obstacles in a stationary environment, where the complex obstacles are curved in nature. It also aims to use an extended Kalman filter (EKF) to estimate the fused position of the UAVs and to apply the 2-D splinegon technique to build the map of the complex-shaped obstacles. The paths of the UAVs are dictated by the Dubins path planning algorithm. The focus is to achieve guaranteed performance of sensor-based mapping of uncertain environments using multiple UAVs.

Design/methodology/approach

An extended Kalman filter is used to estimate the positions of the UAVs, and the 2-D splinegon technique is used to build the map of the complex obstacles; the paths of the UAVs are dictated by the Dubins path planning algorithm.
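
The abstract names the Dubins path planner without giving its equations. As a small, hedged illustration, the sketch below computes the length of a single Dubins word (LSL, left-straight-left) between two planar poses for a fixed turning radius, using the standard textbook formulation; a full planner would evaluate all six words and choose the shortest. Function and variable names are ours.

```python
# Length of the LSL (left-straight-left) Dubins word between two planar poses.
# Standard formulation; names and the fixed radius are illustrative only.
import math

def mod2pi(a):
    return a % (2.0 * math.pi)

def dubins_lsl_length(x0, y0, th0, x1, y1, th1, r):
    dx, dy = x1 - x0, y1 - y0
    d = math.hypot(dx, dy) / r                 # normalised distance
    theta = math.atan2(dy, dx)
    alpha, beta = mod2pi(th0 - theta), mod2pi(th1 - theta)

    p_sq = 2 + d * d - 2 * math.cos(alpha - beta) \
           + 2 * d * (math.sin(alpha) - math.sin(beta))
    if p_sq < 0:
        return None                            # LSL word not feasible here
    tmp = math.atan2(math.cos(beta) - math.cos(alpha),
                     d + math.sin(alpha) - math.sin(beta))
    t = mod2pi(-alpha + tmp)                   # first left arc
    p = math.sqrt(p_sq)                        # straight segment
    q = mod2pi(beta - tmp)                     # second left arc
    return (t + p + q) * r

print(dubins_lsl_length(0, 0, 0, 4, 4, math.pi / 2, r=1.0))
```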

Findings

The guaranteed performance is quantified by explicit bounds on the position estimates of the multiple UAVs for mapping of the complex obstacles using the 2-D splinegon technique. The newly proposed algorithm provides an efficient and robust way of performing terrain-based mapping of complex obstacles, and the proposed method offers mathematically provable performance guarantees that are achievable in practice.

Originality/value

The paper's main contribution is in mapping complex-shaped curvilinear objects using the 2-D splinegon technique. This is a new approach in which the fused EKF-estimated positions are used, with a limited number of sensor measurements, to build the map of the complex obstacles.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 5 no. 3
Type: Research Article
ISSN: 1756-378X


Article
Publication date: 10 May 2013

Ling Chen, Sen Wang, Klaus McDonald‐Maier and Huosheng Hu



Abstract

Purpose

The main purpose of this paper is to investigate two key elements of localization and mapping of autonomous underwater vehicles (AUVs), i.e. to review the various sensors and algorithms used for underwater localization and mapping, and to make suggestions for future research.

Design/methodology/approach

The authors first review the various sensors and algorithms used for AUVs in terms of their basic working principles, characteristics, advantages and disadvantages. A statistical analysis is then carried out on 35 AUV platforms according to the application circumstances of their sensors and algorithms.

Findings

As real-world applications have different requirements and specifications, it is necessary to select the most appropriate solution by balancing various factors such as accuracy, cost and size. Although highly accurate localization and mapping in an underwater environment is very difficult, increasingly accurate and robust navigation solutions will be achieved as both sensors and algorithms develop.

Research limitations/implications

This paper provides an overview of state-of-the-art underwater localisation and mapping algorithms and systems. No experiments are conducted for verification.

Practical implications

The paper gives readers a clear guideline for finding suitable underwater localisation and mapping algorithms and systems for the practical applications at hand.

Social implications

There is a wide range of audiences who will benefit from reading this comprehensive survey of autonomous localisation and mapping of AUVs.

Originality/value

The paper will provide useful information and suggestions to research students, engineers and scientists who work in the field of autonomous underwater vehicles.

Details

International Journal of Intelligent Unmanned Systems, vol. 1 no. 2
Type: Research Article
ISSN: 2049-6427


Book part
Publication date: 5 October 2018

Xin Wang and Chris Gordon


Abstract

This chapter presents a novel human arm gesture tracking and recognition technique based on fuzzy logic and nonlinear Kalman filtering with applications in crane guidance. A Kinect visual sensor and a Myo armband sensor are jointly utilised to perform data fusion to provide more accurate and reliable information on Euler angles, angular velocity, linear acceleration and electromyography data in real time. Dynamic equations for arm gesture movement are formulated with Newton–Euler equations based on Denavit–Hartenberg parameters. Nonlinear Kalman filtering techniques, including the extended Kalman filter and the unscented Kalman filter, are applied in order to perform reliable sensor fusion, and their tracking accuracies are compared. A Sugeno-type fuzzy inference system is proposed for arm gesture recognition. Hardware experiments have shown the efficacy of the proposed method for crane guidance applications.
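
The chapter builds the arm dynamics from Denavit–Hartenberg (DH) parameters. As a brief illustration of that starting point, the sketch below assembles the standard DH link transform and chains it for a toy two-link arm; the parameter values are placeholders, not the chapter's.

```python
# Standard Denavit-Hartenberg link transform; parameter values are placeholders.
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform from link i-1 to link i (standard DH convention)."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [0.0,      sa,       ca,      d],
                     [0.0,     0.0,      0.0,    1.0]])

# Forward kinematics of a toy two-link planar arm (placeholder DH rows).
dh_rows = [(np.deg2rad(30), 0.0, 0.30, 0.0),
           (np.deg2rad(45), 0.0, 0.25, 0.0)]
T = np.eye(4)
for row in dh_rows:
    T = T @ dh_transform(*row)
print(T[:3, 3])   # end-effector position
```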

Details

Fuzzy Hybrid Computing in Construction Engineering and Management
Type: Book
ISBN: 978-1-78743-868-2


Article
Publication date: 5 October 2021

Umair Ali, Wasif Muhammad, Muhammad Jehanzed Irshad and Sajjad Manzoor


Abstract

Purpose

Self-localization of an underwater robot using a global positioning sensor or other radio positioning systems is not possible; as an alternative, onboard sensor-based self-localization estimation provides a possible solution. However, the dynamic and unstructured nature of the sea environment and highly noise-affected sensory information make underwater robot self-localization a challenging research topic. State-of-the-art multi-sensor fusion algorithms are deficient in dealing with multi-sensor data: the Kalman filter, for example, cannot deal with non-Gaussian noise, while non-parametric filters such as Monte Carlo localization have a high computational cost. An optimal fusion policy with low computational cost therefore remains an important research question for underwater robot localization.

Design/methodology/approach

In this paper, the authors propose a novel predictive coding/biased competition-divisive input modulation (PC/BC-DIM) neural network-based multi-sensor fusion approach, which has the capability to fuse and approximate noisy sensory information in an optimal way.
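
The abstract does not reproduce the PC/BC-DIM update rules. The sketch below shows the general shape of a divisive-input-modulation iteration (reconstruct the input, compute a divisive error, update the prediction neurons multiplicatively) as commonly described for this family of networks; the weight matrices, constants and shapes are assumptions rather than the authors' exact formulation.

```python
# Generic divisive-input-modulation style iteration (a reading of the PC/BC-DIM
# family, not the authors' exact formulation); weights and constants are assumed.
import numpy as np

def dim_iterate(x, W, V, n_iter=50, eps1=1e-6, eps2=1e-3):
    """x: input vector (n,); W, V: (m, n) feedforward/feedback weights."""
    y = np.full(W.shape[0], 1e-2)              # prediction-neuron activations
    for _ in range(n_iter):
        r = V.T @ y                            # reconstruction of the input
        e = x / (eps2 + r)                     # divisive error signal
        y = (eps1 + y) * (W @ e)               # multiplicative update
    return y, r

rng = np.random.default_rng(0)
W = rng.random((4, 8)); W /= W.sum(axis=1, keepdims=True)
V = W / W.max(axis=1, keepdims=True)
y, r = dim_iterate(rng.random(8), W, V)
```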

Findings

A low mean localization error (1.2704 m) and a low computation cost (2.2 ms) show that the proposed method performs better than existing techniques in such dynamic and unstructured environments.

Originality/value

To the best of the authors' knowledge, this work provides a novel multi-sensor fusion approach that overcomes the problems of existing methods, handling non-Gaussian noise while achieving higher self-localization estimation accuracy and reduced computational cost.

Details

Sensor Review, vol. 41 no. 5
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 16 April 2018

Hanieh Deilamsalehy and Timothy C. Havens


Abstract

Purpose

Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method to fuse the information from several available sensors in order to improve the estimated pose from any individual sensor and calculate a more accurate pose for the moving platform.

Design/methodology/approach

Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its pose is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, have been widely used for 3D localization. Other sensors, such as 2D Light Detection And Ranging (LiDAR), can give a very precise estimate in a 2D plane, but they are not normally employed for 3D estimation because they cannot observe the full 6DOF. However, in some applications there are considerable periods during which the robot moves almost on a plane between two sensor readings, e.g. a ground vehicle moving on a flat surface or a drone flying at an almost constant altitude to collect visual data. In this paper, a novel method using a "fuzzy inference system" is proposed that employs a 2D LiDAR in a 3D localization algorithm in order to improve the pose estimation accuracy.
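
The paper's mechanism for weighting the 2D LiDAR is not detailed in the abstract. One plausible reading, sketched below, is a small Sugeno-style rule base that maps how planar the recent motion looks to a reliability weight, which then inflates or deflates the LiDAR measurement-noise covariance used in the EKF update. All membership functions, rules and scalings here are assumptions for illustration.

```python
# One plausible way to let a Sugeno-style fuzzy inference weight a 2D LiDAR
# inside a 3D EKF: planar motion -> high reliability -> small measurement noise.
# Membership functions, rules and scaling are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return max(0.0, min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)))

def lidar_reliability(delta_z, tilt_deg):
    """Sugeno-style rules: crisp outputs combined by firing-strength average."""
    rules = [
        (min(tri(delta_z, 0.0, 0.0, 0.05), tri(tilt_deg, 0.0, 0.0, 2.0)), 1.0),   # planar
        (min(tri(delta_z, 0.0, 0.05, 0.20), tri(tilt_deg, 0.0, 2.0, 10.0)), 0.5),  # mild
        (max(tri(delta_z, 0.05, 0.20, 1.0), tri(tilt_deg, 2.0, 10.0, 45.0)), 0.1), # non-planar
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-12
    return num / den

R_lidar_base = np.diag([0.02, 0.02])              # nominal 2D LiDAR noise
rel = lidar_reliability(delta_z=0.01, tilt_deg=1.0)
R_lidar = R_lidar_base / max(rel, 1e-3)           # low reliability -> inflated noise
print(rel, R_lidar)
```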

Findings

The method determines the trajectory of the robot and the sensor reliability between two readings and, based on this information, defines the weight of the 2D sensor in the final fused pose by adjusting the extended Kalman filter parameters. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Originality/value

To the best of the authors' knowledge, this is the first time that a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any prior knowledge. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Details

International Journal of Intelligent Unmanned Systems, vol. 6 no. 2
Type: Research Article
ISSN: 2049-6427


Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He


Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper reviews the research state of LiDAR-based SLAM for robotic mapping and provides a literature survey from the perspective of various LiDAR types and configurations.

Findings

This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X


Content available
Article
Publication date: 22 July 2021

Chenguang Yang, Bin Xu, Shuai Li and Xuefeng Zhou


Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 16 March 2015

Shengbo Sang, Ruiyong Zhai, Wendong Zhang, Qirui Sun and Zhaoying Zhou


Abstract

Purpose

This study aims to design a new low-cost localization platform for estimating the location and orientation of a pedestrian in a building. The micro-electro-mechanical systems (MEMS) sensor error compensation and the algorithm were improved to achieve the required localization and altitude accuracy.

Design/methodology/approach

The platform hardware was designed with common, inexpensive, low-performance MEMS sensors, with a barometric altimeter employed to augment the altitude measurement. The inertial navigation system (INS)-extended Kalman filter (EKF)-zero-velocity updating (ZUPT) (INS-EKF-ZUPT [IEZ])-extended methods and the pedestrian dead reckoning (PDR) (IEZ + PDR) algorithm were modified and improved, with altitude determined from both the acceleration-integrated height and the pressure altitude. An "AND" logic on the acceleration and angular rate data was presented to update the stance phases.
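
The "AND" logic described above can be illustrated as follows: a sample is treated as a stance sample only when the acceleration magnitude stays near gravity and the angular rate stays small, and each stance sample then triggers a zero-velocity (ZUPT) pseudo-measurement. The thresholds and the simple per-sample test below are assumptions, not the paper's tuned values.

```python
# Illustrative stance-phase detection with an "AND" of acceleration and angular
# rate conditions, as used to trigger ZUPT updates; thresholds are assumptions.
import numpy as np

G = 9.81

def is_stance(acc, gyro, acc_tol=0.4, gyro_tol=0.2):
    """acc, gyro: 3-vectors (m/s^2, rad/s). Stance only if BOTH conditions hold."""
    acc_ok = abs(np.linalg.norm(acc) - G) < acc_tol      # near-gravity acceleration
    gyro_ok = np.linalg.norm(gyro) < gyro_tol            # nearly no rotation
    return acc_ok and gyro_ok

def zupt_update(v, P_v, R_zupt=1e-4):
    """Kalman-style zero-velocity pseudo-measurement on the velocity states."""
    S = P_v + R_zupt * np.eye(3)
    K = P_v @ np.linalg.inv(S)
    v_new = v + K @ (np.zeros(3) - v)        # measurement is exactly zero velocity
    P_new = (np.eye(3) - K) @ P_v
    return v_new, P_new

acc = np.array([0.1, -0.2, 9.75]); gyro = np.array([0.02, 0.01, -0.03])
v, P_v = np.array([0.05, -0.02, 0.01]), np.eye(3) * 0.01
if is_stance(acc, gyro):
    v, P_v = zupt_update(v, P_v)
```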

Findings

The new platform was tested in real three-dimensional (3D) in-building scenarios, achieving position errors below 0.5 m over a 50-m-long corridor route and below 0.1 m on stairs. The algorithm is robust for both walking motion and fast dynamic motion.

Originality/value

The paper presents a new self-developed, integrated platform. The IEZ-extended methods, the modified PDR (IEZ + PDR) algorithm and the "AND" logic on acceleration and angular rate data improve localization and altitude accuracy. This provides strong support for the growing demand for 3D indoor positioning in universal applications with ordinary sensors.

Details

Sensor Review, vol. 35 no. 2
Type: Research Article
ISSN: 0260-2288


Article
Publication date: 26 October 2018

Song Hua, Huiyin Huang, Fangfang Yin and Chunling Wei


Abstract

Purpose

This paper aims to propose a constant-gain Kalman filter algorithm based on the projection method and constant-dimension projection, which ensures that the dimension of the observation matrix is maintained when a satellite carries multiple sensors.

Design/methodology/approach

First, a time-invariant observation matrix is determined with the projection method, which does not require the Jacobian matrix to be calculated. Second, a constant-gain matrix replaces the EKF (extended Kalman filter) gain matrix, which would otherwise require online computation, considerably improving the stability and real-time properties of the algorithm.
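
The abstract replaces the online EKF gain with a constant gain. One common way to obtain such a gain for a time-invariant linear model, shown below, is to solve the discrete algebraic Riccati equation offline and reuse the resulting steady-state gain at every step; this illustrates the constant-gain idea only, not the authors' projection-based construction, and the model matrices are placeholders.

```python
# Constant-gain Kalman update: solve the Riccati equation once offline and reuse
# the steady-state gain online. Illustrates the constant-gain idea only (not the
# authors' projection-based construction); A, H, Q, R are placeholder matrices.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 1.0], [0.0, 1.0]])      # simple constant-velocity model
H = np.array([[1.0, 0.0]])
Q = np.diag([1e-4, 1e-4])
R = np.array([[1e-2]])

# Offline: steady-state prediction covariance and the constant gain.
P_ss = solve_discrete_are(A.T, H.T, Q, R)
K_const = P_ss @ H.T @ np.linalg.inv(H @ P_ss @ H.T + R)

def constant_gain_step(x, z):
    """Online step: predict with A, correct with the precomputed constant gain."""
    x_pred = A @ x
    return x_pred + K_const @ (z - H @ x_pred)

x = np.zeros(2)
for z in [np.array([0.1]), np.array([0.18]), np.array([0.32])]:
    x = constant_gain_step(x, z)
print(x)
```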

Findings

The simulation results indicate that compared to the EKF algorithm, the constant-gain Kalman filter algorithm has a considerably lower computational burden and improved real-time properties and stability without a significant loss of accuracy. The algorithm based on the constant dimension projection has better real-time properties, simpler computations and greater fault tolerance than the conventional EKF algorithm when handling an attitude determination system with three or more star trackers.

Originality/value

In satellite attitude determination systems, the constant-gain Kalman filter algorithm based on the projection method reduces the large computational burden and improves the real-time properties of the EKF algorithm.

Details

Aircraft Engineering and Aerospace Technology, vol. 90 no. 8
Type: Research Article
ISSN: 1748-8842

