Search results

1 – 10 of 303
Article
Publication date: 4 July 2018

Zhe Gao, Jun Huang, Xiaofei Yang and Ping An

This paper aims to calibrate the mounted parameters between the LIDAR and the motor in a low-cost 3D LIDAR device. It proposes the model of the aimed 3D LIDAR device and analyzes…

Abstract

Purpose

This paper aims to calibrate the mounted parameters between the LIDAR and the motor in a low-cost 3D LIDAR device. It proposes the model of the aimed 3D LIDAR device and analyzes the influence of all mounted parameters. The study aims to find a more accurate and simpler way to calibrate these mounted parameters.

Design/methodology/approach

This method minimizes the deviation from coplanarity and the area of the scanned plane to estimate the mounted parameters. Within the method, the authors build different cost functions for the rotation parameters and the translation parameters; thus, the 4-degree-of-freedom (DOF) parameter estimation problem is decoupled into two 2-DOF estimation problems, achieving the calibration of both types of parameters.
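
The abstract does not give the cost functions explicitly; as a hedged illustration of a plane-based coplanarity cost of this general kind, the sketch below scores a candidate 2-DOF mount rotation by how flat the 3D points reconstructed from scans of a planar surface are. The names (`to_world`, `coplanarity_cost`) and the parameterization are assumptions, not the authors' formulation.

```python
# Illustrative sketch only (not the paper's exact algorithm): score a
# candidate 2-DOF mount rotation by the coplanarity of reconstructed points.
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

def to_world(scan_xy, motor_angle, mount_rp):
    """Lift 2D scan points (N, 2) to 3D using the motor angle and a
    candidate 2-DOF mount rotation (roll, pitch)."""
    mount = R.from_euler("xy", mount_rp).as_matrix()
    motor = R.from_euler("z", motor_angle).as_matrix()
    pts = np.column_stack([scan_xy, np.zeros(len(scan_xy))])
    return (motor @ mount @ pts.T).T

def coplanarity_cost(mount_rp, scans):
    """Smallest eigenvalue of the point covariance: (near) zero iff the
    reconstructed points of a scanned flat wall are truly coplanar."""
    pts = np.vstack([to_world(xy, ang, mount_rp) for xy, ang in scans])
    return np.linalg.eigvalsh(np.cov(pts.T))[0]

# scans = [(scan_xy, motor_angle), ...] collected while facing a flat wall:
# est = minimize(coplanarity_cost, x0=[0.0, 0.0], args=(scans,), method="Nelder-Mead")
```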

Findings

This paper proposes a calibration method for accurately estimating the mounted parameters between a 2D LIDAR and a rotating platform, which realizes the estimation of the 2-DOF rotation parameters and 2-DOF translation parameters without additional hardware.

Originality/value

Unlike previous plane-based calibration techniques, the main advantage of the proposed method is that the algorithm can estimate more parameters, and estimate them more accurately, without additional hardware.

Details

Sensor Review, vol. 39 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 16 April 2018

Hanieh Deilamsalehy and Timothy C. Havens

Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping…

Abstract

Purpose

Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method to fuse the information from several available sensors in order to improve the estimated pose from any individual sensor and calculate a more accurate pose for the moving platform.

Design/methodology/approach

Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its location is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, have been widely used for 3D localization. Yet other sensors, like 2D Light Detection And Ranging (LiDAR), can give a very precise estimation in a 2D plane but are not employed for 3D estimation, since the sensor cannot observe the full 6DOF. However, in some applications the robot moves almost on a plane for a considerable portion of the interval between two sensor readings, e.g., a ground vehicle moving on a flat surface or a drone flying at an almost constant altitude to collect visual data. This paper proposes a novel method using a “fuzzy inference system” that employs a 2D LiDAR in a 3D localization algorithm to improve pose estimation accuracy.
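
The paper's fuzzy rules are not given in the abstract; as a minimal sketch of the general idea, a reliability weight in [0, 1] for the 2D LiDAR can scale its measurement-noise covariance before a standard EKF update, so near-planar motion lets the 2D sensor pull harder on the fused pose. The membership function and all names here are hypothetical.

```python
import numpy as np

def fuzzy_weight(out_of_plane_motion, max_motion=0.05):
    """Toy membership function: weight -> 1 when the platform moved almost
    in-plane between two readings, -> 0 as out-of-plane motion grows."""
    return float(np.clip(1.0 - out_of_plane_motion / max_motion, 0.0, 1.0))

def ekf_update(x, P, z, H, R_base, w, eps=1e-3):
    """Standard (linear) EKF measurement update, with the 2D LiDAR noise
    R_base inflated by 1/w so a low reliability weight de-emphasizes it."""
    R_eff = R_base / max(w, eps)
    S = H @ P @ H.T + R_eff                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```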

Findings

The method determines the trajectory of the robot and the sensor reliability between two readings and, based on this information, defines the weight of the 2D sensor in the final fused pose by adjusting “extended Kalman filter” parameters. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Originality/value

To the best of the authors’ knowledge, this is the first time that a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any prior knowledge. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Details

International Journal of Intelligent Unmanned Systems, vol. 6 no. 2
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and…

Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper reviews the research state of LiDAR-based SLAM for robotic mapping and provides a literature survey from the perspective of various LiDAR types and configurations.

Findings

This paper conducted a comprehensive literature review of LiDAR-based SLAM systems based on three distinct LiDAR forms and configurations. The authors conclude that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 August 2022

Siyuan Huang, Limin Liu, Xiongjun Fu, Jian Dong, Fuyu Huang and Ping Lang

The purpose of this paper is to summarize the existing point cloud target detection algorithms based on deep learning, and provide reference for researchers in related fields. In…

Abstract

Purpose

The purpose of this paper is to summarize the existing deep-learning-based point cloud target detection algorithms and provide a reference for researchers in related fields. In recent years, owing to its outstanding performance in target detection in 2D images, deep learning technology has been applied to light detection and ranging (LiDAR) point cloud data to improve the automation and intelligence level of target detection. However, there are still difficulties and room for improvement in target detection from 3D point clouds. In this paper, the vehicle LiDAR target detection method is chosen as the research subject.

Design/methodology/approach

Firstly, the challenges of applying deep learning to point cloud target detection are described; secondly, solutions in relevant research are surveyed in response to these challenges. The currently popular target detection methods are classified, and some are compared to illustrate their advantages and disadvantages. Moreover, approaches to improving the accuracy of network target detection are introduced.
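
For context only: many point cloud detectors in this literature begin by voxelizing the raw cloud so that a 3D or bird's-eye-view network can consume it. The sketch below is a generic voxel-grid binning step under assumed sizes, not a method from this survey.

```python
import numpy as np

def voxelize(points, voxel_size=0.2, max_pts_per_voxel=32):
    """Bin raw LiDAR points (N, 3) into voxels keyed by integer grid
    coordinates, capping the points kept per voxel as voxel-based
    detectors commonly do before feature encoding."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    voxels = {}
    for key, pt in zip(map(tuple, keys), points):
        bucket = voxels.setdefault(key, [])
        if len(bucket) < max_pts_per_voxel:
            bucket.append(pt)
    return {k: np.stack(v) for k, v in voxels.items()}

# grid = voxelize(np.random.rand(1000, 3) * 10.0)  # voxel index -> points
```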

Findings

Finally, this paper summarizes the shortcomings of existing methods and outlines prospective development trends.

Originality/value

This paper introduces existing deep-learning-based point cloud target detection methods, which can be applied in driverless vehicles, digital mapping, traffic monitoring and other fields, and provides a reference for researchers in related fields.

Details

Sensor Review, vol. 42 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 11 June 2021

Ruihao Lin, Junzhe Xu and Jianhua Zhang

Large-scale and precise three-dimensional (3D) maps play an important role in autonomous driving and robot positioning. However, it is difficult to get accurate poses for mapping…

Abstract

Purpose

Large-scale and precise three-dimensional (3D) maps play an important role in autonomous driving and robot positioning. However, it is difficult to get accurate poses for mapping. On the one hand, global positioning system (GPS) data are not always reliable, owing to multipath effects and poor satellite visibility in many urban environments. On the other hand, LiDAR-based odometry has accumulative errors. This paper aims to propose a novel simultaneous localization and mapping (SLAM) system to obtain large-scale and precise 3D maps.

Design/methodology/approach

The proposed SLAM system optimally integrates the GPS data and a LiDAR odometry. In this system, two core algorithms are developed. To effectively verify the reliability of the GPS data, the VGL (Verify GPS data with LiDAR data) algorithm is proposed, which uses the points from the LiDAR. To obtain accurate poses in GPS-denied areas, this paper proposes the EG-LOAM algorithm, a LiDAR odometry with a local optimization strategy that eliminates accumulative errors by means of reliable GPS data.
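
The abstract does not spell out how VGL checks the GPS data; one plausible sketch of the underlying idea is a chi-square gate on the innovation between a GPS fix and the pose predicted by LiDAR odometry. The threshold and names below are assumptions, not the paper's algorithm.

```python
import numpy as np

def gps_consistent(gps_xy, lidar_pred_xy, cov_pred, chi2_gate=5.99):
    """Accept a GPS fix only if its squared Mahalanobis distance to the
    LiDAR-odometry prediction passes a chi-square gate (5.99 ~ 95%, 2 DOF)."""
    innov = np.asarray(gps_xy) - np.asarray(lidar_pred_xy)
    return innov @ np.linalg.inv(cov_pred) @ innov < chi2_gate
```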

Findings

On the KITTI data set and a customized outdoor data set, the system is able to generate high-precision 3D maps in both GPS-denied areas and areas covered by GPS. Meanwhile, the VGL algorithm is shown to verify the reliability of the GPS data with confidence, and EG-LOAM outperforms the state-of-the-art baselines.

Originality/value

A novel SLAM system is proposed to obtain large-scale and precise 3D maps. To improve the robustness of the system, the VGL algorithm and EG-LOAM are designed. The whole system, as well as the two algorithms, performs satisfactorily in experiments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 June 2021

Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu

For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions…

Abstract

Purpose

For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, the accuracy of perception is closely related to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve the accuracy of environmental perception, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which can reduce the false-detection rate resulting from sensor misdetection.

Design/methodology/approach

Firstly, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.
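
The depth fusion model itself is not detailed in the abstract; as a hedged sketch of one way sensor cross-checking can suppress false alarms, the snippet below keeps a radar detection only when enough lidar points support it. Radii, counts and names are illustrative assumptions.

```python
import numpy as np

def cross_validate(radar_objs, lidar_pts, radius=1.0, min_support=10):
    """Confirm a radar detection (x, y) only if at least min_support lidar
    points fall within radius of it; dust or mist returns seen only by the
    lidar would likewise lack radar support and could be filtered out."""
    lidar_pts = np.asarray(lidar_pts)            # (N, 2), vehicle frame
    confirmed = []
    for obj in radar_objs:
        d = np.linalg.norm(lidar_pts - np.asarray(obj), axis=1)
        if np.count_nonzero(d < radius) >= min_support:
            confirmed.append(obj)
    return confirmed
```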

Findings

The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with a single sensor, can filter out radar false alarms and the dust or mist point clouds received by the lidar. The accuracy of object detection is thus also improved under challenging conditions.

Originality/value

The multi-layer self-calibration method helps improve the accuracy of the calibration and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by the lidar, which improves ICVs’ performance in challenging conditions.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 October 2021

Zhe Liu, Zhijian Qiao, Chuanzhe Suo, Yingtian Liu and Kefan Jin

This paper aims to study the localization problem for autonomous industrial vehicles in complex industrial environments. Aiming at practical applications, the goal is to…

Abstract

Purpose

This paper aims to study the localization problem for autonomous industrial vehicles in complex industrial environments. Aiming at practical applications, the goal is to build a map-less localization system that can be used in the presence of dynamic obstacles and short-term and long-term environment changes.

Design/methodology/approach

The proposed system contains four main modules: long-term place graph updating, global localization and re-localization, location tracking and pose registration. The first two modules fully exploit deep-learning-based three-dimensional point cloud learning techniques to achieve the map-less global localization task in large-scale environments. The location tracking module implements the particle filter framework with a newly designed perception model to track the vehicle location during movements. Finally, the pose registration module uses visual information to exclude the influence of dynamic obstacles and short-term changes and further introduces a point cloud registration network to estimate the accurate vehicle pose.
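
The abstract names a particle filter with a newly designed perception model; that model is not given, so the sketch below shows only a generic single iteration (predict, reweight, resample), with `meas_logp` standing in as a hypothetical measurement log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_step(particles, weights, motion, meas_logp, noise=0.05):
    """One generic particle-filter iteration: diffuse particles with the
    odometry increment, reweight by a measurement log-likelihood and
    resample (multinomial)."""
    particles = particles + motion + rng.normal(0.0, noise, particles.shape)
    weights = weights * np.exp(meas_logp(particles))
    weights = weights / weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```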

Findings

Comprehensive experiments in real industrial environments demonstrate the effectiveness, robustness and practical applicability of the map-less localization approach.

Practical implications

This paper provides comprehensive experiments in real industrial environments.

Originality/value

The system can be used in practical automated industrial vehicles for long-term localization tasks. The dynamic objects, short-/long-term environment changes and hardware limitations of industrial vehicles are all considered in the system design. Thus, this work moves a big step toward achieving real implementations of autonomous localization in practical industrial scenarios.

Article
Publication date: 29 October 2019

Ravinder Singh and Kuldeep Singh Nagla

The purpose of this research is to provide the necessary and resourceful information regarding range sensors to select the best-fit sensor for robust autonomous navigation…

Abstract

Purpose

The purpose of this research is to provide the necessary and resourceful information regarding range sensors to select the best-fit sensor for robust autonomous navigation. Autonomous navigation is an emerging segment in the field of mobile robots, in which the mobile robot navigates the environment with a high level of autonomy and little human interaction. Sensor-based perception is a prevailing aspect in the autonomous navigation of a mobile robot, along with localization and path planning. Various range sensors are used to get an efficient perception of the environment, but selecting the best-fit sensor to solve the navigation problem is still a vital task.

Design/methodology/approach

Autonomous navigation relies on the sensory information of various sensors, and each sensor relies on various operational parameters/characteristics for reliable functioning. This study presents a simple strategy to select the best-fit sensor based on various parameters, such as environment, 2D/3D navigation, accuracy, speed and environmental conditions, for the reliable autonomous navigation of a mobile robot.
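
The abstract does not give the selection strategy a concrete form; one simple way such a parameter-based comparison could be operationalized is a weighted score over the criteria, as in this sketch with entirely made-up ratings and weights.

```python
# Hypothetical weighted-scoring sketch; ratings and weights are invented.
criteria = ["accuracy", "range", "3d_capable", "robust_to_light", "cost"]
weights = [0.30, 0.20, 0.20, 0.20, 0.10]        # tuned per application

sensors = {                                      # illustrative 0-1 ratings
    "2D LiDAR": [0.9, 0.6, 0.0, 1.0, 0.6],
    "3D LiDAR": [0.9, 0.8, 1.0, 1.0, 0.2],
    "stereo camera": [0.6, 0.5, 1.0, 0.2, 0.9],
    "ultrasonic": [0.3, 0.2, 0.0, 1.0, 1.0],
}

best = max(sensors, key=lambda s: sum(w * r for w, r in zip(weights, sensors[s])))
print(best)   # sensor with the highest weighted score under this weighting
```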

Findings

This paper provides a comparative analysis of the diverse range sensors used in mobile robotics with respect to aspects such as accuracy, computational load, 2D/3D navigation and environmental conditions, so as to select the best-fit sensors for achieving robust navigation of an autonomous mobile robot.

Originality/value

This paper provides a straightforward platform for researchers to select the best range sensor for diverse robotics applications.

Article
Publication date: 15 September 2023

Kaushal Jani

This article takes into account object identification, enhanced visual feature optimization, cost effectiveness and speed selection in response to terrain conditions. Neither…

Abstract

Purpose

This article takes into account object identification, enhanced visual feature optimization, cost effectiveness and speed selection in response to terrain conditions. Neither supervised machine learning nor manual engineering is used in this work. Instead, the OTV educates itself without instruction from humans or labeling. Beyond its link to stopping distance and lateral mobility, choosing the right speed is crucial. One of the biggest problems with autonomous operations is accurate perception. Perceptive technology typically focuses on obstacle avoidance. At high speeds, however, the vehicle's shock is governed by the terrain's roughness. The precision needed to recognize difficult terrain is far higher than the accuracy needed to avoid obstacles.

Design/methodology/approach

Robots that can drive unattended in an unfamiliar environment are needed for the Orbital Transfer Vehicle (OTV) to clear space debris. In recent years, OTV research has attracted more attention and revealed several insights for robot systems in various applications. Improvements to advanced assistance systems, like lane departure warning and intelligent speed adaptation systems, are eagerly sought after by industry, particularly space enterprises. From a computer science perspective, the OTV serves as a research basis for advancements in machine learning, computer vision, sensor data fusion, path planning, decision making and intelligent autonomous behavior. In the framework of an autonomous OTV, this study offers a few perceptual technologies for autonomous driving.

Findings

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Originality/value

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 1 September 2021

Yi Zhang and Rui Huang

With the booming development of computer, optical and sensing technologies and cybernetics, research in unmanned vehicles has advanced to a new era. This trend…

Abstract

Purpose

With the booming development of computer, optical and sensing technologies and cybernetics, research in unmanned vehicles has advanced to a new era. This trend arouses great interest in simultaneous localization and mapping (SLAM). In particular, light detection and ranging (Lidar)-based SLAM systems have the characteristics of high measuring accuracy and insensitivity to illumination conditions, and have been widely used in industry. However, SLAM has some intractable problems, including degradation in less structured or uncontrived environments. To solve this problem, this paper aims to propose an adaptive scheme with a dynamic threshold to mitigate degradation.

Design/methodology/approach

An adaptive strategy with a dynamic module is proposed to overcome degradation of the point cloud. Besides, a distortion correction process is presented in the local map to reduce the impact of noise in the iterative optimization process. The solution ensures adaptability to environmental changes.
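
The dynamic threshold is not defined in the abstract; a common way to detect scan-matching degradation, which a threshold of this kind could gate, is to inspect the eigenvalues of the registration problem's approximate Hessian J^T J. The sketch below is illustrative and assumes a history of recently observed minimum eigenvalues; it is not the paper's algorithm.

```python
import numpy as np

def is_degenerate(J, history=None, dyn_scale=0.1, rel_floor=1e-3):
    """Flag degradation when the weakest direction of J^T J falls below a
    threshold that adapts to recently observed minimum eigenvalues."""
    eigvals = np.linalg.eigvalsh(J.T @ J)        # ascending order
    if history:                                   # dynamic threshold from recent scans
        thresh = dyn_scale * float(np.median(history))
    else:                                         # cold start: relative floor
        thresh = rel_floor * eigvals[-1]
    return bool(eigvals[0] < thresh)
```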

Findings

Experimental results on both a public data set and field tests demonstrate that the algorithm is robust and self-adaptive, achieving higher localization accuracy and lower mapping error than existing methods.

Originality/value

Unlike other popular algorithms, the method does not rely on multi-sensor fusion to improve localization accuracy. Instead, the pure Lidar-based method with a dynamic threshold and distortion correction module improves the accuracy and robustness of the localization results.

Details

Sensor Review, vol. 41 no. 4
Type: Research Article
ISSN: 0260-2288
