Search results
1 – 10 of 560
Abstract
Purpose
Indoor hallways are among the most common and indispensable parts of people’s daily lives and of commercial and industrial activities. This paper aims to achieve high-precision, dense 3D reconstruction of narrow, elongated indoor hallways and proposes a reconstruction system based on rotating LiDAR.
Design/methodology/approach
This paper develops an orthogonal biaxial rotating LiDAR sensing device for the low-texture, narrow structures of hallways, which can capture panoramic point clouds containing rich features. A discrete interval scanning method is proposed that accounts for the characteristics of the indoor hallway environment and of rotating LiDAR. Building on the error model of the LiDAR, this paper proposes a confidence-based point cloud fusion method to improve reconstruction accuracy.
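The abstract does not specify the fusion rule, but the general idea of confidence-based fusion can be illustrated by inverse-variance weighting of repeated measurements of the same surface point. The following is a generic sketch, not the authors' algorithm; the `fuse_points` helper and the per-measurement noise model are illustrative assumptions:

```python
import numpy as np

def fuse_points(points, sigmas):
    """Fuse repeated measurements of one surface point by confidence weighting.

    points: (N, 3) XYZ measurements of the same point from different scans
    sigmas: (N,) per-measurement noise std. dev. (e.g. from a range-error model)
    Returns the inverse-variance weighted estimate.
    """
    points = np.asarray(points, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2  # confidence = inverse variance
    return (w[:, None] * points).sum(axis=0) / w.sum()

# A near, low-noise measurement pulls the estimate toward itself:
fused = fuse_points([[1.00, 0.0, 0.0], [1.10, 0.0, 0.0]], sigmas=[0.01, 0.03])
```

With these weights the fused x-coordinate lands at 1.01, much closer to the low-noise measurement, which is the qualitative behaviour a confidence-based fusion aims for.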
Findings
In two different indoor hallway environments, the proposed 3D reconstruction system obtains high-precision, dense reconstruction models. Meanwhile, the confidence-based point cloud fusion algorithm is shown to improve the accuracy of 3D reconstruction.
Originality/value
A 3D reconstruction system was designed to obtain a high-precision and dense indoor hallway environment model. A discrete interval scanning method suitable for rotating LiDAR and hallway environments was proposed. A confidence-based point cloud fusion algorithm was designed to improve the accuracy of LiDAR 3D reconstruction. The entire system showed satisfactory performance in experiments.
Dan Zhang, Junji Yuan, Haibin Meng, Wei Wang, Rui He and Sen Li
Abstract
Purpose
In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific types of data, achieving deep data correlation among multiple sensors poses challenges. To address this issue, this study aims to explore a fusion approach integrating thermal imaging cameras and LiDAR sensors to enhance the perception capabilities of firefighting robots in fire environments.
Design/methodology/approach
Prior to sensor fusion, accurate calibration of the sensors is essential. This paper proposes an extrinsic calibration method based on rigid body transformation. The collected data are optimized using Ceres to obtain precise calibration parameters. Building on this calibration, a sensor fusion method based on coordinate projection transformation is proposed, enabling real-time mapping between images and point clouds. In addition, the effectiveness of data collection with the proposed fusion device is validated in experimental smoke-filled fire environments.
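In the standard pinhole model, a coordinate projection transformation of this kind is a rigid transform (R, t) from the LiDAR frame to the camera frame followed by projection through the intrinsic matrix K. A minimal sketch under that standard model; the numeric matrices below are illustrative, not the paper's calibration results:

```python
import numpy as np

def project_points(pts_lidar, R, t, K):
    """Map LiDAR points into pixel coordinates via extrinsics (R, t) and intrinsics K.

    pts_lidar: (N, 3) points in the LiDAR frame
    R, t: rigid transform taking LiDAR-frame points into the camera frame
    K: 3x3 pinhole intrinsic matrix
    Returns (N, 2) pixel coordinates; points behind the camera become NaN.
    """
    pts_cam = pts_lidar @ R.T + t           # rigid body transformation
    uvw = pts_cam @ K.T                     # homogeneous pixel coordinates
    z = uvw[:, 2:3]
    return np.where(z > 0, uvw[:, :2] / np.where(z > 0, z, 1.0), np.nan)

# Identity extrinsics, fx = fy = 500, principal point (320, 240):
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
uv = project_points(np.array([[0.0, 0.0, 2.0]]), np.eye(3), np.zeros(3), K)
```

A point on the optical axis projects to the principal point (320, 240), a quick sanity check on the sign conventions. The reprojection error reported in the Findings is the pixel distance between such projections and the observed feature locations.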
Findings
The average reprojection error obtained by the extrinsic calibration method based on rigid body transformation is 1.02 pixels, indicating good accuracy. The fused data combines the advantages of thermal imaging cameras and LiDAR, overcoming the limitations of individual sensors.
Originality/value
This paper introduces an extrinsic calibration method based on rigid body transformation, along with a sensor fusion approach based on coordinate projection transformation. The effectiveness of this fusion strategy is validated in simulated fire environments.
Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He
Abstract
Purpose
In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.
Design/methodology/approach
This paper focuses on the state of research on LiDAR-based SLAM for robotic mapping and surveys the literature from the perspective of various LiDAR types and configurations.
Findings
This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.
Originality/value
To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.
Jiaxiang Hu, Xiaojun Shi, Chunyun Ma, Xin Yao and Yingxin Wang
Abstract
Purpose
The purpose of this paper is to propose a multi-feature, multi-metric and multi-loop tightly coupled LiDAR-visual-inertial odometry, M3LVI, for high-accuracy and robust state estimation and mapping.
Design/methodology/approach
M3LVI is built atop a factor graph and composed of two subsystems: a LiDAR-inertial system (LIS) and a visual-inertial system (VIS). LIS implements multi-feature extraction on the point cloud, and multi-metric transformation estimation is then performed to realize LiDAR odometry. LiDAR-enhanced images and IMU pre-integration are used in VIS to realize visual odometry, providing a reliable initial guess for the LIS matching module. Location recognition is performed by a dual-loop module combining Bag of Words and LiDAR-Iris to correct accumulated drift. M3LVI also functions properly when one of the subsystems fails, which greatly increases robustness in degraded environments.
Findings
Quantitative experiments were conducted on the KITTI data set and the campus data set to evaluate M3LVI. The experimental results show that the algorithm has higher pose estimation accuracy than existing methods.
Practical implications
The proposed method can greatly improve the positioning and mapping accuracy of automated guided vehicles (AGVs) and has an important impact on AGV material distribution, one of the most important applications of industrial robots.
Originality/value
M3LVI divides the original point cloud into six types, uses multi-metric transformation estimation to estimate the state of the robot and adopts a factor graph optimization model to refine the state estimate, which improves the accuracy of pose estimation. When one subsystem fails, the other can complete the positioning work independently, which greatly increases robustness in degraded environments.
Abstract
Purpose
This paper aims to provide an insight into light detection and ranging (lidar) technology and its growing applications in robotics.
Design/methodology/approach
Following a short introduction, this paper first describes the main lidar techniques and then provides details of a selection of recent academic and corporate research and development activities. This is followed by a discussion of existing and emerging applications. Finally, conclusions are drawn.
Findings
Lidar technology has been the topic of extensive development activity and several principles which differ from the original concept have been commercialised. Lidars are used in all manner of autonomous mobile robots (AMRs) across a broad sector of industries for navigation and have recently started to penetrate the domestic robot market. They have the potential to play a central role in the emerging families of driverless passenger cars and commercial vehicles. In the future, the markets for lidar are expected to expand dramatically as the technology continues to evolve and improve and autonomous vehicles, AMRs and drones become ever-more commonplace.
Originality/value
This paper illustrates the growing importance of lidar to robotics by providing details of the technology, developments and applications.
Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu
Abstract
Purpose
For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy is closely related to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve the accuracy of environmental perception, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which can reduce the false rate resulting from sensor misdetection.
Design/methodology/approach
Firstly, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve obstacle detection performance in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.
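One common way such a fusion rejects dust and radar clutter is mutual confirmation: dust and mist show up in lidar but not radar, while radar false alarms lack lidar support. The toy gate below illustrates that idea only; the `cross_validate` helper and the gating distance are assumptions, not the paper's depth fusion model:

```python
import numpy as np

def cross_validate(lidar_xy, radar_xy, gate=1.5):
    """Keep LiDAR detections confirmed by a radar detection within `gate` metres.

    Dust/mist clusters appear in LiDAR but not radar, so requiring radar
    support rejects them; radar clutter is rejected symmetrically.
    """
    L, R = np.asarray(lidar_xy, float), np.asarray(radar_xy, float)
    if len(L) == 0 or len(R) == 0:
        return np.empty((0, 2))
    d = np.linalg.norm(L[:, None, :] - R[None, :, :], axis=2)  # pairwise distances
    keep = d.min(axis=1) <= gate      # LiDAR detections with nearby radar support
    return L[keep]

# A real obstacle near (10, 0) survives; an unsupported "dust" cluster does not:
confirmed = cross_validate([[10.0, 0.0], [4.0, 1.0]], [[10.2, 0.1]])
```

A production system would gate in range-azimuth space and track over time rather than use a fixed Euclidean threshold, but the confirmation principle is the same.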
Findings
The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with using a single sensor, can filter out radar false alarms and the point clouds of dust or mist received by lidar. The accuracy of object detection is thus also improved under challenging conditions.
Originality/value
The multi-layer self-calibration method improves calibration accuracy and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the point clouds of dust or mist received by lidar, which could improve ICVs’ performance in challenging conditions.
Serena Sofia, Federico Guglielmo Maetzke, Maria Crescimanno, Alessandro Coticchio, Donato Salvatore La Mela Veca and Antonino Galati
Abstract
Purpose
This article aims to compare LiDAR handheld mobile laser scanner (HMLS) scans with traditional survey methods, such as the tree gauge and the hypsometer, to study the efficiency of the new technology in relation to the accuracy of the structural forest attribute estimates needed to support sustainable forest management.
Design/methodology/approach
A case study was carried out in a high forest located in Tuscany (Italy), considering five forest types across 20 different survey plots. A comparative analysis between the two survey methods is presented in order to assess the potential limits and the viability of LiDAR HMLS in the forest field.
Findings
This research demonstrates that LiDAR HMLS technology makes it possible to obtain a large amount of valuable data on forest structural parameters in a short span of time, with a high level of accuracy and a clear benefit in terms of organisational efficiency.
Practical implications
Findings could be useful for forest owners highlighting the importance of investing in science and technology to improve the overall efficiency of forest resources management.
Originality/value
This article adds to the current knowledge on the precision forestry topic by providing insight on the feasibility and effectiveness of using precision technologies for monitoring forest ecosystems and dynamics. In particular, this study fills the gap in the literature linked to the need to have practical examples of the use of innovative technologies in forestry.
Ruihao Lin, Junzhe Xu and Jianhua Zhang
Abstract
Purpose
Large-scale, precise three-dimensional (3D) maps play an important role in autonomous driving and robot positioning. However, it is difficult to obtain accurate poses for mapping. On the one hand, global positioning system (GPS) data are not always reliable, owing to multipath effects and poor satellite visibility in many urban environments. On the other hand, LiDAR-based odometry suffers from accumulative errors. This paper aims to propose a novel simultaneous localization and mapping (SLAM) system to obtain a large-scale, precise 3D map.
Design/methodology/approach
The proposed SLAM system optimally integrates the GPS data and a LiDAR odometry, and two core algorithms are developed within it. To effectively verify the reliability of the GPS data, the VGL (Verify GPS data with LiDAR data) algorithm is proposed, which uses the points from the LiDAR. To obtain accurate poses in GPS-denied areas, this paper proposes the EG-LOAM algorithm, a LiDAR odometry with a local optimization strategy that eliminates accumulative errors by means of reliable GPS data.
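A generic way to check GPS against LiDAR, in the spirit of (but not identical to) VGL, is to compare displacement increments from the two sources and flag fixes whose increment disagrees with the odometry. The `check_gps` helper and the 1 m threshold below are illustrative assumptions:

```python
import numpy as np

def check_gps(gps_positions, odom_positions, thresh=1.0):
    """Flag GPS fixes whose frame-to-frame displacement disagrees with odometry.

    gps_positions, odom_positions: (N, 2) trajectories sampled at the same times
    thresh: maximum tolerated disagreement (metres) between displacement vectors
    Returns a boolean array; True means the fix is consistent with LiDAR odometry.
    """
    gps = np.diff(np.asarray(gps_positions, float), axis=0)
    odo = np.diff(np.asarray(odom_positions, float), axis=0)
    err = np.linalg.norm(gps - odo, axis=1)
    return np.concatenate([[True], err < thresh])  # first fix has no predecessor

# A multipath jump in the third fix is caught by the odometry comparison:
flags = check_gps([[0, 0], [1, 0], [7, 0]], [[0, 0], [1, 0], [2, 0]])
```

Odometry drift is small over one increment even when it accumulates over a trajectory, which is why increment-level comparison is a reasonable reliability test.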
Findings
On the KITTI data set and a customized outdoor data set, the system is able to generate a high-precision 3D map both in GPS-denied areas and in areas covered by GPS. Meanwhile, the VGL algorithm is shown to verify the reliability of the GPS data with confidence, and EG-LOAM outperforms the state-of-the-art baselines.
Originality/value
A novel SLAM system is proposed to obtain a large-scale, precise 3D map. To improve the robustness of the system, the VGL algorithm and EG-LOAM are designed. The whole system, as well as the two algorithms, performed satisfactorily in experiments.
Zhe Gao, Jun Huang, Xiaofei Yang and Ping An
Abstract
Purpose
This paper aims to calibrate the mounting parameters between the LIDAR and the motor in a low-cost 3D LIDAR device. It proposes a model of the device and analyzes the influence of each mounting parameter. The study aims to find a more accurate and simpler way to calibrate those parameters.
Design/methodology/approach
This method minimizes the coplanarity error and the area of the scanned plane to estimate the mounting parameters. Within the method, the authors build different cost functions for the rotation and translation parameters; thus, the 4-degree-of-freedom (DOF) parameter estimation problem is decoupled into two 2-DOF estimation problems, achieving the calibration of both types of parameters.
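As a generic illustration of a plane-based coplanarity cost (not the authors' exact formulation), the "thickness" of a scanned flat wall can be scored by the smallest eigenvalue of the point covariance: near zero for a truly planar cloud, growing when wrong mounting parameters warp the assembled scan. A calibration search would minimize this cost over candidate parameters:

```python
import numpy as np

def plane_thickness(points):
    """Coplanarity cost: smallest eigenvalue of the 3x3 point covariance.

    Near zero when the assembled cloud of a flat wall is truly planar;
    grows when incorrect mounting parameters warp the wall.
    """
    pts = np.asarray(points, float)
    cov = np.cov(pts.T)                 # 3x3 covariance of the cloud
    return np.linalg.eigvalsh(cov)[0]   # eigenvalues in ascending order

# A flat patch scores (numerically) zero; a warped one scores higher:
flat = np.array([[x, y, 0.0] for x in range(5) for y in range(5)])
warped = np.array([[x, y, 0.05 * x * y] for x in range(5) for y in range(5)])
```

The decoupling described above would then minimize such a cost first over the rotation parameters and separately over the translation parameters.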
Findings
This paper proposes a calibration method for accurately estimating the mounting parameters between a 2D LIDAR and a rotating platform, realizing the estimation of the 2-DOF rotation parameters and the 2-DOF translation parameters without additional hardware.
Originality/value
Unlike previous plane-based calibration techniques, the main advantage of the proposed method is that it can estimate more parameters, and more accurately, without additional hardware.
Ruoxing Wang, Shoukun Wang, Junfeng Xue, Zhihua Chen and Jinge Si
Abstract
Purpose
This paper aims to investigate an autonomous obstacle-surmounting method based on a hybrid gait for the problem of a six-wheel-legged robot autonomously crossing low-height obstacles. The autonomy of obstacle surmounting is reflected in obstacle recognition based on multi-frame point cloud fusion.
Design/methodology/approach
In this paper, first, to address the problem that the lidar on the robot cannot scan the point clouds of low-height obstacles, the lidar is driven to rotate by a 2D turntable to obtain the point clouds of low-height obstacles beneath the robot. The Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping algorithm, a fast ground segmentation algorithm and a Euclidean clustering algorithm are used to recognize the point clouds of low-height obstacles and obtain low-height obstacle information. Then, combined with the structural characteristics of the robot, obstacle-surmounting action planning is carried out for two types of obstacle scene. A segmented approach is used for action planning: gait units describe each segment of the action, and a gait matrix describes the overall action. The paper also analyzes the stability and surmounting capability of the robot’s key poses and determines the robot’s surmounting capability and the value scheme of the surmounting control variables.
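The Euclidean clustering step can be sketched generically: after ground removal, points are grouped with any neighbour within a distance threshold, and each connected group becomes an obstacle candidate. This naive O(N²) version is illustrative only; the `euclidean_cluster` helper and its parameters are not from the paper, and real systems accelerate the neighbour search with a k-d tree (as in PCL):

```python
import numpy as np

def euclidean_cluster(points, radius=0.3, min_size=2):
    """Group points transitively connected by gaps of at most `radius` metres.

    Returns a list of index lists, one per cluster of at least min_size points.
    """
    pts = np.asarray(points, float)
    unvisited, clusters = set(range(len(pts))), []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = [seed], [seed]
        while queue:                      # flood-fill over the distance graph
            i = queue.pop()
            near = [j for j in list(unvisited)
                    if np.linalg.norm(pts[i] - pts[j]) <= radius]
            for j in near:
                unvisited.discard(j)      # claim neighbours exactly once
            queue.extend(near)
            cluster.extend(near)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
    return clusters

# Two point pairs 5 m apart separate into two obstacle candidates:
clusters = euclidean_cluster([[0, 0, 0], [0.1, 0, 0], [5, 0, 0], [5.1, 0, 0]])
```

Each resulting cluster would then be measured (height, width, position) to decide between the two obstacle-surmounting action plans described above.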
Findings
The experimental verification is carried out on the robot laboratory platform (BIT-6NAZA). The obstacle recognition method can accurately detect low-height obstacles. The robot can maintain a smooth posture to cross low-height obstacles, which verifies the feasibility of the adaptive obstacle-surmounting method.
Originality/value
The study can provide a theoretical and engineering foundation for the environmental perception of unmanned platforms. It provides environmental information to support follow-up work, for example, on planning obstacle avoidance and obstacle surmounting.