Search results

1 – 5 of 5
Article
Publication date: 6 August 2024

Yingjie Yu, Shuai Chen, Xinpeng Yang, Changzhen Xu, Sen Zhang and Wendong Xiao

This paper proposes a self-supervised monocular depth estimation algorithm under multiple constraints, which can generate the corresponding depth map end-to-end based on RGB…

Abstract

Purpose

This paper proposes a self-supervised monocular depth estimation algorithm under multiple constraints, which can generate the corresponding depth map end-to-end based on RGB images. Building on this, a deep-learning-based dynamic object detection framework is introduced into the traditional visual simultaneous localisation and mapping (VSLAM) framework, and dynamic objects in the scene are culled during mapping.

Design/methodology/approach

Typical SLAM algorithms and data sets assume a static environment and do not consider the potential consequences of accidentally adding dynamic objects to a 3D map. This shortcoming limits the applicability of VSLAM in many practical cases, such as long-term mapping. In light of these considerations, this paper presents a self-supervised monocular depth estimation algorithm based on deep learning. Furthermore, the YOLOv5 dynamic detection framework is introduced into the traditional ORB-SLAM2 algorithm to remove dynamic objects.
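ORB-SLAM2 itself is a C++ system; the Python sketch below only illustrates the culling step described here, under stated assumptions: detections are taken from a YOLOv5 model as (x1, y1, x2, y2, conf, cls) boxes, and the class set, confidence threshold and function name are hypothetical.

# Minimal sketch (not the authors' code): discard ORB keypoints that fall inside
# YOLOv5 bounding boxes of dynamic classes before they reach SLAM tracking.
import cv2

# Hypothetical COCO class ids treated as dynamic (person, bicycle, car, ...).
DYNAMIC_CLASSES = {0, 1, 2, 3, 5, 7}

def cull_dynamic_keypoints(gray, detections, conf_threshold=0.5):
    """detections: iterable of (x1, y1, x2, y2, conf, cls) from a YOLOv5 model."""
    orb = cv2.ORB_create(nfeatures=2000)
    keypoints = orb.detect(gray, None)
    boxes = [(x1, y1, x2, y2) for x1, y1, x2, y2, conf, cls in detections
             if int(cls) in DYNAMIC_CLASSES and conf > conf_threshold]

    def is_static(kp):
        u, v = kp.pt
        return not any(x1 <= u <= x2 and y1 <= v <= y2 for x1, y1, x2, y2 in boxes)

    static_kps = [kp for kp in keypoints if is_static(kp)]
    # Descriptors are computed only for the keypoints that survived the cull.
    static_kps, descriptors = orb.compute(gray, static_kps)
    return static_kps, descriptors

Only the surviving static features would then be passed on to tracking and mapping.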

Findings

Compared with Dyna-SLAM, the algorithm proposed in this paper reduces the error by about 13%, and compared with ORB-SLAM2, by about 54.9%. In addition, the algorithm processes a single image frame at 15–20 FPS on a GeForce RTX 2080s, far exceeding Dyna-SLAM in real-time performance.

Originality/value

This paper proposes a VSLAM algorithm that can be applied to dynamic environments. The algorithm combines self-supervised monocular depth estimation under multiple constraints with a YOLOv5-based dynamic object detection framework.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 17 July 2024

Run Yang, Jingru Li, Taiyun Zhu, Di Hu and Erbao Dong

Gas-insulated switchgear (GIS) stands as a pivotal component in power systems, susceptible to partial discharge occurrences. Nevertheless, manual inspection proves…

Abstract

Purpose

Gas-insulated switchgear (GIS) is a pivotal component in power systems and is susceptible to partial discharge. However, manual inspection is labor-intensive and has a low defect detection rate, while conventional inspection robots cannot perform live-line measurements or adapt effectively to diverse environmental conditions. This paper aims to introduce a novel solution: the GIS ultrasonic partial discharge detection robot (GBOT), designed to take the place of substation personnel in inspection tasks.

Design/methodology/approach

GBOT is a mobile manipulator system comprising three subsystems: autonomous localization and navigation, a vision-guided and force-controlled manipulator, and data detection and analysis. These subsystems work together, incorporating simultaneous localization and mapping, path planning, target recognition, signal processing and admittance control. This paper also introduces a path planning method adapted to the substation environment, and a flexible end effector is designed to ensure full contact between the probe and the device.
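The abstract does not give the controller equations; the sketch below is a generic 1-D admittance law along the probe axis, with mass, damping, stiffness and force values that are illustrative assumptions rather than the paper's parameters.

# Minimal sketch of a 1-D admittance law for probe contact:
# M*a + B*v + K*x = f_measured - f_desired.
class AdmittanceController1D:
    def __init__(self, mass=1.0, damping=20.0, stiffness=50.0, dt=0.01):
        self.M, self.B, self.K, self.dt = mass, damping, stiffness, dt
        self.x = 0.0   # position offset added to the nominal probe pose
        self.v = 0.0

    def step(self, f_measured, f_desired):
        """Return the position correction that keeps contact force near f_desired."""
        f_error = f_measured - f_desired
        a = (f_error - self.B * self.v - self.K * self.x) / self.M
        self.v += a * self.dt
        self.x += self.v * self.dt
        return self.x

The returned offset would be added to the commanded probe pose so the flexible end effector keeps the contact force close to the desired value.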

Findings

The robot fulfills the requirements for substation GIS inspection. It performs efficient, low-cost path planning through narrow passages in the constructed substation map, maintains sufficiently stable detection contact and achieves a high defect detection rate.

Practical implications

The robot mitigates the labor intensity of grid maintenance personnel, enhances inspection efficiency and safety and advances the intelligence and digitization of power equipment maintenance and monitoring. This research also provides valuable insights for the broader application of mobile manipulators in diverse fields.

Originality/value

The robot is a mobile manipulator system for GIS detection, offering a viable alternative to grid personnel for equipment inspections. Compared with previous robotic systems, it can perform live electrical detection and demonstrates robust environmental adaptability and superior efficiency.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 18 July 2024

Zhiyu Li, Hongguang Li, Yang Liu, Lingyun Jin and Congqing Wang

Autonomous flight of unmanned aerial vehicles (UAVs) in global positioning system (GPS)-denied environments has become a growing research hotspot. This paper aims to realize the…

Abstract

Purpose

Autonomous flight of unmanned aerial vehicles (UAVs) in global positioning system (GPS)-denied environments has become a growing research hotspot. This paper aims to realize indoor fixed-point hovering control and autonomous flight for UAVs based on visual-inertial simultaneous localization and mapping (SLAM) and a sensor fusion algorithm based on the extended Kalman filter.

Design/methodology/approach

The core of the proposed method is to use visual-inertial SLAM to estimate the position of the UAV and a position-speed double-loop controller to control it. The motion and observation models of the UAV and the fusion algorithm are given. Finally, experiments are performed to test the proposed algorithms.
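As a rough illustration of the fusion step, the sketch below is a minimal Kalman-filter-style predict/update with a constant-velocity state, treating the visual-inertial SLAM pose as a 3-D position measurement; the state layout and noise values are assumptions, not the paper's model (with fully linear models it reduces to a standard Kalman filter rather than an EKF).

# Minimal sketch: state [x, y, z, vx, vy, vz]; SLAM output arrives as a
# 3-D position measurement fused with the constant-velocity prediction.
import numpy as np

class PositionEKF:
    def __init__(self, dt=0.02):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)            # x_k+1 = x_k + v_k * dt
        self.Q = 1e-3 * np.eye(6)                  # process noise (assumed)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])
        self.R = 1e-2 * np.eye(3)                  # SLAM measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, slam_position):
        y = slam_position - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P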

Findings

A position-speed double-loop controller is proposed that fuses the position information obtained by visual-inertial SLAM with data from onboard sensors. The experimental results of indoor fixed-point hovering show that UAV flight control can be realized based on visual-inertial SLAM in the absence of GPS.
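A per-axis cascade of an outer position loop and an inner velocity loop is one common way to realize such a position-speed double-loop controller; the gains, saturation limit and the omitted mapping to attitude/thrust commands in the sketch below are illustrative assumptions.

# Minimal per-axis cascade sketch (illustrative gains, not the paper's tuning).
import numpy as np

def double_loop_step(pos_ref, pos_est, vel_est, kp_pos=1.2, kp_vel=2.0,
                     vel_limit=1.0):
    # Outer loop: position error -> saturated velocity setpoint.
    vel_cmd = np.clip(kp_pos * (pos_ref - pos_est), -vel_limit, vel_limit)
    # Inner loop: velocity error -> acceleration command sent to the autopilot.
    acc_cmd = kp_vel * (vel_cmd - vel_est)
    return acc_cmd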

Originality/value

A position-speed double-loop controller for the UAV is designed and tested, which provides more stable position estimation and enables the UAV to fly autonomously and hover in GPS-denied environments.

Details

Robotic Intelligence and Automation, vol. 44 no. 5
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 24 June 2024

Hongwei Wang, Chao Li, Wei Liang, Di Wang and Linhu Yao

In response to the navigation challenges faced by coal mine tunnel inspection robots in semistructured underground intersection environments, many current studies rely on…

Abstract

Purpose

In response to the navigation challenges faced by coal mine tunnel inspection robots in semistructured underground intersection environments, many current studies rely on structured map-based planning algorithms and trajectory tracking techniques. However, this approach is highly dependent on the accuracy of the global map, which can lead to deviations from the predetermined route or collisions with obstacles. To improve the environmental adaptability and navigation precision of the robot, this paper aims to propose an adaptive navigation system based on a two-dimensional (2D) LiDAR.

Design/methodology/approach

Leveraging the geometric features of coal mine tunnel environments, clustering and fitting algorithms are used to construct a geometric model within the navigation system. This not only reduces the complexity of the navigation system but also optimizes local positioning. Constructing a local potential field removes the need for path-fitting planning, enhancing the robot’s adaptability in intersection environments. The feasibility of the algorithm principles is validated through MATLAB and Robot Operating System (ROS) simulations.
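The exact potential-field formulation is not given in the abstract; the following sketch shows a conventional attractive/repulsive field computed directly from one 2-D LiDAR scan, with gains and influence radius as illustrative assumptions.

# Minimal potential-field sketch: goal attraction plus repulsion from nearby
# 2-D LiDAR returns; the result is a velocity command in the robot frame.
import numpy as np

def potential_field_velocity(goal, ranges, angles, k_att=0.8, k_rep=0.3,
                             influence=1.5):
    """goal: (x, y) in the robot frame; ranges/angles: one LiDAR scan."""
    v = k_att * np.asarray(goal, dtype=float)           # attractive term
    for r, a in zip(ranges, angles):
        if 0.05 < r < influence:                         # obstacle inside influence radius
            direction = np.array([np.cos(a), np.sin(a)])
            # Repulsion grows as the obstacle gets closer and pushes away from it.
            v -= k_rep * (1.0 / r - 1.0 / influence) / r**2 * direction
    return v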

Findings

The experiments demonstrate that this method enables autonomous driving and optimized positioning capabilities in harsh environments, with high real-time performance and environmental adaptability, achieving a positioning error rate of less than 3%.

Originality/value

This paper presents an adaptive navigation system for a coal mine tunnel inspection robot using a 2D LiDAR sensor. The system improves robot attitude estimation and motion control accuracy to ensure safe and reliable navigation, especially at tunnel intersections.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 4 June 2024

Dan Zhang, Junji Yuan, Haibin Meng, Wei Wang, Rui He and Sen Li

In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific…

Abstract

Purpose

In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific types of data, achieving deep data correlation among multiple sensors poses challenges. To address this issue, this study aims to explore a fusion approach integrating thermal imaging cameras and LiDAR sensors to enhance the perception capabilities of firefighting robots in fire environments.

Design/methodology/approach

Prior to sensor fusion, accurate calibration of the sensors is essential. This paper proposes an extrinsic calibration method based on rigid body transformation, in which the collected data are optimized with the Ceres solver to obtain precise calibration parameters. Building upon this calibration, a sensor fusion method based on coordinate projection transformation is proposed, enabling real-time mapping between images and point clouds. In addition, the effectiveness of data collection with the proposed fusion device is validated in experimental smoke-filled fire environments.
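The coordinate projection transformation can be summarized as projecting each LiDAR point through the calibrated extrinsics (R, t) and the thermal camera intrinsics K into pixel coordinates; the function below is a minimal pinhole-model sketch (no lens distortion), not the authors' implementation.

# Minimal sketch of the point-cloud-to-image projection; K, R, t stand for the
# calibrated intrinsic and extrinsic parameters.
import numpy as np

def project_points(points_lidar, K, R, t):
    """points_lidar: (N, 3) LiDAR points; returns (N, 2) pixel coords and a validity mask."""
    points_cam = points_lidar @ R.T + t          # LiDAR frame -> thermal camera frame
    in_front = points_cam[:, 2] > 0.1            # keep points in front of the camera
    uvw = points_cam @ K.T                       # apply intrinsics
    pixels = uvw[:, :2] / uvw[:, 2:3]            # perspective division
    return pixels, in_front

Each projected point can then be tagged with the temperature of the pixel it lands on, or conversely give that pixel a range value, which is the kind of real-time image-to-point-cloud mapping the abstract describes.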

Findings

The average reprojection error obtained by the extrinsic calibration method based on rigid body transformation is 1.02 pixels, indicating good accuracy. The fused data combines the advantages of thermal imaging cameras and LiDAR, overcoming the limitations of individual sensors.

Originality/value

This paper introduces an extrinsic calibration method based on rigid body transformation, along with a sensor fusion approach based on coordinate projection transformation. The effectiveness of this fusion strategy is validated in simulated fire environments.

Details

Sensor Review, vol. 44 no. 4
Type: Research Article
ISSN: 0260-2288

