
Integrating deep reinforcement learning and improved artificial potential field method for safe path planning for mobile robots

Sijie Tong (Department of Automation, University of Science and Technology of China, Hefei, China)
Qingchen Liu (Department of Automation, University of Science and Technology of China, Hefei, China)
Qichao Ma (Department of Automation, University of Science and Technology of China, Hefei, China)
Jiahu Qin (Department of Automation, University of Science and Technology of China, Hefei, China and Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China)

Robotic Intelligence and Automation

ISSN: 2754-6969

Article publication date: 30 August 2024

Issue publication date: 18 November 2024


Abstract

Purpose

This paper aims to address the safety concerns of path-planning algorithms in warehouse environments with dynamic obstacles. It proposes a method that uses an improved artificial potential field (IAPF) as expert knowledge for an improved deep deterministic policy gradient (IDDPG) and designs a hierarchical strategy for robots based on obstacle detection.

Design/methodology/approach

The IAPF algorithm is used as expert experience for reinforcement learning (RL) to reduce useless exploration in the early stages of RL training. A strategy-switching mechanism is introduced during training to adapt to various scenarios and to overcome the sparse-reward problem. Sensor inputs, including light detection and ranging (LiDAR) data, are integrated to detect obstacles around waypoints and guide the robot toward the target point.
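The paper's implementation is not reproduced on this page; the following is a minimal Python sketch of the general idea of seeding a DDPG-style replay buffer with transitions generated by an artificial potential field controller before learned exploration begins. The function and class names (`apf_action`, `ReplayBuffer`, `seed_with_expert`), the environment interface and the gain parameters are illustrative assumptions, not the authors' code.

```python
import numpy as np

# --- Hypothetical APF expert: attractive pull toward the goal, repulsion from obstacles ---
def apf_action(pos, goal, obstacles, k_att=1.0, k_rep=0.5, rho0=1.0):
    """Return a unit 2-D velocity command from a basic artificial potential field."""
    force = k_att * (goal - pos)                       # attractive component
    for obs in obstacles:
        diff = pos - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < rho0:                            # repulsion only inside the influence radius
            force += k_rep * (1.0 / d - 1.0 / rho0) / d**2 * (diff / d)
    norm = np.linalg.norm(force)
    return force / norm if norm > 1e-6 else np.zeros_like(force)

class ReplayBuffer:
    """Plain FIFO buffer; the paper additionally classifies transitions before sampling."""
    def __init__(self, capacity=100_000):
        self.storage, self.capacity = [], capacity
    def add(self, transition):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append(transition)
    def sample(self, batch_size, rng=np.random.default_rng()):
        idx = rng.choice(len(self.storage), size=batch_size, replace=False)
        return [self.storage[i] for i in idx]

def seed_with_expert(env, buffer, episodes=50):
    """Fill the buffer with APF-guided rollouts so early DDPG updates see useful data.

    `env` is an assumed interface exposing pos/goal/obstacles in its state dict.
    """
    for _ in range(episodes):
        state = env.reset()
        done = False
        while not done:
            action = apf_action(state["pos"], state["goal"], state["obstacles"])
            next_state, reward, done = env.step(action)
            buffer.add((state, action, reward, next_state, done))
            state = next_state
```

After seeding, ordinary DDPG training would draw mini-batches from the same buffer, so the critic's first updates are grounded in collision-free expert behavior rather than random exploration.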

Findings

Simulation experiments demonstrate that the integrated use of IDDPG and the IAPF method significantly enhances the safety and training efficiency of path planning for mobile robots.

Originality/value

This method enhances safety by applying safety-domain judgment rules to improve the security of APF and by designing an obstacle detection method for better danger anticipation. It also boosts training efficiency by using IAPF as expert experience for DDPG and by adopting a classified storage-and-sampling design for the RL experience pool. Additionally, adjusting the actor network's update frequency expedites convergence.
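To illustrate the classified experience pool and the slower actor-update schedule mentioned above, the sketch below partitions transitions into hypothetical "success", "collision" and "ordinary" pools and updates the actor only every `ACTOR_DELAY` critic steps. The pool names, mixing ratios and delay value are assumptions for illustration, not the paper's reported settings.

```python
import numpy as np
from collections import deque

class ClassifiedReplayBuffer:
    """Store transitions by outcome class and sample a fixed mix from each pool."""
    def __init__(self, capacity_per_pool=30_000, mix=(0.3, 0.3, 0.4)):
        self.pools = {k: deque(maxlen=capacity_per_pool)
                      for k in ("success", "collision", "ordinary")}
        self.mix = dict(zip(self.pools, mix))          # assumed per-pool sampling ratios

    def add(self, transition, reached_goal, collided):
        key = "success" if reached_goal else "collision" if collided else "ordinary"
        self.pools[key].append(transition)

    def sample(self, batch_size, rng=np.random.default_rng()):
        batch = []
        for key, pool in self.pools.items():
            n = min(int(batch_size * self.mix[key]), len(pool))
            if n > 0:
                idx = rng.choice(len(pool), size=n, replace=False)
                batch.extend(pool[i] for i in idx)
        return batch

# Delayed actor updates: the critic learns every step, the actor less often.
ACTOR_DELAY = 2                                        # assumed critic-to-actor update ratio

def train_step(step, critic_update, actor_update, buffer, batch_size=128):
    batch = buffer.sample(batch_size)
    critic_update(batch)                               # critic updated every step
    if step % ACTOR_DELAY == 0:                        # actor updated on a slower schedule
        actor_update(batch)
```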


Acknowledgements

This work was supported in part by the Science and Technology Major Project of Anhui Province under Grant 202203A06020011, in part by Opening Fund of State Key Laboratory of Fire Science (SKLFS) under Grant HZ2023-KF01, and in part by USTC Research Funds of the Double First-Class Initiative under Grant YD2100002013.

Citation

Tong, S., Liu, Q., Ma, Q. and Qin, J. (2024), "Integrating deep reinforcement learning and improved artificial potential field method for safe path planning for mobile robots", Robotic Intelligence and Automation, Vol. 44 No. 6, pp. 871-886. https://doi.org/10.1108/RIA-01-2024-0011

Publisher

Emerald Publishing Limited

Copyright © 2024, Emerald Publishing Limited
