Quadrotor navigation in dynamic environments with deep reinforcement learning

Jinbao Fang (Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China)
Qiyu Sun (Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China)
Yukun Chen (Shanghai World Foreign Language Academy, Shanghai, China)
Yang Tang (Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China)

Assembly Automation

ISSN: 0144-5154

Article publication date: 7 April 2021

Issue publication date: 22 July 2021

Abstract

Purpose

This work aims to combine cloud robotics technologies with deep reinforcement learning to build a distributed training architecture and accelerate the learning procedure of autonomous systems. In particular, a distributed training architecture for navigating unmanned aerial vehicles (UAVs) in complicated dynamic environments is proposed.

Design/methodology/approach

This study proposes a distributed training architecture named experience-sharing learner-worker (ESLW) for deep reinforcement learning to navigate UAVs in dynamic environments, inspired by cloud-based techniques. In the ESLW architecture, multiple worker nodes operating in different environments generate training data in parallel, and a learner node trains a policy on the data collected by the worker nodes. In addition, this study proposes an extended experience replay (EER) strategy so that the method can be applied to experience sequences, improving training efficiency. To capture more information about dynamic environments, convolutional long short-term memory (ConvLSTM) modules are adopted to extract spatiotemporal information from training sequences.
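The learner-worker pattern described above can be sketched in a few lines of plain Python. The sketch below is purely illustrative and is not the paper's implementation: the names (`SequenceReplay`, `worker`), the toy environment, and the buffer sizes are all assumptions. It shows the two ideas the abstract names: several workers filling a shared buffer in parallel, and a replay store that keeps whole episodes so the learner can sample contiguous experience sequences (as an EER-style strategy requires for recurrent modules such as ConvLSTM).

```python
# Hypothetical sketch of an experience-sharing learner-worker loop.
# The environment is a toy that emits random (observation, reward) pairs;
# all class and function names here are illustrative, not from the paper.
import random
import threading
from collections import deque

class SequenceReplay:
    """Stores whole episodes so contiguous sequences can be replayed."""
    def __init__(self, capacity=100):
        self.episodes = deque(maxlen=capacity)
        self.lock = threading.Lock()

    def add_episode(self, episode):
        with self.lock:
            self.episodes.append(episode)

    def sample_sequence(self, length=4):
        """Sample a contiguous slice of one episode for recurrent training."""
        with self.lock:
            ep = random.choice(list(self.episodes))
        start = random.randrange(max(1, len(ep) - length + 1))
        return ep[start:start + length]

def worker(replay, n_episodes=5, steps=8):
    """Each worker rolls out its own environment and shares full episodes."""
    for _ in range(n_episodes):
        episode = [(random.random(), random.random()) for _ in range(steps)]
        replay.add_episode(episode)

replay = SequenceReplay()
threads = [threading.Thread(target=worker, args=(replay,)) for _ in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Learner side: sample one contiguous sequence for sequence-based training.
seq = replay.sample_sequence(length=4)
print(len(replay.episodes), len(seq))  # 15 4
```

Sampling slices of whole episodes, rather than isolated transitions, is what lets a recurrent policy see the temporal context the abstract attributes to the ConvLSTM modules.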

Findings

Experimental results demonstrate that the ESLW architecture and the EER strategy accelerate convergence, and that the ConvLSTM modules excel at extracting sequential information when navigating UAVs in dynamic environments.

Originality/value

Inspired by cloud robotics technologies, this study proposes the distributed ESLW architecture for navigating UAVs in dynamic environments. In addition, the EER strategy is proposed to speed up training on experience sequences, and ConvLSTM modules are added to the networks to make full use of the sequential experiences.

Keywords

Acknowledgements

The work is supported by the National Natural Science Foundation of China (Basic Science Center Program, Grant No. 61988101; International (Regional) Cooperation and Exchange Project, Grant No. 61720106008), the Program of Shanghai Academic Research Leader (Grant No. 20XD1401300) and the Programme of Introducing Talents of Discipline to Universities (the 111 Project, Grant No. B17017).

Citation

Fang, J., Sun, Q., Chen, Y. and Tang, Y. (2021), "Quadrotor navigation in dynamic environments with deep reinforcement learning", Assembly Automation, Vol. 41 No. 3, pp. 254-262. https://doi.org/10.1108/AA-11-2020-0183

Publisher

Emerald Publishing Limited

Copyright © 2021, Emerald Publishing Limited
