Reinforcement learning control for a flapping-wing micro aerial vehicle with output constraint

Haifeng Huang (School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China and Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China)
Xiaoyang Wu (School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China and Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China)
Tingting Wang (School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China and Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China)
Yongbin Sun (School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China and Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China)
Qiang Fu (School of Intelligence Science and Technology, University of Science and Technology Beijing, Beijing, China and Institute of Artificial Intelligence, University of Science and Technology Beijing, Beijing, China)

Assembly Automation

ISSN: 0144-5154

Article publication date: 27 October 2022

Issue publication date: 6 December 2022

Abstract

Purpose

This paper aims to study the application of reinforcement learning (RL) in the control of an output-constrained flapping-wing micro aerial vehicle (FWMAV) with system uncertainty.

Design/methodology/approach

A six-degrees-of-freedom hummingbird model is used, without consideration of the inertial effects of the wings. An RL algorithm based on the actor–critic framework is applied, consisting of an actor network for the unknown policy and a critic network for the unknown value function. Given the ability of neural networks (NNs) to approximate nonlinear functions and their suitability for optimization, an actor–critic NN optimization algorithm is designed, in which the actor NN generates the policy and the critic NN approximates the cost function. In addition, to ensure the safe and stable flight of the FWMAV, a barrier Lyapunov function is used to constrain the flight states within predefined regions. The stability of the system is analyzed based on Lyapunov stability theory, and the feasibility of RL in the control of an FWMAV is finally verified through simulation.
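To make the general idea concrete, the sketch below shows, on a toy one-dimensional tracking problem, how an actor–critic pair and a log-type barrier penalty can be combined: the critic performs a temporal-difference update of the approximate cost-to-go, the actor takes a Gaussian policy-gradient step, and the barrier term penalizes the tracking error as it approaches the constraint bound, mirroring the role a barrier Lyapunov function plays in keeping the output inside a prescribed region. This is a minimal illustrative sketch only; the error dynamics, feature map, gains and the bound K_B are hypothetical and are not taken from the paper.

```python
# Minimal actor-critic sketch with a log-type barrier penalty (illustrative
# only, not the authors' algorithm). The toy 1-D error dynamics, feature map,
# gains and the bound K_B are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
K_B = 1.0        # assumed output-constraint bound on the tracking error
GAMMA = 0.98     # discount factor
SIGMA = 0.05     # exploration noise of the Gaussian policy
ALPHA_A = 1e-4   # actor step size
ALPHA_C = 1e-2   # critic step size

def features(e):
    """Simple polynomial features of the tracking error e."""
    return np.array([e, e ** 2, 1.0])

def barrier_cost(e):
    """Log-type barrier: finite for |e| < K_B, grows without bound at the boundary."""
    return 0.5 * np.log(K_B ** 2 / (K_B ** 2 - e ** 2))

w_critic = np.zeros(3)      # critic weights: V(e) ~ w_critic . features(e)
theta_actor = np.zeros(3)   # actor weights: mean control u = theta_actor . features(e)

e = 0.5  # initial tracking error, inside the constraint |e| < K_B
for step in range(5000):
    phi = features(e)
    noise = SIGMA * rng.standard_normal()
    u = theta_actor @ phi + noise                     # stochastic (Gaussian) policy
    e_next = 0.9 * e + 0.1 * u                        # toy closed-loop error dynamics
    e_next = float(np.clip(e_next, -0.99 * K_B, 0.99 * K_B))  # numerical safeguard only

    # Stage cost: tracking error, control effort and the barrier penalty.
    cost = e ** 2 + 0.01 * u ** 2 + barrier_cost(e)

    # Critic: one-step temporal-difference update of the approximate cost-to-go.
    td_error = cost + GAMMA * (w_critic @ features(e_next)) - w_critic @ phi
    w_critic += ALPHA_C * td_error * phi

    # Actor: Gaussian policy-gradient step that lowers the expected cost-to-go.
    grad_log_pi = noise * phi / SIGMA ** 2
    theta_actor -= ALPHA_A * td_error * grad_log_pi

    e = e_next
```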

Findings

The proposed RL control scheme ensures trajectory tracking of the FWMAV in the presence of the output constraint and system uncertainty.

Originality/value

A novel RL algorithm based on an actor–critic framework is applied to the control of an FWMAV with system uncertainty. To ensure the stable and safe flight of the FWMAV, the output constraint problem is considered and addressed by barrier Lyapunov function-based control.

Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant 62073031, Grant 62173031 and Grant 62103040.

Citation

Huang, H., Wu, X., Wang, T., Sun, Y. and Fu, Q. (2022), "Reinforcement learning control for a flapping-wing micro aerial vehicle with output constraint", Assembly Automation, Vol. 42 No. 6, pp. 730-741. https://doi.org/10.1108/AA-05-2022-0140

Publisher

Emerald Publishing Limited

Copyright © 2022, Emerald Publishing Limited
