The purpose of this paper is to enable autonomous landing of a multi-rotor unmanned aerial vehicle (UAV) on a moving or tilting platform using a robust vision-based approach.
Autonomous landing of a multi-rotor UAV on a moving or tilting platform of unknown orientation in a GPS-denied and vision-compromised environment presents a challenge to common autopilot systems. The paper proposes a robust visual data processing system based on the target's Oriented FAST and Rotated BRIEF (ORB) features to estimate the UAV's three-dimensional pose in real time.
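To illustrate the pose-recovery step, the following is a minimal numpy sketch of estimating a camera's rotation and translation relative to a planar cooperative marker from matched 2-D image points, via homography decomposition. This is an illustrative stand-in, not the paper's full ORB pipeline: the marker geometry, camera intrinsics, and ground-truth pose below are all assumed values chosen for the demonstration.

```python
import numpy as np

def euler_to_R(roll, pitch, yaw):
    """Rotation matrix from Z-Y-X (yaw-pitch-roll) Euler angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def homography_dlt(pts_plane, pts_img):
    """Direct Linear Transform: homography mapping marker-plane coords to pixels."""
    A = []
    for (X, Y), (u, v) in zip(pts_plane, pts_img):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)  # null-space vector = flattened homography

def pose_from_homography(H, K):
    """Decompose a plane-to-image homography into rotation R and translation t."""
    M = np.linalg.inv(K) @ H
    scale = (np.linalg.norm(M[:, 0]) + np.linalg.norm(M[:, 1])) / 2
    M = M / scale
    if M[2, 2] < 0:        # the marker must lie in front of the camera (t_z > 0)
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)  # project onto the nearest true rotation matrix
    return U @ Vt, t

# Hypothetical planar marker (points in metres on the z = 0 plane)
marker = np.array([[-0.2, -0.2], [0.2, -0.2], [0.2, 0.2],
                   [-0.2, 0.2], [0.0, 0.1], [0.1, -0.05]])
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics

# Assumed ground-truth platform attitude and position relative to the camera
roll, pitch, yaw = np.radians([5.0, -8.0, 30.0])
R_true = euler_to_R(roll, pitch, yaw)
t_true = np.array([0.1, -0.05, 1.5])

# Project the planar marker into the image (Z = 0 drops the third column of R)
P = K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
img = (P @ np.column_stack([marker, np.ones(len(marker))]).T).T
img = img[:, :2] / img[:, 2:]

H = homography_dlt(marker, img)
R_est, t_est = pose_from_homography(H, K)

# Recover Z-Y-X Euler angles from the estimated rotation
yaw_est = np.arctan2(R_est[1, 0], R_est[0, 0])
pitch_est = -np.arcsin(R_est[2, 0])
roll_est = np.arctan2(R_est[2, 1], R_est[2, 2])
```

With noise-free correspondences the decomposition recovers the assumed roll, pitch and yaw essentially exactly; in practice the 2-D points would come from ORB feature matches against the trained marker, and the residual angular error is what the paper bounds at 1°.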
The system is able to visually locate and identify the unique landing platform based on a cooperative marker, with an error of 1° or less in each of the roll, pitch and yaw angles.
The proposed vision-based system is designed for on-board use, increasing reliability without significantly adding to the UAV's computational load.
The simplicity of the training procedure makes the process flexible enough to use a marker of any unknown or irregular shape and dimension, and it can easily be adapted to different cooperative markers. Being computationally inexpensive, the on-board process can be added to off-the-shelf autopilots.
This research is supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) under the Discovery Grant program, and MITACS under the Globalink and Graduate Fellowship program.
Gupta, K., Emran, B.J. and Najjaran, H. (2019), "Vision-based pose estimation of a multi-rotor unmanned aerial vehicle", International Journal of Intelligent Unmanned Systems, Vol. 7 No. 3, pp. 120-132. https://doi.org/10.1108/IJIUS-10-2018-0030
Copyright © 2019, Emerald Publishing Limited