Concurrent-learning-based visual servo tracking and scene identification of mobile robots
Article publication date: 4 March 2019
Issue publication date: 14 August 2019
The purpose of this paper is to present a visual servo tracking strategy for wheeled mobile robots, in which the unknown feature depth is identified simultaneously during the visual servoing process.
Using the reference, desired and current images, system errors are constructed from measurable signals obtained by decomposing Euclidean homographies. Subsequently, by taking advantage of the concurrent learning framework, both historical and current system data are used to construct an adaptive updating mechanism that recovers the unknown feature depth. A kinematic controller is then designed for the mobile robot to achieve the visual servo trajectory tracking task. Lyapunov techniques and LaSalle’s invariance principle are used to prove that the system errors and the depth estimation error converge to zero synchronously.
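The core idea of the concurrent-learning update described above is that the parameter estimate is driven not only by the current error signal but also by a stored stack of past regressor/measurement pairs, which relaxes the usual persistent-excitation requirement. The following is a minimal illustrative sketch of such an estimator for a scalar unknown (e.g. the inverse feature depth); all names, gains and the simple prediction-error structure are assumptions for illustration, not the paper's exact update law.

```python
class ConcurrentLearningEstimator:
    """Illustrative sketch of a concurrent-learning adaptive estimator
    for a scalar unknown parameter (e.g. inverse feature depth).
    Gains and structure are hypothetical, not the paper's exact law."""

    def __init__(self, gamma=1.0, stack_size=20):
        self.gamma = gamma        # adaptation gain (illustrative value)
        self.theta_hat = 0.0      # current parameter estimate
        self.stack_size = stack_size
        self.stack = []           # recorded (regressor, measurement) pairs

    def record(self, phi, y):
        # Store informative past data; a full implementation would only
        # keep pairs that enrich the stack (e.g. by a rank/singular-value
        # test) so the history term remains sufficiently exciting.
        if len(self.stack) < self.stack_size:
            self.stack.append((phi, y))

    def update(self, phi, e, dt):
        # Gradient term from the current error e, plus the concurrent-
        # learning term summing prediction errors over recorded data:
        # the history term keeps the estimate converging even when the
        # instantaneous signal alone is not persistently exciting.
        hist = sum(p * (y - p * self.theta_hat) for p, y in self.stack)
        theta_dot = self.gamma * (phi * e + hist)
        self.theta_hat += theta_dot * dt   # simple Euler integration
        return self.theta_hat
```

A short usage pattern: record pairs while the robot moves, then call `update` at each control step with the current regressor and error signal.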
The concurrent-learning-based visual servo tracking and identification technology is shown to be reliable, accurate and efficient by both simulation and comparative experimental results: the trajectory tracking and depth estimation errors both converge to zero.
On the basis of the concurrent learning framework, an adaptive control strategy is developed that enables the mobile robot to identify the unknown scene depth while accomplishing the visual servo trajectory tracking task.
Qiu, Y., Li, B., Shi, W. and Chen, Y. (2019), "Concurrent-learning-based visual servo tracking and scene identification of mobile robots", Assembly Automation, Vol. 39 No. 3, pp. 460-468. https://doi.org/10.1108/AA-02-2018-024
Copyright © 2018, Emerald Publishing Limited