Boosting visual servoing performance through RGB-based methods

Haolin Fei (School of Engineering, Lancaster University, Lancaster, UK)
Ziwei Wang (School of Engineering, Lancaster University, Lancaster, UK)
Stefano Tedeschi (The Welding Institute Ltd., Cambridge, UK)
Andrew Kennedy (School of Engineering, Lancaster University, Lancaster, UK)

Robotic Intelligence and Automation

ISSN: 2754-6969

Article publication date: 13 July 2023

Issue publication date: 21 August 2023

Purpose

This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.

Design/methodology/approach

The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments spanned several levels of complexity, with different numbers of distractors, varying lighting conditions and highly varied object geometry.

Findings

The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. It detected and located objects in complex scenes with high accuracy, even in the presence of distractors and under varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones.

Originality/value

This paper demonstrates the advantages of a hybrid algorithm that incorporates a deep neural network into a feature detector for image-based visual servoing, showing stronger robustness in object detection and localization against distractors and varying lighting conditions.



Fei, H., Wang, Z., Tedeschi, S. and Kennedy, A. (2023), "Boosting visual servoing performance through RGB-based methods", Robotic Intelligence and Automation, Vol. 43 No. 4, pp. 468-475.



Copyright © 2023, Emerald Publishing Limited
