This paper aims to present a new vision-based approach for both identifying a power pylon and estimating the relative distance between it and an unmanned aerial vehicle (UAV). Autonomous power line inspection using small UAVs has been the focus of much research over the past two decades. Automatic detection of power pylons is a primary requirement for such autonomous systems, yet it remains a challenging task due to the complex geometry and cluttered background of these structures.
The proposed identification solution avoids the complexity of classic object recognition techniques. Instead of searching the whole image for a pylon template, it combines low-level geometric priors with robust colour attributes to remove the pylon's background. The depth estimation, on the other hand, relies on a new concept that exploits the ego-motion of the inspection UAV to estimate its distance from the pylon using only a monocular camera.
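The abstract does not give the authors' formulas, but the core idea of depth from ego-motion with a single camera can be sketched as a motion-baseline triangulation: when the UAV translates by a known amount (here taken from the INS), two successive frames act like a stereo pair, and a feature's pixel disparity gives its depth. The function below is an illustrative sketch under that assumption, not the paper's implementation; all names are hypothetical.

```python
# Illustrative sketch (not the authors' method): monocular depth from ego-motion.
# If the UAV translates laterally by a known baseline b between two frames,
# a pylon feature that shifts by d pixels has depth Z = f * b / d, exactly as
# in a stereo rig whose baseline comes from the vehicle's own motion.

def depth_from_ego_motion(focal_px: float, baseline_m: float,
                          disparity_px: float) -> float:
    """Depth (m) of a feature seen from two views separated by a known translation."""
    if disparity_px <= 0:
        raise ValueError("feature must shift between the two views")
    return focal_px * baseline_m / disparity_px

# Example: 800 px focal length, 0.5 m of lateral motion, 20 px feature shift
# -> 800 * 0.5 / 20 = 20 m to the pylon.
depth = depth_from_ego_motion(800.0, 0.5, 20.0)
```

Note the trade-off this implies: a small baseline (slow motion) gives a small disparity and hence a noisy depth estimate, which is consistent with the paper's remark that longer distances need a more reliable ego-motion source.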
The algorithm is tested on a quadrotor UAV using different kinds of metallic power pylons. Both simulation and real-world experiments, conducted under different backgrounds and illumination conditions, show very promising results.
In the real tests carried out, the Inertial Navigation System (INS) of the vehicle was used to estimate its ego-motion. A more reliable solution should be considered for longer distances, by either fusing INS and global positioning system data or using visual navigation techniques such as visual odometry.
A simple yet efficient solution is proposed that allows the UAV to reliably identify the pylon at a low processing cost. The monocular design is a major advantage, given the limited payload and processing power of such small vehicles.
In vision-based applications, lighting conditions have a considerable impact on the quality of the acquired images. Extremely low- or high-illumination environments are a real issue for most cameras due to the limitations of their dynamic range. Indeed, over- or under-exposure can cause the loss of essential information through pixel saturation or noise, which can be critical in computer vision applications. High dynamic range (HDR) imaging technology is known to improve image rendering in such conditions. The purpose of this paper is to investigate the level of performance that can be achieved for feature detection and tracking in images acquired with an HDR image sensor.
In this study, four different feature detection techniques are selected, and the tracking algorithm is based on the pyramidal implementation of the Kanade-Lucas-Tomasi (KLT) feature tracker. The tracking algorithm is run over image sequences acquired with an HDR image sensor and with a high-resolution 5-megapixel image sensor to comparatively assess the two.
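The paper's own pipeline is not reproduced in the abstract, but the core of each pyramid level in a KLT tracker is a Lucas-Kanade least-squares step: the flow of a small window is the solution of the brightness-constancy equations built from the image gradients. The single-level NumPy sketch below is illustrative only (no pyramid, no iteration), with a synthetic Gaussian blob shifted by one pixel as input; all names are hypothetical.

```python
import numpy as np

def lk_step(I, J, x, y, w=7):
    """One Lucas-Kanade step: least-squares flow of the (2w+1)^2 window at (x, y)."""
    ys, xs = slice(y - w, y + w + 1), slice(x - w, x + w + 1)
    # Central-difference spatial gradients and temporal difference
    Ix = ((np.roll(I, -1, axis=1) - np.roll(I, 1, axis=1)) / 2.0)[ys, xs]
    Iy = ((np.roll(I, -1, axis=0) - np.roll(I, 1, axis=0)) / 2.0)[ys, xs]
    It = (J - I)[ys, xs]
    # Solve Ix*dx + Iy*dy = -It in the least-squares sense over the window
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    (dx, dy), *_ = np.linalg.lstsq(A, -It.ravel(), rcond=None)
    return dx, dy

# Synthetic check: a smooth blob shifted right by 1 px should yield flow ~(1, 0).
yy, xx = np.mgrid[0:64, 0:64]
I = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / (2 * 4.0 ** 2))
J = np.roll(I, 1, axis=1)  # pattern moved +1 px in x
dx, dy = lk_step(I, J, 32, 32)
```

The pyramidal variant used in the paper wraps this step in a coarse-to-fine loop so that large displacements appear small at the top of the pyramid; in practice this is what OpenCV's `calcOpticalFlowPyrLK` implements.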
The authors demonstrate that tracking performance is greatly improved on image sequences acquired with the HDR sensor: the number and percentage of features still tracked at the end of a sequence are several times higher than what can be achieved with the 5-megapixel image sensor.
The specific interest of this work is the evaluation of the tracking persistence of a set of initially detected features over image sequences taken in different scenes, including indoor and outdoor environments with extreme illumination: direct sunlight exposure, backlighting, as well as dim-light and dark scenarios.
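The abstract does not define the persistence metric precisely; a natural reading is the fraction of initially detected features whose track survives every frame of the sequence. The helper below is a hypothetical sketch of that metric, not the authors' evaluation code.

```python
# Hypothetical persistence metric (assumed reading of the abstract, not the
# authors' code): the fraction of initially detected features whose KLT track
# survives every frame of the sequence.

def persistence_rate(status_per_frame):
    """status_per_frame: one boolean list per frame, aligned by feature index;
    True means the tracker still reports the feature in that frame."""
    n = len(status_per_frame[0])
    alive = [True] * n
    for status in status_per_frame:
        alive = [a and s for a, s in zip(alive, status)]
    return sum(alive) / n

# 4 features over 2 frames: feature 2 is lost in the first frame,
# feature 3 in the second -> 2 of 4 survive the whole sequence.
rate = persistence_rate([
    [True, True, False, True],
    [True, True, False, False],
])
```

Under this definition, the HDR sensor's advantage reported above corresponds to a several-times-higher persistence rate in the extreme-illumination scenes.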