Search results

1 – 10 of 150
Article
Publication date: 23 August 2011

Cailing Wang, Chunxia Zhao and Jingyu Yang

Positioning is a key task in most field robotics applications but can be very challenging in GPS‐denied or high‐slip environments. The purpose of this paper is to describe a…

Abstract

Purpose

Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. The purpose of this paper is to describe a visual odometry strategy using only one camera on country roads.

Design/methodology/approach

This monocular odometry system uses as input only the images provided by a single camera mounted on the roof of the vehicle, and the framework is composed of three main parts: image motion estimation, ego-motion computation and visual odometry. The image motion is estimated from a hyper-complex wavelet phase-derived optical flow field. The ego-motion of the vehicle is computed by a blocked RANdom SAmple Consensus (RANSAC) algorithm and a maximum likelihood estimator based on a 4-degrees-of-freedom motion model. These instantaneous ego-motion measurements are used to update the vehicle trajectory according to a dead-reckoning model and an unscented Kalman filter.
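
As a minimal sketch of the dead-reckoning step described above, assuming a planar unicycle-style model and hypothetical per-frame ego-motion measurements (forward displacement and yaw change); the paper's actual 4-degrees-of-freedom model and unscented Kalman filter are not reproduced here:

    import numpy as np

    def dead_reckon(pose, d_forward, d_yaw):
        """Propagate a planar pose (x, y, theta) with one ego-motion measurement."""
        x, y, theta = pose
        theta += d_yaw  # apply the heading change, then move along the new heading
        return np.array([x + d_forward * np.cos(theta),
                         y + d_forward * np.sin(theta),
                         theta])

    # Hypothetical per-frame (forward displacement, yaw change) measurements
    # from the visual odometry front end.
    measurements = [(0.52, 0.01), (0.50, -0.02), (0.51, 0.00)]
    pose = np.zeros(3)
    trajectory = [pose]
    for d_fwd, d_yaw in measurements:
        pose = dead_reckon(pose, d_fwd, d_yaw)
        trajectory.append(pose)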

Findings

The authors' proposed framework and algorithms are validated on videos from a real automotive platform. Furthermore, the recovered trajectory is superimposed onto a digital map, and the localization results from this method are compared to the ground truth measured with a GPS/INS joint system. These experimental results indicate that the framework and the algorithms are effective.

Originality/value

This paper introduces an effective framework and algorithms for visual odometry using only one camera on country roads.

Details

Industrial Robot: An International Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 December 2022

Jiaxiang Hu, Xiaojun Shi, Chunyun Ma, Xin Yao and Yingxin Wang

The purpose of this paper is to propose a multi-feature, multi-metric and multi-loop tightly coupled LiDAR-visual-inertial odometry, M3LVI, for high-accuracy and robust state…

Abstract

Purpose

The purpose of this paper is to propose a multi-feature, multi-metric and multi-loop tightly coupled LiDAR-visual-inertial odometry, M3LVI, for high-accuracy and robust state estimation and mapping.

Design/methodology/approach

M3LVI is built atop a factor graph and is composed of two subsystems: a LiDAR-inertial system (LIS) and a visual-inertial system (VIS). LIS performs multi-feature extraction on the point cloud, followed by multi-metric transformation estimation to realize LiDAR odometry. LiDAR-enhanced images and IMU pre-integration are used in VIS to realize visual odometry, providing a reliable initial guess for the LIS matching module. Location recognition is performed by a dual-loop module combining Bag of Words and LiDAR-Iris to correct accumulated drift. M3LVI also functions properly when one of the subsystems fails, which greatly increases robustness in degraded environments.
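
A minimal sketch of how a dual loop-detection gate might combine the two cues named above; the scoring interfaces and thresholds here are illustrative assumptions, not the paper's values:

    def dual_loop_check(bow_score, iris_distance, bow_min=0.03, iris_max=0.35):
        """Accept a loop-closure candidate only when both cues agree.

        bow_score: Bag-of-Words appearance similarity (higher = more similar).
        iris_distance: LiDAR-Iris descriptor distance (lower = more similar).
        Thresholds are placeholders; tune them per data set.
        """
        return bow_score > bow_min and iris_distance < iris_max

Requiring both cues to agree trades some recall for far fewer false loop closures, which matters because a single wrong loop constraint can corrupt the whole factor graph.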

Findings

Quantitative experiments were conducted on the KITTI data set and a campus data set to evaluate M3LVI. The experimental results show that the algorithm achieves higher pose-estimation accuracy than existing methods.

Practical implications

The proposed method can greatly improve the positioning and mapping accuracy of automated guided vehicles (AGVs) and has an important impact on AGV material distribution, one of the most important applications of industrial robots.

Originality/value

M3LVI divides the original point cloud into six feature types, uses multi-metric transformation estimation to estimate the state of the robot and adopts a factor-graph optimization model to refine the state estimate, which improves the accuracy of pose estimation. When one subsystem fails, the other can complete the positioning work independently, which greatly increases robustness in degraded environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 14 June 2013

Christian Ivancsits and Min‐Fan Ricky Lee

This paper aims to address three major issues in the development of a vision‐based navigation system for small unmanned aerial vehicles (UAVs) which can be characterized as…

Abstract

Purpose

This paper aims to address three major issues in the development of a vision-based navigation system for small unmanned aerial vehicles (UAVs): technical constraints, robust image feature matching and an efficient, precise method for visual navigation.

Design/methodology/approach

The authors present and evaluate methods for their solution such as wireless networked control, highly distinctive feature descriptors (HDF) and a visual odometry system.

Findings

The proposed feature descriptors achieve significant improvements in computation time by detaching the explicit scale invariance of the widely used scale-invariant feature transform (SIFT). The feasibility of wireless networked real-time control for vision-based navigation is evaluated in terms of latency and data throughput. The visual odometry system uses a single camera to reconstruct the camera path and the structure of the environment, and achieved an error of 1.65 percent with respect to total path length on a circular trajectory of 9.43 m.
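
As a quick back-of-the-envelope check (not taken from the paper), the reported relative drift converts to an absolute end-point error as follows:

    path_length = 9.43            # m, circular trajectory from the paper
    drift_percent = 1.65          # percent of total path length, reported result
    end_error = path_length * drift_percent / 100.0
    print(round(end_error, 3))    # ~0.156 m end-point error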

Originality/value

The originality/value lies in the contribution of the presented work to the solution of visual odometry for small unmanned aerial vehicles.

Article
Publication date: 1 February 2021

Yufei Ma, Shuangxin Wang, Dingli Yu and Kaihua Zhu

This paper aims to enable the unmanned aerial vehicles to inspect the surface condition of wind turbine in close range when the global positioning system signal is not reliable…

Abstract

Purpose

This paper aims to enable unmanned aerial vehicles to inspect the surface condition of wind turbines at close range when the global positioning system signal is not reliable, and to further improve their intelligence. To this end, a visual-inertial odometry system with point and line features is developed.

Design/methodology/approach

A visual front end combining point and line features, together with purification strategies, is first presented to improve the robustness of feature tracking in low-textured scenes and the speed of the line-segment detector. Additionally, inertial measurements are integrated between keyframes as constraints to reduce the tracking error present in a visual-only system. Second, a graph-based visual-inertial back end is constructed. To parameterize line features effectively, an infinite-line representation that is insensitive to outdoor light is employed, in which the Plücker and Cayley representations are selected for line re-projection and nonlinear optimization, respectively. Furthermore, the Jacobians of the line re-projection errors are analytically derived for better accuracy.
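
For readers unfamiliar with the Plücker parameterization mentioned above, a minimal sketch of building Plücker coordinates from two 3D points and transforming them into another camera frame; this is the standard textbook construction, not the paper's code:

    import numpy as np

    def plucker_from_points(p, q):
        """Plücker coordinates (direction d, moment m) of the line through p and q."""
        d = q - p
        m = np.cross(p, d)        # moment of the line about the origin
        return d, m

    def plucker_transform(d, m, R, t):
        """Map a Plücker line through the rigid transform x -> R x + t."""
        d_new = R @ d
        m_new = R @ m + np.cross(t, d_new)
        return d_new, m_new

The Cayley (or orthonormal) representation is then typically used only inside the optimizer, because a Plücker line has six numbers but only four degrees of freedom.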

Findings

Experiments were performed in various scenes of the wind farm. The results demonstrate that the tightly coupled visual-inertial odometry with point and line features is more precise on all samples than conventional algorithms in complex wind farm environments. Additionally, the constructed line-feature map can be used in subsequent research on autonomous navigation.

Originality/value

The proposed visual-inertial odometry works robustly in wind farm environments with strong electromagnetic interference, low texture and changing illumination.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 April 2023

Sixing Liu, Yan Chai, Rui Yuan and Hong Miao

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments…

Abstract

Purpose

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, the lidar sensing range is small and yields few data features, the camera is vulnerable to external conditions, and localization and map building cannot be performed stably and accurately using a single sensor. This paper aims to propose a tightly coupled three-dimensional laser map-building method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach

The visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain their depth information, and the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment solved with a heuristic simulated annealing algorithm. Finally, a visual bag-of-words model is fused with the laser point cloud information to establish a threshold and construct a loop-closure framework that further reduces the cumulative drift error of the system over time.
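
One plausible reading of the bidirectional matching step, sketched with OpenCV below, is a cross-check in both matching directions followed by RANSAC outlier rejection on the fundamental matrix; the paper's exact formulation may differ:

    import cv2
    import numpy as np

    def bidirectional_ransac(kp1, des1, kp2, des2):
        """Keep matches that win in both directions, then reject outliers with RANSAC."""
        bf = cv2.BFMatcher(cv2.NORM_HAMMING)
        fwd = {m.queryIdx: m.trainIdx for m in bf.match(des1, des2)}
        bwd = {m.queryIdx: m.trainIdx for m in bf.match(des2, des1)}
        pairs = [(i, j) for i, j in fwd.items() if bwd.get(j) == i]
        pts1 = np.float32([kp1[i].pt for i, _ in pairs])
        pts2 = np.float32([kp2[j].pt for _, j in pairs])
        _, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
        if mask is None:          # too few consistent matches for RANSAC
            return []
        return [p for p, keep in zip(pairs, mask.ravel()) if keep]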

Findings

Experiments on publicly available data sets show that the trajectory estimated by the proposed method matches the real trajectory well. For various scenes, the map can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method was also verified in a real environment on an autonomous walking acquisition platform; the system loaded with the method can run well for a long time and adapts to multiple scene types.

Originality/value

A tightly coupled multi-sensor method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove visually mismatched point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of a single-sensor SLAM algorithm can be improved.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 14 May 2018

Chang Chen and Hua Zhu

This study aims to present a visual-inertial simultaneous localization and mapping (SLAM) method for accurate positioning and navigation of mobile robots in the event of global…

Abstract

Purpose

This study aims to present a visual-inertial simultaneous localization and mapping (SLAM) method for accurate positioning and navigation of mobile robots when the global positioning system (GPS) signal fails because of buildings, trees and other obstacles.

Design/methodology/approach

In this framework, a feature extraction method distributes features evenly across the image in texture-less scenes. The constant-luminosity assumption is relaxed, and features are tracked by optical flow to enhance the stability of the system. The camera data and inertial measurement unit (IMU) data are tightly coupled to estimate the pose by nonlinear optimization.
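
A minimal sketch, assuming OpenCV, of the kind of grid-based detection that spreads features evenly before optical-flow tracking; the paper's own detector and parameters are not specified in the abstract:

    import cv2
    import numpy as np

    def grid_features(gray, rows=4, cols=4, per_cell=10):
        """Detect corners per grid cell so features also cover weakly textured regions."""
        h, w = gray.shape
        pts = []
        for r in range(rows):
            for c in range(cols):
                y0, x0 = r * h // rows, c * w // cols
                cell = gray[y0:(r + 1) * h // rows, x0:(c + 1) * w // cols]
                corners = cv2.goodFeaturesToTrack(cell, per_cell, 0.01, 7)
                if corners is not None:
                    pts.append(corners + np.float32([[x0, y0]]))  # back to image coords
        return np.vstack(pts) if pts else np.empty((0, 1, 2), np.float32)

    # The features are then tracked frame to frame with pyramidal Lucas-Kanade flow:
    # next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)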

Findings

The method was successfully deployed on a mobile robot, where it steadily extracts and tracks features in low-texture environments. The end-to-end error is 1.375 m with respect to a total length of 762 m. The authors achieve better relative pose error, scale and CPU load than ORB-SLAM2 on the EuRoC data sets.

Originality/value

The main contribution of this study is the theoretical derivation and experimental application of a new visual-inertial SLAM method that has excellent accuracy and stability on weak texture scenes.

Details

Industrial Robot: An International Journal, vol. 45 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 16 October 2018

Shaoyan Xu, Tao Wang, Congyan Lang, Songhe Feng and Yi Jin

Typical feature-matching algorithms use only unary constraints on appearances to build correspondences where little structure information is used. Ignoring structure information…

Abstract

Purpose

Typical feature-matching algorithms use only unary constraints on appearance to build correspondences, so little structure information is used. Ignoring structure information makes them sensitive to various environmental perturbations. The purpose of this paper is to propose a novel graph-based method that aims to improve matching accuracy by fully exploiting structure information.

Design/methodology/approach

Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, with structure information integrated in the edges between vertices. The matching process of finding keypoint correspondences is then formulated as a graph-matching problem.
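
A minimal sketch of the graph construction idea, assuming k-nearest-neighbour edges as the structure cue; the paper's actual edge definition may differ:

    import numpy as np

    def build_keypoint_graph(points, k=5):
        """Vertices are keypoints; edges connect each keypoint to its k nearest
        neighbours, so the graph encodes local spatial structure."""
        pts = np.asarray(points, dtype=float)
        dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(dist, np.inf)     # no self-edges
        edges = set()
        for i, row in enumerate(dist):
            for j in np.argsort(row)[:k]:
                edges.add((min(i, int(j)), max(i, int(j))))
        return edges

Matching two frames then means finding a correspondence that preserves both vertex appearance and edge structure, rather than appearance alone.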

Findings

The authors compare the proposed ORB-G algorithm with several state-of-the-art visual simultaneous localization and mapping (SLAM) algorithms on three datasets. Experimental results reveal that ORB-G provides more accurate and robust trajectories in general.

Originality/value

Instead of viewing a frame as a simple collection of keypoints, the proposed approach organizes a frame as a graph by treating each keypoint as a vertex, with structure information integrated in the edges between vertices. The matching process of finding keypoint correspondences is then formulated as a graph-matching problem.

Details

Industrial Robot: An International Journal, vol. 45 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 May 2014

Edgar A. Martínez-García, Luz Abril Torres-Méndez and Mohan Rajesh Elara

The purpose of this paper is to establish analytical and numerical solutions of a navigational law to estimate displacements of hyper-static multi-legged mobile robots, which…

Abstract

Purpose

The purpose of this paper is to establish analytical and numerical solutions of a navigational law to estimate displacements of hyper-static multi-legged mobile robots, combining monocular vision (optical flow of regional invariants) and leg dynamics.

Design/methodology/approach

In this study, the authors propose an Euler-Lagrange formulation that controls the legs' joints to govern the robot's displacements. The robot's rotational and translational velocities are fed back through motion features of visual invariant descriptors. A general analytical solution of a derivative navigation law is proposed for hyper-static robots. The feedback is formulated with the local speed rate obtained from the optical flow of visual regional invariants. The proposed formulation includes a data association algorithm that correlates visual invariant descriptors detected in sequential images through monocular vision. The navigation law is constrained by a set of three kinematic equilibrium conditions for navigational scenarios: constant acceleration, constant velocity and instantaneous acceleration.
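
For reference, an Euler-Lagrange formulation of this kind takes the standard form (conventional symbols, not the paper's notation):

    \frac{d}{dt}\left(\frac{\partial \mathcal{L}}{\partial \dot{q}_i}\right) - \frac{\partial \mathcal{L}}{\partial q_i} = \tau_i, \qquad \mathcal{L} = T - V,

where $q_i$ are the leg-joint coordinates, $\tau_i$ the joint torques, and $T$ and $V$ the kinetic and potential energies of the legged platform.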

Findings

The proposed data association method handles local motions of multiple invariants (enhanced MSER) by minimizing the norm of multidimensional optical-flow feature vectors. Kinematic measurements are used as observable arguments in the general dynamic control equation, while the leg-joint dynamics model is used to formulate the controllable arguments.

Originality/value

The given analysis does not fuse sensor data of any other kind; it relies on monocular passive vision alone. The approach automatically detects environmental invariant descriptors with an enhanced version of the MSER method. Only optical-flow vectors and the robot's multi-leg dynamics are used to formulate descriptive rotational and translational motions for self-positioning.

Details

International Journal of Intelligent Unmanned Systems, vol. 2 no. 2
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 3 December 2018

Babing Ji and Qixin Cao

This paper aims to propose a new solution for real-time 3D perception with monocular camera. Most of the industrial robots’ solutions use active sensors to acquire 3D structure…

Abstract

Purpose

This paper aims to propose a new solution for real-time 3D perception with a monocular camera. Most industrial robot solutions use active sensors to acquire 3D structure information, which limits their application to indoor scenarios. Using only a monocular camera, some state-of-the-art methods provide up-to-scale 3D structure information, but the scale of the corresponding objects remains uncertain.

Design/methodology/approach

First, high-accuracy, scale-informed camera poses and a sparse 3D map are provided by leveraging ORB-SLAM and a marker. Second, for each frame captured by the camera, a specially designed depth estimation pipeline computes the corresponding 3D structure, called a depth map, in real time. Finally, the depth map is integrated into a volumetric scene model. A feedback module has been designed for users to visualize the intermediate scene surface in real time.
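
A minimal sketch of the per-voxel update used in typical volumetric integration with a truncated signed distance function (TSDF); the paper's exact fusion scheme is not detailed in the abstract, so this is the standard weighted running average:

    def tsdf_update(tsdf_old, w_old, sdf, trunc=0.04):
        """Fuse one signed-distance observation into a voxel of a TSDF volume."""
        d = max(-trunc, min(trunc, sdf)) / trunc   # truncate and normalize to [-1, 1]
        w_new = w_old + 1.0
        tsdf_new = (tsdf_old * w_old + d) / w_new  # weighted running average
        return tsdf_new, w_new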

Findings

The system provides more robust tracking performance and compelling results. The implementation runs at nearly 25 Hz on a mainstream laptop using parallel computation techniques.

Originality/value

A new solution for 3D perception uses a monocular camera by leveraging the ORB-SLAM system. The results of the system are visually comparable to those of active-sensor systems such as ElasticFusion in small scenes. The system is also both efficient and easy to implement, and the algorithms and specific configurations involved are introduced in detail.

Details

Industrial Robot: An International Journal, vol. 45 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 19 June 2017

Qian Sun, Ming Diao, Yibing Li and Ya Zhang

The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems.

Abstract

Purpose

The purpose of this paper is to propose a binocular visual odometry algorithm based on the Random Sample Consensus (RANSAC) in visual navigation systems.

Design/methodology/approach

The authors propose a novel binocular visual odometry algorithm based on the features from accelerated segment test (FAST) extractor and an improved matching method based on RANSAC. First, features are detected using the FAST extractor. Second, the detected features are roughly matched using the distance ratio of the nearest neighbor to the second-nearest neighbor. Finally, wrongly matched feature pairs are removed using the RANSAC method to reduce the interference of erroneous matches.
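
A minimal sketch of this detect-match-filter pipeline with OpenCV; img1, img2 and the camera intrinsic matrix K are assumed inputs, pairing ORB descriptors with the FAST keypoints is an assumption (the paper does not name a descriptor), and the thresholds are illustrative:

    import cv2
    import numpy as np

    fast = cv2.FastFeatureDetector_create(threshold=25)
    orb = cv2.ORB_create()                     # descriptors for the FAST keypoints

    kp1 = fast.detect(img1, None)
    kp2 = fast.detect(img2, None)
    kp1, des1 = orb.compute(img1, kp1)
    kp2, des2 = orb.compute(img2, kp2)

    # Rough matching by the nearest/second-nearest distance ratio.
    bf = cv2.BFMatcher(cv2.NORM_HAMMING)
    good = [m for m, n in bf.knnMatch(des1, des2, k=2)
            if m.distance < 0.75 * n.distance]

    # RANSAC removes the remaining wrong matches while estimating geometry.
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    E, inlier_mask = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)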

Findings

The performance of this new algorithm has been examined on real experimental data. The results show that utilizing this novel binocular visual odometry algorithm not only enhances the robustness of feature detection and matching but also significantly reduces the positioning error. The feasibility and effectiveness of the proposed matching method and the improved binocular visual odometry algorithm are also verified.

Practical implications

This paper presents an improved binocular visual odometry algorithm that has been tested on real data. The algorithm can be used for outdoor vehicle navigation.

Originality/value

A binocular visual odometry algorithm based on the FAST extractor and RANSAC is proposed to improve positioning accuracy and robustness. Experimental results have verified the effectiveness of the proposed visual odometry algorithm.

Details

Industrial Robot: An International Journal, vol. 44 no. 4
Type: Research Article
ISSN: 0143-991X
