Search results

1 – 10 of 24
Article
Publication date: 23 August 2011

Cailing Wang, Chunxia Zhao and Jingyu Yang

Abstract

Purpose

Positioning is a key task in most field robotics applications but can be very challenging in GPS‐denied or high‐slip environments. The purpose of this paper is to describe a visual odometry strategy using only one camera on country roads.

Design/methodology/approach

This monocular odometry system takes as input only the images provided by a single camera mounted on the roof of the vehicle. The framework is composed of three main parts: image motion estimation, ego‐motion computation and visual odometry. The image motion is estimated from a hyper‐complex wavelet phase‐derived optical flow field. The ego‐motion of the vehicle is computed by a blocked RANdom SAmple Consensus (RANSAC) algorithm and a maximum likelihood estimator based on a 4‐degrees‐of‐freedom motion model. These instantaneous ego‐motion measurements are then used to update the vehicle trajectory according to a dead‐reckoning model and an unscented Kalman filter.
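
The dead-reckoning step of such a pipeline can be sketched as follows (an illustrative Python sketch, not the paper's implementation; names are hypothetical, and the unscented Kalman filter that smooths these updates is omitted):

```python
import math

def dead_reckon(pose, ego_motions):
    """Integrate per-frame ego-motion increments (forward distance ds,
    yaw change dtheta) into a global 2-D trajectory.  Illustrative names;
    the paper additionally filters these updates with an unscented
    Kalman filter, which is omitted here."""
    x, y, theta = pose
    trajectory = [(x, y, theta)]
    for ds, dtheta in ego_motions:
        theta += dtheta
        x += ds * math.cos(theta)
        y += ds * math.sin(theta)
        trajectory.append((x, y, theta))
    return trajectory
```

Each `(ds, dtheta)` pair plays the role of one instantaneous ego-motion measurement; integration errors accumulate, which is why such a model is usually coupled with a filter.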

Findings

The authors' proposed framework and algorithms are validated on videos from a real automotive platform. Furthermore, the recovered trajectory is superimposed onto a digital map, and the localization results from this method are compared to the ground truth measured with a GPS/INS joint system. These experimental results indicate that the framework and the algorithms are effective.

Originality/value

This paper introduces an effective framework and algorithms for visual odometry using only one camera on country roads.

Details

Industrial Robot: An International Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 June 2023

Qamar Ul Islam, Haidi Ibrahim, Pan Kok Chin, Kevin Lim and Mohd Zaid Abdullah

Abstract

Purpose

Many popular simultaneous localization and mapping (SLAM) techniques have low accuracy, especially when localizing in environments containing dynamically moving objects, since the presence of such objects can cause inaccurate data associations. To address this issue, the proposed FADM-SLAM system aims to improve the accuracy of SLAM in environments containing dynamically moving objects. It uses a pipeline of feature-based approaches, accompanied by sparse optical flow and multi-view geometry as constraints, to achieve this goal.

Design/methodology/approach

FADM-SLAM, which works with monocular, stereo and RGB-D sensors, combines an instance segmentation network incorporating an intelligent motion detection strategy (iM) with an optical flow technique to improve localization accuracy. The proposed FADM-SLAM system comprises four principal modules: the optical flow mask and iM, ego-motion estimation, dynamic point detection and the feature-based extraction framework.

Findings

Experiment results using the publicly available RGBD-Bonn data set indicate that FADM-SLAM outperforms established visual SLAM systems in highly dynamic conditions.

Originality/value

In summary, the first module generates an indication of dynamic objects using the optical flow and iM with geometry-wise segmentation, which the second module then uses to compute an initial pose estimate. The third module first searches for the dynamic feature points in the environment and then eliminates them from further processing, using an algorithm based on epipolar constraints. In this way, only the static feature points are retained, and these are fed to the fourth module for extracting important features.
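
The epipolar-constraint test used to discard dynamic points can be sketched in Python (a minimal illustration of the standard geometric check, assuming a fundamental matrix F is already known; the function name and the pixel threshold are illustrative, not from the paper):

```python
import numpy as np

def reject_dynamic_points(pts1, pts2, F, threshold=1.0):
    """Epipolar test: for a static scene x2^T F x1 = 0, so matches whose
    second point lies far from its epipolar line likely belong to moving
    objects and are discarded.  pts1, pts2: (N, 2) pixel coordinates in
    two frames; F: 3x3 fundamental matrix; threshold in pixels."""
    ones = np.ones((pts1.shape[0], 1))
    x1 = np.hstack([pts1, ones])           # homogeneous coordinates
    x2 = np.hstack([pts2, ones])
    lines = x1 @ F.T                       # epipolar lines in image 2
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    static = (num / den) < threshold       # point-to-line distance test
    return pts1[static], pts2[static]
```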

Details

Robotic Intelligence and Automation, vol. 43 no. 3
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 24 September 2019

Erliang Yao, Hexin Zhang, Haitao Song and Guoliang Zhang

Abstract

Purpose

To realize stable and precise localization in the dynamic environments, the authors propose a fast and robust visual odometry (VO) approach with a low-cost Inertial Measurement Unit (IMU) in this study.

Design/methodology/approach

The proposed VO incorporates the direct method into the indirect method to track the features and to optimize the camera pose. It initializes the positions of tracked pixels with the IMU information, and the tracked pixels are then refined by minimizing the photometric errors. Owing to the small convergence radius of the indirect method, the dynamic pixels are rejected. Subsequently, the camera pose is optimized by minimizing the reprojection errors. Frames with little dynamic information are selected as keyframes. Finally, local bundle adjustment is performed to refine the poses of the keyframes and the positions of the 3-D points.
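
The reprojection-error term minimized in the pose-optimization step can be illustrated as follows (a sketch of the standard residual only; the paper's solver and pose parameterization are not reproduced, and all names are illustrative):

```python
import numpy as np

def reprojection_residuals(points_3d, observations, R, t, K):
    """Residuals of the pose-optimization step: project each 3-D point
    through the current camera pose (R, t) and intrinsics K, then compare
    with the observed pixel.  A solver such as Gauss-Newton would iterate
    on R and t to minimize the squared norm of these residuals."""
    cam = points_3d @ R.T + t              # world -> camera frame
    proj = cam @ K.T                       # pinhole projection
    pixels = proj[:, :2] / proj[:, 2:3]    # perspective division
    return pixels - observations
```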

Findings

The proposed VO approach is evaluated experimentally in dynamic environments with various motion types; the results suggest that it achieves more accurate and stable localization than the conventional approach. Moreover, the proposed VO approach works well in environments with motion blur.

Originality/value

The proposed approach fuses the indirect method and the direct method with the IMU information, which significantly improves localization in dynamic environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 January 2016

Huajun Liu, Cailing Wang and Jingyu Yang

Abstract

Purpose

This paper aims to present a novel scheme of multiple vanishing points (VPs) estimation and corresponding lanes identification.

Design/methodology/approach

The scheme proposed here includes two main stages: VP estimation and lane identification. VP estimation, based on a vanishing-direction hypothesis and Bayesian posterior probability estimation in the image Hough space, is the foremost contribution; the VPs are then estimated through an optimal objective function. In the lane-identification stage, the selected linear samples, supervised by the estimated VPs, are clustered based on the gradient direction of linear features to separate lanes, and finally all the lanes are identified through an identification function.
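
The geometric fact underlying any VP estimator, namely that scene-parallel lines intersect at a common image point, can be sketched as follows (a deliberately crude averaging scheme for illustration only; the paper's Bayesian Hough-space scoring is not reproduced):

```python
import numpy as np
from itertools import combinations

def estimate_vp(lines):
    """Average the pairwise intersections of a set of image lines, each
    given in homogeneous form (a, b, c) for a*x + b*y + c = 0.  Lines that
    are parallel in the scene meet at the vanishing point in the image."""
    pts = []
    for l1, l2 in combinations(lines, 2):
        p = np.cross(l1, l2)               # intersection in homogeneous coords
        if abs(p[2]) > 1e-9:               # skip (near-)parallel image lines
            pts.append(p[:2] / p[2])
    return np.mean(pts, axis=0)
```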

Findings

The scheme and algorithms are tested on real data sets collected from an intelligent vehicle. The method is more efficient and more accurate than recent similar methods for structured roads; in particular, multiple VPs of branch roads can be identified and estimated, and the lanes of branch roads identified, in complex scenarios under the Bayesian posterior probability verification framework. Experimental results demonstrate that the estimated VPs and lanes are practical for challenging structured and semi-structured complex road scenarios.

Originality/value

A Bayesian posterior probability verification framework is proposed to estimate multiple VPs and corresponding lanes for road scene understanding of structured or semi-structured road monocular images on intelligent vehicles.

Details

Industrial Robot: An International Journal, vol. 43 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He

Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper surveys the research state of LiDAR-based SLAM for robotic mapping, with a literature review organized from the perspective of various LiDAR types and configurations.

Findings

This paper presents a comprehensive literature review of LiDAR-based SLAM systems, organized around three distinct LiDAR forms and configurations. The authors conclude that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 March 2014

Giulio Reina, Mauro Bellone, Luigi Spedicato and Nicola Ivan Giannoccaro

Abstract

Purpose

This research aims to address the issue of safe navigation for autonomous vehicles in highly challenging outdoor environments. Indeed, robust navigation of autonomous mobile robots over long distances requires advanced perception means for terrain traversability assessment.

Design/methodology/approach

The use of visual systems may represent an efficient solution. This paper discusses recent findings in terrain traversability analysis from RGB-D images. In this context, the concept of point as described only by its Cartesian coordinates is reinterpreted in terms of local description. As a result, a novel descriptor for inferring the traversability of a terrain through its 3D representation, referred to as the unevenness point descriptor (UPD), is conceived. This descriptor features robustness and simplicity.
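
One plausible reading of the UPD idea can be sketched as follows (local normals fitted by plain PCA; the unevenness measure shown is an assumption for illustration, not the paper's exact formulation):

```python
import numpy as np

def fit_normal(neighbors):
    """Unit normal of a local patch: the singular vector of the centered
    neighborhood with the smallest singular value (plain PCA)."""
    centered = neighbors - neighbors.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    n = vt[-1]
    return n if n[2] >= 0 else -n          # orient normals upward

def unevenness_index(normals):
    """On flat terrain local normals align and their mean has length ~1;
    on rough terrain they scatter and the length drops.  Returns a value
    in [0, 1]; higher means rougher."""
    return 1.0 - np.linalg.norm(np.mean(normals, axis=0))
```

Because this uses only local neighborhoods, no digital elevation map or global preprocessing is needed, which matches the real-time claim made for the UPD.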

Findings

The UPD-based algorithm shows robust terrain perception capabilities, detecting obstacles and terrain irregularities. The system performance is validated in field experiments in both indoor and outdoor environments.

Research limitations/implications

The UPD enhances the interpretation of the 3D scene to improve the ambient awareness of unmanned vehicles. The larger implication of this method resides in its applicability for path-planning purposes.

Originality/value

This paper describes a visual algorithm for traversability assessment based on the analysis of normal vectors. The algorithm is simple and efficient, allowing fast real-time implementation, since the UPD requires neither additional data processing nor a previously generated digital elevation map to classify the scene. Moreover, it defines a local descriptor, which can be of general value for segmentation of 3D point clouds: it fully captures the underlying geometric pattern associated with each 3D point and correctly handles difficult scenarios.

Details

Sensor Review, vol. 34 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 September 2005

Mario Peña‐Cabrera, Ismael Lopez‐Juarez, Reyes Rios‐Cabrera and Jorge Corona‐Castuera

Abstract

Purpose

The paper presents a novel methodology for the online recognition and classification of pieces in robotic assembly tasks and its application in an intelligent manufacturing cell.

Design/methodology/approach

The performance of industrial robots working in unstructured environments can be improved using visual perception and learning techniques. Object recognition is accomplished using an artificial neural network (ANN) architecture which receives a descriptive vector called CFD&POSE as input. Experiments were conducted within a manufacturing cell using assembly parts.

Findings

This vector represents an innovative methodology for the classification and identification of pieces in robotic tasks, providing fast recognition and pose-estimation information in real time. The vector compresses 3D object data from assembly parts; it is invariant to scale, rotation and orientation, and it supports a wide range of illumination levels.

Research limitations/implications

The method provides vision guidance in assembly tasks. Current work addresses the use of ANNs for assembly and object recognition separately; future work is oriented toward using the same neural controller for all the different sensorial modes.

Practical implications

Intelligent manufacturing cells developed with multimodal sensor capabilities might use this methodology for future industrial applications, including robotic fixtureless assembly. The approach, in combination with the fast learning capability of ART networks, indicates its suitability for industrial robot applications, as demonstrated through the experimental results.

Originality/value

This paper introduces a novel method which uses collections of 2D images to obtain fast feature data – the "current frame descriptor vector" – for an object, using image projections and canonical-forms geometry grouping for invariant object recognition.

Details

Assembly Automation, vol. 25 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 11 March 2014

Suyong Yeon, ChangHyun Jun, Hyunga Choi, Jaehyeon Kang, Youngmok Yun and Nakju Lett Doh

Abstract

Purpose

The authors aim to propose a novel plane extraction algorithm for geometric 3D indoor mapping with range scan data.

Design/methodology/approach

The proposed method utilizes a divide-and-conquer step to efficiently handle huge amounts of point-cloud data, not as a whole group but as separate sub-groups with similar plane parameters. The method adopts robust principal component analysis to enhance estimation accuracy.
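
A plane fit on one such sub-group can be sketched with plain PCA (the paper uses robust PCA to resist outliers; this minimal, illustrative version uses ordinary SVD):

```python
import numpy as np

def extract_plane(points):
    """Fit the plane n . x = d to one sub-group of points: the normal is
    the direction of least variance (smallest singular vector).  Plain SVD
    is shown; a robust variant would down-weight outlying points."""
    centroid = points.mean(axis=0)
    _, s, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]
    d = normal @ centroid
    rms = s[-1] / np.sqrt(len(points))     # residual "thickness" of the fit
    return normal, d, rms
```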

Findings

Experimental results verify that the method not only shows enhanced performance in plane extraction but also broadens the domain of interest of plane registration to information-poor environments (such as simple indoor corridors), whereas the previous method works adequately only in information-rich environments (such as spaces with many features).

Originality/value

The proposed algorithm has three advantages over the current state-of-the-art method: it is fast, it utilizes more inlier sensor data that is not contaminated by severe sensor noise and it extracts more accurate plane parameters.

Details

Industrial Robot: An International Journal, vol. 41 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 May 2015

Oualid Araar, Nabil Aouf and Jose Luis Vallejo Dietz

Abstract

Purpose

This paper aims to present a new vision-based approach for both the identification of power pylons and the estimation of the relative distance between the unmanned aerial vehicle (UAV) and the pylon. Autonomous power line inspection using small UAVs has been the focus of many research works over the past couple of decades. Automatic detection of power pylons is a primary requirement for such autonomous systems, yet it remains a challenging task due to the complex geometry and cluttered background of these structures.

Design/methodology/approach

The proposed identification solution avoids the complexity of classic object recognition techniques. Instead of searching the whole image for a pylon template, low-level geometric priors are combined with robust colour attributes to remove the pylon background. The depth estimation, on the other hand, is based on a new concept which exploits the ego-motion of the inspection UAV to estimate its distance from the pylon using just a monocular camera.
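
The motion-stereo idea behind monocular depth estimation can be sketched as follows (a simplified model assuming a sideways camera translation known from the INS; the function and variable names are illustrative, not the paper's):

```python
def depth_from_motion(u1, u2, baseline, focal):
    """Motion stereo: two views from one moving camera separated by a known
    translation act like a stereo pair, so depth = focal * baseline / disparity.
    u1, u2: horizontal pixel coordinates of the same pylon feature in two
    frames; baseline: camera translation in metres (from the INS);
    focal: focal length in pixels."""
    disparity = abs(u1 - u2)
    if disparity == 0:
        raise ValueError("no parallax: cannot triangulate")
    return focal * baseline / disparity
```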

Findings

The algorithm is tested on a quadrotor UAV using different kinds of metallic power pylons. Both simulation and real-world experiments, conducted against different backgrounds and under different illumination conditions, show very promising results.

Research limitations/implications

In the real tests carried out, the Inertial Navigation System (INS) of the vehicle was used to estimate its ego-motion. For longer distances, a more reliable solution should be considered, either by fusing INS and global positioning system data or by using visual navigation techniques such as visual odometry.

Originality/value

A simple yet efficient solution is proposed that allows the UAV to reliably identify the pylon at a low processing cost. The monocular design is a major advantage, given the limited payload and processing power of such small vehicles.

Details

Industrial Robot: An International Journal, vol. 42 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 18 September 2023

Yong Qin and Haidong Yu

Abstract

Purpose

This paper aims to provide a better understanding of the challenges and potential solutions in Visual Simultaneous Localization and Mapping (SLAM), laying the foundation for its applications in autonomous navigation, intelligent driving and other related domains.

Design/methodology/approach

In analyzing the latest research, the review presents representative achievements, including methods to enhance efficiency, robustness and accuracy. It also provides insights into the future development of Visual SLAM, emphasizing the importance of improving system robustness in dynamic environments. The research methodology involves a literature review and data-set analysis, enabling a comprehensive understanding of the current status and prospects of the field.

Findings

This review aims to comprehensively evaluate the latest advances and challenges in the field of Visual SLAM. By collecting and analyzing relevant research papers and classic data sets, it reveals the current issues faced by Visual SLAM in complex environments and proposes potential solutions. The review begins by introducing the fundamental principles and application areas of Visual SLAM, followed by an in-depth discussion of the challenges encountered when dealing with dynamic objects and complex environments. To enhance the performance of SLAM algorithms, researchers have made progress by integrating different sensor modalities, improving feature extraction and incorporating deep learning techniques, driving advancements in the field.

Originality/value

To the best of the authors’ knowledge, the originality of this review lies in its in-depth analysis of current research hotspots and predictions for future development, providing valuable references for researchers in this field.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X
