Search results

1 – 10 of 261
Article
Publication date: 19 July 2019

Sana Bougharriou, Fayçal Hamdaoui and Abdellatif Mtibaa


Abstract

Purpose

This paper aims to study distance determination in vehicles, which could allow an in-car system to provide feedback and alert drivers, either by prompting the driver to take preventative action or by preparing the vehicle's safety systems for an imminent collision. Successful deployment of such a system would help reduce the large number of accidents and the material losses and costs associated with them.

Design/methodology/approach

In this context, this paper presents a method for estimating the distance between the camera and frontal vehicles based on camera calibration, combining three main steps: vanishing point extraction, lane detection and vehicle detection in the 3D real scene. The algorithm was implemented in MATLAB and applied to scenes containing several vehicles in highway and urban areas. The method starts with camera calibration; the distance information can then be calculated.
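The core distance computation from a calibrated camera and an extracted vanishing point can be sketched with the standard flat-road pinhole model. The focal length, camera height and vanishing-point row below are illustrative assumptions, not the paper's calibration values:

```python
# Sketch: ground-plane distance from a calibrated camera (pinhole model).
# Assumes a flat road; f, H and y_vp below are hypothetical values.

def distance_to_vehicle(y_bottom: float, y_vp: float, f: float, H: float) -> float:
    """Distance (m) to a point on a flat road imaged at row y_bottom.

    y_bottom : image row of the vehicle's contact point with the road (px)
    y_vp     : image row of the vanishing point (px)
    f        : focal length in pixels
    H        : camera height above the road (m)
    """
    if y_bottom <= y_vp:
        raise ValueError("contact point must lie below the vanishing point")
    return f * H / (y_bottom - y_vp)

# Example with assumed calibration: f = 800 px, camera 1.2 m above the road,
# vanishing point at image row 240.
print(distance_to_vehicle(y_bottom=300.0, y_vp=240.0, f=800.0, H=1.2))  # 16.0
```

The formula also shows why discarding the image above the vanishing point costs nothing: no road point projects there.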

Findings

Based on the experimental results, this new method achieves robustness, especially in detecting and estimating distances for multiple vehicles in a single scene. The method also demonstrates a detection accuracy rate of 0.869 with an execution time of 2.382 ms.

Originality/value

The novelty of the proposed method lies first in the use of adaptive segmentation to reject false points of interest. Second, the use of the vanishing point reduces memory cost: the part of the image above the vanishing point is not processed and is therefore discarded. The last benefit is the application of this new method to structured roads.

Details

Engineering Computations, vol. 36 no. 9
Type: Research Article
ISSN: 0264-4401


Article
Publication date: 2 January 2018

N. Aswini, E. Krishna Kumar and S.V. Uma


Abstract

Purpose

The purpose of this paper is to provide an overview of unmanned aerial vehicle (UAV) developments, types, major functional components, challenges and trends; among the various challenges, the authors concentrate on obstacle sensing methods. The paper also highlights the scope of on-board vision-based obstacle sensing for miniature UAVs.

Design/methodology/approach

The paper initially discusses the basic functional elements of UAVs, then considers the different challenges faced by UAV designers. The authors have narrowed the study down to obstacle detection and sensing methods for autonomous operation.

Findings

Among the various existing obstacle sensing techniques, on-board vision-based obstacle detection has the best scope for meeting the future requirements of miniature UAVs and making them completely autonomous.

Originality/value

The paper offers original review points based on a thorough literature survey of the various obstacle sensing techniques used for UAVs.

Details

International Journal of Intelligent Unmanned Systems, vol. 6 no. 1
Type: Research Article
ISSN: 2049-6427


Article
Publication date: 12 May 2020

Jing Bai, Yuchang Zhang, Xiansheng Qin, Zhanxi Wang and Chen Zheng


Abstract

Purpose

The purpose of this paper is to present a visual detection approach to predict the poses of target objects placed in arbitrary positions before completing the corresponding tasks in mobile robotic manufacturing systems.

Design/methodology/approach

A hybrid visual detection approach that combines monocular vision and laser ranging is proposed based on an eye-in-hand vision system. The laser displacement sensor is adopted to achieve normal alignment for an arbitrary plane and obtain depth information. The monocular camera measures the two-dimensional image information. In addition, a robot hand-eye relationship calibration method is presented in this paper.
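The normal-alignment step from the laser displacement sensors can be sketched as fitting a plane through the three laser spot positions. The sensor layout and depth readings below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Sketch: plane normal from three laser displacement readings. The three
# sensors yield three 3D points on the workpiece surface; the surface normal
# is the (normalized) cross product of two in-plane edge vectors.

def plane_normal(p1, p2, p3):
    """Unit normal of the plane through three 3D points (laser spot positions)."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    n = np.cross(p2 - p1, p3 - p1)
    return n / np.linalg.norm(n)

# Hypothetical readings: three laser spots on a plane slightly tilted
# about the x-axis (coordinates in metres).
n = plane_normal([0, 0, 1.0], [1, 0, 1.0], [0, 1, 1.1])
print(n)
```

Aligning the end-effector so its approach axis coincides with this normal is what the paper calls normal alignment for an arbitrary plane.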

Findings

First, a hybrid visual detection approach for mobile robotic manufacturing systems is proposed. This detection approach is based on an eye-in-hand vision system consisting of one monocular camera and three laser displacement sensors and it can achieve normal alignment for an arbitrary plane and spatial positioning of the workpiece. Second, based on this vision system, a robot hand-eye relationship calibration method is presented and it was successfully applied to a mobile robotic manufacturing system designed by the authors’ team. As a result, the relationship between the workpiece coordinate system and the end-effector coordinate system could be established accurately.

Practical implications

This approach can quickly and accurately establish the relationship between the coordinate system of the workpiece and that of the end-effector. The normal alignment accuracy of the hand-eye vision system was less than 0.5° and the spatial positioning accuracy could reach 0.5 mm.

Originality/value

This approach can achieve normal alignment for arbitrary planes and spatial positioning of the workpiece and it can quickly establish the pose relationship between the workpiece and end-effector coordinate systems. Moreover, the proposed approach can significantly improve the work efficiency, flexibility and intelligence of mobile robotic manufacturing systems.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X



Details

Traffic Safety and Human Behavior
Type: Book
ISBN: 978-0-08-045029-2


Details

Traffic Safety and Human Behavior
Type: Book
ISBN: 978-1-78635-222-4

Article
Publication date: 15 February 2022

Xiaojun Wu, Peng Li, Jinghui Zhou and Yunhui Liu


Abstract

Purpose

Scattered parts are laid randomly during the manufacturing process and are difficult to recognize and manipulate. This study aims to accomplish the grasping of scattered parts by a manipulator with a camera and a learning method.

Design/methodology/approach

In this paper, a cascaded convolutional neural network (CNN) method for robotic grasping based on monocular vision and a small data set of scattered parts is proposed. The method can be divided into three steps: object detection, monocular depth estimation and keypoint estimation. In the first stage, an object detection network is improved to effectively locate the candidate parts. The second stage comprises a neural network structure and a corresponding training method that learn and reason over high-resolution input images to obtain a depth estimate. The keypoint estimation in the third step is expressed as a cumulative form of multi-scale predictions from a network that uses a red-green-blue-depth (RGBD) map acquired from the object detection and depth estimation stages. Finally, a grasping strategy is studied to achieve successful and continuous grasping. In the experiments, different workpieces are used to validate the proposed method. The best grasping success rate is more than 80%.
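The data flow of the three-stage cascade can be sketched as function composition. The network internals are replaced by hypothetical stubs (all function bodies and numbers below are invented for illustration); only the chaining of detection, depth estimation and keypoint scoring into a grasp decision is shown:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Detection:
    box: Tuple[int, int, int, int]  # x, y, w, h of a candidate part

def detect_parts(rgb) -> List[Detection]:
    # Stage 1 stub: an object detector would propose candidate parts here.
    return [Detection((10, 20, 50, 40))]

def estimate_depth(rgb):
    # Stage 2 stub: a depth network would return a per-pixel depth map here.
    return [[0.5]]

def estimate_keypoints(rgbd, det: Detection):
    # Stage 3 stub: keypoints as (u, v, grasp-success score).
    return [(30, 35, 0.9), (15, 25, 0.6)]

def best_grasp(rgb, score_threshold: float = 0.8) -> Optional[Tuple[int, int, float]]:
    """Chain the three stages and pick the most promising grasp point."""
    depth = estimate_depth(rgb)
    candidates = []
    for det in detect_parts(rgb):
        rgbd = (rgb, depth, det.box)  # RGBD crop for this candidate part
        candidates += estimate_keypoints(rgbd, det)
    best = max(candidates, key=lambda k: k[2])
    return best if best[2] >= score_threshold else None

print(best_grasp(rgb=None))  # (30, 35, 0.9)
```

Gating the chosen keypoint on a score threshold mirrors the paper's idea of labeling keypoints with the possibility of successful grasping.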

Findings

By using the CNN-based method to extract the keypoints of the scattered parts and calculating the grasp possibility, the success rate is increased.

Practical implications

This method and the robotic system can be used for picking and placing in most industrial automatic manufacturing or assembly processes.

Originality/value

Unlike standard parts, scattered parts are laid randomly and are difficult for the robot to recognize and grasp. This study uses a cascaded CNN to extract the keypoints of the scattered parts, which are also labeled with the possibility of successful grasping. Experiments are conducted to demonstrate the grasping of these scattered parts.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 4
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 16 October 2018

Zhaohui Zheng, Yong Ma, Hong Zheng, Yu Gu and Mingyu Lin


Abstract

Purpose

The welding areas of the workpiece must be positioned consistently and with high precision to ensure welding success during the welding of automobile parts. The purpose of this paper is to design an automatic high-precision locating and grasping system for a robotic arm guided by 2D monocular vision, to meet the requirements of automatic operation and high-precision welding.

Design/methodology/approach

A nonlinear multi-parallel-surface calibration method based on an adaptive k-segment master curve algorithm is proposed, which improves both the efficiency of the traditional single-camera calibration algorithm and the accuracy of calibration. At the same time, a multi-dimensional target feature based on a k-means clustering constraint is proposed to improve the robustness and precision of registration.
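The clustering step underlying such a constraint can be sketched with a minimal k-means loop; this illustrates only the generic algorithm, not the paper's feature construction, and the sample points are invented:

```python
import numpy as np

# Minimal k-means sketch: alternate nearest-center assignment and
# center recomputation until the loop finishes.

def kmeans(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)
    return labels, centers

# Two well-separated groups of 2D feature points (illustrative data).
pts = [[0, 0], [0.1, 0], [5, 5], [5.1, 5]]
labels, centers = kmeans(pts, k=2)
print(labels)
```

In a registration setting, constraining candidate features to stay close to their cluster center is what rejects outlying matches.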

Findings

A method of automatic locating and grasping based on 2D monocular vision is provided for the robot arm, comprising a camera calibration method and a target locating method.

Practical implications

The system has been integrated into the welding robot of an automobile company in China.

Originality/value

A method of automatic locating and grasping based on 2D monocular vision is proposed, which gives the robot arm an automatic grasping function and improves the efficiency and precision of automatic grasping.

Details

Industrial Robot: An International Journal, vol. 45 no. 6
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 19 October 2018

Shuanggao Li, Zhengping Deng, Qi Zeng and Xiang Huang


Abstract

Purpose

The assembly of large components in the field is an important part of aircraft usage and maintenance. At present it is mostly accomplished manually, as the commonly used large-volume measurement systems are usually inapplicable. This paper aims to propose a novel coaxial alignment method for large aircraft component assembly using distributed monocular vision.

Design/methodology/approach

For each of the mating holes on the components, a monocular vision module is applied to measure the pose of the hole; together these modules form a distributed monocular vision system. A new unconstrained hole pose optimization model is developed that accounts for complicated wear on the hole edges, and it is solved by an iterative reweighted particle swarm optimization (IR-PSO) method. Based on the obtained hole poses, a Plücker line coordinates-based method is proposed for evaluating the relative posture between the components, and the analytical solution for the posture parameters is derived. The movements required for coaxial alignment are finally calculated using the kinematics model of the parallel mechanism.
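The role of Plücker line coordinates here can be sketched by representing each hole axis in the standard (direction, moment) convention and evaluating the relative posture of two axes. This is a generic illustration under that convention, not the paper's analytical solution, and the example offset is chosen to match the reported 0.04 mm figure only for illustration:

```python
import numpy as np

# Sketch: Plücker coordinates (d, m) of a hole axis through `point` with
# direction `direction`, with moment m = p x d about the origin.

def plucker(point, direction):
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    m = np.cross(np.asarray(point, dtype=float), d)
    return d, m

def axis_deviation(l1, l2):
    """Angle (rad) between two axes; perpendicular offset (m) if parallel."""
    d1, m1 = l1
    d2, m2 = l2
    angle = np.arccos(np.clip(abs(d1 @ d2), -1.0, 1.0))
    # For parallel axes, |d x (m2 - m1)| is the perpendicular distance.
    offset = np.linalg.norm(np.cross(d1, m2 - m1)) if angle < 1e-6 else None
    return angle, offset

a = plucker([0, 0, 0], [0, 0, 1])
b = plucker([0.04e-3, 0, 0], [0, 0, 1])  # axis shifted 0.04 mm (in metres)
angle, offset = axis_deviation(a, b)
print(angle, offset)
```

Coaxial alignment then reduces to driving both the angle and the offset toward zero.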

Findings

The IR-PSO method derived more accurate hole pose parameters than the state-of-the-art method under complicated hole-wear conditions, and it is much more efficient owing to the elimination of constraints. The accuracy of the Plücker line coordinates-based relative posture evaluation (PRPE) method is competitive with that of the singular value decomposition (SVD) method, but it does not rely on point-set correspondence; it is therefore more appropriate for coaxial alignment.

Practical implications

An automatic coaxial alignment system (ACAS) has been developed for the assembly of a large pilotless aircraft, and a coaxial error of 0.04 mm has been achieved.

Originality/value

The IR-PSO method can be applied to pose optimization of other cylindrical objects, and the analytical solution of the Plücker line coordinates-based axis registration is derived for the first time.

Details

Assembly Automation, vol. 38 no. 4
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 23 August 2011

Cailing Wang, Chunxia Zhao and Jingyu Yang


Abstract

Purpose

Positioning is a key task in most field robotics applications but can be very challenging in GPS-denied or high-slip environments. The purpose of this paper is to describe a visual odometry strategy using only one camera on country roads.

Design/methodology/approach

This monocular odometry system takes as input only the images provided by a single camera mounted on the roof of the vehicle, and the framework is composed of three main parts: image motion estimation, ego-motion computation and visual odometry. The image motion is estimated from a hyper-complex wavelet phase-derived optical flow field. The ego-motion of the vehicle is computed by a blocked RANdom SAmple Consensus (RANSAC) algorithm and a maximum likelihood estimator based on a 4-degrees-of-freedom motion model. These instantaneous ego-motion measurements are used to update the vehicle trajectory according to a dead-reckoning model and an unscented Kalman filter.
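The trajectory-update step can be sketched with a plain dead-reckoning motion model driven by instantaneous ego-motion measurements (forward speed, yaw rate). The unscented Kalman filter is omitted, and the speed, yaw rate and frame interval below are invented:

```python
import math

# Sketch: dead-reckoning pose update from per-frame ego-motion measurements.

def dead_reckon(pose, v, yaw_rate, dt):
    """pose = (x, y, heading); v in m/s, yaw_rate in rad/s, dt in s."""
    x, y, th = pose
    th_new = th + yaw_rate * dt          # integrate heading first
    x += v * dt * math.cos(th_new)       # then advance along the new heading
    y += v * dt * math.sin(th_new)
    return (x, y, th_new)

pose = (0.0, 0.0, 0.0)
for _ in range(10):                      # 10 frames driving straight at 10 m/s
    pose = dead_reckon(pose, v=10.0, yaw_rate=0.0, dt=0.1)
print(pose)
```

In the full system, each such prediction would be corrected by the filter using the vision-derived ego-motion as a measurement.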

Findings

The authors' proposed framework and algorithms are validated on videos from a real automotive platform. Furthermore, the recovered trajectory is superimposed onto a digital map, and the localization results from this method are compared to the ground truth measured with a GPS/INS joint system. These experimental results indicate that the framework and the algorithms are effective.

Originality/value

An effective framework and algorithms for visual odometry using only one camera on country roads are introduced in this paper.

Details

Industrial Robot: An International Journal, vol. 38 no. 5
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 26 April 2013

Dominik Belter and Piotr Skrzypczynski


Abstract

Purpose

The purpose of this paper is to describe a novel application of the recently introduced concept from computer vision to self‐localization of a walking robot in unstructured environments. The technique described in this paper enables a walking robot with a monocular vision system (single camera) to obtain precise estimates of its pose with regard to the six degrees of freedom. This capability is essential in search and rescue missions in collapsed buildings, polluted industrial plants, etc.

Design/methodology/approach

The Parallel Tracking and Mapping (PTAM) algorithm and an Inertial Measurement Unit (IMU) are used to determine the 6-d.o.f. pose of a walking robot. Bundle-adjustment-based tracking and structure reconstruction are applied to obtain precise camera poses from the monocular vision data. The inclination of the robot's platform is determined by using the IMU. The self-localization system is used together with an RRT-based motion planner, which allows the robot to walk autonomously on rough, previously unknown terrain. The presented system operates on-line on a real hexapod robot. The efficiency and precision of the proposed solution are demonstrated by experimental data.
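One simple way to combine the two sources, in the spirit of the fusion described above, is to take yaw from the camera-based tracker and roll/pitch (inclination) from the IMU and compose a single orientation. The ZYX Euler convention and the idea of splitting the angles this way are assumptions for illustration, not the paper's method:

```python
import math

# Sketch: compose camera-derived yaw with IMU-derived roll/pitch into one
# rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll) (ZYX Euler convention).

def fused_orientation(yaw_cam, roll_imu, pitch_imu):
    cy, sy = math.cos(yaw_cam), math.sin(yaw_cam)
    cp, sp = math.cos(pitch_imu), math.sin(pitch_imu)
    cr, sr = math.cos(roll_imu), math.sin(roll_imu)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp,     cp * sr,                cp * cr],
    ]

# A pure 90-degree yaw with level platform: x-axis maps onto y-axis.
R = fused_orientation(yaw_cam=math.pi / 2, roll_imu=0.0, pitch_imu=0.0)
print(R)
```

Keeping inclination from the IMU guards the pose estimate against drift in the vision-only roll/pitch channels.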

Findings

The PTAM-based self-localization system enables the robot to walk autonomously on rough terrain. The software operates on-line and can be implemented on the robot's on-board PC. Results of the experiments show that the position error is small enough to allow robust elevation mapping using the laser scanner. In spite of unavoidable foot slippage, the walking robot that uses PTAM for self-localization can precisely estimate its position and successfully recover from motion execution errors.

Research limitations/implications

So far the presented self-localization system has been tested only in limited-scale indoor experiments. Experiments with more realistic outdoor scenarios are scheduled as further work.

Practical implications

Precise self-localization may be one of the most important factors enabling the use of walking robots in practical USAR missions. The results of research on precise self-localization in 6-d.o.f. may also be useful for autonomous robots in other application areas: construction, agriculture and the military.

Originality/value

The vision‐based self‐localization algorithm used in the presented research is not new, but the contribution lies in its implementation/integration on a walking robot, and experimental evaluation in the demanding problem of precise self‐localization in rough terrain.

Details

Industrial Robot: An International Journal, vol. 40 no. 3
Type: Research Article
ISSN: 0143-991X

