Search results

1 – 10 of over 2000
Article
Publication date: 27 April 2020

Yongxiang Wu, Yili Fu and Shuguo Wang

Abstract

Purpose

This paper aims to design a deep neural network for object instance segmentation and six-dimensional (6D) pose estimation in cluttered scenes, and to apply the proposed method to real-world autonomous robotic grasping of household objects.

Design/methodology/approach

A novel deep learning method is proposed for instance segmentation and 6D pose estimation in cluttered scenes. An iterative pose refinement network is integrated with the main network to obtain more robust final pose estimates for robotic applications. To train the network, a technique is presented that quickly generates abundant annotated synthetic data, consisting of RGB-D images and object masks, without any hand-labeling. For robotic grasping, offline grasp planning based on an eigengrasp planner is performed and combined with online object pose estimation.
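
To make the refinement idea concrete, here is a minimal sketch (not the authors' network) of how an iterative pose refinement loop composes predicted corrections onto the current estimate; `predict_delta` is a hypothetical stand-in for the learned refinement model:

```python
import numpy as np

def refine_pose(T_init, predict_delta, n_iters=4):
    """Iterative pose refinement: a model predicts a small SE(3) correction
    from the rendered-vs-observed discrepancy; composing it repeatedly
    tightens the estimate. `predict_delta` is a hypothetical placeholder."""
    T = T_init.copy()
    for _ in range(n_iters):
        T_delta = predict_delta(T)   # 4x4 correction predicted by the network
        T = T_delta @ T              # left-compose onto the current estimate
    return T
```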

Findings

The experiments on standard pose benchmarking data sets showed that the method achieves better pose estimation accuracy and time efficiency than state-of-the-art methods with depth-based ICP refinement. The proposed method was also evaluated on a seven-DOF Kinova Jaco robot with an Intel RealSense RGB-D camera; the grasping results illustrated that the method is accurate and robust enough for real-world robotic applications.

Originality/value

A novel 6D pose estimation network based on the instance segmentation framework is proposed, and a neural network-based iterative pose refinement module is integrated into the method. The proposed method exhibits satisfactory pose estimation accuracy and time efficiency for robotic grasping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 May 2022

Zhonglai Tian, Hongtai Cheng, Zhenjun Du, Zongbei Jiang and Yeping Wang

Abstract

Purpose

The purpose of this paper is to estimate the contact-consistent object poses during contact-rich manipulation tasks based only on visual sensors.

Design/methodology/approach

The method follows a four-step procedure. First, the raw object poses are retrieved using an available object pose estimation method and filtered using a Kalman filter with a nominal model. Second, a group of particles is randomly generated for each pose, and the corresponding object contact state is evaluated for each particle using contact simulation software; a probability-guided particle averaging method is proposed to balance accuracy and safety. Third, the independently estimated contact states are fused in a hidden Markov model to remove abnormal contact state observations. Finally, the object poses are refined by averaging the contact-state-consistent particles.
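
As a toy illustration of the hidden Markov model step, a Viterbi decoder over two contact states can suppress isolated, abnormal observations; the transition and observation matrices below are assumed values for the sketch, not figures from the paper:

```python
import numpy as np

def viterbi(obs, A, B, pi):
    """Most likely hidden contact-state sequence given noisy per-frame
    observations. A: state transition matrix, B: observation (confusion)
    matrix, pi: initial state distribution."""
    n, T = A.shape[0], len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        trans = logd[:, None] + np.log(A)       # score of (previous, current)
        back[t] = trans.argmax(axis=0)
        logd = trans.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):               # backtrack the best path
        path.append(int(back[t][path[-1]]))
    return path[::-1]

A = np.array([[0.95, 0.05], [0.05, 0.95]])      # contact states are "sticky"
B = np.array([[0.8, 0.2], [0.2, 0.8]])          # observer mislabels 20% of frames
print(viterbi([0, 0, 1, 0, 0, 1, 1, 1], A, B, np.array([0.5, 0.5])))
# -> [0, 0, 0, 0, 0, 1, 1, 1]: the isolated flip at frame 2 is removed
```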

Findings

Experiments are performed to evaluate the effectiveness of the proposed method. The results show that the method achieves smooth and accurate pose estimation and that the estimated contact states are consistent with the ground truth.

Originality/value

This paper proposes a method to obtain contact-consistent poses and contact states of objects using only visual sensors. The method recovers the true contact state from inaccurate visual information by fusing contact simulation results with contact consistency assumptions. It can extract pose and contact information from object manipulation tasks simply by observing a demonstration, which provides a new way for robots to learn complex manipulation tasks.

Details

Assembly Automation, vol. 42 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 28 May 2021

Zhengtuo Wang, Yuetong Xu, Guanhua Xu, Jianzhong Fu, Jiongyan Yu and Tianyi Gu

Abstract

Purpose

In this work, the authors aim to provide a set of convenient methods for generating training data and then develop a deep learning method based on point clouds to estimate the pose of a target for robot grasping.

Design/methodology/approach

This work presents PointSimGrasp, a deep learning method on point clouds for robot grasping. In PointSimGrasp, a point cloud emulator is introduced to generate training data, and a deep learning-based pose estimation algorithm is designed. After being trained with the emulated data set, the pose estimation algorithm can estimate the pose of the target.
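
The emulator concept can be sketched in a few lines, assuming a known model point cloud, a random rigid pose and Gaussian sensor noise (an illustration only, not the authors' emulator):

```python
import numpy as np

def random_rotation():
    """Random 3x3 rotation from the QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(np.random.randn(3, 3))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0          # fix a possible reflection
    return q

def emulate_sample(model_pts, noise=0.001):
    """One synthetic training pair: the model cloud under a random pose plus
    sensor-like noise; the ground-truth pose label comes for free."""
    R, t = random_rotation(), np.random.uniform(-0.1, 0.1, 3)
    scene = model_pts @ R.T + t + np.random.normal(0.0, noise, model_pts.shape)
    return scene, (R, t)
```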

Findings

For the experiments, an experimental platform is built that contains a six-axis industrial robot, a binocular structured-light sensor and a base platform with adjustable inclination. A data set containing three subsets is collected on this platform. After being trained with the emulated data set, PointSimGrasp is tested on the experimental data set, achieving an average translation error of about 2–3 mm and an average rotation error of about 2–5 degrees.

Originality/value

The contributions are as follows: first, a deep learning method on point clouds is proposed to estimate the 6D pose of a target; second, a convenient training method for the pose estimation algorithm is presented, with a point cloud emulator introduced to generate training data; finally, an experimental platform is built, on which PointSimGrasp is tested.

Details

Assembly Automation, vol. 41 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 17 May 2022

Lin Li, Xi Chen and Tie Zhang

Abstract

Purpose

Many metal workpieces are weakly textured, symmetric and reflective, which presents a challenge to existing pose estimation methods. The purpose of this paper is to propose a pose estimation method for grasping metal workpieces with industrial robots.

Design/methodology/approach

A dual-hypothesis robust point matching registration network (RPM-Net) is proposed to estimate pose from point clouds. The method uses the Point Cloud Library (PCL) to segment the workpiece point cloud from the scene and a well-trained robust point matching registration network to estimate the pose through dual-hypothesis point cloud registration.
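
The dual-hypothesis idea can be sketched generically: run the same registration solver from two initial hypotheses (useful for near-symmetric workpieces) and keep the better fit. Here `register` is a placeholder for an RPM-Net-style solver, not its actual interface:

```python
def dual_hypothesis_register(register, src, dst, T_hyp_a, T_hyp_b):
    """Register `src` to `dst` from two pose hypotheses and keep the result
    with the smaller residual. `register` returns (T_estimate, residual)."""
    T_a, res_a = register(src, dst, T_hyp_a)
    T_b, res_b = register(src, dst, T_hyp_b)
    return T_a if res_a <= res_b else T_b
```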

Findings

In the experiment section, an experimental platform is built that contains a six-axis industrial robot and a binocular structured-light sensor. A data set containing three subsets is collected on the platform. After training with the emulation data set, the dual-hypothesis RPM-Net is tested on the experimental data set, and the success rates on the three real data sets are 94.0%, 92.0% and 96.0%, respectively.

Originality/value

The contributions are as follows: first, a dual-hypothesis RPM-Net is proposed that can estimate the pose of discrete, weakly textured metal workpieces from point clouds; second, a method for building training data sets using only CAD models and the visualization algorithm of the PCL is proposed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 31 May 2023

Ziqi Chai, Chao Liu and Zhenhua Xiong

Abstract

Purpose

Template matching is one of the most suitable choices for full six-degrees-of-freedom pose estimation in many practical industrial applications. However, the growing number of templates needed to cover a wide range of viewpoint changes results in a long runtime, which may not meet real-time requirements. This paper aims to improve matching efficiency while maintaining sampling resolution and matching accuracy.

Design/methodology/approach

A multi-pyramid-based hierarchical template matching strategy is proposed. Three pyramids are established at the sphere subdivision, radius and in-plane rotation levels during the offline template rendering stage. Then, hierarchical template matching is performed from the highest to the lowest level in each pyramid, narrowing the global search space while expanding the local search space. The initial search parameters at the top level can be determined by preprocessing with the YOLOv3 object detection network to further improve real-time performance.
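
A one-dimensional analogue shows why the hierarchy saves time: instead of scoring every template densely, each pyramid level scores a coarse grid and then zooms into the best match. In the sketch below, `score` is a hypothetical template-similarity function and the search runs over in-plane rotation only:

```python
import numpy as np

def coarse_to_fine(score, lo=0.0, hi=360.0, levels=3, n=12):
    """Coarse-to-fine 1D search: levels * n evaluations instead of a dense
    scan, mirroring the in-plane rotation pyramid described above."""
    for _ in range(levels):
        angles = np.linspace(lo, hi, n, endpoint=False)
        best = angles[np.argmax([score(a) for a in angles])]
        step = (hi - lo) / n
        lo, hi = best - step, best + step   # shrink the window around the best
    return best
```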

Findings

Experimental results show that this matching strategy takes only 100 ms with 100k templates, without loss of accuracy, which is promising for real industrial applications. The authors further validated the approach by applying it to a real robot grasping task.

Originality/value

The matching framework in this paper improves template matching efficiency by two orders of magnitude and is validated using a common template definition and viewpoint sampling method. In addition, it can easily be adapted to other template definitions and viewpoint sampling methods.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 September 2022

Kaimeng Wang and Te Tang

Abstract

Purpose

This paper aims to present a new approach to robot programming by demonstration, which generates robot programs by tracking the six-dimensional (6D) pose of the demonstrator's hand using a single red-green-blue (RGB) camera, without requiring any additional sensors.

Design/methodology/approach

The proposed method learns robot grasps and trajectories directly from a single human demonstration by tracking the movements of both the human hand and the objects. To recover the 6D pose of an object from a single RGB image, a deep learning-based method first detects keypoints of the object and then solves a perspective-n-point (PnP) problem. This method is then extended to estimate the 6D pose of the nonrigid hand by separating the fingers into multiple rigid bones linked by hand joints. An accurate robot grasp can be generated from the relative positions between the hand and the objects in two-dimensional space. Robot end-effector trajectories are generated from hand movements and then refined by the objects' start and end positions.
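
The keypoints-plus-PnP step maps naturally onto OpenCV. In the sketch below, the model keypoints, camera intrinsics and pose are synthetic placeholders, and `cv2.projectPoints` stands in for the keypoint detector's 2D predictions:

```python
import cv2
import numpy as np

# Hypothetical 3D keypoints on the object model (object frame, metres).
object_pts = np.array([[0, 0, 0], [0.1, 0, 0], [0, 0.1, 0],
                       [0, 0, 0.1], [0.1, 0.1, 0], [0.1, 0, 0.1]], np.float64)
K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], np.float64)

# Simulated 2D detections: project the keypoints with a known pose.
rvec_true = np.array([[0.2], [-0.1], [0.05]])
tvec_true = np.array([[0.05], [0.02], [0.6]])
image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

# Recover the pose by solving the perspective-n-point problem.
ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
R, _ = cv2.Rodrigues(rvec)   # (R, tvec): the object's 6D pose in the camera frame
```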

Findings

Experiments are conducted on a FANUC LR Mate 200iD robot to verify the proposed approach. The results show the feasibility of generating robot programs by observing a human demonstration once, using a single RGB camera.

Originality/value

The proposed approach provides an efficient and low-cost robot programming method with a single RGB camera. A new 6D hand pose estimation approach, which is used to generate robot grasps and trajectories, is developed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 25 March 2021

Bartłomiej Kulecki, Kamil Młodzikowski, Rafał Staszak and Dominik Belter

Abstract

Purpose

The purpose of this paper is to propose and evaluate a method for grasping a defined set of objects in an unstructured environment. To this end, the authors propose integrating convolutional neural network (CNN)-based object detection with a category-free grasping method. The considered scenario involves mobile manipulation platforms that move freely between workstations and manipulate defined objects. In this application, the robot is not positioned with respect to the table or the manipulated objects. The robot detects objects in the environment and uses grasping methods to determine the reference pose of the gripper.

Design/methodology/approach

The authors implemented the whole pipeline, which includes object detection, grasp planning and motion execution, on a real robot. The selected grasping method uses raw depth images to find the configuration of the gripper. The authors compared the proposed approach with a representative grasping method that uses a 3D point cloud as input to determine the grasp for a robotic arm equipped with a two-fingered gripper. To measure and compare the efficiency of these methods, the authors measured the success rate in various scenarios. Additionally, they evaluated the accuracy of the object detection and pose estimation modules.
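
The glue logic of such a detect-then-grasp pipeline can be sketched as follows; `detect`, `plan_grasp` and `execute` are injected placeholders for the CNN detector, the category-free grasp planner and the motion executor, not the authors' implementation:

```python
def pick_object(rgb, depth, target_label, detect, plan_grasp, execute):
    """Detect the target in RGB, plan a grasp on its depth crop, execute it."""
    detections = detect(rgb)                     # [(label, (x0, y0, x1, y1)), ...]
    boxes = [box for label, box in detections if label == target_label]
    if not boxes:
        return False                             # target not visible
    x0, y0, x1, y1 = boxes[0]
    grasp = plan_grasp(depth[y0:y1, x0:x1])      # raw depth patch in, gripper pose out
    return execute(grasp)                        # move the arm and close the gripper
```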

Findings

The performed experiments revealed that CNN-based object detection and category-free grasping methods can be integrated into a system that allows grasping defined objects in an unstructured environment. The authors also identified the specific limitations of the neural-based and point cloud-based methods and show how these properties influence the performance of the whole system.

Research limitations/implications

The authors identified the limitations of the proposed methods; improvements are envisioned as part of future research.

Practical implications

The evaluation of the grasping and object detection methods on the mobile manipulating robot may be useful for all researchers working on the autonomy of similar platforms in various applications.

Social implications

The proposed method increases the autonomy of robots in small-industry applications that involve repetitive tasks in noisy and potentially risky environments. This reduces the human workload in these types of environments.

Originality/value

The main contribution of this research is the integration of state-of-the-art object grasping methods with object detection methods and the evaluation of the whole system on an industrial robot. Moreover, the properties of each subsystem are identified and measured.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 24 September 2019

Kun Wei, Yong Dai and Bingyin Ren

Abstract

Purpose

This paper aims to propose an identification method based on monocular vision for cylindrical parts in cluttered scenes, addressing the issue that the iterative closest point (ICP) algorithm fails to reach the globally optimal solution when the deviation between the scene point cloud and the target CAD model is inherently large.

Design/methodology/approach

Images of the parts are captured at three locations by a camera mounted on the robotic end effector to reconstruct the initial scene point cloud. Color signature of histograms of orientations (C-SHOT) local feature descriptors are extracted from the model and scene point clouds. The random sample consensus (RANSAC) algorithm performs the first initial matching of the point sets. A second initial matching is then conducted by the proposed remote closest point (RCP) algorithm to bring the model close to the scene point cloud. Finally, Levenberg-Marquardt ICP (LM-ICP) completes the fine registration to obtain an accurate pose estimate.
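
For orientation, the fine-registration step can be approximated by a standard point-to-point ICP with an SVD (Kabsch) update, a generic stand-in for LM-ICP rather than the paper's exact solver:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, T_init=np.eye(4), n_iters=30):
    """Refine a 4x4 pose aligning model cloud `src` (N,3) to scene `dst` (M,3)."""
    T, tree = T_init.copy(), cKDTree(dst)
    for _ in range(n_iters):
        moved = src @ T[:3, :3].T + T[:3, 3]
        _, idx = tree.query(moved)               # nearest scene point per model point
        mu_s, mu_d = moved.mean(0), dst[idx].mean(0)
        H = (moved - mu_s).T @ (dst[idx] - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                       # optimal rotation (Kabsch)
        t = mu_d - R @ mu_s
        dT = np.eye(4); dT[:3, :3], dT[:3, 3] = R, t
        T = dT @ T                               # accumulate the correction
    return T
```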

Findings

Experimental results in a bolt-cluttered scene demonstrate that the pose estimation accuracy of the proposed method is higher than that of two other methods: the position error is less than 0.92 mm and the orientation error is less than 0.86°. The average recognition rate is 96.67 per cent, and the identification time for a single bolt does not exceed 3.5 s.

Practical implications

The presented approach can be applied to or integrated into automatic sorting production lines in factories.

Originality/value

The proposed method improves the efficiency and accuracy of the identification and classification of cylindrical parts using a robotic arm.

Article
Publication date: 18 January 2016

Jianhua Su, Zhi-Yong Liu, Hong Qiao and Chuankai Liu

Abstract

Purpose

Picking up pistons in arbitrary poses is an important step on car engine assembly lines. A vision system is usually used to estimate the pose of the pistons and then guide a stable grasp. However, a piston in some poses, e.g. with its mouth facing forward, can hardly be grasped directly by the gripper. The piston therefore needs to be reoriented into a desired pose, i.e. with its mouth facing upward, before grasping.

Design/methodology/approach

This paper presents a vision-based picking system that can grasp pistons in arbitrary poses. The picking process is divided into two stages. At the localization stage, a hierarchical approach is proposed to estimate the piston's pose from images that usually contain both heavy noise and edge distortions. At the grasping stage, multi-step robotic manipulations are designed to make the piston follow a nominal trajectory that minimizes the distance between the piston's center and the support plane. That is, under the designed input, the piston is pushed into the desired orientation.
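
The ellipse-fitting idea behind the localization stage can be illustrated with OpenCV on a synthetic image (the hierarchical multi-ellipse detector itself is not reproduced here):

```python
import cv2
import numpy as np

# Draw a synthetic "piston mouth" edge, then recover its ellipse parameters.
img = np.zeros((240, 320), np.uint8)
cv2.ellipse(img, (160, 120), (60, 35), 30, 0, 360, 255, 2)

contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
ellipses = [cv2.fitEllipse(c) for c in contours if len(c) >= 5]
(cx, cy), (major, minor), angle = ellipses[0]
# Centre, axis lengths and tilt of the mouth constrain the piston's rough pose.
```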

Findings

With the proposed method, a target piston in an arbitrary pose can be picked from the conveyor belt by the gripper.

Practical implications

The designed vision-based robotic bin-picking system offers an advantage in terms of flexibility in the automobile manufacturing industry.

Originality/value

The authors develop a methodology that uses a pneumatic gripper and 2D vision information to pick up multiple pistons in arbitrary poses. The rough poses of the parts are detected with a hierarchical approach for detecting multiple ellipses in environments that usually involve edge distortions. The pose uncertainties of the piston are eliminated by multi-step robotic manipulations.

Details

Industrial Robot: An International Journal, vol. 43 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 19 June 2009

Beata J. Grzyb, Eris Chinellato, Antonio Morales and Angel P. del Pobil

Abstract

Purpose

The purpose of this paper is to present a novel multimodal approach to the problem of planning and performing a reliable grasping action on unmodeled objects.

Design/methodology/approach

The robotic system is composed of three main components. The first is a conceptual manipulation framework based on grasping primitives. The second is a visual processing module that uses stereo images and biologically inspired algorithms to accurately estimate the pose, size and shape of an unmodeled target object. A grasp action is planned and executed by the third component, a reactive controller that uses tactile feedback to compensate for possible inaccuracies and thus complete the grasp even in difficult or unexpected conditions.
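
A toy sketch of a tactile-driven closing loop in the spirit of the reactive controller; every gripper call here is a hypothetical placeholder, not the authors' interface:

```python
def reactive_close(gripper, force_target=2.0, step=0.5, max_width=80.0):
    """Close the gripper in small steps until both fingers report firm
    contact, re-centring when the tactile readings are strongly uneven."""
    width = max_width
    while width > 0.0:
        width -= step
        gripper.set_width(width)                 # close a little further
        left, right = gripper.tactile_forces()   # per-finger contact forces
        if min(left, right) >= force_target:
            return True                          # stable two-finger contact
        if abs(left - right) > force_target:
            gripper.shift(0.2 if left > right else -0.2)  # re-centre the grasp
    return False                                 # fully closed without contact
```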

Findings

Theoretical analysis and experimental results show that the proposed approach to grasping, based on the concurrent use of complementary sensory modalities, is very promising and is suitable even for changing, dynamic environments.

Research limitations/implications

Additional setups with more complicated shapes are being investigated, and each module is being improved in both hardware and software.

Originality/value

This paper introduces a novel, robust, and flexible grasping system based on multimodal integration.

Details

Industrial Robot: An International Journal, vol. 36 no. 4
Type: Research Article
ISSN: 0143-991X
