PP-GraspNet: 6-DoF grasp generation in clutter using a new grasp representation method
ISSN: 0143-991X
Article publication date: 2 January 2023
Issue publication date: 13 April 2023
Abstract
Purpose
Robotic grasping in densely cluttered scenes from a single view remains an unsolved problem, and grasping success rates are still low. This study aims to propose an end-to-end grasp generation method to address this problem.
Design/methodology/approach
A new grasp representation method is proposed that uses the normal vector of the table surface to derive the grasp baseline vectors and maps each grasp to a pointed point (PP), so that no orthogonality constraints between vectors are needed when a neural network predicts the rotation matrices of grasps.
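The idea of recovering a full rotation from the tabletop normal plus a single angle can be sketched as follows. This is an illustrative reconstruction, not the paper's exact formulation: the function name, the choice of seed vector and the column ordering of the matrix are assumptions, but it shows why predicting a point and one angle avoids enforcing orthogonality between predicted vectors.

```python
import numpy as np

def rotation_from_normal_and_angle(normal, theta):
    """Illustrative sketch (assumed, not the paper's exact method):
    build an orthonormal grasp rotation matrix from the tabletop
    normal and one in-plane angle theta, so a network need only
    predict a point and an angle instead of a full 3x3 matrix
    subject to orthogonality constraints."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)                 # axis aligned with the table normal
    # Pick any seed vector not parallel to n to anchor the in-plane frame.
    seed = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(seed, n)) > 0.9:
        seed = np.array([0.0, 1.0, 0.0])
    u = seed - np.dot(seed, n) * n         # project the seed onto the table plane
    u /= np.linalg.norm(u)
    v = np.cross(n, u)                     # completes an orthonormal basis of the plane
    # Rotate the in-plane baseline vector by theta around the normal.
    baseline = np.cos(theta) * u + np.sin(theta) * v
    closing = np.cross(n, baseline)
    # Columns are mutually orthogonal unit vectors by construction,
    # so the result is a valid rotation matrix without extra constraints.
    return np.stack([baseline, closing, n], axis=1)
```

Because the columns are built by projection and cross products, the output is orthonormal with determinant +1 for any predicted angle, which is exactly the property a network would otherwise have to learn to satisfy.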
Findings
Experimental results show that the proposed method benefits the training of the neural network, and that a model trained on a synthetic data set also achieves high grasping success and completion rates in real-world tasks.
Originality/value
The main contribution of this paper is a new grasp representation method that maps 6-DoF grasps to a PP and an angle relative to the tabletop normal vector, thereby eliminating the need for orthogonality constraints between vectors when directly predicting grasps with neural networks. The proposed method can generate hundreds of grasps covering the whole surface in about 0.3 s. The experimental results show that the proposed method clearly outperforms other methods.
Citation
Li, E., Feng, H. and Fu, Y. (2023), "PP-GraspNet: 6-DoF grasp generation in clutter using a new grasp representation method", Industrial Robot, Vol. 50 No. 3, pp. 496-504. https://doi.org/10.1108/IR-08-2022-0196
Publisher
Emerald Publishing Limited
Copyright © 2022, Emerald Publishing Limited