Search results

1 – 2 of 2
Article
Publication date: 27 June 2023

Zhonglai Tian, Hongtai Cheng, Liangliang Zhao and Jingdong Zhao

Abstract

Purpose

The purpose of this paper is to design a multifingered dexterous hand grasping planning method that can efficiently perform grasping tasks on multiple dexterous hand platforms.

Design/methodology/approach

The grasping process is divided into two stages: offline and online. In the offline stage, the form of the grasping solution is improved based on the forward kinematic model of the dexterous hand, and a comprehensive grasping-quality evaluation method is designed to build an offline data set of optimal grasping solutions. In the online stage, a safe and efficient selection strategy is proposed that quickly retrieves an optimal grasping solution free of collisions.
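The offline/online split described above can be sketched roughly as follows. This is a toy planar illustration, not the paper's implementation: forward_kinematics, grasp_quality, and the collision predicate are all simplified stand-ins for the hand's kinematic model, the comprehensive quality evaluation, and the obstacle avoidance strategy.

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar serial-chain FK: returns the fingertip (x, y) position."""
    x = y = theta = 0.0
    for q, l in zip(joint_angles, link_lengths):
        theta += q
        x += l * math.cos(theta)
        y += l * math.sin(theta)
    return (x, y)

def grasp_quality(contacts, target):
    """Toy composite score: contacts close to the object centre, fingers
    spread apart (a stand-in for the comprehensive evaluation)."""
    closeness = -sum(math.dist(c, target) for c in contacts)
    spread = math.dist(contacts[0], contacts[-1])
    return closeness + 0.5 * spread

def build_offline_dataset(candidate_configs, link_lengths, target):
    """Offline stage: score every candidate grasp and sort best-first."""
    scored = [
        (grasp_quality([forward_kinematics(f, link_lengths) for f in cfg],
                       target), cfg)
        for cfg in candidate_configs
    ]
    scored.sort(key=lambda s: s[0], reverse=True)
    return scored

def select_online(dataset, collision_free):
    """Online stage: return the best-ranked collision-free grasp, or None."""
    for score, cfg in dataset:
        if collision_free(cfg):
            return score, cfg
    return None
```

Ranking the candidates once offline reduces the online step to a single ordered scan with a collision check, which is what keeps selection fast at grasp time.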

Findings

Experiments verified that the method can be applied to different multifingered dexterous hands; the average grasping success rate for objects with different structures is 91.7%, indicating good grasping performance.

Originality/value

Using a forward kinematic model to generate initial grasping points improves both the generality of the grasping planning method and the quality of the initial grasping solutions. The offline data set of optimized grasping solutions can be generated faster with the comprehensive grasping-quality evaluation method. Through a simple and fast obstacle avoidance strategy, a safe optimal grasping solution can be obtained quickly when performing a grasping task. The proposed method can be applied to automatic assembly scenarios in which the end effector is a multifingered dexterous hand, providing a technical solution for the adoption of multifingered dexterous hands in industrial settings.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 20 May 2022

Zhonglai Tian, Hongtai Cheng, Zhenjun Du, Zongbei Jiang and Yeping Wang

Abstract

Purpose

The purpose of this paper is to estimate the contact-consistent object poses during contact-rich manipulation tasks based only on visual sensors.

Design/methodology/approach

The method follows a four-step procedure. First, raw object poses are retrieved using an available object pose estimation method and filtered using a Kalman filter with a nominal model. Second, a group of particles is randomly generated for each pose, and the corresponding object contact state of each particle is evaluated using contact simulation software; a probability-guided particle averaging method is proposed to balance accuracy and safety. Third, the independently estimated contact states are fused in a hidden Markov model to remove abnormal contact state observations. Finally, the object poses are refined by averaging the contact-state-consistent particles.
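A condensed sketch of the four steps might look like the following. This is a 1-D toy, not the paper's implementation: kalman_filter, sample_particles, contact_state, viterbi_smooth, and refine_pose are all simplified stand-ins for components that, in the paper, operate on full 6-D poses and query a contact simulator.

```python
import math
import random

def kalman_filter(zs, q=1e-3, r=0.05):
    """Step 1: smooth raw 1-D pose observations with a constant-position
    Kalman filter (a stand-in for the nominal-model filter)."""
    x, p, out = zs[0], 1.0, []
    for z in zs:
        p += q                   # predict
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update
        p *= 1.0 - k
        out.append(x)
    return out

def sample_particles(pose, n=50, sigma=0.01, rng=None):
    """Step 2: perturb a filtered pose into a particle cloud."""
    rng = rng or random.Random(0)
    return [pose + rng.gauss(0.0, sigma) for _ in range(n)]

def contact_state(pose, surface=0.0, tol=0.02):
    """Toy contact check standing in for the contact simulator: the
    object is 'in contact' when it lies within tol of the surface."""
    return abs(pose - surface) < tol

def viterbi_smooth(obs, stay=0.9, emit=0.8):
    """Step 3: fuse per-frame contact observations in a two-state HMM so
    isolated abnormal observations are removed (Viterbi decoding)."""
    states = (False, True)
    logp = {s: math.log(emit if obs[0] == s else 1.0 - emit) for s in states}
    back = []
    for o in obs[1:]:
        new, ptr = {}, {}
        for s in states:
            def trans(prev):
                return logp[prev] + math.log(stay if prev == s else 1.0 - stay)
            best = max(states, key=trans)
            new[s] = trans(best) + math.log(emit if o == s else 1.0 - emit)
            ptr[s] = best
        logp = new
        back.append(ptr)
    s = max(states, key=logp.get)
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1]

def refine_pose(pose, smoothed_state, rng=None):
    """Step 4: average only the particles whose contact state agrees
    with the HMM-smoothed state."""
    consistent = [p for p in sample_particles(pose, rng=rng)
                  if contact_state(p) == smoothed_state]
    return sum(consistent) / len(consistent) if consistent else pose
```

The HMM step is what enforces temporal contact consistency: a single frame whose simulated contact state disagrees with its neighbours is overruled by the transition model rather than propagated into the refined pose.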

Findings

Experiments were performed to evaluate the effectiveness of the proposed method. The results show that it achieves smooth and accurate pose estimates and that the estimated contact states are consistent with the ground truth.

Originality/value

This paper proposes a method to obtain contact-consistent poses and contact states of objects using only visual sensors. The method recovers the true contact state from inaccurate visual information by fusing contact simulation results with contact consistency assumptions. It can extract pose and contact information from object manipulation tasks simply by observing a demonstration, providing a new way for robots to learn complex manipulation tasks.

Details

Assembly Automation, vol. 42 no. 4
Type: Research Article
ISSN: 0144-5154
