A novel human-robot skill transfer method for contact-rich manipulation task

Jiale Dong (School of Automation Science and Engineering, South China University of Technology, Guangzhou, China)
Weiyong Si (Bristol Robotics Laboratory, University of the West of England, Bristol, UK)
Chenguang Yang (Bristol Robotics Laboratory, University of the West of England, Bristol, UK)

Robotic Intelligence and Automation

ISSN: 2754-6969

Article publication date: 7 June 2023

Issue publication date: 23 June 2023

Abstract

Purpose

The purpose of this paper is to enhance a robot’s ability to complete multi-step contact-rich tasks in unknown or dynamic environments, and to improve the generalization of the same task across different environments.

Design/methodology/approach

This paper proposes a framework that combines learning from demonstration (LfD), a behavior tree (BT) and a broad learning system (BLS). First, the original dynamic movement primitive (DMP) is modified so that it generalizes better when representing motion primitives. Then, a task-level BT is constructed that selects the appropriate motion primitive according to the environment state and the robot’s own state, after which the BLS generates the specific parameters of that primitive from the state. The output weights of the BLS can also be updated after each successful execution.
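To make the described pipeline concrete, the following is a minimal illustrative sketch, not the authors’ implementation: a toy task-level selector stands in for the BT, a small random-feature network with a ridge-regression output layer stands in for the BLS, and a standard discrete DMP generates the motion. All class names, dimensions, gains and the simplified state representation are assumptions made for illustration.

```python
import numpy as np

class SimpleDMP:
    """Discrete dynamic movement primitive (standard transformation system)."""
    def __init__(self, n_basis=10, alpha=25.0, beta=6.25, tau=1.0):
        self.n_basis, self.alpha, self.beta, self.tau = n_basis, alpha, beta, tau
        self.centers = np.exp(-np.linspace(0, 1, n_basis) * 3.0)  # phases of the canonical system
        self.widths = np.full(n_basis, n_basis ** 1.5)
        self.weights = np.zeros(n_basis)                          # shaped by demonstration/learning

    def rollout(self, x0, goal, dt=0.01, steps=200):
        x, v, s, traj = x0, 0.0, 1.0, []
        for _ in range(steps):
            psi = np.exp(-self.widths * (s - self.centers) ** 2)
            f = (psi @ self.weights) / (psi.sum() + 1e-10) * s * (goal - x0)  # forcing term
            a = (self.alpha * (self.beta * (goal - x) - v) + f) / self.tau
            v += a * dt
            x += v * dt
            s += -s * 3.0 / self.tau * dt                         # canonical system decay
            traj.append(x)
        return np.array(traj)

class TinyBLS:
    """Stand-in for a broad learning system: random feature and enhancement
    nodes with a ridge-regression output layer, updatable after each run."""
    def __init__(self, state_dim, out_dim, n_feat=20, n_enh=20, seed=0):
        rng = np.random.default_rng(seed)
        self.Wf = rng.normal(size=(state_dim, n_feat))
        self.We = rng.normal(size=(n_feat, n_enh))
        self.Wout = np.zeros((n_feat + n_enh, out_dim))

    def _expand(self, state):
        z = np.tanh(state @ self.Wf)    # feature nodes
        h = np.tanh(z @ self.We)        # enhancement nodes
        return np.concatenate([z, h], axis=-1)

    def predict(self, state):
        return self._expand(state) @ self.Wout

    def update(self, states, targets, ridge=1e-3):
        # Refit output weights from successful executions (ridge regression).
        A = self._expand(states)
        self.Wout = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ targets)

def behaviour_tree_select(env_state):
    """Toy task-level selector: choose a primitive name from the environment state."""
    return "approach" if env_state["distance_to_part"] > 0.05 else "insert"

# Usage sketch: BT picks the primitive, BLS maps the state to its parameters.
primitives = {"approach": SimpleDMP(), "insert": SimpleDMP(n_basis=20)}
bls = TinyBLS(state_dim=3, out_dim=2)                 # outputs e.g. (goal, tau) for the DMP
state = np.array([0.1, 0.0, 0.02])                    # hypothetical environment/robot state
dmp = primitives[behaviour_tree_select({"distance_to_part": state[0]})]
goal, tau = bls.predict(state)
dmp.tau = float(abs(tau) + 0.5)
trajectory = dmp.rollout(x0=0.0, goal=float(goal))
```

In this sketch the 1-D DMP, the two-layer random expansion and the hand-written selector merely mirror the roles of the DMP, BLS and BT described above; the paper’s actual formulations and parameterization should be taken from the article itself.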

Findings

The authors carried out desktop-cleaning and shaft-hole assembly tasks on Baxter and Elite robots, respectively. Both tasks were completed successfully, demonstrating the effectiveness of the framework.

Originality/value

This paper proposes a framework that combines LfD, BT and BLS. To the best of the authors’ knowledge, no similar method has been reported in previous work, so the authors believe this work is original.

Citation

Dong, J., Si, W. and Yang, C. (2023), "A novel human-robot skill transfer method for contact-rich manipulation task", Robotic Intelligence and Automation, Vol. 43 No. 3, pp. 327-337. https://doi.org/10.1108/RIA-01-2023-0002

Publisher

Emerald Publishing Limited

Copyright © 2023, Emerald Publishing Limited
