Policy improvement of the dynamical movement primitives and stiffness primitives framework for robot learning variable stiffness manipulation
Abstract
Purpose
Human beings are able to adjust their arm stiffness in everyday tasks. This paper aims to enable a robot to learn such human-like variable stiffness motor skills autonomously.
Design/methodology/approach
The paper presents a reinforcement learning method that enables a robot to learn variable stiffness motor skills autonomously. First, the variable stiffness motor skills are encoded by the previously proposed dynamical movement primitives and stiffness primitives (DMP-SP) framework, which generates both motion and stiffness curves for the robot. An admittance controller is then used to make the robot follow these motion and stiffness curves. Finally, the authors use the policy improvement with path integrals (PI2) algorithm to optimize the robot's motion and stiffness curves iteratively.
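The abstract does not include the authors' implementation; the following is a minimal, simplified sketch of the core idea, in which Gaussian basis weights encode both a motion curve and a stiffness curve and a PI2-style, cost-weighted averaging of exploration noise updates them. The basis count, noise scale, temperature `lam` and the quadratic via-point cost are illustrative assumptions, and the update averages over whole rollouts rather than using the time-dependent weighting of full PI2.

```python
# Illustrative sketch (not the authors' code) of a PI2-style update over DMP
# basis weights that encode both a motion and a stiffness profile.
import numpy as np

rng = np.random.default_rng(0)
n_basis, n_rollouts, lam = 10, 20, 0.1          # assumed hyperparameters
t = np.linspace(0.0, 1.0, 100)
centers = np.linspace(0.0, 1.0, n_basis)
psi = np.exp(-50.0 * (t[:, None] - centers[None, :]) ** 2)  # Gaussian basis
psi /= psi.sum(axis=1, keepdims=True)                        # normalize rows

def rollout(w_motion, w_stiff):
    """Generate motion and stiffness curves from basis weights."""
    return psi @ w_motion, psi @ w_stiff

def cost(motion, stiffness):
    """Hypothetical cost: pass near a via-point at mid-trajectory with low stiffness."""
    via_error = (motion[50] - 0.8) ** 2
    return 100.0 * via_error + 1e-3 * np.sum(stiffness ** 2)

w_m = np.zeros(n_basis)        # motion weights (policy parameters)
w_s = 0.5 * np.ones(n_basis)   # stiffness weights (policy parameters)

for it in range(50):
    eps_m = rng.normal(0.0, 0.1, size=(n_rollouts, n_basis))  # exploration noise
    eps_s = rng.normal(0.0, 0.1, size=(n_rollouts, n_basis))
    costs = np.array([cost(*rollout(w_m + em, w_s + es))
                      for em, es in zip(eps_m, eps_s)])
    # PI2-style probability-weighted averaging of the exploration noise
    p = np.exp(-(costs - costs.min()) / lam)
    p /= p.sum()
    w_m += p @ eps_m
    w_s += p @ eps_s

print("final cost:", cost(*rollout(w_m, w_s)))
```

In this sketch the same update rule is applied to both the motion and the stiffness weights, mirroring the paper's idea of optimizing the two curves jointly; in practice the resulting curves would be tracked by an admittance controller on the real robot.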
Findings
The performance of the proposed method is evaluated on a UR10 robot in two different tasks: a) a via-point task and b) sweeping the floor. The results show that, after training, the robot is capable of accomplishing the tasks safely and compliantly.
Practical implications
The method can help robots move out of isolated environments and accelerate their integration into humans' daily lives.
Originality/value
This paper uses a reinforcement learning method to improve the DMP-SP framework, allowing robots to learn variable stiffness motor skills autonomously with no need for extra sensors.
Citation
Ren, D. and Bian, F. (2024), "Policy improvement of the dynamical movement primitives and stiffness primitives framework for robot learning variable stiffness manipulation", Industrial Robot, Vol. ahead-of-print No. ahead-of-print. https://doi.org/10.1108/IR-04-2024-0168
Publisher
Emerald Publishing Limited
Copyright © 2024, Emerald Publishing Limited