Search results

1 – 10 of 16
Article
Publication date: 19 October 2020

Nailong Liu, Xiaodong Zhou, Zhaoming Liu, Hongwei Wang and Long Cui

Abstract

Purpose

This paper aims to enable the robot to obtain human-like compliant manipulation skills for the peg-in-hole (PiH) assembly task by learning from demonstration.

Design/methodology/approach

A modified dynamic movement primitives (DMPs) model with a novel hybrid force/position feedback in Cartesian space for the robotic PiH problem is proposed by learning from demonstration. To ensure a compliant interaction during the PiH insertion process, a Cartesian impedance control approach is used to track the trajectory generated by the modified DMPs.
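
As an illustration of the idea, a minimal single-axis sketch follows: a DMP transformation system with an added force-feedback coupling term, tracked by a Cartesian impedance law. The gains, the coupling gain k_f and the helper forcing_term are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of a single-axis DMP with a force-feedback coupling term,
# tracked by a Cartesian impedance law. All gains are assumed values.
import numpy as np

def dmp_step(x, v, goal, tau, f_ext, forcing_term, s, dt,
             k_f=0.01, alpha=25.0, beta=6.25):
    """One Euler step of a DMP transformation system with force coupling.

    x, v: current position and scaled velocity (scalars or numpy arrays)
    f_ext: measured external force along this axis
    forcing_term: callable returning the learned forcing value at phase s
    """
    coupling = -k_f * f_ext                 # contact force slows/redirects motion
    v_dot = (alpha * (beta * (goal - x) - v) + forcing_term(s) + coupling) / tau
    v = v + v_dot * dt
    x = x + (v / tau) * dt
    return x, v

def impedance_force(x_des, xd_des, x, xd, K=500.0, D=45.0):
    """Cartesian impedance law: render a spring-damper around the DMP reference."""
    return K * (x_des - x) + D * (xd_des - xd)
```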

Findings

The modified DMPs allow the robot to imitate the trajectory of demonstration efficiently and to generate a smoother trajectory. By taking advantage of force feedback, the robot shows compliant behavior and could adjust its pose actively to avoid a jam. This feedback mechanism significantly improves the dynamic performance of the interactive process. Both the simulation and the PiH experimental results show the feasibility and effectiveness of the proposed model.

Originality/value

The trajectory and the compliant manipulation skill of the human operator can be learned simultaneously by the new model. The method adopts a modified DMP model in Cartesian space to generate a trajectory with a lower speed at the beginning of the motion, which reduces the magnitude of the contact force.

Article
Publication date: 20 October 2014

Fares J. Abu-Dakka, Bojan Nemec, Aljaž Kramberger, Anders Glent Buch, Norbert Krüger and Ales Ude

Abstract

Purpose

The purpose of this paper is to propose a new algorithm based on programming by demonstration and exception strategies to solve assembly tasks such as peg-in-hole.

Design/methodology/approach

Data describing the demonstrated tasks are obtained by kinesthetic guiding. The demonstrated trajectories are transferred to new robot workspaces using three-dimensional (3D) vision. Noise introduced by vision when transferring the task to a new configuration could cause the execution to fail, but such problems are resolved through exception strategies.

Findings

This paper demonstrates that the proposed approach, combined with exception strategies, outperforms traditional approaches for robot-based assembly. Experimental evaluation was carried out on the Cranfield benchmark, a standardized assembly task in robotics. A statistical evaluation based on experiments on two different robotic platforms was also performed.

Practical implications

The developed framework can have an important impact for robot assembly processes, which are among the most important applications of industrial robots. Our future plans involve implementation of our framework in a commercially available robot controller.

Originality/value

This paper proposes a new approach to robot assembly based on the learning by demonstration (LbD) paradigm. The proposed framework enables new assembly tasks to be programmed quickly, without the need for a detailed analysis of the geometric and dynamic characteristics of the workpieces involved in the assembly task. The algorithm provides effective disturbance rejection, improved stability and increased overall performance. The proposed exception strategies increase the success rate of the algorithm when the task is transferred to new areas of the workspace, where it is necessary to deal with vision noise and altered dynamic characteristics of the task.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 19 March 2021

Zhenyu Lu and Ning Wang

Abstract

Purpose

Dynamic movement primitives (DMPs) are a general method for robotic skill learning from demonstration, but they are usually applied to a single robotic manipulation task. For cloud-based robotic skill learning, the authors consider trajectories/skills changed by the environment, rebuild the DMP model and propose a new DMPs-based skill learning framework that removes the influence of the changing environment.

Design/methodology/approach

The authors propose methods for two obstacle-avoidance scenes: point obstacles and non-point obstacles. For the point-obstacle case, an accelerating term is added to the original DMP function; the unknown parameters in this term are estimated through interactive identification and a fitting step of the forcing function, yielding a pure skill free of the influence of obstacles. Using the identified parameters, the skill can be applied to new tasks with obstacles. For the non-point-obstacle case, a space-matching method is proposed that builds a matching function from the universal, obstacle-free space to the space compressed by obstacles; the original trajectory then changes along with the transformation of the space, producing a generalized trajectory for the new environment.
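
For intuition, here is an illustrative stand-in for such an accelerating term, loosely following the well-known steering-angle formulation of Hoffmann et al. (2009). The paper identifies its own term and parameters interactively, so gamma and beta below are assumed values for demonstration only.

```python
# Illustrative 2-D obstacle-avoidance acceleration added to a DMP; not the
# paper's identified term. gamma and beta are assumed tuning constants.
import numpy as np

def obstacle_acceleration(x, v, obstacle, gamma=1000.0, beta=20.0 / np.pi):
    """Steering acceleration that pushes the velocity away from a point obstacle."""
    d = obstacle - x                                  # robot-to-obstacle vector
    if np.linalg.norm(d) < 1e-9 or np.linalg.norm(v) < 1e-9:
        return np.zeros_like(x)
    cos_phi = np.clip(d @ v / (np.linalg.norm(d) * np.linalg.norm(v)), -1.0, 1.0)
    phi = np.arccos(cos_phi)                          # angle between v and d
    turn = np.sign(d[0] * v[1] - d[1] * v[0]) or 1.0  # rotate away from obstacle
    R = turn * np.array([[0.0, -1.0], [1.0, 0.0]])    # signed 90-degree rotation
    return gamma * (R @ v) * phi * np.exp(-beta * phi)
```

The magnitude term phi * exp(-beta * phi) vanishes both when the motion already points away from the obstacle and exactly head-on, which keeps the modified trajectory smooth.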

Findings

The two proposed methods are validated in two experiments, one of which uses an Omni joystick to record the operator's manipulation motions. The results show that the learned skills allow robots to execute tasks, such as autonomous assembly, in a new environment.

Originality/value

This is a novel contribution to DMPs-based cloud robotic skill learning: skills are learned from multi-scene tasks, and new skills are generalized as the environment changes.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 17 June 2021

Zeguo Yang, Mantian Li, Fusheng Zha, Xin Wang, Pengfei Wang and Wei Guo

Abstract

Purpose

This paper aims to introduce an imitation learning framework for a wheeled mobile manipulator based on dynamical movement primitives (DMPs). A novel mobile manipulator with the capability to learn from demonstration is introduced. This study then explains the whole process by which the wheeled mobile manipulator learns a demonstrated task and generalizes it to new situations. Two visual tracking controllers are designed for recording human demonstrations and monitoring robot operations.

Design/methodology/approach

The kinematic model of a mobile manipulator is analyzed. An RGB-D camera is used to record the demonstration trajectories and observe robot operations. To keep the human demonstration within the camera's field of view, a visual tracking controller is designed based on the kinematic model of the mobile manipulator. The demonstration trajectories are then represented by DMPs and learned by the mobile manipulator with corresponding models. A second tracking controller, also based on the kinematic model, is designed to monitor and modify the robot operations.
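
A minimal sketch of the DMP encoding step is given below: the forcing term implied by a 1-D demonstrated trajectory is recovered and approximated with radial basis functions via locally weighted regression. The hyperparameters and gains are illustrative assumptions, not those of the paper.

```python
# Sketch of fitting a 1-D DMP forcing term from a demonstrated trajectory.
# Number of basis functions and gains are assumed values.
import numpy as np

def learn_dmp_forcing(x_demo, dt, n_basis=30, alpha=25.0, beta=6.25, alpha_s=4.0):
    """Fit RBF weights so a DMP reproduces the demonstrated trajectory x_demo."""
    T = len(x_demo) * dt                        # demonstration duration = tau
    v = np.gradient(x_demo, dt) * T             # scaled velocity v = tau * x_dot
    v_dot = np.gradient(v, dt)
    g = x_demo[-1]                              # goal is the final position
    # Forcing values implied by the demonstration: f = tau*v_dot - alpha*(...)
    f_target = T * v_dot - alpha * (beta * (g - x_demo) - v)
    # Canonical phase decays exponentially from 1 toward 0.
    s = np.exp(-alpha_s * np.linspace(0, 1, len(x_demo)))
    centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))
    widths = n_basis ** 1.5 / centers
    psi = np.exp(-widths * (s[:, None] - centers[None, :]) ** 2)   # (T, n_basis)
    # Locally weighted regression: one closed-form weight per basis function.
    w = np.array([(psi[:, i] * s @ f_target) / (psi[:, i] * s @ s + 1e-10)
                  for i in range(n_basis)])
    return w, centers, widths
```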

Findings

To verify the effectiveness of the imitation learning framework, several daily tasks are demonstrated and learned by the mobile manipulator. The results indicate that the presented approach performs well in enabling a wheeled mobile manipulator to learn tasks through human demonstrations. The only thing a robot user needs to do is provide demonstrations, which greatly facilitates the application of mobile manipulators.

Originality/value

The research fulfills the need for a wheeled mobile manipulator to learn tasks via demonstrations instead of manual planning. Similar approaches can be applied to mobile manipulators with different architectures.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 March 2024

Yonghua Huang, Tuanjie Li, Yuming Ning and Yan Zhang

Abstract

Purpose

This paper aims to make learning methods for robot motion skills based on dynamic movement primitives (DMPs) applicable to tasks with explicit environmental constraints, while ensuring the reliability of the robot system.

Design/methodology/approach

The authors propose a novel DMP that takes environmental constraints into account to enhance the generality of the robot motion skill learning method. First, based on the real-time state of the robot and the environmental constraints, the task space is divided into different regions, and a different control strategy is used in each region. Second, to ensure the effectiveness of the generalized skills (trajectories), the control barrier function is extended to DMPs to enforce the constraint conditions. Finally, a skill modeling and learning algorithm flow is proposed that accounts for environmental constraints within DMPs.
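
As a toy illustration of the control-barrier-function idea (not the paper's formulation), the following filter clips a nominal DMP acceleration so that a 1-D position constraint x <= x_max keeps holding; k1 and k2 are assumed gains.

```python
# Sketch of a CBF safety filter on a DMP-generated acceleration, for the
# barrier h(x) = x_max - x >= 0. Gains k1, k2 are assumed values.
def cbf_filter(x, v, a_nominal, x_max, k1=10.0, k2=10.0):
    """Return the closest acceleration to a_nominal that satisfies the
    second-order condition h_ddot + k1*h_dot + k2*h >= 0."""
    h = x_max - x          # barrier value: positive inside the safe set
    h_dot = -v
    # With h_ddot = -a, the condition rearranges to an upper bound on a.
    a_max = k1 * h_dot + k2 * h
    return min(a_nominal, a_max)
```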

Findings

Skill learning and generalization in constrained environments were studied through numerical simulations and prototype demonstration experiments. The experimental results demonstrate that the proposed method is capable of generating motion skills that satisfy environmental constraints, ensuring that robots remain in a safe position throughout the execution of the generated skills and thereby avoiding any adverse impact on the surrounding environment.

Originality/value

This paper explores further applications of generalized motion skill learning methods on robots, enhancing the efficiency of robot operations in constrained environments, particularly in non-point-constrained environments. The improved methods are applicable to different types of robots.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 March 2023

Jinzhong Li, Ming Cong, Dong Liu and Yu Du

Abstract

Purpose

Under the development trend of intelligent manufacturing, unstructured environments require robots to have good generalization performance to adapt to scene changes. This paper aims to present a learning from demonstration (LfD) method (task-parameterized [TP]-dynamic movement primitives [DMP]-GMR) that combines DMPs and TP-LfD to improve generalization performance and solve object manipulation tasks.

Design/methodology/approach

The dynamic time warping algorithm is applied to the demonstration data to obtain a more standard learning model in the proposed method. DMPs are used as the basic trajectory learning model. The Gaussian mixture model is introduced to learn the forcing term of the DMPs and to solve the problem of learning from multiple demonstration trajectories. By adding task parameters, the robot can learn more local geometric features and generalize the learned model to unknown situations.
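
A self-contained sketch of the dynamic time warping preprocessing step is shown below, using the textbook quadratic dynamic-programming formulation; the paper's exact variant and distance measure may differ.

```python
# Standard DTW between two 1-D series, used to temporally align demonstrations
# before model learning. Quadratic-time textbook formulation.
import numpy as np

def dtw_align(a, b):
    """Return the DTW cost matrix and the optimal alignment path."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return D[1:, 1:], path[::-1]
```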

Findings

An evaluation criterion based on curve similarity, computed with the Fréchet distance, was constructed to evaluate the model's interpolation and extrapolation performance. The model's generalization performance was assessed on 2D virtual data sets; the results show that the proposed method has better interpolation and extrapolation performance than other methods.
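
For reference, the discrete Fréchet distance that underlies such a curve-similarity criterion can be computed with the standard dynamic-programming recursion of Eiter and Mannila (1994); this sketch does not reproduce the paper's exact evaluation protocol.

```python
# Discrete Fréchet distance between two polylines P, Q of shape (n, d).
import numpy as np

def frechet_distance(P, Q):
    """Max over the optimal coupling of pointwise distances between P and Q."""
    n, m = len(P), len(Q)
    ca = np.zeros((n, m))
    ca[0, 0] = np.linalg.norm(P[0] - Q[0])
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            d = np.linalg.norm(P[i] - Q[j])
            # Best reachable predecessor among left, down, diagonal cells.
            prev = min(ca[x, y] for x, y in [(i - 1, j), (i, j - 1), (i - 1, j - 1)]
                       if x >= 0 and y >= 0)
            ca[i, j] = max(prev, d)
    return ca[-1, -1]
```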

Originality/value

The proposed model was applied to an axle-hole assembly task on a real robot, with the robot's posture when grasping and placing the axle part taken as the task parameter of the model. The experimental results show that the proposed model is competitive with other models.

Details

Robotic Intelligence and Automation, vol. 43 no. 2
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 5 April 2021

Shifeng Lin and Ning Wang

Abstract

Purpose

In multi-robot cooperation, the cloud can share sensor data, which helps robots better perceive the environment. For cloud robotics, grasping is an essential ability to master. Usually, the information source for grasping is a visual sensor; however, owing to the uncertainty of the working environment, the vision sensor's view may be blocked by unknown objects. This paper aims to propose a solution to the robot grasping problem when the vision sensor is blocked, by sharing the information of multiple vision sensors in the cloud.

Design/methodology/approach

First, the random sample consensus (RANSAC) and principal component analysis (PCA) algorithms are used to detect the desktop range. The minimum bounding rectangle of the occlusion area is then obtained with the PCA algorithm. The candidate camera view range is obtained by plane segmentation and combined with the manipulator workspace to determine the camera pose, and the arm is driven to photograph the occluded desktop area. Finally, a Gaussian mixture model (GMM) is used to approximate the shape of the object projection; for each single Gaussian component, a grasping rectangle is generated and evaluated to find the most suitable one.
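
Two steps of such a pipeline can be sketched as follows: a RANSAC plane fit for the desktop and a PCA-aligned bounding rectangle of the occluded region. The thresholds and the numpy-only implementation are assumptions for illustration, not the paper's implementation.

```python
# Sketch: RANSAC plane fit and PCA-aligned bounding rectangle.
import numpy as np

def ransac_plane(points, n_iters=200, thresh=0.01, seed=0):
    """Fit a plane n.p + d = 0 to an (N, 3) point cloud by RANSAC."""
    rng = np.random.default_rng(seed)
    best_n, best_d, best_inliers = None, None, np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue                                  # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        inliers = np.abs((points - p0) @ n) < thresh  # distance-to-plane test
        if inliers.sum() > best_inliers.sum():
            best_n, best_d, best_inliers = n, -n @ p0, inliers
    return best_n, best_d, best_inliers

def pca_bounding_rect(pts2d):
    """PCA-aligned bounding rectangle (center, axes, half-extents) of (N, 2) points."""
    c = pts2d.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts2d - c)               # rows of Vt: principal axes
    proj = (pts2d - c) @ Vt.T                         # coordinates in the PCA frame
    lo, hi = proj.min(axis=0), proj.max(axis=0)
    return c + (lo + hi) / 2 @ Vt, Vt, (hi - lo) / 2
```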

Findings

A variety of occlusion scenarios for cloud robots are tested. Experimental results show that the proposed algorithm can capture images of the occluded desktop and successfully grasp objects in the occluded area.

Originality/value

In the existing work, there are few studies on using active multi-sensor approaches to solve the occlusion problem. This paper presents a new solution: through cloud sharing, the proposed method can be applied to multi-robot cloud working environments, helping robots perceive the environment better. In addition, this paper proposes a method to obtain the object-grasping rectangle based on GMM shape approximation of the point cloud projection. Experiments show that the proposed methods work well.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 20 June 2019

Qiming Chen, Hong Cheng, Rui Huang, Jing Qiu and Xinhua Chen

Abstract

Purpose

Lower-limb exoskeleton systems enable people with spinal cord injury to regain some degree of locomotion ability, but the expected motion curve needs to adapt to changing scenarios, e.g. stair heights and distances to the stairs. The authors' approach enables exoskeleton systems to adapt safely to different scenarios in the stair ascent task.

Design/methodology/approach

In this paper, the authors learn the locomotion from predefined trajectories and walk upstairs by re-planning the trajectories according to the external forces exerted on the exoskeleton system. Moreover, instead of using complex sensors as inputs for re-planning in real time, the approach obtains the forces acting on the exoskeleton through a dynamic model of the human-exoskeleton system, learned by an online machine learning approach without requiring accurate parameters.
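
The underlying idea can be sketched as a residual computation: a learned model predicts the actuation torque expected for the current motion, and the mismatch with the measured torque is attributed to external interaction. The model.predict interface below is an assumed sklearn-style regressor API, not the paper's.

```python
# Sketch: external torque as the residual between measured torque and a
# learned dynamic model's prediction. `model` is an assumed online regressor.
import numpy as np

def external_torque(model, q, qd, qdd, tau_measured):
    """Residual between measured joint torques and the model's prediction."""
    features = np.concatenate([q, qd, qdd])       # simple state/acceleration features
    tau_predicted = model.predict(features[None, :])[0]
    return tau_measured - tau_predicted
```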

Findings

The proposed approach is validated both in a simulation environment and on a real walking-assistance exoskeleton system. Experimental results show that the proposed approach achieves better performance than the traditional predefined-gait approach.

Originality/value

First, the approach obtains the external forces from a learned dynamic model of the human-exoskeleton system, which reduces the cost of exoskeletons and avoids the heavy task of translating sensor input into actuator output. Second, the approach enables the exoskeleton to accomplish the stair ascent task safely in different scenarios.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 September 2022

Kaimeng Wang and Te Tang

Abstract

Purpose

This paper aims to present a new approach for robot programming by demonstration, which generates robot programs by tracking the six-dimensional (6D) pose of the demonstrator's hand using a single red-green-blue (RGB) camera, without requiring any additional sensors.

Design/methodology/approach

The proposed method learns robot grasps and trajectories directly from a single human demonstration by tracking the movements of both the human hand and the objects. To recover the 6D pose of an object from a single RGB image, a deep learning-based method first detects the object's keypoints and then solves a perspective-n-point (PnP) problem. This method is extended to estimate the 6D pose of the non-rigid hand by separating the fingers into multiple rigid bones linked by hand joints. An accurate robot grasp can then be generated from the relative positions of the hand and the object in two-dimensional (2D) space. Robot end-effector trajectories are generated from the hand movements and refined using the objects' start and end positions.
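
The pose-recovery step can be illustrated with OpenCV's standard PnP solver, given the detected 2D keypoints and their known 3D model coordinates; the deep keypoint detector itself is out of scope here, and the camera matrix K is an assumed input.

```python
# Sketch: 6D object pose from detected keypoints via PnP (requires >= 4 points).
import cv2
import numpy as np

def pose_from_keypoints(model_pts_3d, image_pts_2d, K, dist_coeffs=None):
    """Recover rotation (3x3) and translation (3,) of the object in camera frame."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)                # assume an undistorted image
    ok, rvec, tvec = cv2.solvePnP(
        model_pts_3d.astype(np.float64), image_pts_2d.astype(np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_EPNP)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)                   # axis-angle to rotation matrix
    return R, tvec.ravel()
```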

Findings

Experiments are conducted on a FANUC LR Mate 200iD robot to verify the proposed approach. The results show the feasibility of generating robot programs by observing a human demonstration once with a single RGB camera.

Originality/value

The proposed approach provides an efficient and low-cost robot programming method with a single RGB camera. A new 6D hand pose estimation approach, which is used to generate robot grasps and trajectories, is developed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 August 2016

Aljaž Kramberger, Rok Piltaver, Bojan Nemec, Matjaž Gams and Aleš Ude

Abstract

Purpose

In this paper, the authors aim to propose a method for learning robotic assembly sequences, where precedence constraints and object relative size and location constraints can be learned by demonstration and autonomous robot exploration.

Design/methodology/approach

To successfully plan the operations involved in assembly tasks, the planner needs to know the constraints of the desired task. In this paper, the authors propose a methodology for learning such constraints by demonstration and autonomous exploration. The learning of precedence constraints and object relative size and location constraints, which are needed to construct a planner for automated assembly, were investigated. In the developed system, the learning of symbolic constraints is integrated with low-level control algorithms, which is essential to enable active robot learning.

Findings

The authors demonstrated that the proposed reasoning algorithms can be used to learn previously unknown assembly constraints that are needed to implement a planner for automated assembly. The Cranfield benchmark, a standardized benchmark for testing robot assembly algorithms, was used to evaluate the proposed approaches. The authors evaluated the learning performance both in simulation and on a real robot.

Practical implications

The authors' approach reduces the amount of programming needed to set up new assembly cells and, consequently, the overall set-up time when new products are introduced into the workcell.

Originality/value

In this paper, the authors propose a new approach for learning assembly constraints based on programming by demonstration and active robot exploration to reduce the computational complexity of the underlying search problems. The authors developed algorithms for success/failure detection of assembly operations based on the comparison of expected signals (forces and torques, positions and orientations of the assembly parts) with the actual signals sensed by a robot. In this manner, all precedence and object size and location constraints can be learned, thereby providing the necessary input for the optimal planning of the entire assembly process.
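
A minimal sketch of such a success/failure check follows: the operation is flagged as failed when the sensed force/torque signal deviates from the expected profile beyond a threshold for a sustained run of samples. The threshold and window length are illustrative assumptions, not the paper's detection algorithm.

```python
# Sketch: flag an assembly operation as failed when the sensed force/torque
# signal deviates from the expected profile for too many consecutive samples.
import numpy as np

def detect_failure(expected, sensed, thresh=5.0, max_violations=10):
    """expected, sensed: arrays of shape (T, 6) holding Fx..Tz over time."""
    deviation = np.linalg.norm(sensed - expected, axis=1)
    run = 0
    for d in deviation:
        run = run + 1 if d > thresh else 0       # length of current violation run
        if run > max_violations:
            return True                          # sustained deviation: failure
    return False
```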

Details

Industrial Robot: An International Journal, vol. 43 no. 5
Type: Research Article
ISSN: 0143-991X
