Search results

1 – 10 of over 19000
Article
Publication date: 20 March 2017

Abhishek Jha and Shital S. Chiddarwar

Abstract

Purpose

This paper aims to present a new learning from demonstration-based trajectory planner that generalizes and extracts relevant features of the desired motion for an industrial robot.

Design/methodology/approach

The proposed trajectory planner is based on the concept of imitation of human arm motion by the robot end-effector. A teleoperation-based real-time control architecture is used for direct and effective imitation learning. Using this architecture, a self-sufficient trajectory planner is designed that has an inbuilt mapping strategy and direct learning ability. The proposed approach is also compared with the conventional robot programming approach.

Findings

The developed planner was implemented on the 5 degrees-of-freedom industrial robot SCORBOT ER-4u for an object manipulation task. The experimental results revealed that, despite morphological differences, the robot imitated the demonstrated trajectory with more than 90 per cent geometric similarity, and 60 per cent of the demonstrations were successfully learned by the robot with good positioning accuracy. The proposed planner outperforms the existing approach in robustness and operational ease.

Research limitations/implications

The approach assumes that the human demonstrator has the requisite expertise of the task demonstration and robot teleoperation. Moreover, the kinematic capabilities and the workspace conditions of the robot are known a priori.

Practical implications

The real-time implementation of the proposed methodology is possible and can be successfully used for industrial automation with very little knowledge of robot programming. The proposed approach reduces the complexities involved in robot programming by direct learning of the task from the demonstration given by the teacher.

Originality/value

This paper discusses a new framework that blends teleoperation with kinematic considerations of the Cartesian space as well as the joint space of the human and the industrial robot, together with optimization, for robot programming by demonstration.

Details

Industrial Robot: An International Journal, vol. 44 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 September 2022

Kaimeng Wang and Te Tang

Abstract

Purpose

This paper aims to present a new approach for robot programming by demonstration, which generates robot programs by tracking 6 dimensional (6D) pose of the demonstrator’s hand using a single red green blue (RGB) camera without requiring any additional sensors.

Design/methodology/approach

The proposed method learns robot grasps and trajectories directly from a single human demonstration by tracking the movements of both human hands and objects. To recover the 6D pose of an object from a single RGB image, a deep learning-based method first detects the keypoints of the object and then solves a perspective-n-point problem. This method is then extended to estimate the 6D pose of the non-rigid hand by separating the fingers into multiple rigid bones linked by hand joints. An accurate robot grasp can be generated from the relative positions of hands and objects in the two-dimensional (2D) space. Robot end-effector trajectories are generated from hand movements and then refined by the objects' start and end positions.
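The keypoint-plus-PnP step can be illustrated with a minimal Direct Linear Transform sketch. In practice one would likely use a routine such as OpenCV's `cv2.solvePnP` on the learned keypoints; the function below, and its assumptions of normalized coordinates and noiseless non-coplanar points, are illustrative rather than the paper's actual implementation.

```python
import numpy as np

def solve_pnp_dlt(model_pts, image_pts):
    """Recover the camera pose (R, t) from n >= 6 non-coplanar 3D-2D
    correspondences via the Direct Linear Transform. image_pts are
    normalized image coordinates (pixels pre-multiplied by K^-1)."""
    A = []
    for (X, Y, Z), (u, v) in zip(model_pts, image_pts):
        Xh = [X, Y, Z, 1.0]
        A.append(Xh + [0.0] * 4 + [-u * c for c in Xh])
        A.append([0.0] * 4 + Xh + [-v * c for c in Xh])
    # The projection matrix is the null vector of A (smallest singular vector).
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    M = P[:, :3]
    U, S, Vt2 = np.linalg.svd(M)        # project M onto the rotation group
    R0 = U @ Vt2
    d = np.sign(np.linalg.det(R0))      # resolve the homogeneous sign ambiguity
    R = d * R0
    t = P[:, 3] / (d * S.mean())
    return R, t
```

With exact correspondences the recovered pose matches the ground truth to numerical precision; with noisy learned keypoints a robust solver (e.g. RANSAC around PnP) would be the practical choice.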

Findings

Experiments are conducted on a FANUC LR Mate 200iD robot to verify the proposed approach. The results show the feasibility of generating robot programs by observing human demonstration once using a single RGB camera.

Originality/value

The proposed approach provides an efficient and low-cost robot programming method with a single RGB camera. A new 6D hand pose estimation approach, which is used to generate robot grasps and trajectories, is developed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 9 January 2009

J. Norberto Pires, Germano Veiga and Ricardo Araújo

Abstract

Purpose

The purpose of this paper is to report a collection of developments that enable users to program industrial robots using speech, several device interfaces, force control and code generation techniques.

Design/methodology/approach

The reported system is explained in detail and a few practical examples are given that demonstrate its usefulness for small to medium‐sized enterprises (SMEs), where robots and humans need to cooperate to achieve a common goal (coworker scenario). The paper also explores the user interface software adapted for use by non‐experts.

Findings

The programming-by-demonstration (PbD) system presented proved to be very efficient at programming entirely new features into an industrial robotic system. The system uses a speech interface for user commands and a force-controlled guiding system for teaching the robot the details of the task being programmed. With only a small set of implemented robot instructions, it was fairly easy to teach the robot system a new task, generate the robot code and execute it immediately.

Research limitations/implications

Although a particular robot controller was used, the system is in many aspects general, since the options adopted are mainly based on standards. It can obviously be implemented with other robot controllers without significant changes. In fact, most of the features were ported to run with Motoman robots with success.

Practical implications

It is important to stress that the robot program built in this work was obtained without writing a single line of code, just by moving the robot to the desired positions and adding the required robot instructions using speech. Even the upload of the obtained module to the robot controller, along with its execution and termination, is commanded by speech. Consequently, teaching the robotic system a new feature is accessible to any type of user with only minor training.

Originality/value

This type of PbD system will constitute a major advantage for SMEs, since most of these companies do not have the engineering resources to make changes or add new functionalities to their robotic manufacturing systems. Even at the system integrator level these systems are very useful, as they avoid the need for specific knowledge about all the controllers the integrators work with: complexity is hidden behind the speech interfaces and portable interface devices, with specific and user-friendly APIs making the connection between the programmer and the system.

Details

Industrial Robot: An International Journal, vol. 36 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 June 2020

Zhongxiang Zhou, Liang Ji, Rong Xiong and Yue Wang

Abstract

Purpose

In robot programming by demonstration (PbD) of small parts assembly tasks, the accuracy of parts poses estimated by vision-based techniques in demonstration stage is far from enough to ensure a successful execution. This paper aims to develop an inference method to improve the accuracy of poses and assembly relations between parts by integrating visual observation with computer-aided design (CAD) model.

Design/methodology/approach

In this paper, the authors propose a spatial information inference method called probabilistic assembly graph with optional CAD model, abbreviated as PAGC*, to achieve this task. An assembly relation extraction method for the CAD model is then designed, in which the different assembly relation descriptions in the CAD model are reduced to two fundamental relations, colinear and coplanar. Relation similarity, distance similarity and rotation similarity are adopted as the criteria for matching parts between the CAD model and the observation. The knowledge of a part in the CAD model is used to correct that of the corresponding part in the observation. Maximum likelihood estimation is then used to infer the accurate poses and assembly relations based on the probabilistic assembly graph.
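The part-matching step — scoring candidate correspondences by relation, distance and rotation similarity — can be sketched as follows. The data layout, the particular similarity formulas and the equal weights are illustrative stand-ins for the criteria named above, not the paper's actual formulation.

```python
import numpy as np

def similarity(cad, obs, w=(1.0, 1.0, 1.0)):
    """Weighted sum of relation, distance and rotation similarity
    between a CAD part and an observed part (illustrative formulas)."""
    # Relation similarity: Jaccard overlap of symbolic relation labels.
    union = cad["relations"] | obs["relations"]
    rel = len(cad["relations"] & obs["relations"]) / max(1, len(union))
    # Distance similarity: decays with Euclidean distance between positions.
    dist = np.exp(-np.linalg.norm(np.asarray(cad["position"]) -
                                  np.asarray(obs["position"])))
    # Rotation similarity: alignment of the parts' principal axes.
    rot = abs(float(np.dot(cad["axis"], obs["axis"])))
    return w[0] * rel + w[1] * dist + w[2] * rot

def match_parts(cad_parts, obs_parts):
    """Greedily assign each observed part to the most similar CAD part."""
    return {o: max(cad_parts, key=lambda c: similarity(cad_parts[c], obs_parts[o]))
            for o in obs_parts}
```

Once matched, the CAD part's exact geometry can stand in for the noisy observed pose, which is the correction step the abstract describes.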

Findings

In the experiments, both simulated data and real-world data are applied to evaluate the performance of the PAGC* model. The experimental results show the superiority of PAGC* in accuracy compared with assembly graph (AG) and probabilistic assembly graph without CAD model (PAG).

Originality/value

The paper provides a new approach to get the accurate pose of each part in demonstration stage of the robot PbD system. By integrating information from visual observation with prior knowledge from CAD model, PAGC* ensures the success in execution stage of the PbD system.

Details

Assembly Automation, vol. 40 no. 5
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 21 March 2016

Alberto Brunete, Carlos Mateo, Ernesto Gambao, Miguel Hernando, Jukka Koskinen, Jari M Ahola, Tuomas Seppälä and Tapio Heikkila

Abstract

Purpose

This paper aims to propose a new technique for programming robotized machining tasks based on intuitive human–machine interaction. This will enable operators to create robot programs for small-batch production in a fast and easy way, reducing the required time to accomplish the programming tasks.

Design/methodology/approach

This technique makes use of online walk-through path guidance using an external force/torque sensor, and simple and intuitive visual programming, by a demonstration method and symbolic task-level programming.
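Walk-through (manual guidance) programming with a force/torque sensor is typically realized with an admittance-style law that turns the measured hand force into an end-effector velocity command. A minimal sketch, with illustrative gains and a deadband to reject sensor noise (not taken from the paper), might look like:

```python
import numpy as np

def admittance_velocity(force, damping=50.0, deadband=2.0):
    """Map a measured hand force [N] to an end-effector velocity
    command [m/s]: v = (|F| - deadband) / damping along the force
    direction, zero inside the deadband. Gains are illustrative."""
    f = np.asarray(force, dtype=float)
    mag = np.linalg.norm(f)
    if mag < deadband:
        return np.zeros(3)          # ignore small forces / sensor noise
    return (mag - deadband) / damping * (f / mag)
```

In a real cell this command would be streamed to the robot's velocity interface at the control rate, and the visited poses recorded as the taught path.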

Findings

Thanks to this technique, the operator can easily program robots without learning every robot-specific language and can design new tasks for industrial robots based on manual guidance.

Originality/value

The main contribution of the paper is a new procedure to program machining tasks based on manual guidance (walk-through teaching method) and user-friendly visual programming. Up to now, the acquisition of paths and the task programming were done in separate steps and in separate machines. The authors propose a procedure for using a tablet as the only user interface to acquire paths and to make a program to use this path for machining tasks.

Details

Industrial Robot: An International Journal, vol. 43 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 August 2021

João Pedro C. de Souza, António M. Amorim, Luís F. Rocha, Vítor H. Pinto and António Paulo Moreira

Abstract

Purpose

The purpose of this paper is to present a programming by demonstration (PbD) system based on 3D stereoscopic vision and inertial sensing that provides a cost-effective pose tracking system, even during error-prone situations, such as camera occlusions.

Design/methodology/approach

The proposed PbD system is based on the 6D Mimic innovative solution, whose six degrees of freedom marker hardware had to be revised and restructured to accommodate an IMU sensor. Additionally, a new software pipeline was designed to include this new sensing device, seeking the improvement of the overall system’s robustness in stereoscopic vision occlusion situations.

Findings

The IMU component and the new software pipeline allow the 6D Mimic system to successfully maintain pose tracking when the main tracking tool, i.e. the stereoscopic vision, fails. The system therefore improves in reliability, robustness and accuracy, which was verified by real experiments.

Practical implications

Based on this proposal, the 6D Mimic system reaches a reliable and low-cost PbD methodology. Therefore, the robot can accurately replicate, on an industrial scale, the artisan level performance of highly skilled shop-floor operators.

Originality/value

To the best of the authors' knowledge, the sensor fusion of stereoscopic images and IMU data applied to robot PbD is a novel approach. The system is designed throughout to reduce costs, taking advantage of an offline processing step for data analysis, filtering and fusion that enhances the reliability of the PbD system.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 August 2016

Aljaž Kramberger, Rok Piltaver, Bojan Nemec, Matjaž Gams and Aleš Ude

Abstract

Purpose

In this paper, the authors aim to propose a method for learning robotic assembly sequences, where precedence constraints and object relative size and location constraints can be learned by demonstration and autonomous robot exploration.

Design/methodology/approach

To successfully plan the operations involved in assembly tasks, the planner needs to know the constraints of the desired task. In this paper, the authors propose a methodology for learning such constraints by demonstration and autonomous exploration. The learning of precedence constraints and object relative size and location constraints, which are needed to construct a planner for automated assembly, were investigated. In the developed system, the learning of symbolic constraints is integrated with low-level control algorithms, which is essential to enable active robot learning.

Findings

The authors demonstrated that the proposed reasoning algorithms can be used to learn previously unknown assembly constraints that are needed to implement a planner for automated assembly. The Cranfield benchmark, a standardized benchmark for testing robot assembly algorithms, was used to evaluate the proposed approaches. The authors evaluated the learning performance both in simulation and on a real robot.

Practical implications

The authors' approach reduces the amount of programming needed to set up new assembly cells and, consequently, the overall setup time when new products are introduced into the workcell.

Originality/value

In this paper, the authors propose a new approach for learning assembly constraints based on programming by demonstration and active robot exploration to reduce the computational complexity of the underlying search problems. The authors developed algorithms for success/failure detection of assembly operations based on the comparison of expected signals (forces and torques, positions and orientations of the assembly parts) with the actual signals sensed by a robot. In this manner, all precedence and object size and location constraints can be learned, thereby providing the necessary input for the optimal planning of the entire assembly process.
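The success/failure detection described above — comparing expected signals (forces and torques, part poses) with the signals actually sensed — can be sketched as a simple tolerance check. The field names and tolerance values below are illustrative, not the paper's actual detector.

```python
import numpy as np

def assembly_succeeded(expected, sensed, force_tol=5.0, pose_tol=0.002):
    """Declare an assembly operation successful when the sensed
    wrench (6D force/torque vector) and final part pose stay within
    tolerance of the expected, demonstrated signals. Tolerances in
    N / N*m and metres are illustrative."""
    force_err = np.max(np.abs(np.asarray(expected["wrench"]) -
                              np.asarray(sensed["wrench"])))
    pose_err = np.linalg.norm(np.asarray(expected["pose"]) -
                              np.asarray(sensed["pose"]))
    return force_err <= force_tol and pose_err <= pose_tol
```

A planner can then use such a predicate to confirm each learned precedence constraint during autonomous exploration.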

Details

Industrial Robot: An International Journal, vol. 43 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 August 2015

Daniele Massa, Massimo Callegari and Cristina Cristalli

Abstract

Purpose

This paper aims to deal with the problem of programming robots in industrial contexts, where the need of easy programming is increasing, while robustness and safety remain fundamental aspects.

Design/methodology/approach

A novel approach to robot programming is manual guidance, which permits the operator to freely move the robot through its task; the task can then be taught using programming by demonstration methods or simple reproduction.

Findings

In this work, the different ways to achieve manual guidance are discussed and an implementation using a force/torque sensor is provided. Experimental results and a use case are also presented.

Practical implications

The use case shows how this methodology can be used with an industrial robot. An implementation in industrial contexts should comply with ISO safety standards, as described in the paper.

Originality/value

This paper presents a complete state of the art of the problem and shows a real practical use case where the presented approach could be used to speed up the teaching process.

Details

Industrial Robot: An International Journal, vol. 42 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 March 2010

Pedro Neto, J. Norberto Pires and A. Paulo Moreira

Abstract

Purpose

Most industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. This is a tedious and time‐consuming task that requires some technical expertise, and hence new approaches to robot programming are required. The purpose of this paper is to present a robotic system that allows users to instruct and program a robot with a high‐level of abstraction from the robot language.

Design/methodology/approach

The paper presents in detail a robotic system that allows users, especially non-expert programmers, to instruct and program a robot just by showing it what it should do, in an intuitive way. This is done using the two most natural human interfaces (gestures and speech), a force control system and several code generation techniques. Special attention is given to the recognition of gestures, where data extracted from a motion sensor (a three-axis accelerometer) embedded in the Wii remote controller are used to capture human hand behaviours. Gestures (dynamic hand positions) as well as manual postures (static hand positions) are recognized using a statistical approach and artificial neural networks.
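A statistical gesture recognizer of the kind described can be sketched with simple per-window accelerometer features and a nearest-centroid rule. This is a simplified stand-in for the statistical and neural-network classifiers in the paper; the feature choice and class names are illustrative.

```python
import numpy as np

def features(window):
    """Per-axis mean and standard deviation of an (n, 3) window of
    three-axis accelerometer samples."""
    w = np.asarray(window, dtype=float)
    return np.concatenate([w.mean(axis=0), w.std(axis=0)])

class NearestCentroidGesture:
    """Minimal statistical gesture classifier: one feature centroid
    per gesture class, prediction by nearest centroid."""

    def fit(self, windows, labels):
        X = np.array([features(w) for w in windows])
        y = np.asarray(labels)
        self.classes_ = sorted(set(labels))
        self.centroids_ = {c: X[y == c].mean(axis=0) for c in self.classes_}
        return self

    def predict(self, window):
        f = features(window)
        return min(self.classes_,
                   key=lambda c: np.linalg.norm(f - self.centroids_[c]))
```

A neural network, as used in the paper, would replace the centroid rule with a learned decision boundary over the same kind of windowed features.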

Findings

It is shown that the robotic system presented is suitable to enable users without programming expertise to rapidly create robot programs. The experimental tests showed that the developed system can be customized for different users and robotic platforms.

Research limitations/implications

The proposed system is tested on two different robotic platforms. Since the options adopted are mainly based on standards, it can be implemented with other robot controllers without significant changes. Future work will focus on improving the recognition rate of gestures and continuous gesture recognition.

Practical implications

The key contribution of this paper is that it offers a practical method to program robots by means of gestures and speech, improving work efficiency and saving time.

Originality/value

This paper presents an alternative to the typical robot teaching process, extending the concept of human‐robot interaction and co‐worker scenario. Since most companies do not have engineering resources to make changes or add new functionalities to their robotic manufacturing systems, this system constitutes a major advantage for small‐ to medium‐sized enterprises.

Details

Industrial Robot: An International Journal, vol. 37 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 October 2014

Fares J. Abu-Dakka, Bojan Nemec, Aljaž Kramberger, Anders Glent Buch, Norbert Krüger and Ales Ude

Abstract

Purpose

The purpose of this paper is to propose a new algorithm based on programming by demonstration and exception strategies to solve assembly tasks such as peg-in-hole.

Design/methodology/approach

Data describing the demonstrated tasks are obtained by kinesthetic guiding. The demonstrated trajectories are transferred to new robot workspaces using three-dimensional (3D) vision. Noise introduced by vision when transferring the task to a new configuration could cause the execution to fail, but such problems are resolved through exception strategies.
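The two ingredients named above — transferring a demonstrated trajectory via a vision-estimated rigid transform, and an exception strategy for when that estimate is slightly off — can be sketched as follows. The spiral search is a common peg-in-hole recovery tactic used here for illustration; all parameters are illustrative, not the paper's.

```python
import numpy as np

def transfer_trajectory(trajectory, R, t):
    """Map a demonstrated Cartesian trajectory (n, 3) into a new
    workspace via the rigid transform (R, t) estimated by 3D vision."""
    return (np.asarray(trajectory) @ R.T) + t

def spiral_search_offsets(step=0.5e-3, turns=3, pts_per_turn=8):
    """In-plane offsets on an Archimedean spiral: a typical exception
    strategy to retry insertion when the transferred target pose is
    slightly off. Radius grows by `step` metres per turn."""
    offsets = []
    for k in range(turns * pts_per_turn):
        ang = 2 * np.pi * k / pts_per_turn
        r = step * k / pts_per_turn
        offsets.append((r * np.cos(ang), r * np.sin(ang)))
    return offsets
```

On an insertion failure, the controller would retry the final approach at each offset in turn, which is how an exception strategy can absorb the vision noise mentioned above.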

Findings

This paper demonstrated that the proposed approach, combined with exception strategies, outperforms traditional approaches for robot-based assembly. Experimental evaluation was carried out on the Cranfield Benchmark, which constitutes a standardized assembly task in robotics. A statistical evaluation was also performed based on experiments carried out on two different robotic platforms.

Practical implications

The developed framework can have an important impact on robot assembly processes, which are among the most important applications of industrial robots. The authors' future plans involve implementing the framework in a commercially available robot controller.

Originality/value

This paper proposes a new approach to robot assembly based on the Learning by Demonstration (LbD) paradigm. The proposed framework enables new assembly tasks to be programmed quickly without the need for a detailed analysis of the geometric and dynamic characteristics of the workpieces involved in the assembly task. The algorithm provides effective disturbance rejection, improved stability and increased overall performance. The proposed exception strategies increase the success rate of the algorithm when the task is transferred to new areas of the workspace, where it is necessary to deal with vision noise and altered dynamic characteristics of the task.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X
