Search results

1 – 10 of over 1000
Article
Publication date: 15 October 2020

Enbo Li, Haibo Feng, Yanwu Zhai, Zhou Haitao, Li Xu and Yili Fu

Abstract

Purpose

One of the development trends in robotics is to give robots the ability of anthropomorphic manipulation, and grasping is the first step of manipulation. For mobile manipulator robots, grasping a target during movement is extremely challenging, because it requires rapid motion planning for the arm under uncertain dynamic disturbances. However, many situations, such as emergency rescue, require robots to grasp a target quickly while they move. The purpose of this paper is to propose a method for dynamically grasping a target during the movement of a robot.

Design/methodology/approach

An off-line learning-from-demonstration method is applied to learn a basic reaching model for the arm and a motion model for the fingers. An on-line method for dynamically adjusting the arm speed in active and passive grasping modes is designed.
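
As a rough illustration of the on-line speed adjustment idea, the sketch below replays an off-line learned reach trajectory and slows its phase advance when a measured base-motion disturbance grows. The function names and gains are hypothetical, not taken from the paper.

```python
def speed_scale(disturbance, k_gain=0.5, min_scale=0.2):
    """Map a measured base-motion disturbance (e.g. IMU acceleration norm)
    to a playback-speed scale in [min_scale, 1]."""
    return max(min_scale, 1.0 / (1.0 + k_gain * disturbance))

def replay_reach(trajectory, get_disturbance):
    """Replay an off-line learned reach trajectory (a list of joint-space
    waypoints), slowing the phase advance when the disturbance grows."""
    phase, n = 0.0, len(trajectory)
    while phase < n - 1:
        phase += speed_scale(get_disturbance())   # nominally one waypoint per tick
        yield trajectory[int(min(phase, n - 1))]  # next target for the arm controller
```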

Findings

Experimental results with the robot moving on flat ground, slopes and speed bumps show that the proposed method can effectively solve the problem of fast planning under the uncertain disturbances caused by robot movement. The method performs well in the task of dynamically grasping a target while the robot moves.

Originality/value

The main contribution of this paper is a method for rapid motion planning of the robot arm under uncertain disturbances while the robot grasps a target during movement. The proposed method significantly improves the grasping efficiency of the robot in emergency situations, and experimental results show that it effectively solves the problem.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 1
Type: Research Article
ISSN: 0143-991X

Keywords

Open Access
Article
Publication date: 25 March 2021

Bartłomiej Kulecki, Kamil Młodzikowski, Rafał Staszak and Dominik Belter

Abstract

Purpose

The purpose of this paper is to propose and evaluate a method for grasping a defined set of objects in an unstructured environment. To this end, the authors propose a method that integrates convolutional neural network (CNN)-based object detection with a category-free grasping method. The considered scenario involves mobile manipulating platforms that move freely between workstations and manipulate defined objects. In this application, the robot is not positioned with respect to the table and the manipulated objects. The robot detects objects in the environment and uses grasping methods to determine the reference pose of the gripper.

Design/methodology/approach

The authors implemented the whole pipeline, which includes object detection, grasp planning and motion execution, on a real robot. The selected grasping method uses raw depth images to find the configuration of the gripper. The authors compared the proposed approach with a representative grasping method that uses a 3D point cloud as input to determine the grasp for a robotic arm equipped with a two-fingered gripper. To measure and compare the efficiency of these methods, the authors measured the success rate in various scenarios. Additionally, they evaluated the accuracy of the object detection and pose estimation modules.
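
A rough sketch of such a detection-then-grasp pipeline is given below; detector, planner and the robot interface are hypothetical placeholders for the CNN detector, the category-free grasp planner and the motion execution layer, not APIs from the paper.

```python
def grasp_target(rgb, depth, target_label, detector, planner, robot):
    """Detect the target with the CNN, crop its depth patch, plan a
    category-free grasp on the raw depth and execute it."""
    detections = detector(rgb)                            # CNN object detection
    matches = [d for d in detections if d["label"] == target_label]
    if not matches:
        return False                                      # target not in the scene
    x0, y0, x1, y1 = max(matches, key=lambda d: d["score"])["box"]
    grasp_pose = planner(depth[y0:y1, x0:x1])             # grasp from raw depth patch
    robot.move_to(grasp_pose)                             # approach the reference pose
    robot.close_gripper()
    return True
```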

Findings

The performed experiments revealed that CNN-based object detection and category-free grasping methods can be integrated to obtain a system that grasps defined objects in an unstructured environment. The authors also identified the specific limitations of the neural-based and point cloud-based methods and show how these properties influence the performance of the whole system.

Research limitations/implications

The authors identified the limitations of the proposed methods; improvements are envisioned as part of future research.

Practical implications

The evaluation of the grasping and object detection methods on the mobile manipulating robot may be useful for all researchers working on the autonomy of similar platforms in various applications.

Social implications

The proposed method increases the autonomy of robots in small-scale industrial applications that involve repetitive tasks in noisy and potentially hazardous environments, reducing the human workload in these types of environments.

Originality/value

The main contribution of this research is the integration of state-of-the-art grasping methods with object detection methods and the evaluation of the whole system on an industrial robot. Moreover, the properties of each subsystem are identified and measured.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 November 2002

N. Boubekri and Pinaki Chakraborty

Abstract

The application of robots to industrial problems often requires grasping and manipulation of the work piece. The robot is able to perform a task adequately only when it is assigned proper tooling and adequate methods of grasping and handling work pieces. The design of such a task requires an in‐depth knowledge of several interrelated subjects, including gripper design; force, position, stiffness and compliance control; and grasp configurations. In this paper, we review the research findings on these subjects in order to present, in a concise form that can be easily accessed by designers of robot tasks, the information reported by researchers, and, based on this review, to identify future research directions in these areas.
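
As a small illustration of the stiffness and compliance control mentioned among these subjects, the sketch below implements a textbook Cartesian stiffness law; the gain values are purely illustrative and are not taken from the reviewed work.

```python
import numpy as np

def compliance_wrench(x_des, x, xdot_des, xdot, K, D):
    """Cartesian stiffness/compliance law: commanded wrench
    F = K (x_des - x) + D (xdot_des - xdot), with 6x6 stiffness K,
    6x6 damping D and 6-vector pose/twist errors."""
    return K @ (x_des - x) + D @ (xdot_des - xdot)

# Illustrative gains: compliant in translation, softer still in rotation
K = np.diag([300.0, 300.0, 100.0, 10.0, 10.0, 10.0])   # N/m and N*m/rad
D = np.diag([30.0, 30.0, 20.0, 1.0, 1.0, 1.0])
```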

Details

Integrated Manufacturing Systems, vol. 13 no. 7
Type: Research Article
ISSN: 0957-6061

Keywords

Article
Publication date: 6 September 2022

Kaimeng Wang and Te Tang

Abstract

Purpose

This paper aims to present a new approach for robot programming by demonstration, which generates robot programs by tracking the six-dimensional (6D) pose of the demonstrator’s hand using a single red green blue (RGB) camera, without requiring any additional sensors.

Design/methodology/approach

The proposed method learns robot grasps and trajectories directly from a single human demonstration by tracking the movements of both the human hand and the objects. To recover the 6D pose of an object from a single RGB image, a deep learning–based method is used to detect the keypoints of the object and then solve a perspective-n-point (PnP) problem. This method is then extended to estimate the 6D pose of the non-rigid hand by separating the fingers into multiple rigid bones linked by hand joints. An accurate robot grasp can be generated from the relative positions between the hand and the objects in two-dimensional (2D) space. Robot end-effector trajectories are generated from the hand movements and then refined by the objects’ start and end positions.
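
The keypoint-then-PnP step can be illustrated with OpenCV's solvePnP; the keypoint coordinates and camera intrinsics below are illustrative placeholders rather than values from the paper.

```python
import cv2
import numpy as np

object_pts = np.array([[0.00, 0.00, 0.0],    # object keypoints in the object
                       [0.10, 0.00, 0.0],    # frame (metres); coplanar here
                       [0.10, 0.05, 0.0],
                       [0.00, 0.05, 0.0]])
image_pts = np.array([[320.0, 240.0],        # matching keypoint detections (pixels)
                      [420.0, 238.0],
                      [422.0, 300.0],
                      [318.0, 305.0]])
K = np.array([[600.0, 0.0, 320.0],           # pinhole intrinsics
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                           # assume an undistorted image

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)                   # 6D pose: rotation R and translation tvec
```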

Findings

Experiments are conducted on a FANUC LR Mate 200iD robot to verify the proposed approach. The results show the feasibility of generating robot programs from a single observation of a human demonstration using only one RGB camera.

Originality/value

The proposed approach provides an efficient and low-cost robot programming method with a single RGB camera. A new 6D hand pose estimation approach, which is used to generate robot grasps and trajectories, is developed.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 28 May 2021

Zhengtuo Wang, Yuetong Xu, Guanhua Xu, Jianzhong Fu, Jiongyan Yu and Tianyi Gu

Abstract

Purpose

In this work, the authors aim to provide a set of convenient methods for generating training data and then develop a deep learning method based on point clouds to estimate the pose of the target for robot grasping.

Design/methodology/approach

This work presents PointSimGrasp, a deep learning method on point clouds for robot grasping. In PointSimGrasp, a point cloud emulator is introduced to generate training data, and a deep learning–based pose estimation algorithm is designed. After training on the emulated data set, the pose estimation algorithm can estimate the pose of the target.
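
A minimal sketch of what such a point cloud emulator could look like is given below, assuming Open3D and a CAD mesh of the target; the noise model and sampling parameters are illustrative, not the paper's.

```python
import numpy as np
import open3d as o3d

def emulate_samples(mesh_path, n_samples=1000, n_points=2048):
    """Generate (point cloud, ground-truth pose) training pairs by sampling a
    CAD mesh, applying a random rigid transform and adding sensor-like noise."""
    mesh = o3d.io.read_triangle_mesh(mesh_path)
    data = []
    for _ in range(n_samples):
        T = np.eye(4)
        T[:3, :3] = o3d.geometry.get_rotation_matrix_from_xyz(
            np.random.uniform(-np.pi, np.pi, size=3))
        T[:3, 3] = np.random.uniform(-0.1, 0.1, size=3)     # random translation (m)
        pcd = mesh.sample_points_uniformly(number_of_points=n_points)
        pcd.transform(T)
        pts = np.asarray(pcd.points)
        pts = pts + np.random.normal(0.0, 0.001, pts.shape)  # simple depth noise
        data.append((pts.astype(np.float32), T))
    return data
```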

Findings

In the experimental part, an experimental platform is built, which contains a six-axis industrial robot, a binocular structured-light sensor and a base platform with adjustable inclination. A data set that contains three subsets is collected on the experimental platform. After training on the emulated data set, PointSimGrasp is tested on the experimental data set, obtaining an average translation error of about 2–3 mm and an average rotation error of about 2–5 degrees.

Originality/value

The contributions are as follows: first, a deep learning method on point clouds is proposed to estimate the 6D pose of the target; second, a convenient training method for the pose estimation algorithm is presented and a point cloud emulator is introduced to generate training data; finally, an experimental platform is built and PointSimGrasp is tested on it.

Details

Assembly Automation, vol. 41 no. 2
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 5 April 2021

Shifeng Lin and Ning Wang

Abstract

Purpose

In multi-robot cooperation, the cloud can be used to share sensor data, which helps robots perceive the environment better. For cloud robotics, grasping is an important ability that must be mastered. Usually, the information for grasping comes mainly from visual sensors. However, due to the uncertainty of the working environment, the vision sensor’s view may be blocked by unknown objects. This paper aims to propose a solution to the robot grasping problem when the vision sensor is blocked, by sharing the information of multiple vision sensors in the cloud.

Design/methodology/approach

First, the random sample consensus (RANSAC) algorithm and principal component analysis (PCA) are used to detect the desktop region. Then, the minimum bounding rectangle of the occlusion area is obtained by PCA. The candidate camera view range is obtained by plane segmentation and combined with the manipulator workspace to obtain the camera pose, which drives the arm to take pictures of the occluded desktop area. Finally, a Gaussian mixture model (GMM) is used to approximate the shape of the object projection, and for every single Gaussian component a grasping rectangle is generated and evaluated to select the most suitable one.
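
Two of these steps can be sketched with standard libraries: RANSAC plane segmentation for the desktop (Open3D) and GMM shape approximation with one candidate rectangle per component (scikit-learn). The thresholds and component counts below are illustrative, not the authors' settings.

```python
import numpy as np
import open3d as o3d
from sklearn.mixture import GaussianMixture

def detect_desktop(pcd):
    """Return the dominant plane model and its inlier indices (RANSAC)."""
    plane, inliers = pcd.segment_plane(distance_threshold=0.01,
                                       ransac_n=3, num_iterations=1000)
    return plane, inliers

def grasp_rectangles(xy_points, n_components=3):
    """Fit a GMM to the 2D projection of an object and derive one oriented
    rectangle per Gaussian from its mean and covariance eigenvectors."""
    gmm = GaussianMixture(n_components=n_components).fit(xy_points)
    rects = []
    for mean, cov in zip(gmm.means_, gmm.covariances_):
        w, v = np.linalg.eigh(cov)              # principal axes of the component
        angle = np.arctan2(v[1, 1], v[0, 1])    # orientation of the major axis
        size = 2.0 * np.sqrt(w)                 # ~1-sigma extents as rectangle size
        rects.append((mean, size, angle))
    return rects
```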

Findings

A variety of scenarios in which the cloud robot’s view is blocked are tested. Experimental results show that the proposed algorithm can capture images of the occluded desktop and successfully grasp the objects in the occluded area.

Originality/value

In existing work, there are few studies on using active multi-sensor approaches to solve the occlusion problem. This paper presents a new solution to the occlusion problem. The proposed method can be applied to multi-robot cloud working environments through cloud sharing, which helps the robot perceive the environment better. In addition, this paper proposes a method to obtain the object-grasping rectangle based on GMM shape approximation of the point cloud projection. Experiments show that the proposed methods work well.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 6 June 2022

Guoyang Wan, Fudong Li, Bingyou Liu, Shoujun Bai, Guofeng Wang and Kaisheng Xing

Abstract

Purpose

This paper aims to study six degrees-of-freedom (6DOF) pose measurement of reflective metal casts by machine vision, analyze the problems in positioning metal casts with a stereo vision sensor in an unstructured environment, and put forward a visual positioning and grasping strategy that can be used in an industrial robot cell.

Design/methodology/approach

A multi-keypoint detection network, Binocular Attention Hourglass Net, is constructed, which simultaneously completes the two-dimensional localization of keypoints in the left and right cameras of the stereo vision system and provides reconstruction information for three-dimensional pose measurement. A generative adversarial network is introduced to enhance images of local feature areas on the object surface, and the three-dimensional pose measurement of the object is completed by combining a RANSAC ellipse fitting algorithm with the triangulation method.
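
The stereo reconstruction step can be illustrated with OpenCV triangulation; the projection matrices, baseline and pixel coordinates below are placeholders, not values from the paper.

```python
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                 # left camera at origin
P_right = K @ np.hstack([np.eye(3), np.array([[-0.06], [0.0], [0.0]])])  # 6 cm baseline

pts_left = np.array([[300.0, 340.0, 310.0],     # 2 x N matched keypoints (pixels)
                     [200.0, 210.0, 260.0]])
pts_right = np.array([[262.0, 301.0, 272.0],
                      [200.0, 210.0, 260.0]])

X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
X = (X_h[:3] / X_h[3]).T                         # N x 3 reconstructed keypoints (metres)
```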

Findings

The proposed method realizes high-precision 6DOF positioning and grasping of reflective metal casts by industrial robots; it has been applied in many fields and solves the problem of difficult visual measurement of reflective casts. The experimental results show that the system exhibits superior recognition performance, which meets the requirements of the grasping task.

Research limitations/implications

Because of the chosen research approach, the research results may lack generalizability. The proposed method is more suitable for objects with planar positioning features.

Originality/value

This paper realizes 6DOF pose measurement of reflective casts with a vision system and solves the problem of positioning and grasping such objects with an industrial robot.

Details

Assembly Automation, vol. 42 no. 4
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 15 June 2015

Ryan Carpenter, Ross Hatton and Ravi Balasubramanian

Abstract

Purpose

The purpose of this paper is to develop an automated industrial robotic system for handling steel castings of various sizes and shapes in a foundry.

Design/methodology/approach

The authors first designed a prismatic gripper for pick-and-place operations that incorporates underactuated passive hydraulic contact (PHC) phalanges, which enable the gripper to easily adapt to different casting shapes. The authors then optimized the gripper parameters and compared it to an adaptive revolute gripper using two methods: a planar physics-based quasistatic simulation that accounts for object dynamics, and validation with physical prototypes on a physical robot.
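
A rough sketch of such a gripper parameter optimization loop is shown below; simulate_grasp stands in for the quasistatic simulation, and the parameter names and ranges are illustrative rather than the authors' values.

```python
import itertools
import numpy as np

def optimize_gripper(simulate_grasp, castings):
    """Grid-search illustrative PHC gripper parameters, scoring each candidate
    by its grasp success rate over a set of casting models in simulation."""
    phalanx_lengths = np.linspace(0.04, 0.10, 4)     # metres (illustrative)
    spring_stiffness = np.linspace(50.0, 300.0, 4)   # N/m (illustrative)
    best_params, best_rate = None, -1.0
    for L, k in itertools.product(phalanx_lengths, spring_stiffness):
        successes = sum(simulate_grasp(obj, length=L, stiffness=k)
                        for obj in castings)
        rate = successes / len(castings)
        if rate > best_rate:
            best_params, best_rate = (L, k), rate
    return best_params, best_rate
```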

Findings

Through simulation, the authors found that an optimized PHC gripper improves grasp performance by 12 per cent compared to a human-chosen PHC configuration and by 60 per cent compared to the BarrettHand™. Physical testing validated this finding with improvements of 11 per cent and 280 per cent, respectively.

Originality/value

This paper presents, for the first time, optimized prismatic grippers that passively adapt to an object’s shape in grasping tasks.

Details

Industrial Robot: An International Journal, vol. 42 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 3 July 2023

Kento Nakatsuru, Weiwei Wan and Kensuke Harada

Abstract

Purpose

This paper aims to study using a mobile manipulator with a collaborative robotic arm component to manipulate objects beyond the robot’s maximum payload.

Design/methodology/approach

This paper proposes a single-shot probabilistic roadmap-based method to plan and optimize manipulation motion with environmental support. The method uses an expanded object mesh model to examine contact and randomly explores object motion while maintaining contact and securing an affordable grasping force. It generates robot motion trajectories after obtaining the object motion using an optimization-based algorithm. With the proposed method, the authors can plan contact-rich manipulation without explicitly analyzing an object’s contact modes and their transitions; the planner and optimizer determine them automatically.
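
A hedged sketch of a probabilistic-roadmap search over contact-keeping object poses is given below; sample_pose, keeps_contact and edge_valid stand in for the expanded-mesh contact check and the force/kinematics feasibility checks, and the pose representation is simplified to a plain vector.

```python
import numpy as np
import networkx as nx

def plan_object_motion(start, goal, sample_pose, keeps_contact, edge_valid,
                       n_samples=500, k_nearest=8):
    """Build a roadmap of sampled object poses that keep contact, connect
    k nearest neighbours with feasible edges and return the shortest path."""
    nodes = [start, goal] + [p for p in (sample_pose() for _ in range(n_samples))
                             if keeps_contact(p)]
    G = nx.Graph()
    G.add_nodes_from(range(len(nodes)))
    for i, p in enumerate(nodes):
        d = [np.linalg.norm(np.asarray(p) - np.asarray(q)) for q in nodes]
        for j in np.argsort(d)[1:k_nearest + 1]:          # skip self (distance 0)
            if edge_valid(nodes[i], nodes[int(j)]):
                G.add_edge(i, int(j), weight=d[int(j)])
    path = nx.shortest_path(G, 0, 1, weight="weight")     # node 0 = start, 1 = goal
    return [nodes[i] for i in path]
```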

Findings

The authors conducted experiments and analyses using simulations and real-world executions to examine the method’s performance. The method successfully found manipulation motion that met contact, force and kinematic constraints. It allowed a mobile manipulator to move heavy objects while leveraging supporting forces from environmental obstacles.

Originality/value

This paper presents an automatic approach for solving contact-rich heavy object manipulation problems. Unlike previous methods, the new approach does not need to explicitly analyze contact states and build contact transition graphs, thus providing a new view for robotic grasp-less manipulation, nonprehensile manipulation, manipulation with contact, etc.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 1 June 1996

Peter Sorenti

Abstract

Gives an example of the use of offline 3‐D graphical simulation to assist in program generation for a robot palletizing operation, a technique which is proving increasingly cost effective. Describes GRASP, the 3‐D graphical simulation tool for robotics applications whose special palletizing module was employed, and reports on the successful outcome. Claims that GRASP has enabled verified robot programs to be produced in less than a day, saving more than two days over the manual programming approach and reducing cell downtime from three days to less than two hours.
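
As an illustration of the kind of output an offline palletizing program generator produces (not GRASP's actual interface), the sketch below computes a grid of place positions for one pallet layer from pallet and carton dimensions.

```python
def layer_poses(pallet_w, pallet_d, box_w, box_d, box_h, layer):
    """Return (x, y, z) centre positions for one layer of a simple
    column-stacked pallet pattern, all dimensions in metres."""
    n_x = int(pallet_w // box_w)
    n_y = int(pallet_d // box_d)
    z = (layer + 0.5) * box_h                 # centre height of this layer
    return [((i + 0.5) * box_w, (j + 0.5) * box_d, z)
            for i in range(n_x) for j in range(n_y)]

# Example: Euro pallet (1.2 m x 0.8 m) with 0.3 m x 0.2 m x 0.25 m cartons
poses = layer_poses(1.2, 0.8, 0.3, 0.2, 0.25, layer=0)   # 16 place positions
```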

Details

Industrial Robot: An International Journal, vol. 23 no. 3
Type: Research Article
ISSN: 0143-991X

Keywords

1 – 10 of over 1000