Search results

1 – 10 of over 1000
Open Access
Article
Publication date: 25 March 2021

Bartłomiej Kulecki, Kamil Młodzikowski, Rafał Staszak and Dominik Belter


Abstract

Purpose

The purpose of this paper is to propose and evaluate a method for grasping a defined set of objects in an unstructured environment. To this end, the authors propose a method that integrates convolutional neural network (CNN)-based object detection with a category-free grasping method. The considered scenario involves mobile manipulation platforms that move freely between workstations and manipulate defined objects. In this application, the robot is not positioned with respect to the table or the manipulated objects. The robot detects objects in the environment and uses grasping methods to determine the reference pose of the gripper.

Design/methodology/approach

The authors implemented the whole pipeline, which includes object detection, grasp planning and motion execution, on a real robot. The selected grasping method uses raw depth images to find the configuration of the gripper. The authors compared the proposed approach with a representative grasping method that uses a 3D point cloud as input to determine the grasp for a robotic arm equipped with a two-finger gripper. The efficiency of the two methods was compared by measuring the success rate in various scenarios. Additionally, the accuracy of the object detection and pose estimation modules was evaluated.
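For concreteness, a detect-then-grasp pipeline of the kind described above can be organized roughly as follows. This is only an illustrative sketch under assumed interfaces (the detector, grasp_planner and robot objects and all names are hypothetical), not the authors' implementation.

```python
# Hypothetical sketch of a detect-then-grasp pipeline; names are illustrative only.
from dataclasses import dataclass
from typing import List, Tuple
import numpy as np

@dataclass
class Detection:
    label: str
    box: Tuple[int, int, int, int]   # (x_min, y_min, x_max, y_max) in image pixels

@dataclass
class GraspPose:
    position: np.ndarray   # (3,) gripper position in the robot base frame
    rotation: np.ndarray   # (3, 3) gripper orientation
    width: float           # finger opening

def crop_depth(depth: np.ndarray, box: Tuple[int, int, int, int]) -> np.ndarray:
    """Cut the depth image down to the detected object region."""
    x0, y0, x1, y1 = box
    return depth[y0:y1, x0:x1]

def run_pipeline(rgb: np.ndarray, depth: np.ndarray, detector, grasp_planner, robot,
                 target_label: str) -> bool:
    """Detect the requested object, plan a grasp on its depth crop, execute it."""
    detections: List[Detection] = detector(rgb)
    targets = [d for d in detections if d.label == target_label]
    if not targets:
        return False
    grasp: GraspPose = grasp_planner(crop_depth(depth, targets[0].box))
    robot.move_to(grasp.position, grasp.rotation)
    robot.close_gripper(grasp.width)
    return True
```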

Findings

The performed experiments revealed that CNN-based object detection and category-free grasping methods can be integrated into a system that grasps defined objects in an unstructured environment. The authors also identified the specific limitations of the neural-network-based and point-cloud-based methods and show how these properties influence the performance of the whole system.

Research limitations/implications

The authors identified the limitations of the proposed methods, and improvements are envisioned as part of future research.

Practical implications

The evaluation of the grasping and object detection methods on a mobile manipulation robot may be useful to researchers working on the autonomy of similar platforms in various applications.

Social implications

The proposed method increases the autonomy of robots in small-scale industrial applications that involve repetitive tasks in noisy and potentially hazardous environments. This allows the human workload in these types of environments to be reduced.

Originality/value

The main contribution of this research is the integration of state-of-the-art object grasping methods with object detection methods and the evaluation of the whole system on an industrial robot. Moreover, the properties of each subsystem are identified and measured.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 9 September 2021

Xuan Zhao, Hancheng Yu, Mingkui Feng and Gang Sun


Abstract

Purpose

Automatic robotic grasping has important application value in industry. Recent works have explored the performance of deep learning for robotic grasp detection. They usually use oriented anchor boxes (OABs) as the detection prior and achieve better performance than previous works. However, the parameters of their losses belong to different coordinate systems, which may affect the regression accuracy. This paper aims to propose an oriented regression loss to solve this inconsistency among the loss parameters.

Design/methodology/approach

In the oriented loss, the center coordinate errors between the ground-truth grasp rectangle and the predicted grasp rectangle are rotated to the vertical and horizontal axes of the OAB. The direction error is then used as an orientation factor and combined with the errors of the rotated center coordinates and the width and height of the predicted grasp rectangle.
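The abstract does not give the exact formula, so the following is only a hedged sketch of a loss in this spirit: the center error is rotated into the anchor's own axes so that all terms share one coordinate system, and the direction error scales the result. The variable names, the weighting and the form of the orientation factor are assumptions.

```python
# Minimal sketch of an oriented regression loss in the spirit described above.
import numpy as np

def oriented_regression_loss(pred, gt, anchor_angle):
    """pred, gt: dicts with keys x, y, w, h, theta (grasp rectangle parameters).
    anchor_angle: orientation of the oriented anchor box in radians."""
    dx, dy = pred["x"] - gt["x"], pred["y"] - gt["y"]
    # Rotate the center error into the anchor's own axes.
    c, s = np.cos(anchor_angle), np.sin(anchor_angle)
    du = c * dx + s * dy          # along the anchor's width direction
    dv = -s * dx + c * dy         # along the anchor's height direction
    dw = pred["w"] - gt["w"]
    dh = pred["h"] - gt["h"]
    dtheta = pred["theta"] - gt["theta"]
    # Direction error acts as an orientation factor on the remaining terms.
    orientation_factor = 1.0 + np.abs(np.sin(dtheta))
    return orientation_factor * (du**2 + dv**2 + dw**2 + dh**2)
```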

Findings

The proposed oriented regression loss is evaluated within the YOLO-v3 framework on the grasp detection task. It yields state-of-the-art performance, with an accuracy of 98.8% and a speed of 71 frames per second on a GTX 1080Ti, on the Cornell dataset.

Originality/value

This paper proposes an oriented loss to improve the regression accuracy of deep learning for grasp detection. The authors apply the proposed deep grasp network to a visual-servoing intelligent crane. The experimental results indicate that the approach is accurate and robust enough for real-time grasping applications.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 1
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 15 February 2022

Xiaojun Wu, Peng Li, Jinghui Zhou and Yunhui Liu


Abstract

Purpose

Scattered parts are laid out randomly during the manufacturing process and are difficult to recognize and manipulate. This study aims to grasp these scattered parts with a manipulator equipped with a camera and a learning method.

Design/methodology/approach

In this paper, a cascaded convolutional neural network (CNN) method for robotic grasping based on monocular vision and a small data set of scattered parts is proposed. The method can be divided into three steps: object detection, monocular depth estimation and keypoint estimation. In the first stage, an object detection network is improved to effectively locate the candidate parts. The second stage consists of a neural network structure and a corresponding training method that learn to infer a depth estimate from high-resolution input images. The keypoint estimation in the third step is expressed as a cumulative multi-scale prediction from a network that uses the red-green-blue-depth (RGBD) map obtained from the object detection and depth estimation stages. Finally, a grasping strategy is studied to achieve successful and continuous grasping. In the experiments, different workpieces are used to validate the proposed method. The best grasping success rate is more than 80%.
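The three-stage cascade can be pictured as below. This is a skeleton under assumed interfaces (detector, depth_net and keypoint_net are hypothetical callables), not the paper's code.

```python
# Illustrative skeleton of a detection -> monocular depth -> keypoint cascade.
import numpy as np

def cascade_grasp(rgb, detector, depth_net, keypoint_net):
    """Return the most promising grasp keypoints among all detected parts."""
    results = []
    depth = depth_net(rgb)                       # dense depth map predicted from one RGB image
    for box in detector(rgb):                    # candidate part regions (x0, y0, x1, y1)
        x0, y0, x1, y1 = box
        rgbd = np.dstack([rgb[y0:y1, x0:x1], depth[y0:y1, x0:x1]])  # 4-channel crop
        keypoints, scores = keypoint_net(rgbd)   # multi-scale keypoint prediction
        results.append((box, keypoints, scores))
    # Grasp the candidate whose keypoints have the highest predicted success score.
    return max(results, key=lambda r: float(np.max(r[2]))) if results else None
```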

Findings

By using the CNN-based method to extract the keypoints of the scattered parts and to estimate the probability of a successful grasp, the success rate is increased.

Practical implications

The method and the robotic system can be used for pick-and-place operations in most automated industrial manufacturing or assembly processes.

Originality/value

Unlike standard parts, scattered parts are laid out randomly and are difficult for the robot to recognize and grasp. This study uses a cascaded CNN to extract the keypoints of the scattered parts, which are also labeled with the probability of a successful grasp. Experiments are conducted to demonstrate the grasping of these scattered parts.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 4
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 3 February 2020

Hui Zhang, Jinwen Tan, Chenyang Zhao, Zhicong Liang, Li Liu, Hang Zhong and Shaosheng Fan


Abstract

Purpose

This paper aims to resolve the trade-off between detection efficiency and performance when grasping commodities rapidly. A fast detection and grasping method based on an improved Faster R-CNN is proposed and applied to a mobile manipulator to grasp commodities on a shelf.

Design/methodology/approach

To reduce the time cost of the algorithm, a new neural network structure based on Faster R-CNN is designed. To select anchor boxes that suit the data set, a data-set-adaptive algorithm for choosing anchor boxes is presented. Multiple models covering ten types of daily objects are trained to validate the improved Faster R-CNN. The proposed algorithm is deployed on the self-developed mobile manipulator, and three experiments are designed to evaluate the proposed method.
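The abstract does not spell out the anchor-selection rule. A common data-set-adaptive choice in detection frameworks is k-means clustering over the ground-truth box sizes, shown below purely as an assumed illustration rather than the authors' exact algorithm.

```python
# Assumed illustration: k-means over ground-truth box sizes to pick anchors.
import numpy as np

def kmeans_anchors(box_whs: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0):
    """box_whs: (N, 2) array of ground-truth box widths and heights."""
    rng = np.random.default_rng(seed)
    centers = box_whs[rng.choice(len(box_whs), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each box to the nearest anchor by Euclidean distance in (w, h).
        dists = np.linalg.norm(box_whs[:, None, :] - centers[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = box_whs[labels == j].mean(axis=0)
    return centers  # (k, 2) anchor widths and heights
```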

Findings

The results indicate that the proposed method runs successfully on the mobile manipulator: it not only performs detection effectively but also grasps the objects on the shelf successfully.

Originality/value

The proposed method improves the efficiency of Faster R-CNN while maintaining excellent performance and meeting the requirements of real-time detection, and the self-developed mobile manipulator can accomplish the object-grasping task.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 2
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 9 April 2021

Yang Chen and Fuchun Sun


Abstract

Purpose

The authors aim to design an adaptive grasping control strategy that keeps the grasp stable without setting the expected contact force in advance, so that the proposed control system can handle grasping and manipulation tasks involving unknown objects.

Design/methodology/approach

The adaptive grasping control strategy is based on a bang-bang-like control principle and a slippage detection module. The bang-bang-like controller finds and sets the expected contact force for the whole control system, and slippage detection is achieved with a dynamic time warping algorithm.
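As a rough sketch only, the two ingredients could be combined as follows: a classic dynamic time warping distance scores the recent force signal against a slip template, and a bang-bang-like rule raises or relaxes the expected contact force accordingly. The interfaces, thresholds and step sizes are assumptions, not the paper's design.

```python
# Assumed combination of DTW-based slip detection with a bang-bang-like force update.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic dynamic time warping distance between two 1-D force signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

def update_expected_force(f_expected: float, f_measured: float,
                          recent: np.ndarray, slip_template: np.ndarray,
                          step: float = 0.1, slip_threshold: float = 5.0) -> float:
    """Bang-bang-like adjustment: raise the set point on detected slip,
    otherwise relax it back toward the measured contact force."""
    slipping = dtw_distance(recent, slip_template) < slip_threshold
    if slipping:
        return f_expected + step                   # grip harder until the slip stops
    return max(f_measured, f_expected - step)      # slowly back off when stable
```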

Findings

The expected contact force adjusts adaptively during grasping tasks, which avoids the negative effects on the control system of differences across prior test results or designers. Slippage can be detected in time as the expected contact force or the manipulation environment varies. If slippage caused by an unexpected disturbance occurs, the control system automatically adjusts the expected contact force back to the level of the previous stable state after a given time, and it can identify an unnecessary increase in the expected contact force.

Originality/value

Only the contact force is used as a feedback variable in the control system, so the proposed strategy saves sensing hardware and electronic circuit components, reducing the cost and design difficulty of building a real control system and making it easy to realize in engineering applications. The expected contact force adjusts adaptively to unknown disturbances and slippage across various grasping and manipulation tasks.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 4
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 2 January 2023

Enbo Li, Haibo Feng and Yili Fu


Abstract

Purpose

Robotic grasping in dense cluttered scenes from a single view has not been solved perfectly, and the grasping success rate remains low. This study aims to propose an end-to-end grasp generation method to solve this problem.

Design/methodology/approach

A new grasp representation method is proposed that uses the normal vector of the table surface to derive the grasp baseline vectors and maps the grasps to pointed points (PP), so that no orthogonality constraints between vectors need to be added when a neural network predicts the rotation matrices of grasps.
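To illustrate the idea (with assumed conventions that may differ from the paper's), a full grasp rotation can be recovered from the table normal plus a single in-plane angle, so the network never has to output a rotation matrix with explicit orthogonality constraints:

```python
# Sketch: build an orthonormal gripper frame from the table normal and one angle.
import numpy as np

def grasp_rotation_from_normal(table_normal: np.ndarray, angle: float) -> np.ndarray:
    """Approach axis = -table normal; baseline axis = in-plane direction at `angle`."""
    n = table_normal / np.linalg.norm(table_normal)
    # Any reference direction not parallel to the normal defines the in-plane x axis.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, n)) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    x0 = ref - np.dot(ref, n) * n
    x0 /= np.linalg.norm(x0)
    y0 = np.cross(n, x0)
    baseline = np.cos(angle) * x0 + np.sin(angle) * y0   # finger-to-finger direction
    approach = -n                                        # gripper approaches along -normal
    binormal = np.cross(approach, baseline)
    return np.column_stack([baseline, binormal, approach])  # columns are orthonormal
```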

Findings

Experimental results show that the proposed representation benefits the training of the neural network, and a model trained on a synthetic data set also achieves a high grasping success rate and completion rate in real-world tasks.

Originality/value

The main contribution of this paper is a new grasp representation method that maps 6-DoF grasps to a PP and an angle relative to the tabletop normal vector, thereby eliminating the need for orthogonality constraints between vectors when grasps are predicted directly with neural networks. The proposed method can generate hundreds of grasps covering the whole surface in about 0.3 s. The experimental results show that the proposed method has a clear advantage over other methods.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 30 December 2021

Yongxiang Wu, Yili Fu and Shuguo Wang


Abstract

Purpose

This paper aims to use a fully convolutional network (FCN) to predict pixel-wise antipodal grasp affordances for unknown objects and to improve grasp detection performance through multi-scale feature fusion.

Design/methodology/approach

A modified FCN is used as the backbone to extract pixel-wise features from the input image, which are further fused with multi-scale context information gathered by a three-level pyramid pooling module to make more robust predictions. Based on the proposed unified feature embedding framework, two head networks are designed to implement different grasp rotation prediction strategies (regression and classification), and their performances are evaluated and compared with a defined point metric. The regression network is further extended to predict grasp rectangles for comparison with previous methods and for real-world robotic grasping of unknown objects.
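A rough PyTorch-style sketch of a three-level pyramid pooling fusion on top of an FCN feature map is given below; the bin sizes, channel counts and projection layer are assumptions and do not reproduce the paper's architecture.

```python
# Sketch of multi-scale context fusion via pyramid pooling (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidPoolingFusion(nn.Module):
    def __init__(self, channels: int, bins=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(nn.AdaptiveAvgPool2d(b),
                          nn.Conv2d(channels, channels // len(bins), kernel_size=1))
            for b in bins
        ])
        fused = channels + (channels // len(bins)) * len(bins)
        self.project = nn.Conv2d(fused, channels, kernel_size=3, padding=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        # Pool at several scales, upsample back, and concatenate with the input features.
        ctx = [F.interpolate(branch(x), size=(h, w), mode="bilinear",
                             align_corners=False) for branch in self.branches]
        return self.project(torch.cat([x, *ctx], dim=1))
```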

Findings

The ablation study of the pyramid pooling module shows that multi-scale information fusion significantly improves the model performance. The regression approach outperforms the classification approach based on the same feature embedding framework on two data sets. The regression network achieves state-of-the-art accuracy (up to 98.9%) and speed (4 ms per image) and a high success rate (97% for household objects, 94.4% for adversarial objects and 95.3% for objects in clutter) in the unknown-object grasping experiment.

Originality/value

A novel pixel-wise grasp affordance prediction network based on multi-scale feature fusion is proposed to improve grasp detection performance. Two prediction approaches are formulated and compared within the proposed framework. The proposed method achieves excellent performance on three benchmark data sets and in a real-world robotic grasping experiment.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 2
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 5 September 2016

JingRong Li, YuHua Xu, JianLong Ni and QingHui Wang


Abstract

Purpose

Hand gesture-based interaction can provide far more intuitive, natural and immersive feelings for users manipulating 3D objects in virtual assembly (VA). A mechanical assembly consists mostly of general-purpose machine elements or mechanical parts that can be classified into four types based on their geometric features and functionalities. For the different types of machine elements, engineers formulate corresponding grasping gestures based on their domain knowledge or habits for ease of assembly. Therefore, this paper aims to support a virtual hand in assembling mechanical parts.

Design/methodology/approach

The paper proposes a novel glove-based virtual hand grasping approach for virtual mechanical assembly. The kinematic model of the virtual hand is first set up by analyzing the hand structure and its possible movements, and four types of grasping gestures are then defined by the finger joint angles for connectors and the three types of parts, respectively. The recognition of virtual hand grasping is developed based on collision detection and gesture matching. Moreover, stable grasping conditions are discussed.
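As a purely illustrative sketch of gesture matching by joint angles (the template values and tolerance are invented, not the paper's calibrated gestures), the measured glove angles could be compared against predefined templates like this:

```python
# Hypothetical gesture matching by comparing glove joint angles to templates.
import numpy as np

# One template per gesture type, as a vector of finger joint angles in degrees.
GESTURE_TEMPLATES = {
    "connector_pinch": np.array([60, 60, 10, 10, 10]),
    "cylindrical_wrap": np.array([70, 70, 70, 70, 70]),
    "flat_part_grip": np.array([40, 40, 40, 10, 10]),
    "small_part_tip": np.array([80, 20, 10, 10, 10]),
}

def match_gesture(joint_angles: np.ndarray, tolerance: float = 15.0):
    """Return the closest template whose mean joint-angle error is within tolerance."""
    best, best_err = None, np.inf
    for name, template in GESTURE_TEMPLATES.items():
        err = float(np.mean(np.abs(joint_angles - template)))
        if err < best_err:
            best, best_err = name, err
    return best if best_err <= tolerance else None
```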

Findings

A prototype system is designed and developed to implement the proposed approach. A case study on the VA of a two-stage gear reducer demonstrates the functionality of the system. From the users' feedback, it is found that a more natural and stable hand grasping interaction for the VA of mechanical parts can be achieved.

Originality/value

The paper proposes a novel glove-based virtual hand grasping approach for virtual mechanical assembly.

Details

Assembly Automation, vol. 36 no. 4
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 13 January 2022

Jiang Daqi, Wang Hong, Zhou Bin and Wei Chunfeng


Abstract

Purpose

This paper aims to save the time spent on building the data set and to make the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the success rate of the gripping operation reaches a high level.

Design/methodology/approach

The proposed system comprises two different convolutional neural network (CNN) algorithms used at different stages and a binocular eye-in-hand system on the end effector, which detects the position and orientation of the workpiece. Both algorithms are trained on data sets containing images and annotations, which are generated automatically by the proposed method.

Findings

The approach can be successfully applied to the standard position-controlled robots common in industry. The algorithm performs excellently in terms of elapsed time: processing a 256 × 256 image takes less than 0.1 s without relying on high-performance GPUs. The approach is validated in a series of grasping experiments. This method frees workers from monotonous work and improves factory productivity.

Originality/value

The authors propose a novel neural network whose performance is shown to be excellent. Moreover, experimental results demonstrate that the proposed second stage is extraordinarily robust to environmental variations. The data sets are generated automatically, which saves the time spent on building them and makes the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the success rate of the gripping operation reaches a high level.

Details

Assembly Automation, vol. 42 no. 2
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 21 August 2017

Yassine Bouteraa and Ismail Ben Abdallah


Abstract

Purpose

The idea is to exploit the natural stability and performance of the human arm during movement, execution and manipulation. The purpose of this paper is to remotely control a handling robot with a low-cost but effective solution.

Design/methodology/approach

The developed approach is based on three different techniques to ensure movement and pattern recognition of the operator's arm as well as effective control of the object manipulation task. First, the methodology relies on Kinect-based gesture recognition of the operator's arm. However, a vision-based approach alone is not suitable for hand posture recognition, particularly when the hand is occluded. The proposed approach therefore supports the vision-based system with an electromyography (EMG)-based biofeedback system for posture recognition. Moreover, the approach adds force feedback to the vision-based gesture control and the EMG-based posture recognition to inform the operator of the real grasping state.

Findings

The main finding is a robust method for gesture-based control of a robot manipulator during movement, manipulation and grasping. The proposed approach uses a real-time gesture control technique based on a Kinect camera that can provide the exact position of each joint of the operator's arm. The developed solution also integrates EMG biofeedback and force feedback in its control loop. In addition, the authors propose a user-friendly human-machine interface (HMI) that allows the user to control the robotic arm in real time. The robust trajectory-tracking challenge is solved by implementing a sliding mode controller. A fuzzy logic controller manages the grasping task based on the EMG signal. Experimental results show the high efficiency of the proposed approach.
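For reference, a generic first-order sliding-mode tracking law of the kind mentioned above looks roughly as follows; the single-joint model, gains and smoothed switching term are assumptions for illustration, not the controller from the paper.

```python
# Generic single-joint sliding-mode tracking law (illustrative gains and model).
import numpy as np

def sliding_mode_torque(q, dq, q_ref, dq_ref, ddq_ref,
                        lam: float = 5.0, k: float = 2.0, inertia: float = 1.0):
    """Tracking control: the sliding surface s = de + lam*e drives the joint,
    with a smoothed switching term that rejects bounded disturbances."""
    e = q_ref - q
    de = dq_ref - dq
    s = de + lam * e                       # sliding surface
    u_eq = inertia * (ddq_ref + lam * de)  # equivalent control from the nominal model
    u_sw = k * np.tanh(10.0 * s)           # smoothed switching term (reduces chattering)
    return u_eq + u_sw
```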

Research limitations/implications

There are some constraints when applying the proposed method, such as the sensitivity of the desired trajectory generated by the human arm to random and unwanted movements, which can damage the manipulated object during the teleoperation process. In such cases, operator skill is highly required.

Practical implications

The developed control approach can be used in all applications that require real-time human-robot cooperation.

Originality/value

The main advantage of the developed approach is that it benefits from three different techniques at the same time: EMG biofeedback, a vision-based system and haptic feedback. In such situations, using only vision-based approaches for hand posture recognition is not effective; the recognition should instead be based on the biofeedback naturally generated by the muscles responsible for each posture. Moreover, using a force sensor in a closed-loop control scheme without operator intervention is ineffective in the special cases where the manipulated objects vary over a wide range and have different metallic characteristics. The human-in-the-loop technique therefore makes it possible to imitate natural human postures in the grasping task.

Details

Industrial Robot: An International Journal, vol. 44 no. 5
Type: Research Article
ISSN: 0143-991X

