Search results

1 – 10 of 16
Article
Publication date: 19 October 2015

Ping Zhang, Xin Liu, Guanglong Du, Bin Liang and Xueqian Wang

Abstract

Purpose

The purpose of this paper is to present a markerless human–manipulator interface that maps the position and orientation of the human end-effector (EE, the center of the palm) to those of the robot EE, so that the robot can copy the movement of the operator's hand.

Design/methodology/approach

The tracking system of this human–manipulator interface comprises five Leap Motion sensors (LMs), which not only compensate for the narrow workspace of a single LM but also provide redundancy that improves data precision. However, because of the inherent noise and tracking errors of the LMs, measurement errors accumulate over time. To address this problem, two filters are integrated to obtain a relatively accurate estimate of the human EE: a particle filter for position estimation and a Kalman filter for orientation estimation. Because the operator has inherent perceptual limitations, the motions of the manipulator may fall out of sync with the hand motions, making high-performance manipulation difficult. Therefore, an over-damping method is adopted to improve reliability and accuracy.
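
A minimal numpy sketch of the ideas in this pipeline is given below. It is an illustration only, not the authors' implementation: the redundant readings are fused with a plain weighted average, the particle filter is replaced by a simple constant-position Kalman update for brevity, and the over-damping step is a first-order lag; all function names, gains and noise values are assumptions.

```python
import numpy as np

def fuse_positions(readings, weights=None):
    """Weighted average of the palm positions reported by several Leap Motions."""
    readings = np.asarray(readings, dtype=float)          # shape (n_sensors, 3)
    weights = np.ones(len(readings)) if weights is None else np.asarray(weights, float)
    return (weights[:, None] * readings).sum(axis=0) / weights.sum()

class SimpleKalman3D:
    """Constant-position Kalman update; a stand-in for the paper's estimators."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(3)     # estimated palm position
        self.p = np.ones(3)      # per-axis variance
        self.q, self.r = q, r    # assumed process / measurement noise

    def update(self, z):
        z = np.asarray(z, dtype=float)
        self.p += self.q                      # predict
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)            # correct
        self.p *= 1.0 - k
        return self.x

def over_damped_step(robot_ee, target_ee, gain=0.15):
    """Over-damped command: move a fraction of the remaining error each cycle."""
    return robot_ee + gain * (target_ee - robot_ee)
```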

Findings

A series of human–manipulator interaction experiments were carried out to verify the proposed system. Compared with the markerless and contactless methods (Kofman et al., 2007; Du and Zhang, 2015), the method described in this study is more accurate and efficient.

Originality/value

The proposed method does not hinder most natural human limb motion and allows the operator to concentrate on his/her own task, enabling high-precision manipulation to be performed efficiently.

Details

Industrial Robot: An International Journal, vol. 42 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 August 2015

Ping Zhang, Bei Li and Guanglong Du

Abstract

Purpose

This paper aims to develop a wearable-based human-manipulator interface that integrates the interval Kalman filter (IKF), unscented Kalman filter (UKF), over-damping method (ODM) and adaptive multispace transformation (AMT) to perform immersive human-manipulator interaction by coupling the natural, continuous motion of the human operator's hand with the robot manipulator.

Design/methodology/approach

The interface requires a wearable watch to be worn tightly on the operator's hand to track that hand's continuous movements. Nevertheless, measurement errors caused by sensor error and tracking failure occur repeatedly, which means the measurement cannot be determined with sufficient accuracy. For this reason, the IKF and UKF are used to compensate for the noisy and incomplete measurements, and the ODM is established to eliminate the influence of error signals such as data jitter. Furthermore, to account for the inherent perceptual limitations of the human operator and of the motor, the AMT, which performs a secondary treatment of the estimates, is also introduced.
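
The AMT is only named in the abstract, so the sketch below shows one plausible reading of an adaptive hand-to-robot workspace mapping: the motion scale shrinks when the hand moves slowly (fine work) and grows when it moves fast. The function names, threshold and scale values are assumptions, not the paper's.

```python
import numpy as np

def adaptive_scale(hand_speed, coarse=1.0, fine=0.2, threshold=0.05):
    """Blend between a fine and a coarse motion scale based on hand speed (m/s)."""
    alpha = np.clip(hand_speed / threshold, 0.0, 1.0)   # assumed speed threshold
    return fine + alpha * (coarse - fine)

def map_hand_to_robot(robot_pos, hand_delta, hand_speed):
    """Apply the adaptive scale to an incremental hand displacement."""
    return np.asarray(robot_pos, float) + adaptive_scale(hand_speed) * np.asarray(hand_delta, float)
```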

Findings

Experimental studies on the GOOGOL GRB3016 robot show that such a wearable-based interface, which incorporates the feedback mechanism and hybrid filters, can operate the robot manipulator flexibly and effectively even if the operator is non-professional; the feedback mechanism introduced here successfully assists in improving the performance of the interface.

Originality/value

The interface uses one wearable watch to simultaneously track the orientation and position of the operator's hand; it not only avoids problems of occlusion, identification and limited operating space, but also realizes a two-way human-manipulator interaction: a feedback mechanism can be triggered in the watch to reflect the system states in real time. Furthermore, the interface avoids the synchronization problem in posture estimation, as the hybrid filters work independently to compensate for the noisy measurements.

Article
Publication date: 21 March 2016

Alberto Brunete, Carlos Mateo, Ernesto Gambao, Miguel Hernando, Jukka Koskinen, Jari M Ahola, Tuomas Seppälä and Tapio Heikkila

Abstract

Purpose

This paper aims to propose a new technique for programming robotized machining tasks based on intuitive human–machine interaction. This will enable operators to create robot programs for small-batch production in a fast and easy way, reducing the required time to accomplish the programming tasks.

Design/methodology/approach

This technique makes use of online walk-through path guidance with an external force/torque sensor, combined with simple and intuitive visual programming based on programming by demonstration and symbolic task-level programming.
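
Walk-through guidance of this kind is commonly realized with an admittance law that turns the wrist force/torque reading into a Cartesian velocity command while the taught path is sampled. The sketch below illustrates that general idea only; the gains, dead-band and the robot call in the comment are assumptions, not the paper's implementation.

```python
import numpy as np

def admittance_velocity(wrench, gain_lin=0.002, gain_ang=0.01, deadband=2.0):
    """Map a wrench [Fx, Fy, Fz, Tx, Ty, Tz] to a 6D velocity command."""
    wrench = np.asarray(wrench, dtype=float)
    wrench = np.where(np.abs(wrench) < deadband, 0.0, wrench)   # suppress sensor noise
    gains = np.array([gain_lin] * 3 + [gain_ang] * 3)           # assumed admittance gains
    return gains * wrench

# While the operator pushes the tool along the workpiece, the taught path is
# recorded by sampling the end-effector pose each cycle, e.g.
# path.append(robot.get_tcp_pose())   # hypothetical robot API
```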

Findings

Thanks to this technique, the operator can easily program robots without having to learn each robot-specific language and can design new tasks for industrial robots based on manual guidance.

Originality/value

The main contribution of the paper is a new procedure for programming machining tasks based on manual guidance (the walk-through teaching method) and user-friendly visual programming. Up to now, path acquisition and task programming have been done in separate steps and on separate machines. The authors propose a procedure that uses a tablet as the only user interface both to acquire paths and to create a program that uses those paths for machining tasks.

Details

Industrial Robot: An International Journal, vol. 43 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 October 2014

Ping Zhang, Guanglong Du and Di Li

Abstract

Purpose

The aim of this paper is to present a novel methodology that incorporates Camshift, Kalman filters (KFs) and adaptive multi-space transformation (AMT) for a human-robot interface that leverages human intelligence in teleoperation.

Design/methodology/approach

In the proposed method, an inertial measurement unit is used to measure the orientation of the human hand, and a Camshift algorithm is used to track the hand with a three-dimensional camera. Although the location and the orientation of the hand can be obtained from the two sensors, the measurement error increases over time owing to device noise and tracking errors. KFs are therefore used to estimate the location and the orientation of the human hand. Moreover, because of perceptual and motor limitations, it is hard for the human operator to carry out high-precision operations. An AMT method is proposed to assist the operator and to improve the accuracy and reliability of determining the pose of the robot.
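
For reference, the block below shows standard OpenCV CamShift usage for tracking a colour region such as a hand; it is generic textbook code, not the paper's pipeline, and the camera index and initial window are assumptions. The tracked window centre would then feed the Kalman filters described above.

```python
import cv2

cap = cv2.VideoCapture(0)                      # assumed camera index
ok, frame = cap.read()
x, y, w, h = 200, 150, 80, 80                  # assumed initial hand window
hsv_roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
track_window = (x, y, w, h)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, term_crit)
    # the centre of rot_rect is the raw hand-position measurement for the KFs
cap.release()
```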

Findings

The experimental results show that this method does not hinder most natural human-limb motion and allows the operator to concentrate on his/her own task. Compared with the non-contacting, marker-less method (Kofman et al., 2007), this method proves more accurate and stable.

Originality/value

The human-robot interface system was experimentally verified in a laboratory environment, and the results indicate that such a system can complete high-precision manipulation efficiently.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 29 January 2020

Dianchen Zhu, Huiying Wen and Yichuan Deng

Abstract

Purpose

To improve upon insufficient manual management, especially for traffic accidents that occur at crossroads, the purpose of this paper is to develop a pro-active warning system for crossroads at construction sites. Although prior studies have made efforts to develop warning systems for construction sites, most of them paid attention to the construction process, while accidents occurring at crossroads were largely overlooked.

Design/methodology/approach

By summarizing the main causes of accidents at crossroads, a pro-active warning system providing six countermeasure functions was designed. Several computer-vision approaches and a prediction algorithm were applied and proposed to realize these functions.
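
The abstract does not spell out the prediction algorithm, so the sketch below only illustrates one generic proxy for such a warning rule: a time-to-closest-approach check between two tracked participants (for example a truck and a worker) under a constant-velocity assumption. All names and thresholds are illustrative.

```python
import numpy as np

def time_to_closest_approach(p1, v1, p2, v2):
    """Time (s) at which two constant-velocity tracks are closest."""
    dp = np.asarray(p2, float) - np.asarray(p1, float)
    dv = np.asarray(v2, float) - np.asarray(v1, float)
    denom = float(dv @ dv)
    return 0.0 if denom < 1e-9 else max(0.0, -float(dp @ dv) / denom)

def should_warn(p1, v1, p2, v2, horizon=5.0, radius=3.0):
    """Warn if the predicted minimum separation within `horizon` is too small."""
    t = min(time_to_closest_approach(p1, v1, p2, v2), horizon)
    gap = (np.asarray(p1, float) + t * np.asarray(v1, float)) \
        - (np.asarray(p2, float) + t * np.asarray(v2, float))
    return float(np.linalg.norm(gap)) < radius
```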

Findings

One 12-hour video filming a crossroad at a construction site was selected as the original data. The test results show that all designed functions could operate normally, several predicted dangerous situations could be detected and corresponding warnings could be given. To validate the applicability of the system, another 36 hours of video data were chosen for a performance test, and the findings indicate that all applied algorithms fit the data well.

Originality/value

Computer-vision algorithms have been widely used in previous studies to process video data or monitoring information; however, few of them have demonstrated high applicability in identifying and classifying the different participants at construction sites. In addition, none of these studies attempted to use a dynamic prediction algorithm to predict risky events, which could provide significant information for active warnings.

Details

Engineering, Construction and Architectural Management, vol. 27 no. 5
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 3 February 2020

Grant Rudd, Liam Daly and Filip Cuckov

Abstract

Purpose

This paper aims to present an intuitive control system for robotic manipulators that pairs a Leap Motion, a low-cost optical tracking and gesture recognition device, with the ability to record and replay trajectories and operations, creating an intuitive method of controlling and programming a robotic manipulator. The system was designed to be extensible and includes modules and methods for obstacle detection and dynamic trajectory modification for obstacle avoidance.

Design/methodology/approach

The presented control architecture, while portable to any robotic platform, was designed to actuate a six degree-of-freedom robotic manipulator of our own design. From the data collected by the Leap Motion, the manipulator was controlled by mapping the position and orientation of the human hand to values in the joint space of the robot. Additional recording and playback functionality was implemented to allow the robot to repeat a desired task once it had been demonstrated and recorded.
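
A minimal sketch of the record-and-replay idea follows. The hand-to-joint mapping shown is a placeholder linear scaling, not the authors' kinematic mapping, and `send_command` stands in for whatever joint interface (for example a ROS publisher) the robot exposes.

```python
import time

def hand_to_joints(palm_xyz, yaw_pitch_roll, scale=2.0):
    """Placeholder mapping from a palm pose to six joint targets (radians)."""
    x, y, z = palm_xyz
    yaw, pitch, roll = yaw_pitch_roll
    return [scale * x, scale * y, scale * z, yaw, pitch, roll]

class TrajectoryRecorder:
    def __init__(self):
        self.samples = []                        # list of (timestamp, joint targets)

    def record(self, joints):
        self.samples.append((time.time(), list(joints)))

    def replay(self, send_command):
        """Re-send the recorded joint targets with their original timing."""
        if not self.samples:
            return
        t0 = self.samples[0][0]
        start = time.time()
        for t, joints in self.samples:
            time.sleep(max(0.0, (t - t0) - (time.time() - start)))
            send_command(joints)                 # hypothetical robot/ROS interface
```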

Findings

Experiments were conducted on our custom-built robotic manipulator by first using a simulation model to characterize and quantify the robot's tracking of the Leap Motion-generated trajectory. Tests were conducted in the Gazebo simulation software in conjunction with the Robot Operating System, and results were collected by recording both the real-time input from the Leap Motion sensor and the corresponding pose data. The results of these experiments show that the goal of accurate, real-time control of the robot was achieved and validate our methods of transcribing, recording and repeating six degree-of-freedom trajectories from the Leap Motion camera.

Originality/value

As robots evolve in complexity, the methods of programming them need to evolve to become more intuitive. Humans instinctively teach by demonstrating the task to a given subject, who then observes the various poses and tries to replicate the motions. This work aims to integrate the natural human teaching methods into robotics programming through an intuitive, demonstration-based programming method.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 September 2019

Bo Zhang, Guanglong Du, Wenming Shen and Fang Li

Abstract

Purpose

The purpose of this paper is to research a novel gesture-based dual-robot collaborative interaction interface that achieves gesture recognition even when both hands overlap. The paper designs a hybrid-sensor gesture recognition platform that detects both-hand data for dual-robot control.

Design/methodology/approach

This paper uses a combination of a Leap Motion and a PrimeSense mounted in the vertical direction to detect both-hand data in real time. When there is occlusion between the hands, each hand is detected by one of the sensors, and a quaternion-based algorithm is used to convert between the two sensors' different coordinate systems. When there is no occlusion, the data are fused by a self-adaptive weight-fusion algorithm. A collision-detection algorithm is then used to detect collisions between the robots to ensure safety. Finally, the data are transmitted to the dual robots.
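
The sketch below illustrates the two operations named in this paragraph under stated assumptions: a fixed quaternion-plus-offset calibration (placeholder values) re-expresses the PrimeSense measurement in the Leap Motion frame, and the self-adaptive weighted fusion is shown as a per-sample weighted mean. It is not the paper's calibration or weighting rule.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

Q_PRIME_TO_LEAP = R.from_quat([0.0, 0.0, 0.0, 1.0])   # placeholder calibration rotation
T_PRIME_TO_LEAP = np.array([0.0, 0.05, 0.10])          # placeholder offset (m)

def primesense_to_leap(p_prime):
    """Express a PrimeSense hand position in the Leap Motion coordinate frame."""
    return Q_PRIME_TO_LEAP.apply(np.asarray(p_prime, float)) + T_PRIME_TO_LEAP

def fuse(p_leap, p_prime_in_leap, w_leap, w_prime):
    """Weighted fusion of the two overlapping hand measurements."""
    w = np.array([w_leap, w_prime], dtype=float)
    pts = np.vstack([p_leap, p_prime_in_leap])
    return (w[:, None] * pts).sum(axis=0) / w.sum()
```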

Findings

This interface is implemented on a dual-robot system consisting of two 6-DOF robots. The dual-robot cooperative experiment indicates that the proposed interface is feasible and effective, and it takes less time to operate and has higher interaction efficiency.

Originality/value

A novel gesture-based dual-robot collaborative interface is proposed. It overcomes the problem of gesture occlusion in two-hand interaction with low computational complexity and low equipment cost. The proposed interface can perform long-term, stable tracking of two-hand gestures even if there is occlusion between the hands. Meanwhile, it reduces the number of hand resets, thereby reducing the operation time. The proposed interface achieves a natural and safe interaction between the human and the dual robots.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 27 April 2020

Yongxiang Wu, Yili Fu and Shuguo Wang

Abstract

Purpose

This paper aims to design a deep neural network for object instance segmentation and six-dimensional (6D) pose estimation in cluttered scenes and apply the proposed method in real-world robotic autonomous grasping of household objects.

Design/methodology/approach

A novel deep learning method is proposed for instance segmentation and 6D pose estimation in cluttered scenes. An iterative pose refinement network is integrated with the main network to obtain more robust final pose estimates for robotic applications. To train the network, a technique is presented to quickly generate abundant annotated synthetic data, consisting of RGB-D images and object masks, without any hand-labeling. For robotic grasping, offline grasp planning based on an eigengrasp planner is performed and combined with the online object pose estimation.
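
One concrete piece of this pipeline, combining the offline grasp set with the online pose estimate, reduces to re-expressing each grasp stored in the object frame in the camera/world frame using the estimated 6D pose. The sketch below shows that transform with 4x4 homogeneous matrices; the example values are assumptions.

```python
import numpy as np

def apply_pose(T_object_in_world, grasps_in_object):
    """Transform 4x4 grasp poses from the object frame to the world frame."""
    T = np.asarray(T_object_in_world, dtype=float)       # estimated 6D object pose
    return [T @ np.asarray(g, dtype=float) for g in grasps_in_object]

# Example: an identity object pose leaves a precomputed grasp unchanged.
grasp = np.eye(4)
grasp[:3, 3] = [0.0, 0.0, 0.1]                           # assumed: 10 cm above the object
world_grasps = apply_pose(np.eye(4), [grasp])
```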

Findings

Experiments on standard pose benchmarking data sets showed that the method achieves better pose estimation and time efficiency than state-of-the-art methods with depth-based ICP refinement. The proposed method was also evaluated on a seven-DOF Kinova Jaco robot with an Intel RealSense RGB-D camera; the grasping results illustrate that the method is accurate and robust enough for real-world robotic applications.

Originality/value

A novel 6D pose estimation network based on an instance segmentation framework is proposed, and a neural network-based iterative pose refinement module is integrated into the method. The proposed method exhibits satisfactory pose estimation accuracy and time efficiency for robotic grasping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 August 2015

Gilbert Tang, Seemal Asif and Phil Webb

Abstract

Purpose

The purpose of this paper is to describe the integration of a gesture control system for an industrial collaborative robot. Human–robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely.

Design/methodology/approach

The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system was tested in isolation before integration and the demonstration of a sample task.
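
A possible sequencing of the three sub-systems is sketched below. Every module and method name here is a placeholder standing in for the paper's facial-recognition, static-pose and hand-tracking components, not a real API.

```python
def control_cycle(face_module, pose_module, hand_module, robot):
    """One hypothetical control cycle tying the sub-systems together."""
    user = face_module.verify()                  # placeholder: None if not authorised
    if user is None:
        robot.hold()                             # ignore gestures from unverified users
        return
    gesture = pose_module.classify()             # placeholder: "track", "stop", ...
    if gesture == "track":
        robot.jog(hand_module.hand_velocity())   # follow the operator's hand motion
    elif gesture == "stop":
        robot.hold()
```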

Findings

It is demonstrated that combining multiple gesture control methods can broaden the system's potential applications for industrial robots.

Originality/value

The novelty of the system is its dual gesture control method, which allows operators to command an industrial robot by posing hand gestures as well as to control the robot's motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and it also allows permission levels to be assigned to different users.

Details

Industrial Robot: An International Journal, vol. 42 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 July 2020

Zoltan Dobra and Krishna S. Dhir

Abstract

Purpose

Recent years have seen a technological change, Industry 4.0, in the manufacturing industry. Human–robot cooperation, a new application, is increasing and facilitating collaboration without fences, cages or any kind of separation. The purpose of the paper is to review mainstream academic publications to evaluate the current status of human–robot cooperation and identify potential areas of further research.

Design/methodology/approach

A systematic literature review is offered that searches, appraises, synthesizes and analyses relevant works.

Findings

The authors report the prevailing status of human–robot collaboration, covering human factors, complexity/programming, safety, collision avoidance, instructing the robot system and other aspects of human–robot collaboration.

Practical implications

This paper identifies new directions and potential research in the practice of human–robot collaboration, such as measuring the degree of collaboration, integrating human–robot cooperation into teamwork theories, effective functional relocation of the robot and product design for human–robot collaboration.

Originality/value

This paper will be useful for three cohorts of readers, namely, manufacturers who require a baseline for the development and deployment of robots, users of robots seeking manufacturing advantage and researchers looking for new directions for further exploration of human–machine collaboration.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 5
Type: Research Article
ISSN: 0143-991X
