Search results

1 – 10 of 32
Article
Publication date: 20 October 2014

Hui Pan, Na Li Wang and Yin Shi Qin

Abstract

Purpose

The purpose of this paper is to propose a method that calibrates the hand-eye relationship for eye-to-hand configuration and afterwards a rectification to improve the accuracy of general calibration.

Design/methodology/approach

The hand-eye calibration of the eye-to-hand configuration is formulated as an equation AX = XB, the same form as in eye-in-hand calibration. A closed-form solution is derived. To reduce the impact of noise, a rectification is conducted after the general calibration.
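
As a rough illustration of what a closed-form AX = XB solver can look like (this follows the classical Park–Martin construction, not necessarily the authors' derivation; all names are hypothetical):

```python
import numpy as np

def rodrigues(w):
    """Rotation matrix from an axis-angle vector w (Rodrigues' formula)."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * K @ K

def rot_log(R):
    """Axis-angle vector of a rotation matrix (inverse of rodrigues)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    return theta / (2 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])

def solve_ax_xb(As, Bs):
    """Closed-form hand-eye solution of AX = XB from 4x4 pose pairs (A_i, B_i)."""
    # Rotation: the axis-angle logs satisfy alpha_i = R_X beta_i.
    M = sum(np.outer(rot_log(B[:3, :3]), rot_log(A[:3, :3]))
            for A, B in zip(As, Bs))
    w, V = np.linalg.eigh(M.T @ M)           # M^T M is symmetric PSD
    Rx = V @ np.diag(w ** -0.5) @ V.T @ M.T  # R_X = (M^T M)^{-1/2} M^T
    # Translation: stack (R_Ai - I) t_X = R_X t_Bi - t_Ai, solve least squares.
    C = np.vstack([A[:3, :3] - np.eye(3) for A in As])
    d = np.hstack([Rx @ B[:3, 3] - A[:3, 3] for A, B in zip(As, Bs)])
    tx = np.linalg.lstsq(C, d, rcond=None)[0]
    X = np.eye(4)
    X[:3, :3], X[:3, 3] = Rx, tx
    return X
```

With noise-free simulated poses this recovers X exactly; a rectification step such as the one the paper proposes would then refine the estimate against measurement noise.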

Findings

Simulations and physical experiments confirm that the calibration accuracy is noticeably improved.

Originality/value

Only a calibration plane is required for the hand-eye calibration. Taking the impact of noise into account, a rectification is carried out after the general calibration and, as a result, the accuracy is noticeably improved. The method can be applied in many practical applications.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 28 March 2023

Xinwei Guo and Yang Chen

Abstract

Purpose

Currently, the vision and depth information obtained from an eye-to-hand RGB-D camera can be applied to the reconstruction of the three-dimensional (3D) environment of a robotic operation workspace. The reconstructed 3D space offers a symmetrical, equal observation view for robots and humans, and can be considered a digital twin (DT) environment. Although artificial intelligence (AI) techniques achieve high performance for robotic operation in known environments, the purpose of this study is to enhance robot skills in the physical workspace.

Design/methodology/approach

A multimodal interaction framework is proposed in DT operation environments.

Findings

A fast image-based target segmentation technique is incorporated into the 3D reconstruction of the robotic operation environment from the eye-to-hand camera, thus expediting 3D DT environment generation without accuracy loss. A multimodal interaction interface is integrated into the DT environment.

Originality/value

Users can operate the virtual objects in the DT environment using speech, mouse and keyboard simultaneously. Humans’ operations in the 3D DT virtual space are recorded, and cues are provided for the robot’s operations in practice.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 January 1991

D.F.H. Wolfe, S.W. Wijesoma and R.J. Richards

Abstract

Tasks in automated manufacturing and assembly increasingly involve robot operations guided by vision systems. The traditional “look‐and‐move” approach to linking machine vision systems and robot manipulators which is generally used in these operations relies heavily on accurate camera to real‐world calibration processes and on highly accurate robot arms with well‐known kinematics. As a consequence, the cost of robot automation has not been justifiable in many applications. This article describes a novel real‐time vision control strategy giving “eye‐to‐hand co‐ordination” which offers good performance even in the presence of significant vision system miscalibrations and kinematic model parametric errors. This strategy offers the potential for low cost vision‐guided robots.

Details

Assembly Automation, vol. 11 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 31 May 2023

Xu Jingbo, Li Qiaowei and White Bai

Abstract

Purpose

The purpose of this study is to solve the hand–eye calibration issue for a line structured light vision sensor. Only after hand–eye calibration can the sensor measurement data be applied to the robot system.

Design/methodology/approach

In this paper, hand–eye calibration methods are studied for both eye-in-hand and eye-to-hand configurations. First, the coordinates of the target point in the robot system are obtained via the tool centre point (TCP); the robot is then controlled so that the sensor measures the target point in multiple poses, producing the measurement data and pose data; finally, the sum of squared calibration errors is minimized by the least squares method. Furthermore, the vector missing from the solution of the transformation matrix is obtained by vector operations, yielding the complete matrix.

Findings

On this basis, the sensor measurement data can be easily and accurately converted to the robot coordinate system by matrix operation.
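
To make that matrix operation concrete, here is a minimal sketch (hypothetical names, not the paper's code) of mapping a sensor-frame measurement into the robot base frame for the eye-in-hand case:

```python
import numpy as np

def sensor_to_base(p_sensor, T_base_flange, X_flange_sensor):
    """Map a 3-D point measured in the sensor frame into the robot base frame.

    T_base_flange  : 4x4 flange pose reported by the robot controller
    X_flange_sensor: 4x4 hand-eye transform obtained from calibration
    """
    p_h = np.append(np.asarray(p_sensor, dtype=float), 1.0)  # homogeneous point
    return (T_base_flange @ X_flange_sensor @ p_h)[:3]
```

For an eye-to-hand sensor, a single fixed camera-to-base transform replaces the product above.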

Originality/value

This method places no special requirements on robot pose control; its calibration process is fast and efficient, achieves high precision and has practical value for wider adoption.

Details

Sensor Review, vol. 43 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 26 February 2021

Ioan Doroftei, Daniel Chirita, Ciprian Stamate, Stelian Cazan, Carlos Pascal and Adrian Burlacu

Abstract

Purpose

The mass electronics sector is one of the most critical sources of waste, in terms of both volume and content, with dangerous effects on the environment. The purpose of this study is to provide an automated and accurate dismantling system that can improve the outcome of recycling.

Design/methodology/approach

Following a short introduction, the paper details the implementation layout and highlights the advantages of using a custom architecture for the automated dismantling of printed circuit board waste.

Findings

Currently, the amount of electronic waste is enormous, while manual dismantling remains a common and inefficient approach. Designing an automatic procedure that can be replicated is one of the key tasks for efficient electronic-waste recovery. This paper proposes an automated dismantling system for the advanced recovery of particular waste materials from computer and telecommunications equipment. The automated dismantling architecture is built using a robotic system, a custom device and an eye-to-hand configuration for a stereo vision system.

Originality/value

The proposed approach is innovative because of its custom device design. The custom device is built using a programmable screwdriver combined with an innovative rotary dismantling tool. The dismantling torque can be tuned empirically.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 November 2019

Megha G. Krishnan, Abhilash T. Vijayan and Ashok Sankar

Abstract

Purpose

This paper aims to improve the performance of a two-camera robotic feedback system designed for automatic pick and place application by modifying its velocity profile during switching of control.

Design/methodology/approach

Cooperation of global and local vision sensors ensures visibility of the target for a two-camera robotic system. The master camera, monitoring the workspace, guides the robot so that image-based visual servoing (IBVS) by the eye-in-hand camera overcomes its inherent shortcomings. A hybrid control law steers the robot until the system switches to IBVS in a region whose asymptotic stability and convergence are established through a qualitative overview of the scheme. Complementary gain factors ensure a smooth velocity transition during switching, accommodating the versatility and range of the workspace.
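
A minimal sketch of the complementary-gain idea (hypothetical names and a sigmoid schedule of our choosing, not the authors' exact law): two gains that always sum to one blend the global-camera and eye-in-hand velocity commands, so the commanded velocity stays continuous across the switch.

```python
import numpy as np

def blended_velocity(v_global, v_ibvs, progress, steepness=10.0):
    """Blend two controller outputs with complementary gains.

    progress in [0, 1]: 0 favours the global (eye-to-hand) controller,
    1 favours IBVS from the eye-in-hand camera. The gains (1 - s) and s
    sum to one, so no velocity jump occurs during the handover.
    """
    s = 1.0 / (1.0 + np.exp(-steepness * (progress - 0.5)))  # smooth 0 -> 1
    return (1.0 - s) * np.asarray(v_global) + s * np.asarray(v_ibvs)
```

In a real controller, `progress` would be driven by a switching criterion such as the image-space error entering the region where IBVS is known to converge.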

Findings

The proposed strategy is verified through simulation studies and implemented on a 6-DOF industrial robot ABB IRB 1200 to validate the practicality of adaptive gain approach while switching in a hybrid visual feedback system. This approach can be extended to any control problem with uneven switching surfaces or coarse/fine controllers which are subjected to discrete time events.

Practical implications

In complex environments where robots share workspaces and operate in parallel with other robots or humans, the supervisory control scheme ensures convergence. This study shows that hybrid control laws are more effective than conventional approaches in unstructured environments and that visibility constraints can be overcome by integrating multiple vision sensors.

Originality/value

The supervisory control is designed to combine the visual feedback data from eye-in-hand and eye-to-hand sensors. A gain adaptive approach smoothens the velocity characteristics of the end-effector while switching the control from master camera to the end-effector camera.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 3 April 2019

Yi Liu, Ming Cong, Hang Dong and Dong Liu

Abstract

Purpose

The purpose of this paper is to propose a new method based on three-dimensional (3D) vision technologies and human skill integrated deep learning to solve assembly positioning task such as peg-in-hole.

Design/methodology/approach

A hybrid camera configuration was used to provide the global and local views. Eye-in-hand mode guided the peg into contact with the hole plate using 3D vision in the global view. When the peg was in contact with the workpiece surface, eye-to-hand mode provided the local view to accomplish peg-hole positioning based on a trained convolutional neural network (CNN).

Findings

The assembly positioning experiments showed that the proposed method successfully distinguished the target hole from other holes of the same size using the CNN. The robot planned its motion according to the depth images and the human-skill guideline. The final positioning precision was sufficient for the robot to carry out force-controlled assembly.

Practical implications

The developed framework can have an important impact on the robotic assembly positioning process and can be combined with existing force-guidance assembly technology to build a complete autonomous assembly capability.

Originality/value

This paper proposed a new approach to robotic assembly positioning based on 3D vision technologies and deep learning with integrated human skill. A dual-camera swapping mode was used to provide visual feedback for the entire assembly motion planning process. The proposed workpiece positioning method provided effective disturbance rejection, autonomous motion planning and increased overall performance with depth-image feedback. The proposed peg-hole positioning method with integrated human skill enabled the avoidance of target perceptual aliasing and successive motion decisions for robotic assembly manipulation.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 March 2012

Jwu‐Sheng Hu and Yung‐Jung Chang

Abstract

Purpose

The purpose of this paper is to propose a calibration method that can calibrate the relationships among the robot manipulator, the camera and the workspace.

Design/methodology/approach

The method uses a laser pointer rigidly mounted on the manipulator and projects the laser beam on the work plane. Nonlinear constraints governing the relationships of the geometrical parameters and measurement data are derived. The uniqueness of the solution is guaranteed when the camera is calibrated in advance. As a result, a decoupled multi‐stage closed‐form solution can be derived based on parallel line constraints, line/plane intersection and projective geometry. The closed‐form solution can be further refined by nonlinear optimization which considers all parameters simultaneously in the nonlinear model.
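
One geometric ingredient of such a derivation, the line/plane intersection that locates the projected laser spot, can be sketched as follows (a hypothetical helper, assuming ray and plane are expressed in the same frame):

```python
import numpy as np

def line_plane_intersection(p0, d, n, q0):
    """Intersect the ray p(t) = p0 + t * d with the plane n . (x - q0) = 0.

    p0, d : ray origin and direction (e.g. the laser emitter and beam axis)
    n, q0 : plane normal and any point on the work plane
    """
    denom = float(np.dot(n, d))
    if abs(denom) < 1e-9:
        raise ValueError("ray is (numerically) parallel to the plane")
    t = float(np.dot(n, np.asarray(q0, float) - p0)) / denom
    return np.asarray(p0, float) + t * np.asarray(d, float)
```

In the calibration itself, several such intersections from different manipulator poses supply the constraints from which the closed-form solution is assembled.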

Findings

Computer simulations and experimental tests using actual data confirm the effectiveness of the proposed calibration method and illustrate its ability to work even when the eye cannot see the hand.

Originality/value

Only a laser pointer is required for this calibration method and this method can work without any manual measurement. In addition, this method can also be applied when the robot is not within the camera field of view.

Details

Industrial Robot: An International Journal, vol. 39 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 May 2020

Jing Bai, Yuchang Zhang, Xiansheng Qin, Zhanxi Wang and Chen Zheng

Abstract

Purpose

The purpose of this paper is to present a visual detection approach to predict the poses of target objects placed in arbitrary positions before completing the corresponding tasks in mobile robotic manufacturing systems.

Design/methodology/approach

A hybrid visual detection approach that combines monocular vision and laser ranging is proposed based on an eye-in-hand vision system. The laser displacement sensor is adopted to achieve normal alignment for an arbitrary plane and obtain depth information. The monocular camera measures the two-dimensional image information. In addition, a robot hand-eye relationship calibration method is presented in this paper.

Findings

First, a hybrid visual detection approach for mobile robotic manufacturing systems is proposed. This detection approach is based on an eye-in-hand vision system consisting of one monocular camera and three laser displacement sensors and it can achieve normal alignment for an arbitrary plane and spatial positioning of the workpiece. Second, based on this vision system, a robot hand-eye relationship calibration method is presented and it was successfully applied to a mobile robotic manufacturing system designed by the authors’ team. As a result, the relationship between the workpiece coordinate system and the end-effector coordinate system could be established accurately.
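
As a sketch of how three displacement readings can yield a surface normal for alignment (hypothetical helpers; the paper's actual procedure may differ): the three measured contact points define a plane whose unit normal is compared against the tool axis.

```python
import numpy as np

def plane_normal_from_points(p1, p2, p3):
    """Unit normal of the plane through three measured points."""
    n = np.cross(np.asarray(p2, float) - np.asarray(p1, float),
                 np.asarray(p3, float) - np.asarray(p1, float))
    norm = np.linalg.norm(n)
    if norm < 1e-9:
        raise ValueError("points are (numerically) collinear")
    return n / norm

def tilt_angle(n, tool_axis=(0.0, 0.0, 1.0)):
    """Angle in radians between the surface normal and the tool axis."""
    a = np.asarray(tool_axis, float)
    c = np.clip(np.dot(n, a) / np.linalg.norm(a), -1.0, 1.0)
    return float(np.arccos(c))
```

Normal alignment then amounts to reorienting the end-effector until this tilt angle falls below the tolerance (the paper reports better than 0.5°).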

Practical implications

This approach can quickly and accurately establish the relationship between the coordinate system of the workpiece and that of the end-effector. The normal alignment error of the hand-eye vision system was less than 0.5° and the spatial positioning accuracy reached 0.5 mm.

Originality/value

This approach can achieve normal alignment for arbitrary planes and spatial positioning of the workpiece and it can quickly establish the pose relationship between the workpiece and end-effector coordinate systems. Moreover, the proposed approach can significantly improve the work efficiency, flexibility and intelligence of mobile robotic manufacturing systems.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 August 2004

Thomas Schack

Abstract

This article addresses the functional links between knowledge and performance in human activity. Starting with the evolutionary roots of knowledge and activity, it shows how the combination of adaptive behavior and knowledge storage has formed over various stages of evolution. The cognitive architecture of human actions is discussed against this background, and it is shown how knowledge is integrated into action control. Then, methodological issues in the study of action knowledge are considered, and an experimental method is presented that can be used to assess the structure of action knowledge in long‐term memory. This method is applied in studies on the relation between object knowledge and performance in mechanics and between movement knowledge and performance in high‐performance sportswomen. These studies show how experts’ knowledge systems can be assessed, and how this may contribute to the optimization of human performance. In high‐level experts, these representational frameworks were organized in a highly hierarchical tree‐like structure, were remarkably similar between individuals, and matched well the functional demands of the task. In comparison, the action representations in low‐level performers were organized less hierarchically, were more variable between persons, and were not so well in accordance with functional demands. These results support the hypothesis that voluntary actions are planned, executed, and stored in memory directly by way of representations of their anticipated perceptual effects. The method offers new possibilities to investigate knowledge structures. Based on such results it is possible to improve performance via special training‐techniques. This paper fulfils an identified research need concerning the interaction of knowledge and performance and offers new perspectives for future forms of knowledge management.

Details

Journal of Knowledge Management, vol. 8 no. 4
Type: Research Article
ISSN: 1367-3270
