Search results

1 – 10 of 76
Article
Publication date: 1 January 1983

Clive Loughlin and Ed Hudson

Abstract

The advent of low-cost miniature solid-state cameras now makes eye-in-hand robot vision a practical possibility. This paper discusses the advantages of eye-in-hand vision and shows that, with the Unimation VAL operating system, it is easier to use than static overhead cameras.

Details

Sensor Review, vol. 3 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 20 October 2014

Haitao Yang, Minghe Jin, Zongwu Xie, Kui Sun and Hong Liu

Abstract

Purpose

The purpose of this paper is to establish a ground verification and test method for a space robot system that captures a target satellite using visual servoing with time delay in three-dimensional space, before the space robot is launched.

Design/methodology/approach

To implement the approaching and capturing task, a motion planning method for visually servoing the space manipulator to capture a moving target is presented. It mainly addresses the time-delay problem of the visual servoing control system and the motion uncertainty of the target satellite. To verify and test the feasibility and reliability of the method in three-dimensional (3D) operating space, a ground hardware-in-the-loop simulation verification system is developed, which adopts the end-tip kinematics equivalence and dynamics simulation method.
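
One way to picture the time-delay compensation in such a visual servoing loop (a minimal sketch under our own assumptions, not the authors' planner; all names below are hypothetical) is to extrapolate the delayed target measurement forward by the known delay with a constant-velocity model before computing the servo command:

```python
import numpy as np

def predict_target_pose(pose_history, timestamps, delay):
    """Constant-velocity extrapolation of the delayed target measurement."""
    dt = timestamps[-1] - timestamps[-2]
    velocity = (pose_history[-1] - pose_history[-2]) / dt
    # Push the last (delayed) measurement forward by the known delay.
    return pose_history[-1] + velocity * delay

def servo_command(ee_pose, predicted_target_pose, gain=0.5):
    """Proportional Cartesian velocity command toward the predicted pose."""
    return gain * (predicted_target_pose - ee_pose)

# Example: 6-D pose vectors [x, y, z, roll, pitch, yaw] with a 0.25 s image delay.
history = [np.zeros(6), np.array([0.01, 0.0, 0.0, 0.0, 0.0, 0.0])]
stamps = [0.0, 0.1]
target = predict_target_pose(history, stamps, delay=0.25)
cmd = servo_command(np.zeros(6), target)
```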

Findings

The results of the ground hardware-in-the-loop simulation experiment validate the reliability of the eye-in-hand visual system in the 3D operating space and prove the validity of the visual servoing motion planning method with time-delay compensation. At the same time, because the dynamics simulator of the space robot is included in the ground hardware-in-the-loop verification system, the base disturbance can be considered during the approaching and capturing procedure, which makes the ground verification system realistic and credible.

Originality/value

The ground verification experiment system includes the real controller of the space manipulator, the eye-in-hand camera and the dynamics simulator, so it can realistically simulate the visual-servoing-based capturing process in space while accounting for the effects of time delay and the free-floating base disturbance.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 May 2020

Jing Bai, Yuchang Zhang, Xiansheng Qin, Zhanxi Wang and Chen Zheng

Abstract

Purpose

The purpose of this paper is to present a visual detection approach to predict the poses of target objects placed in arbitrary positions before completing the corresponding tasks in mobile robotic manufacturing systems.

Design/methodology/approach

A hybrid visual detection approach that combines monocular vision and laser ranging is proposed based on an eye-in-hand vision system. Laser displacement sensors are adopted to achieve normal alignment with an arbitrary plane and to obtain depth information, while the monocular camera captures two-dimensional image information. In addition, a robot hand-eye relationship calibration method is presented in this paper.
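
The normal-alignment idea can be illustrated with a minimal sketch (assuming three laser displacement sensors with parallel beams, which is our reading of the setup rather than the paper's exact layout): the three range readings give three points on the surface, and their cross product yields the plane normal used for alignment.

```python
import numpy as np

def plane_normal_from_lasers(mount_points, distances, beam_dir=np.array([0.0, 0.0, 1.0])):
    """Estimate the surface normal from three laser displacement readings.

    mount_points: 3x3 array of sensor origins in the end-effector frame
    distances:    measured ranges along the (assumed common) beam direction
    """
    hits = mount_points + np.outer(distances, beam_dir)   # points where the beams hit the plane
    n = np.cross(hits[1] - hits[0], hits[2] - hits[0])    # plane normal (unnormalised)
    return n / np.linalg.norm(n)

# Example: sensors at the corners of a small triangle on the flange.
mounts = np.array([[0.05, 0.0, 0.0], [-0.025, 0.0433, 0.0], [-0.025, -0.0433, 0.0]])
normal = plane_normal_from_lasers(mounts, np.array([0.102, 0.100, 0.098]))
# Tilt of the tool z-axis relative to the surface normal, to be driven to zero:
tilt_deg = np.degrees(np.arccos(abs(normal @ np.array([0.0, 0.0, 1.0]))))
```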

Findings

First, a hybrid visual detection approach for mobile robotic manufacturing systems is proposed. This detection approach is based on an eye-in-hand vision system consisting of one monocular camera and three laser displacement sensors and it can achieve normal alignment for an arbitrary plane and spatial positioning of the workpiece. Second, based on this vision system, a robot hand-eye relationship calibration method is presented and it was successfully applied to a mobile robotic manufacturing system designed by the authors’ team. As a result, the relationship between the workpiece coordinate system and the end-effector coordinate system could be established accurately.

Practical implications

This approach can quickly and accurately establish the relationship between the coordinate system of the workpiece and that of the end-effector. The normal alignment accuracy of the hand-eye vision system was less than 0.5° and the spatial positioning accuracy could reach 0.5 mm.

Originality/value

This approach can achieve normal alignment for arbitrary planes and spatial positioning of the workpiece and it can quickly establish the pose relationship between the workpiece and end-effector coordinate systems. Moreover, the proposed approach can significantly improve the work efficiency, flexibility and intelligence of mobile robotic manufacturing systems.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 November 2019

Megha G. Krishnan, Abhilash T. Vijayan and Ashok Sankar

Abstract

Purpose

This paper aims to improve the performance of a two-camera robotic feedback system designed for automatic pick and place application by modifying its velocity profile during switching of control.

Design/methodology/approach

Cooperation of global and local vision sensors ensures visibility of the target for a two-camera robotic system. The master camera, monitoring the workspace, guides the robot so that image-based visual servoing (IBVS) by the eye-in-hand camera overcomes its inherent shortcomings. A hybrid control law steers the robot until the system switches to IBVS in a region whose asymptotic stability and convergence are established through a qualitative analysis of the scheme. Complementary gain factors ensure a smooth velocity transition during switching, taking into account the versatility and range of the workspace.
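
A minimal sketch of the complementary-gain idea (an illustrative form of our own, not the authors' control law): a smooth weighting between the global-camera command and the IBVS command keeps the end-effector velocity continuous across the hand-over.

```python
import numpy as np

def blended_velocity(v_global, v_ibvs, s):
    """Complementary-gain blend of two velocity commands.

    s in [0, 1] is a smooth switching variable (e.g. normalised progress into
    the region where IBVS is known to converge); the two gains sum to one,
    so the commanded velocity has no jump at the switch.
    """
    alpha = 3.0 * s**2 - 2.0 * s**3        # smoothstep: 0 -> global control, 1 -> IBVS
    return (1.0 - alpha) * v_global + alpha * v_ibvs

# Example: halfway through the transition both controllers contribute equally.
v = blended_velocity(np.array([0.02, 0.0, 0.0, 0.0, 0.0, 0.0]),
                     np.array([0.01, 0.005, 0.0, 0.0, 0.0, 0.0]), s=0.5)
```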

Findings

The proposed strategy is verified through simulation studies and implemented on a 6-DOF industrial robot, the ABB IRB 1200, to validate the practicality of the adaptive gain approach while switching in a hybrid visual feedback system. This approach can be extended to any control problem with uneven switching surfaces or coarse/fine controllers that are subject to discrete-time events.

Practical implications

In complex workspaces where robots operate in parallel with other robots or humans and share the workspace, the supervisory control scheme ensures convergence. This study proves that hybrid control laws are more effective than conventional approaches in unstructured environments and that visibility constraints can be overcome by the integration of multiple vision sensors.

Originality/value

The supervisory control is designed to combine the visual feedback data from eye-in-hand and eye-to-hand sensors. A gain adaptive approach smoothens the velocity characteristics of the end-effector while switching the control from master camera to the end-effector camera.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 March 2019

Clint Alex Steed

Abstract

Purpose

This paper aims to present an approach for the simulation of a heterogeneous robotic cell. The simulation enables the cell’s developers to conveniently compare the performance of alternative cell configurations. The approach combines the use of multiple available simulation tools with a custom holonic cell controller. This overcomes the limitation of currently available robot simulation packages by allowing integration of multiple simulation tools, including simulation packages from multiple vendors.

Design/methodology/approach

A feeding cell was developed as a case study representing a typical robotic application. The case study compares two configurations of the cell, namely eye-in-hand vision and fixed-camera vision. The authors developed the physical cell in parallel with the simulated cell to validate its performance. They then used simulation to scale the models (by adding subsystems) and shortlist suitable cell configurations based on initial capital investment and throughput rate per unit cost. The feeding cell consisted of a six-degree-of-freedom industrial robot (KUKA KR16), two smart cameras (Cognex ism-1100 and DVT Legend 500), an industrial PC (Beckhoff) and custom reconfigurable singulation units.
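
For the shortlisting step, a simple ranking metric such as throughput rate per unit of capital cost can be computed from each simulated configuration; the sketch below uses purely illustrative placeholder figures, not results from the case study.

```python
def throughput_per_unit_cost(parts_per_hour, capital_cost):
    """Ranking metric: parts produced per hour per currency unit invested."""
    return parts_per_hour / capital_cost

# Placeholder figures for two hypothetical configurations.
configs = {
    "eye-in-hand":  {"parts_per_hour": 420, "capital_cost": 95_000},
    "fixed-camera": {"parts_per_hour": 450, "capital_cost": 110_000},
}
ranked = sorted(configs.items(),
                key=lambda kv: throughput_per_unit_cost(**kv[1]),
                reverse=True)
```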

Findings

The approach presented here allows the combination of dissimilar simulation models constructed for the above-mentioned case study. Experiments showed that the model developed with this approach could reasonably predict the performance of various eye-in-hand and fixed-camera systems. Combining the holonic controller with the simulation allows developers to easily compare the performance of a variety of configurations. The use of a common communication platform allowed communication between multiple simulation packages, enabling multi-vendor simulation and thereby overcoming a current limitation of simulation software.

Research limitations/implications

The case study developed here is considered a typical feeding and assembly application. It is, however, very different from other robotic applications, which should be explored in separate case studies. Simulation packages with the same communication interface as the physical resource can be integrated. If the communication interface is not available, other means of simulation can be used. The case study findings are limited to the specific products being used and their simulation packages. However, these are indicative of typical industry technologies available. Only real-time simulations were considered.

Practical implications

This simulation-based approach allows designers to quickly quantify the performance of alternative system configurations (eye-in-hand or fixed camera in this case) and scale, thereby enabling them to better optimize robotic cell designs. In addition, the holonic control system’s modular control interface allows for the development of the higher-level controller without hardware and easy replacement of the lower level components with other hardware or simulation models.

Originality/value

The combination of a holonic control system with a simulation to replace hardware is shown to be a useful tool. The inherent modularity of holonic control systems allows multiple simulation components to be connected, thereby overcoming the limitation of vendor-specific simulation packages.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 13 January 2022

Jiang Daqi, Wang Hong, Zhou Bin and Wei Chunfeng

Abstract

Purpose

This paper aims to reduce the time spent on producing the data set and to make the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the success rate of the gripping operation reached a high level.

Design/Methodology/Approach

The proposed system comprises two different convolutional neural network (CNN) algorithms used in different stages, together with a binocular eye-in-hand system on the end effector, which detects the position and orientation of the workpiece. Both algorithms are trained on data sets containing images and annotations that are generated automatically by the proposed method.
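
The two-stage structure can be pictured with a minimal inference sketch (the callables `detector` and `orientation_net` are hypothetical stand-ins, not the authors' networks): the first CNN localises candidate workpieces and the second regresses a grasp orientation from each cropped region.

```python
import numpy as np

def two_stage_grasp(image, detector, orientation_net):
    """Stage 1: localise workpieces; stage 2: estimate orientation per crop."""
    poses = []
    for (x, y, w, h, score) in detector(image):        # stage 1: bounding boxes
        crop = image[y:y + h, x:x + w]
        angle = orientation_net(crop)                   # stage 2: orientation
        poses.append((x + w / 2.0, y + h / 2.0, angle, score))
    return poses

# Example with stand-in callables and a blank 256 x 256 image.
fake_detector = lambda img: [(40, 60, 32, 32, 0.9)]
fake_orienter = lambda crop: 0.35                       # radians
print(two_stage_grasp(np.zeros((256, 256), dtype=np.uint8), fake_detector, fake_orienter))
```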

Findings

The approach can be successfully applied to standard position-controlled robots common in industry. The algorithm performs excellently in terms of elapsed time: processing a 256 × 256 image takes less than 0.1 s without relying on high-performance GPUs. The approach is validated in a series of grasping experiments. This method frees workers from monotonous work and improves factory productivity.

Originality/Value

The authors propose a novel neural network whose performance is shown to be excellent. Moreover, experimental results demonstrate that the proposed second stage is extraordinarily robust to environmental variations. The data sets are generated automatically, which saves the time spent on producing them and makes the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the success rate of the gripping operation reached a high level.

Details

Assembly Automation, vol. 42 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 31 May 2023

Xu Jingbo, Li Qiaowei and White Bai

Abstract

Purpose

The purpose of this study is to solve the hand–eye calibration issue for a line-structured-light vision sensor. Only after hand–eye calibration can the sensor measurement data be applied to the robot system.

Design/methodology/approach

In this paper, hand–eye calibration methods are studied for both the eye-in-hand and eye-to-hand configurations. First, the coordinates of the target point in the robot system are obtained through the tool centre point (TCP); then the robot is controlled so that the sensor measures the target point in multiple poses, yielding the measurement data and pose data; finally, the sum of squared calibration errors is minimized by the least-squares method. Furthermore, the vector missing from the solved transformation matrix is recovered by vector operations, giving the complete matrix.
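
The least-squares step for the eye-in-hand case can be sketched generically (our own formulation and variable names, offered as an illustration rather than the paper's exact error function): with a single target point whose base-frame coordinates are known from TCP teaching, each robot pose contributes three equations that are linear in the twelve entries of the sensor-to-flange transform.

```python
import numpy as np

def calibrate_eye_in_hand(flange_poses, sensor_points, target_in_base):
    """Least-squares estimate of the sensor-to-flange transform X = [R_X | t_X].

    Model, for one fixed target point p_B known from TCP teaching:
        p_B = R_f_i @ (R_X @ p_s_i + t_X) + t_f_i     for every pose i,
    which is linear in the 12 unknowns of R_X and t_X.
    flange_poses:  list of (R_f, t_f) of the flange in the robot base frame
    sensor_points: target point measured by the sensor, one per pose
    """
    A, b = [], []
    for (R_f, t_f), p_s in zip(flange_poses, sensor_points):
        # R_f @ R_X @ p_s = kron(p_s^T, R_f) @ vec(R_X)  (column-stacked vec)
        A.append(np.hstack([np.kron(p_s.reshape(1, 3), R_f), R_f]))
        b.append(target_in_base - t_f)
    x, *_ = np.linalg.lstsq(np.vstack(A), np.hstack(b), rcond=None)
    R_X = x[:9].reshape(3, 3, order="F")                 # undo column-stacking
    U, _, Vt = np.linalg.svd(R_X)                         # project onto SO(3)
    R_X = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    return R_X, x[9:]
```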

Findings

On this basis, the sensor measurement data can be easily and accurately converted to the robot coordinate system by matrix operation.

Originality/value

This method places no special requirements on robot pose control; its calibration process is fast, efficient and highly precise, and it has practical value for widespread adoption.

Details

Sensor Review, vol. 43 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 8 March 2011

Umer Khan, Ibrar Jan, Naeem Iqbal and Jian Dai

Abstract

Purpose

The purpose of this paper is to present the control of a six-degrees-of-freedom (PUMA560) robotic arm using visual servoing, based upon linear matrix inequality (LMI). The aim is to develop a method that requires neither camera calibration parameters nor inverse kinematics. The approach adopted in this paper includes transpose Jacobian control; thus, the inverse of the Jacobian matrix is no longer required. By invoking Lyapunov's direct method, closed-loop stability of the system is ensured. Simulation results are shown for three different cases, which exhibit system stability and convergence even in the presence of large errors.
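
The transpose-Jacobian idea can be illustrated with a minimal sketch (a generic control law; the paper's LMI-based gain design is more involved and is not reproduced here): the joint-velocity command is the transposed feature Jacobian applied to the feature error, so no matrix inversion is needed.

```python
import numpy as np

def transpose_jacobian_command(J, feature_error, gain=0.8):
    """Transpose-Jacobian visual servo law: q_dot = -gain * J^T @ e.

    J maps joint velocities to image-feature velocities (robot Jacobian
    composed with the image/interaction Jacobian); descending the squared
    feature error in this way is what the Lyapunov argument builds on.
    """
    return -gain * J.T @ feature_error

# Example: 8 stacked point-feature coordinates, 6 joints.
J = np.random.default_rng(0).standard_normal((8, 6))
e = np.full(8, 0.05)
dq = transpose_jacobian_command(J, e)
```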

Design/methodology/approach

The paper presents LMI-based visual servo control of the PUMA560 robotic arm.

Findings

The proposed method is implementable in dynamic environments owing to its independence from the camera and object models.

Research limitations/implications

The visibility constraint is not included during servoing; this may cause features to leave the camera's field of view (FOV).

Originality/value

LMI optimization is employed for visual servo control in an uncalibrated environment. Lyapunov's direct method is utilized, which ensures system stability and convergence.

Details

Industrial Robot: An International Journal, vol. 38 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 30 November 2021

Bence Tipary and Ferenc Gábor Erdős

Abstract

Purpose

The purpose of this paper is to propose a novel measurement technique and a modelless calibration method for improving the positioning accuracy of a three-axis parallel kinematic machine (PKM). The aim is to present a low-cost calibration alternative for small and medium-sized enterprises, as well as educational and research teams, with no expensive measuring devices at their disposal.

Design/methodology/approach

Using a chessboard pattern on a ground-truth plane, a digital indicator, a two-dimensional eye-in-hand camera and a laser pointer, positioning errors are explored in the machine workspace. With the help of these measurements, interpolation functions are set up per direction, resulting in an interpolation vector function that compensates for the volumetric errors in the workspace.
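
A minimal sketch of such per-direction interpolation compensation (placeholder grid and error values, assuming SciPy is available; none of the numbers come from the paper): positioning errors measured on a grid of commanded positions are interpolated and subtracted from future commands.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Hypothetical measured error field on a 3-D grid of commanded positions [m];
# one interpolator per Cartesian direction forms the interpolation vector function.
x = y = z = np.linspace(0.0, 0.3, 4)
err = np.random.default_rng(1).normal(0.0, 2e-4, (4, 4, 4, 3))   # placeholder errors

interp = [RegularGridInterpolator((x, y, z), err[..., k]) for k in range(3)]

def compensated_command(p):
    """Subtract the interpolated volumetric error at commanded position p."""
    e = np.array([f(p.reshape(1, 3))[0] for f in interp])
    return p - e

corrected = compensated_command(np.array([0.12, 0.05, 0.20]))
```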

Findings

Based on the proof-of-concept system for the linear-delta PKM, it is shown that using the proposed measurement technique and modelless calibration method, positioning accuracy is significantly improved using simple setups.

Originality/value

In the proposed method, a combination of low-cost devices is applied to improve the three-dimensional positioning accuracy of a PKM. By using the presented tools, the parametric kinematic model is not required; furthermore, the calibration setup is simple, and there is no need for hand–eye calibration or special fixturing in the machine workspace.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 20 October 2014

Hui Pan, Na Li Wang and Yin Shi Qin

Abstract

Purpose

The purpose of this paper is to propose a method that calibrates the hand-eye relationship for the eye-to-hand configuration, followed by a rectification to improve the accuracy of the general calibration.

Design/methodology/approach

The hand-eye calibration of the eye-to-hand configuration is formulated as the equation AX = XB, which has the same form as in eye-in-hand calibration. A closed-form solution is derived. To reduce the impact of noise, a rectification is conducted after the general calibration.
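
The closed-form treatment of AX = XB can be sketched generically (a Park-and-Martin-style solve, offered as an illustration rather than the authors' derivation): the rotation axes of the relative motions A_i and B_i are related by the unknown rotation, which an orthogonal Procrustes step recovers, after which the translation follows from a stacked linear system.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def solve_ax_xb(A_list, B_list):
    """Closed-form AX = XB solve from pairs of 4x4 relative motions (A_i, B_i)."""
    # Rotation: axes satisfy alpha_i = R_X @ beta_i, solved by orthogonal Procrustes.
    M = np.zeros((3, 3))
    for A, B in zip(A_list, B_list):
        alpha = Rotation.from_matrix(A[:3, :3]).as_rotvec()
        beta = Rotation.from_matrix(B[:3, :3]).as_rotvec()
        M += np.outer(alpha, beta)
    U, _, Vt = np.linalg.svd(M)
    R_X = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
    # Translation: (R_A - I) t_X = R_X t_B - t_A, stacked over all pairs.
    C, d = [], []
    for A, B in zip(A_list, B_list):
        C.append(A[:3, :3] - np.eye(3))
        d.append(R_X @ B[:3, 3] - A[:3, 3])
    t_X, *_ = np.linalg.lstsq(np.vstack(C), np.hstack(d), rcond=None)
    return R_X, t_X
```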

Findings

Simulation and actual experiments confirm that the accuracy of calibration is obviously improved.

Originality/value

Only a calibration plane is required for the hand-eye calibration. Taking the impact of noise into account, a rectification is carried out after the general calibration and, as a result, the accuracy is obviously improved. The method can be applied in many practical applications.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X
