Search results
1 – 10 of 164
Abstract
Purpose
This paper aims to propose a hand–eye calibration method of arc welding robot and laser vision sensor by using semidefinite programming (SDP).
Design/methodology/approach
The conversion relationship between the pixel coordinate system and the laser plane coordinate system is established on the basis of the mathematical model of three-dimensional measurement with the laser vision sensor. In addition, the conversion relationship between the arc welding robot coordinate system and the laser vision sensor measurement coordinate system is established on the basis of the hand–eye calibration model. Ordinary least squares (OLS) is used to calculate the rotation matrix, and SDP is used to identify the direction vectors of the rotation matrix so as to ensure their orthogonality.
Findings
The feasibility identification reduces the calibration error and ensures the orthogonality of the calibration results. More accurate calibration results can be obtained by combining OLS and SDP.
Originality/value
A set of advanced calibration methods is systematically established, covering parameter calibration of the laser vision sensor and hand–eye calibration between the robot and the sensor. For the hand–eye calibration, the physical feasibility problem of the rotation matrix is raised and solved through an SDP algorithm. The high-precision calibration results provide a good foundation for future research on seam tracking.
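The OLS-then-orthogonalisation idea in this abstract can be sketched compactly. The paper enforces orthogonality through SDP; the sketch below, a simplification, substitutes the SVD-based nearest-rotation projection (orthogonal Procrustes) for the SDP step, and the synthetic point data stand in for real sensor measurements:

```python
import numpy as np

def nearest_rotation(M):
    """Project an unconstrained 3x3 estimate onto SO(3) via SVD."""
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # fix an improper reflection
        U[:, -1] *= -1
        R = U @ Vt
    return R

# Synthetic correspondences q ≈ R p with a little measurement noise
rng = np.random.default_rng(0)
P = rng.standard_normal((3, 20))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
Q = R_true @ P + 0.01 * rng.standard_normal((3, 20))

# OLS: minimise ||Q - R P||_F with no orthogonality constraint
R_ols = Q @ P.T @ np.linalg.inv(P @ P.T)
R_proj = nearest_rotation(R_ols)      # enforce R^T R = I, det R = +1

print(np.allclose(R_proj.T @ R_proj, np.eye(3)))  # True
```

The unconstrained OLS estimate is generally not a valid rotation once noise enters; the projection step plays the role the SDP feasibility identification plays in the paper.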
Xu Jingbo, Li Qiaowei and White Bai
Abstract
Purpose
The purpose of this study is to solve the hand–eye calibration issue for a line structured light vision sensor. Only after hand–eye calibration can the sensor measurement data be applied to the robot system.
Design/methodology/approach
In this paper, hand–eye calibration methods are studied for both eye-in-hand and eye-to-hand configurations. First, the coordinates of the target point in the robot system are obtained via the tool centre point (TCP); then the robot is controlled so that the sensor measures the target point in multiple poses, yielding the measurement data and pose data; finally, the sum of squared calibration errors is minimized by the least squares method. Furthermore, the missing vector in the process of solving the transformation matrix is obtained by vector operations, and the complete matrix is obtained.
Findings
On this basis, the sensor measurement data can be easily and accurately converted to the robot coordinate system by matrix operation.
Originality/value
This method imposes no special requirements on robot pose control; its calibration process is fast and efficient, achieves high precision and has practical value for wider adoption.
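The least-squares core of such a calibration can be illustrated with the classic Kabsch/Umeyama closed form for a rigid transform from corresponding points. This is a hedged sketch, not the paper's exact formulation, and the synthetic points stand in for TCP measurements:

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares R, t such that B ≈ R A + t (Kabsch/Umeyama).
    A, B are 3xN arrays of corresponding points."""
    ca = A.mean(axis=1, keepdims=True)
    cb = B.mean(axis=1, keepdims=True)
    H = (A - ca) @ (B - cb).T             # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation
    t = cb - R @ ca
    return R, t

# Synthetic check: recover a known transform exactly (no noise)
rng = np.random.default_rng(2)
A = rng.standard_normal((3, 10))
c, s = np.cos(0.5), np.sin(0.5)
R_true = np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
t_true = np.array([[10.0], [-5.0], [3.0]])
B = R_true @ A + t_true
R_est, t_est = rigid_transform(A, B)
```

With noisy measurements the same closed form minimises the sum of squared errors, which is the quantity the abstract describes minimising.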
Jing Bai, Yuchang Zhang, Xiansheng Qin, Zhanxi Wang and Chen Zheng
Abstract
Purpose
The purpose of this paper is to present a visual detection approach to predict the poses of target objects placed in arbitrary positions before completing the corresponding tasks in mobile robotic manufacturing systems.
Design/methodology/approach
A hybrid visual detection approach that combines monocular vision and laser ranging is proposed based on an eye-in-hand vision system. The laser displacement sensors are adopted to achieve normal alignment with an arbitrary plane and to obtain depth information. The monocular camera captures two-dimensional image information. In addition, a robot hand-eye relationship calibration method is presented in this paper.
Findings
First, a hybrid visual detection approach for mobile robotic manufacturing systems is proposed. This detection approach is based on an eye-in-hand vision system consisting of one monocular camera and three laser displacement sensors, and it can achieve normal alignment with an arbitrary plane and spatial positioning of the workpiece. Second, based on this vision system, a robot hand-eye relationship calibration method is presented; it was successfully applied to a mobile robotic manufacturing system designed by the authors’ team. As a result, the relationship between the workpiece coordinate system and the end-effector coordinate system could be established accurately.
Practical implications
This approach can quickly and accurately establish the relationship between the coordinate system of the workpiece and that of the end-effector. The normal alignment accuracy of the hand-eye vision system was less than 0.5° and the spatial positioning accuracy could reach 0.5 mm.
Originality/value
This approach can achieve normal alignment for arbitrary planes and spatial positioning of the workpiece and it can quickly establish the pose relationship between the workpiece and end-effector coordinate systems. Moreover, the proposed approach can significantly improve the work efficiency, flexibility and intelligence of mobile robotic manufacturing systems.
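The normal-alignment step from three laser depth readings can be sketched as follows; the sensor offsets and depth values are assumed example numbers, not the authors' hardware layout:

```python
import numpy as np

# Hypothetical layout: three laser spots at known (x, y) offsets (mm)
offsets = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0]])
depths  = np.array([50.0, 52.0, 49.0])   # assumed example readings (mm)

# Three 3-D points on the target plane
pts = np.column_stack([offsets, depths])

# Plane normal from the cross product of two in-plane edges
n = np.cross(pts[1] - pts[0], pts[2] - pts[0])
n /= np.linalg.norm(n)

# Tilt of the plane relative to the sensor axis (z)
tilt = np.degrees(np.arccos(abs(n[2])))
print(f"normal = {n}, tilt = {tilt:.2f} deg")
```

Driving the tilt angle to zero by reorienting the end effector is one way to realise the normal alignment the abstract describes.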
Abstract
Purpose
The aim of this study is to create a robust and simple collision avoidance approach based on quaternion algebra for vision-based pick and place applications in manufacturing industries, specifically for use with industrial robots and collaborative robots (cobots).
Design/methodology/approach
In this study, an approach based on quaternion algebra is developed to prevent collisions or breakdowns during the movements of industrial robots or cobots in vision-based pick and place applications. The algorithm, integrated into the control system, checks for collisions before the robot moves its end effector to the target position during the process flow. In addition, a hand–eye calibration method is presented to easily calibrate the camera and define the geometric relationships between the camera and robot coordinate systems.
Findings
This approach, designed specifically for vision-based robot/cobot applications, can be used by developers and robot integrator companies to significantly reduce the application costs and project timeline of a pick and place robotics system installation. Furthermore, the approach ensures a safe, robust and highly efficient solution for robotic vision applications across industries.
Originality/value
The algorithm for this approach, which can run in a robot controller or a programmable logic controller, has been tested in real time in vision-based robotics applications. It can be applied to both existing and new vision-based pick and place projects with industrial or collaborative robots with minimal effort, making it a cost-effective and efficient solution for various industries.
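The quaternion machinery behind such a pre-move check can be sketched as below; the `clear_of_obstacle` rule, the spherical keep-out zone and all numbers are hypothetical illustrations, not the paper's algorithm:

```python
import numpy as np

def quat_mul(q, r):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Rotate vector v by unit quaternion q: q v q*."""
    qv = np.concatenate(([0.0], v))
    qc = q * np.array([1.0, -1.0, -1.0, -1.0])  # conjugate
    return quat_mul(quat_mul(q, qv), qc)[1:]

def clear_of_obstacle(q, v_cam, centre, radius):
    """Hypothetical safety rule: the camera-frame target, rotated into
    the robot frame, must lie outside a spherical keep-out zone."""
    return np.linalg.norm(rotate(q, v_cam) - centre) > radius

q90 = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])  # 90° about z
target = rotate(q90, np.array([1.0, 0.0, 0.0]))               # ~ [0, 1, 0]
print(clear_of_obstacle(q90, np.array([1.0, 0, 0]), np.zeros(3), 0.5))
```

Running the check before commanding each end-effector move mirrors the control-flow position the abstract gives the algorithm.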
Guoyang Wan, Fudong Li, Wenjun Zhu and Guofeng Wang
Abstract
Purpose
The positioning and grasping of large-size objects have always suffered from low positioning accuracy, slow grasping speed and high application cost compared with tasks involving ordinary small parts. This paper aims to propose and implement a binocular vision-guided grasping system for large-size objects with an industrial robot.
Design/methodology/approach
To guide the industrial robot to grasp the object with high position and pose accuracy, this study measures the pose of the object by extracting and reconstructing three non-collinear feature points on it. To improve the precision and robustness of the pose measurement, a coarse-to-fine positioning strategy is proposed. First, a coarse but stable feature is chosen to locate the object in the image and provide initial regions for the fine features. Second, three circular holes are chosen as the fine features, whose centres are extracted with a robust ellipse-fitting strategy and thus determine the precise pose and position of the object.
Findings
Experimental results show that the proposed system achieves high robustness, with positioning accuracy of ±1 mm and pose accuracy of ±0.5°.
Originality/value
This is a high-accuracy method that can be used for industrial robot vision guidance and grasp location.
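Recovering a pose from three non-collinear points, as described above, amounts to building an orthonormal frame; a minimal sketch follows (the axis convention is an assumption, not necessarily the authors'):

```python
import numpy as np

def frame_from_points(p1, p2, p3):
    """Object pose (R, origin) from three non-collinear 3-D points:
    x along p1->p2, z normal to the point plane, y completing the frame."""
    x = p2 - p1
    x /= np.linalg.norm(x)
    z = np.cross(x, p3 - p1)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)
    return np.column_stack([x, y, z]), p1

# Three reconstructed hole centres (assumed example coordinates)
R, origin = frame_from_points(np.array([0.0, 0.0, 0.0]),
                              np.array([1.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0]))
print(R)   # columns are the object axes; here the identity
```

The grasp pose then follows by composing this object frame with the calibrated camera-to-robot transform.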
Bence Tipary and Ferenc Gábor Erdős
Abstract
Purpose
The purpose of this paper is to propose a novel measurement technique and a modelless calibration method for improving the positioning accuracy of a three-axis parallel kinematic machine (PKM). The aim is to present a low-cost calibration alternative, for small and medium-sized enterprises, as well as educational and research teams, with no expensive measuring devices at their disposal.
Design/methodology/approach
Using a chessboard pattern on a ground-truth plane, a digital indicator, a two-dimensional eye-in-hand camera and a laser pointer, positioning errors are explored in the machine workspace. With the help of these measurements, interpolation functions are set up per direction, resulting in an interpolation vector function that compensates for the volumetric errors in the workspace.
Findings
Based on the proof-of-concept system for the linear-delta PKM, it is shown that using the proposed measurement technique and modelless calibration method, positioning accuracy is significantly improved using simple setups.
Originality/value
In the proposed method, a combination of low-cost devices is applied to improve the three-dimensional positioning accuracy of a PKM. With the presented tools, the parametric kinematic model is not required; furthermore, the calibration setup is simple, and there is no need for hand–eye calibration or special fixturing in the machine workspace.
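The per-direction interpolation idea can be shown in one dimension; the commanded positions and error values below are assumed sample numbers, not measured data:

```python
import numpy as np

# Commanded positions (mm) and measured positioning errors there (mm);
# assumed sample values standing in for the chessboard/indicator survey
commanded = np.array([0.0, 50.0, 100.0, 150.0, 200.0])
error     = np.array([0.00, 0.12, 0.18, 0.11, 0.05])

def compensate(target):
    """Predict the error by linear interpolation and pre-subtract it,
    so the machine lands on the intended position."""
    return target - np.interp(target, commanded, error)

print(compensate(75.0))   # commands slightly short of 75 to land on it
```

The modelless calibration in the abstract extends the same idea to a vector-valued interpolation over the three-dimensional workspace.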
Haixia Wang, Shuhan Shen and Xiao Lu
Abstract
Purpose
The purpose of this paper is to propose a screw axis identification (SAI) method based on the product of exponentials (POE) model, which is concerned with calibrating a serial robot with m joints equipped with a stereo‐camera vision system.
Design/methodology/approach
Different from conventional approaches such as circle point analysis (CPA) or the system-theoretic method, which must collect a great deal of data, the identification of the joint parameters in the proposed method only needs m+1 measurements of n (n ≥ 3) target points mounted on the manipulator end-effector.
Findings
In this approach, the joint parameter, called a screw or twist, together with the actual value of the joint angle, can be obtained by linearly solving a closed-form expression. Further, this method avoids calibrating the hand–eye relationship and the exterior parameters of the robot.
Originality/value
Finally, the stability and accuracy of the SAI method are evaluated by simulation experiments, and the method is also verified in practical experiments.
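The POE model underlying the SAI method maps a screw (twist) and a joint angle to a rigid transform via the matrix exponential; a standard sketch using the closed-form Rodrigues expressions:

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(w, v, theta):
    """SE(3) exponential of a unit-axis screw (w, v) at joint angle theta."""
    W = hat(w)
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
    G = (np.eye(3) * theta + (1 - np.cos(theta)) * W
         + (theta - np.sin(theta)) * W @ W)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = G @ v
    return T

# Revolute joint about z through the origin, rotated 90 degrees
T = exp_twist(np.array([0.0, 0.0, 1.0]), np.zeros(3), np.pi / 2)
print(np.round(T, 3))
```

Composing one such exponential per joint gives the POE forward kinematics whose screw parameters the SAI method identifies.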
Xuhui Ye, Gongping Wu, Fei Fan, XiangYang Peng and Ke Wang
Abstract
Purpose
Accurate detection of the overhead ground wire in open surroundings with varying illumination is the premise of reliable line grasping with the off-line arm when the inspection robot crosses obstacles automatically. This paper aims to propose an improved approach, called adaptive homomorphic filter and supervised learning (AHSL), for overhead ground wire detection.
Design/methodology/approach
First, to decrease the influence of the varying illumination caused by the open working environment of the inspection robot, an adaptive homomorphic filter is introduced to compensate for the changing illumination. Second, to represent the ground wire more effectively and to extract more powerful and discriminative information for building a binary classifier, a global and local feature fusion method followed by the supervised learning method support vector machine is proposed.
Findings
Experimental results on two self-built testing data sets, A and B, which contain relatively older and relatively newer ground wires respectively, and on field ground wires show that the adaptive homomorphic filter and the global and local feature fusion method can improve ground wire detection accuracy effectively. The results of the proposed method lay a solid foundation for the inspection robot grasping the ground wire by visual servoing.
Originality/value
The AHSL method achieves 80.8 per cent detection accuracy on data set A (relatively older ground wires) and 85.3 per cent on data set B (relatively newer ground wires), and the field experiment shows that the robot can detect the ground wire accurately. The performance achieved by the proposed method is state of the art in open environments with varying illumination.
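The classic (non-adaptive) homomorphic filter that AHSL builds on can be sketched with NumPy; the cutoff and gain parameters below are assumed values, and the paper's adaptive parameter selection is not reproduced:

```python
import numpy as np

def homomorphic_filter(img, cutoff=30.0, gamma_l=0.5, gamma_h=2.0):
    """Classic homomorphic filter: log -> FFT -> Gaussian high-emphasis
    -> inverse FFT -> exp. Suppresses slow illumination variation and
    boosts reflectance detail."""
    rows, cols = img.shape
    log_img = np.log1p(img.astype(np.float64))
    F = np.fft.fftshift(np.fft.fft2(log_img))
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D2 = u[:, None] ** 2 + v[None, :] ** 2       # squared distance from DC
    H = (gamma_h - gamma_l) * (1 - np.exp(-D2 / (2 * cutoff ** 2))) + gamma_l
    out = np.fft.ifft2(np.fft.ifftshift(H * F)).real
    return np.expm1(out)

img = np.random.default_rng(1).uniform(0, 255, (64, 64))
filtered = homomorphic_filter(img)
print(filtered.shape)   # (64, 64)
```

Because illumination is roughly multiplicative, working in the log domain turns it into an additive low-frequency component that the high-emphasis transfer function attenuates.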
Abstract
Three themes in complex information processing are revealing themselves to be mutually interconnected: problem‐solving mechanisms, automatic program writing, and the organization of large bodies of knowledge in machine memory. Interconnections are discussed in the contexts of chess and of automatic assembly. Reference is also made to automated chemistry systems.
Jiang Daqi, Wang Hong, Zhou Bin and Wei Chunfeng
Abstract
Purpose
This paper aims to save the time spent on manufacturing the data set and to make the intelligent grasping system easy to deploy in a practical industrial environment. Owing to the accuracy and robustness of the convolutional neural network, the success rate of the grasping operation reaches a high level.
Design/methodology/approach
The proposed system comprises two different convolutional neural network (CNN) algorithms used in different stages and a binocular eye-in-hand system on the end effector, which detects the position and orientation of the workpiece. Both algorithms are trained on data sets containing images and annotations, which are generated automatically by the proposed method.
Findings
The approach can be successfully applied to the standard position-controlled robots common in industry. The algorithm performs excellently in terms of elapsed time: processing a 256 × 256 image takes less than 0.1 s without relying on high-performance GPUs. The approach is validated in a series of grasping experiments. This method frees workers from monotonous work and improves factory productivity.
Originality/value
The authors propose a novel neural network whose performance is tested to be excellent. Moreover, experimental results demonstrate that the proposed second level is extraordinarily robust to environmental variations. The data sets are generated automatically, which saves the time spent on manufacturing them and makes the intelligent grasping system easy to deploy in a practical industrial environment.
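The automatic data-set generation idea can be illustrated with a toy synthesiser that pastes a known object into random scenes and records its bounding box as a free annotation; everything here (shapes, sizes, values) is a hypothetical stand-in for the paper's pipeline:

```python
import numpy as np

def make_sample(rng, size=64, patch=8):
    """Synthesise one training image: a bright square 'workpiece' pasted
    at a random position, with its bounding box as the auto-annotation."""
    img = rng.uniform(0, 0.2, (size, size))          # background noise
    x = int(rng.integers(0, size - patch))
    y = int(rng.integers(0, size - patch))
    img[y:y + patch, x:x + patch] = 1.0              # the object
    bbox = (x, y, patch, patch)                      # annotation, for free
    return img, bbox

rng = np.random.default_rng(0)
dataset = [make_sample(rng) for _ in range(100)]
print(len(dataset), dataset[0][0].shape)
```

Because the synthesiser places the object itself, every image comes with an exact label at zero annotation cost, which is the time saving the abstract emphasises.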