Search results
1 – 10 of 191
Yanbiao Zou, Jinchao Li and Xiangzhi Chen
Abstract
Purpose
This paper aims to propose a six-axis robot-arm welding seam tracking experimental platform based on the Halcon machine vision library to resolve the curved-seam tracking problem.
Design/methodology/approach
The robot base and image coordinate systems are related through the mathematical model of three-dimensional structured-light vision measurement and the conversion between the robot base and camera coordinate systems. An object-tracking algorithm based on weighted local cosine similarity is adopted to detect the seam feature points and effectively suppress interference from arc light and spatter. The algorithm models the target state variable and corresponding observation vector within a Bayesian framework and finds the optimal region, the one with the highest cosine similarity to the selected image template.
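The matching step can be sketched numerically. The following is a minimal illustration of weighted local cosine similarity template matching, not the authors' implementation; the window sizes, weights and toy data are assumptions for the example:

```python
import numpy as np

def cosine_similarity_match(image, template, weights=None):
    """Slide `template` over `image` and return the top-left corner of the
    window with the highest weighted cosine similarity.

    `weights` lets pixels near the expected seam feature count more;
    defaults to uniform weighting."""
    th, tw = template.shape
    if weights is None:
        weights = np.ones_like(template, dtype=float)
    wt = (weights * template).ravel()
    best_score, best_pos = -1.0, (0, 0)
    H, W = image.shape
    for y in range(H - th + 1):
        for x in range(W - tw + 1):
            patch = image[y:y + th, x:x + tw].astype(float)
            wp = (weights * patch).ravel()
            denom = np.linalg.norm(wt) * np.linalg.norm(wp)
            if denom == 0:
                continue
            score = float(wt @ wp) / denom  # weighted cosine similarity
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Toy example: embed the template at row 5, column 7 of a noisy image.
rng = np.random.default_rng(0)
template = rng.random((4, 4))
image = rng.random((20, 20)) * 0.1
image[5:9, 7:11] = template
pos, score = cosine_similarity_match(image, template)
```

In the seam-tracking setting, the template would be the image module selected around the previous seam feature point and the search window would come from the Bayesian state prediction.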
Findings
Experiments show that the system achieves accurate real-time curved-seam tracking during metal inert-gas (MIG) welding at currents up to 200 A, under strong arc light and spatter. The minimum distance between the laser stripe and the weld pool reaches 15 mm, and the sensor sampling frequency reaches 50 Hz.
Originality/value
A six-axis robot-arm welding seam tracking platform with a structured-light sensor system is designed based on the Halcon machine vision library, and an object-tracking algorithm is added to the seam-tracking system to detect image feature points. With this technology, the system can track curved seams while welding.
Xueyong Li, Changhou Lu, Rujing Xiao, Jianchuan Zhang and Jie Ding
Abstract
Purpose
The purpose of this paper is to present a novel image-sensing technology for raised characters based on line structured light, which converts the characters' three-dimensional (3D) features into image grayscale levels.
Design/methodology/approach
The measurement principle and mathematical model are described, an experimental device is built, and the system parameters are calibrated. A grayscale-conversion algorithm is proposed to convert the distortion of the laser stripe into image grayscale intensity. The article also introduces a four-factor method for assessing the image quality of the characters.
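The grayscale-conversion idea can be sketched as follows. This is a hypothetical mapping, not the authors' algorithm: the baseline row, scale factor and clipping are illustrative assumptions. Raised features deflect the stripe away from its flat-surface position, and mapping that deflection to intensity yields a high-contrast "height image" without full 3D reconstruction:

```python
import numpy as np

def stripe_to_grayscale(stripe_rows, baseline_row, scale=8.0):
    """Convert per-column laser-stripe displacement (pixels, relative to
    the flat-surface baseline) directly into grayscale intensities.

    `scale` (gray levels per pixel of displacement) is an assumed tuning
    parameter; the flat surface maps to mid-gray."""
    disp = stripe_rows - baseline_row            # displacement ~ height
    gray = np.clip(128 + scale * disp, 0, 255)
    return gray.astype(np.uint8)

# Toy column profile: flat background with one raised character stroke.
stripe = np.array([50, 50, 50, 54, 58, 54, 50, 50], dtype=float)
gray = stripe_to_grayscale(stripe, baseline_row=50.0)
```

Because only a per-column peak detection and a linear map are needed, this is consistent with the paper's claim that no complicated calibration or heavy computation is required.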
Findings
Experimental results show that the method obtains high-contrast images of raised characters that conventionally have low contrast with the background. Moreover, the method needs neither complicated calibration nor heavy computation, which keeps the system structure simple and increases the speed of image acquisition.
Originality/value
The paper presents a novel image acquisition method for raised characters.
Xu Jingbo, Li Qiaowei and White Bai
Abstract
Purpose
The purpose of this study is to solve the hand–eye calibration problem for a line structured-light vision sensor; only after hand–eye calibration can the sensor's measurement data be applied in the robot system.
Design/methodology/approach
In this paper, hand–eye calibration methods are studied for both eye-in-hand and eye-to-hand configurations. First, the coordinates of the target point in the robot frame are obtained via the tool centre point (TCP); then the robot is controlled so that the sensor measures the target point in multiple poses, yielding the measurement and pose data; finally, the sum of squared calibration errors is minimized by the least-squares method. Furthermore, the vector missing from the solution of the transformation matrix is recovered by vector operations, giving the complete matrix.
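The least-squares step can be illustrated numerically. The sketch below assumes an eye-in-hand configuration with noise-free synthetic data and solves for the sensor-to-flange transform from repeated measurements of one fixed target point; the function and variable names are illustrative, not from the paper:

```python
import numpy as np

def solve_hand_eye_lstsq(robot_poses, sensor_points, target_point):
    """Least-squares hand-eye calibration for an eye-in-hand line sensor.

    A fixed point `target_point` (robot base frame, e.g. found via TCP) is
    measured as `sensor_points[i]` in the sensor frame while the flange is
    at pose (R_i, t_i).  We stack the equations
        R_i (R_x p_i + t_x) + t_i = target_point
    and solve linearly for the 12 unknowns (entries of R_x and t_x)."""
    A_rows, b_rows = [], []
    I3 = np.eye(3)
    for (R_i, t_i), p_i in zip(robot_poses, sensor_points):
        # R_x p_i = (p_i^T kron I3) vec(R_x)   (column-major vec)
        A_rows.append(np.hstack([R_i @ np.kron(p_i.reshape(1, 3), I3), R_i]))
        b_rows.append(target_point - t_i)
    x, *_ = np.linalg.lstsq(np.vstack(A_rows), np.concatenate(b_rows),
                            rcond=None)
    return x[:9].reshape(3, 3, order="F"), x[9:]

# Synthetic check against a known ground-truth transform.
rng = np.random.default_rng(1)
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
t_true = np.array([0.05, -0.02, 0.10])
target = np.array([0.8, 0.1, 0.4])
poses, meas = [], []
for _ in range(6):
    a = rng.uniform(-0.5, 0.5, 2)
    Rz = np.array([[np.cos(a[0]), -np.sin(a[0]), 0],
                   [np.sin(a[0]),  np.cos(a[0]), 0], [0, 0, 1]])
    Ry = np.array([[np.cos(a[1]), 0, np.sin(a[1])], [0, 1, 0],
                   [-np.sin(a[1]), 0, np.cos(a[1])]])
    R_i, t_i = Rz @ Ry, rng.uniform(-0.3, 0.3, 3)
    # what the sensor would measure from this pose
    p_i = R_true.T @ (R_i.T @ (target - t_i) - t_true)
    poses.append((R_i, t_i))
    meas.append(p_i)
R_est, t_est = solve_hand_eye_lstsq(poses, meas, target)
```

Four or more poses whose measured points span 3D space are needed for the 12 unknowns to be determined; the paper's vector-operation step for recovering the missing direction vector is not reproduced here.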
Findings
On this basis, the sensor measurement data can be easily and accurately converted to the robot coordinate system by matrix operation.
Originality/value
This method places no special requirements on robot pose control, and its calibration process is fast and efficient, with high precision and practical value for wider adoption.
Xiaojun Wu, Bo Liu, Peng Li and Yunhui Liu
Abstract
Purpose
Existing calibration methods mainly address camera laser-plane calibration for a single laser-line length, which is inconvenient and cannot guarantee consistent results when several three-dimensional (3D) scanners are involved. Thus, this study aims to provide a unified procedure covering the calibration requirements of laser profile measurement (LPM) systems with different laser-line lengths.
Design/methodology/approach
3D LPM converts physical objects into 3D digital models, and camera laser-plane calibration is critical for ensuring system precision. Conventional calibration methods for 3D LPM, however, typically use a calibration target to calibrate the system for a single laser-line length, which requires multiple calibration patterns and complicates the procedure. In this paper, a unified calibration method is proposed to automatically calibrate the camera laser-plane parameters of LPM systems with different laser-line lengths. The authors designed an elaborate planar calibration target with different-sized rings, mounted on a motorized linear platform, to calculate the laser-plane parameters of the LPM systems. The camera coordinates of the control points are then obtained from the intersection line between the laser line and the planar target. With a newly proposed error-correction model, errors caused by hardware assembly can be corrected. To validate the method, three LPM devices with different laser-line lengths were used; experimental results show that the proposed method calibrates LPM systems with different laser-line lengths conveniently, with a standard set of steps.
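Once the control points on the laser plane are known in camera coordinates, the laser-plane parameters reduce to a plane fit. As a sketch (not the authors' procedure; the synthetic plane and grid are illustrative), a total least-squares fit via SVD looks like:

```python
import numpy as np

def fit_laser_plane(points):
    """Fit a plane n.x = d to 3D control points (N x 3) by total least
    squares: the unit normal is the right singular vector of the centred
    points with the smallest singular value."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid)
    n = Vt[-1]                 # unit normal of the best-fit plane
    d = float(n @ centroid)    # plane offset along the normal
    return n, d

# Control points sampled from the plane z = 0.5 x + 0.2 y + 1.
xs, ys = np.meshgrid(np.linspace(-1, 1, 5), np.linspace(-1, 1, 5))
pts = np.column_stack([xs.ravel(), ys.ravel(),
                       0.5 * xs.ravel() + 0.2 * ys.ravel() + 1.0])
n, d = fit_laser_plane(pts)
```

Stepping the motorized platform sweeps the target through the laser plane, so the same fit works unchanged for any laser-line length, which is the point of the unified procedure.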
Findings
The repeatability and accuracy of the proposed calibration prototypes were evaluated with high-precision workpieces. The experiments have shown that the proposed method is highly adaptive and can automatically calibrate the LPM system with different laser-line lengths with high accuracy.
Research limitations/implications
In the repeatability experiments, there were errors in the measured heights of the test workpieces, because each laser emitter has an optimal working distance and laser-line length.
Practical implications
By using the proposed method and device, the calibration of a 3D laser-scanning device can be done automatically.
Social implications
The calibration efficiency of a laser camera device is increased.
Originality/value
The authors proposed a unified calibration method for LPM systems with different laser-line lengths, consisting of a motorized linear joint and a calibration target with elaborately designed ring patterns, and realized automatic parameter calibration.
Ming-Yuan Shieh, Chung-Yu Hsieh and Tsung-Min Hsieh
Abstract
Purpose
The purpose of this paper is to propose a fast object-detection algorithm based on structured-light analysis, which aims to detect and recognize human gestures and poses and then derive the corresponding commands for human–robot interaction control.
Design/methodology/approach
In this paper, human poses are estimated and analyzed by the proposed scheme, and the resulting data, processed by a fuzzy decision-making system, are used to launch the corresponding robot motions. An RGB camera and an infrared light module perform distance estimation of one or more bodies.
Findings
The modules provide not only image perception but also skeleton detection. A laser source in the infrared light module emits invisible infrared light, which passes through a filter and is scattered into a semi-random but constant pattern of small dots projected onto the environment in front of the sensor. The reflected pattern is then detected by an infrared camera and analyzed for depth estimation. Since object depth is a key parameter for pose recognition, the distance to each dot can be estimated and depth information obtained from the geometry between emitter and receiver.
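The per-dot depth recovery is ordinary triangulation: the shift of each dot relative to its reference position is inversely proportional to depth. A minimal sketch with illustrative numbers (the focal length, baseline and disparity below are assumptions, not values from the paper):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a projector-camera pair: a dot whose image
    shifts by `disparity_px` pixels relative to its reference position
    lies at depth Z = f * b / disparity (pinhole model, rectified
    geometry)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: 580 px focal length, 7.5 cm emitter-receiver baseline,
# 29 px observed dot shift.
z = depth_from_disparity(focal_px=580.0, baseline_m=0.075, disparity_px=29.0)
```

Repeating this for every dot in the pattern yields the depth map from which the skeleton and pose are then extracted.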
Research limitations/implications
Future work will aim to reduce the computation time of object estimation and to tune parameters adaptively.
Practical implications
The experimental results demonstrate the feasibility of the proposed system.
Originality/value
This paper achieves real-time human-robot interaction by visual detection based on structural light analysis.
Abstract
Purpose
This paper aims to propose a hand–eye calibration method for an arc-welding robot and a laser vision sensor using semidefinite programming (SDP).
Design/methodology/approach
The conversion between the pixel coordinate system and the laser-plane coordinate system is established from the mathematical model of three-dimensional measurement with the laser vision sensor, and the conversion between the arc-welding robot coordinate system and the sensor measurement coordinate system is established from the hand–eye calibration model. Ordinary least squares (OLS) is used to calculate the rotation matrix, and SDP is used to identify the direction vectors of the rotation matrix so as to ensure their orthogonality.
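The problem being solved in the second step is enforcing that an unconstrained OLS estimate is a proper rotation. The paper does this with SDP; the sketch below instead uses the standard SVD (orthogonal Procrustes) projection as a stand-in, which solves the same feasibility problem in closed form:

```python
import numpy as np

def nearest_rotation(M):
    """Project an unconstrained 3x3 estimate onto SO(3): the closest
    rotation in the Frobenius norm is R = U diag(1, 1, det(U V^T)) V^T,
    where M = U S V^T is the SVD."""
    U, _, Vt = np.linalg.svd(M)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# A perturbed, non-orthogonal least-squares estimate of a rotation:
c, s = np.cos(0.4), np.sin(0.4)
R_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
M = R_true + 0.01 * np.arange(9).reshape(3, 3)   # systematic perturbation
R = nearest_rotation(M)
```

Either way, the corrected matrix has orthonormal columns and determinant +1, which is the "physical feasibility" the Originality/value section refers to.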
Findings
The feasibility identification reduces the calibration error and ensures the orthogonality of the calibration results; more accurate results are obtained by combining OLS with SDP.
Originality/value
A complete set of calibration methods is systematically established, covering parameter calibration of the laser vision sensor and hand–eye calibration between robot and sensor. For the hand–eye calibration, the physical feasibility problem of the rotation matrix is raised and solved with the SDP algorithm. The high-precision calibration results provide a good foundation for future research on seam tracking.
Zhiming Chen, Lei Li, Yunhua Wu, Bing Hua and Kang Niu
Abstract
Purpose
On-orbit servicing is a key technology for space manipulation activities such as spacecraft life extension, faulty-spacecraft capture and on-orbit debris removal. Failed satellites, space debris and adversary spacecraft are almost all non-cooperative targets. Relatively accurate pose estimation is critical to spatial operations, but it is also a recognized technical difficulty because no prior information about a non-cooperative target is available. With the rapid development of laser radar, laser scanning equipment is increasingly used for measuring non-cooperative targets, so a new pose-estimation method for non-cooperative targets based on 3D point clouds is needed. The paper aims to discuss these issues.
Design/methodology/approach
In this paper, a method based on the inherent characteristics of a spacecraft is proposed for estimating the pose (position and attitude) of a spatial non-cooperative target. First, the acquired point cloud is preprocessed to reduce noise and improve data quality. Second, according to the features of the satellite, a recognition system for non-cooperative measurement is designed, with components common to satellite configurations chosen as the recognized objects. Finally, based on the identified object, the iterative closest point (ICP) algorithm is used to compute the pose between point-cloud frames captured at different times, completing the pose estimation.
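The final ICP step can be sketched in a few lines. This is a generic point-to-point ICP on a synthetic cloud, not the paper's component-constrained variant; the cloud size, motion and iteration count are illustrative assumptions:

```python
import numpy as np

def icp(source, target, iters=40):
    """Minimal point-to-point ICP: match each source point to its nearest
    target point, then solve the best rigid transform in closed form
    (Kabsch/SVD).  Returns R, t with  R @ source_i + t ~ target."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(axis=1)]
        mu_s, mu_m = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
        R = Vt.T @ D @ U.T
        t = mu_m - R @ mu_s
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Synthetic check: a slightly rotated and translated copy of a cloud.
rng = np.random.default_rng(2)
cloud = rng.random((200, 3))
ang = 0.05
Rz = np.array([[np.cos(ang), -np.sin(ang), 0],
               [np.sin(ang),  np.cos(ang), 0], [0, 0, 1]])
t_true = np.array([0.02, -0.01, 0.01])
target_cloud = cloud @ Rz.T + t_true
R_est, t_est = icp(cloud, target_cloud)
```

Restricting `source` to points on a recognized component, as the paper does, shrinks the correspondence search and is what yields the reported speed-up.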
Findings
By reducing the number of matching points, the new method increases matching speed and improves pose-estimation accuracy compared with traditional methods. The recognition of components on non-cooperative spacecraft directly supports space docking, on-orbit capture and relative navigation.
Research limitations/implications
Limited by the measurement range of the laser radar, this paper considers pose estimation for non-cooperative spacecraft only at close range.
Practical implications
The pose-estimation method in this paper is mainly applicable to close-proximity space operations, such as the final rendezvous phase or the ultra-close approach phase of target capture. The system can recognize the components to be captured and provide the relative pose of the non-cooperative spacecraft. The method is more robust than traditional single-component recognition and overall matching methods when the laser-radar scan is incomplete or components are occluded.
Originality/value
This paper introduces a new pose-estimation method for non-cooperative spacecraft based on point clouds. The experimental results show that the proposed method effectively identifies the features of non-cooperative targets and tracks their position and attitude. The method is robust to noise and greatly improves the speed of pose estimation while maintaining accuracy.
Abstract
Purpose
To report on developments in robotic vision by a particular robot manufacturer.
Design/methodology/approach
Examines FANUC Robotics' philosophy and history of integrated vision, describes its latest offering, and looks at the specification of the new robot controller.
Findings
The new robot controller incorporates image processing hardware and software, including calibration procedures. The intelligent robot responds to changes in its surroundings, eliminating the need for jigs and part‐alignment devices and broadening its capabilities.
Originality/value
Presents the intelligent robot as a practical tool in factory automation.
Cengiz Deniz and Mustafa Cakir
Abstract
Purpose
This paper aims to introduce a simple hand-eye calibration method that can be easily applied with different objective functions.
Design/methodology/approach
The hand-eye calibration is solved using the closed-form absolute-orientation equations. Instead of processing all samples together, the proposed method iterates over all minimal solution sets, and the final result is chosen after evaluating the candidates against an arbitrary objective. At this stage, outliers can optionally be excluded if more accuracy is desired.
Findings
The proposed method is very flexible and gives more accurate and convenient results than existing solutions. The mathematical error expression defined by the calibration equations may not be valid in practice, especially when systematic distortions are present; the simulations show that the solution with the least mathematical error may nevertheless yield incorrect, incompatible results under practical demands.
Research limitations/implications
The performance of the calibration performed with the proposed method is compared with reference methods from the literature. When back-projection error, which corresponds to point repeatability, is benchmarked, the proposed approach is the most successful of all the methods. Owing to its robustness, the recommended method was adopted for tooling-sensor calibration at the robotic non-destructive testing station of the Ford-OTOSAN Kocaeli Plant Body Shop Department.
Originality/value
Arranging the well-known AX = XB calibration equation in quaternion form as Q_A = Q_X ⊗ Q_B ⊗ Q_X* (with Q_X* the conjugate of the unit quaternion Q_X) reveals another common spatial-rotation equation. In this way, the absolute-orientation solution satisfies the hand-eye calibration equations. The proposed solution is not presented in the literature as a standalone hand-eye calibration method, although some researchers hint at the related formulations.
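The quaternion rearrangement can be checked numerically: if Q_A = Q_X ⊗ Q_B ⊗ Q_X* with Q_X a unit quaternion, then Q_A ⊗ Q_X = Q_X ⊗ Q_B, which is the rotation part of AX = XB. A minimal sketch (the axes and angles are arbitrary illustrative values):

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):
    """Quaternion conjugate (inverse, for unit quaternions)."""
    return q * np.array([1.0, -1.0, -1.0, -1.0])

def axis_angle_q(axis, ang):
    """Unit quaternion for a rotation of `ang` about `axis`."""
    axis = np.asarray(axis, float)
    axis /= np.linalg.norm(axis)
    return np.concatenate([[np.cos(ang / 2)], np.sin(ang / 2) * axis])

# Arbitrary hand-eye rotation Q_X and sensor-frame motion Q_B:
q_X = axis_angle_q([1, 2, 3], 0.7)
q_B = axis_angle_q([0, 1, -1], 1.1)
q_A = qmul(qmul(q_X, q_B), qconj(q_X))  # conjugation form
lhs = qmul(q_A, q_X)                    # should equal q_X * q_B
rhs = qmul(q_X, q_B)
```

The conjugation form is exactly the rotation relation that absolute-orientation solvers handle, which is why the closed-form solution carries over.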
Abstract
Describes a rugged, high-speed structured-laser-light triangulation vision sensor with a real-time image processor, along with some actual applications. High speed is essential for low dynamic error at laser-welding speed (10 m/min). The square butt joint, frequently used in welding, requires high sensor resolution and a complementary detection technique based on reflected light intensity. Some welding techniques based on joint-geometry measurements are shown, illustrating the capabilities of the vision-based sensor.