Search results

1 – 10 of 756
Article
Publication date: 20 October 2014

Haitao Yang, Minghe Jin, Zongwu Xie, Kui Sun and Hong Liu

Abstract

Purpose

The purpose of this paper is to develop a ground verification and test method for a space robot system capturing a target satellite based on visual servoing with time delay in three-dimensional space, before the space robot is launched.

Design/methodology/approach

To implement the approaching and capturing task, a motion planning method for visual servoing of the space manipulator to capture a moving target is presented. It mainly addresses the time-delay problem of the visual servoing control system and the motion uncertainty of the target satellite. To verify and test the feasibility and reliability of the method in three-dimensional (3D) operating space, a ground hardware-in-the-loop simulation verification system is developed, which adopts end-tip kinematics equivalence and a dynamics simulation method.
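The delay-compensation idea summarized above can be illustrated with a minimal sketch: extrapolate the target's motion over the known loop delay, then servo toward the predicted pose. This is not the authors' planner; the constant-velocity model, function names and gain are all illustrative assumptions.

```python
# Hypothetical sketch of time-delay compensation in visual servoing:
# predict the target pose ahead by the known delay, then take a
# proportional step toward the prediction. All names and the simple
# constant-velocity model are assumptions, not the paper's method.

def predict_target(pose, velocity, delay):
    """Extrapolate each pose coordinate by velocity * delay."""
    return [p + v * delay for p, v in zip(pose, velocity)]

def servo_step(ee_pose, target_pose, target_vel, delay, gain=0.5):
    """One proportional velocity step toward the delay-compensated target."""
    predicted = predict_target(target_pose, target_vel, delay)
    return [gain * (t - e) for e, t in zip(ee_pose, predicted)]
```

With delay set to zero this degenerates to plain proportional visual servoing; the prediction term is what absorbs the latency of the vision loop.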

Findings

The results of the ground hardware-in-the-loop simulation experiment validate the reliability of the eye-in-hand visual system in the 3D operating space and prove the validity of the visual servoing motion planning method with time-delay compensation. In addition, because a dynamics simulator of the space robot is included in the ground hardware-in-the-loop verification system, base disturbance can be considered during the approaching and capturing procedure, which makes the ground verification system realistic and credible.

Originality/value

The ground verification experiment system includes the real controller of the space manipulator, the eye-in-hand camera and the dynamics simulator, so it can faithfully simulate the visual-servoing-based capturing process in space while accounting for the effects of time delay and free-floating base disturbance.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 July 2024

Yuze Wu, Jianbin Liao, Liangyu Liu, Yu Yan, Yunfei Ai, Yunxiang Li and Wang Wei

Abstract

Purpose

This paper aims to address the challenges of the capacitor tower maintenance robot during bolt tightening in high-voltage substations, including difficulties in bolt positioning due to tilted angles and anti-bird cover occlusion and issues with fast and accurate docking of bolts while the base is moving.

Design/methodology/approach

This paper proposes a visual servoing method for the capacitor tower maintenance robot, including bolt pose estimation and visual servoing control. Bolt pose estimation comprises four components: constructing a keypoint detection network to identify the approximate position, precise positioning, rapid prediction and calculation of bolt pose. For visual servoing, an improved position-based visual servoing (PBVS) scheme is proposed, which eliminates steady-state error and enhances response speed during dynamic tracking by incorporating integral and differential components.
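The augmentation described above, adding integral and differential terms to a PBVS law, can be sketched as a per-axis PID-style controller. This is a hedged illustration under assumed scalar-per-axis form and gains, not the paper's exact controller.

```python
# Illustrative sketch (not the paper's exact law): a PBVS velocity
# command per pose-error axis, augmented with integral and derivative
# terms to remove steady-state error while tracking a moving target.
# Gain values and the discrete-time form are assumptions.

class ImprovedPBVS:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Velocity command for one pose-error axis."""
        self.integral += error * self.dt               # integral term
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

With ki = kd = 0 this reduces to classic proportional PBVS; the integral term is what removes the lag a constantly moving base would otherwise induce.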

Findings

The bolt detection method exhibits high robustness against varying lighting conditions, partial occlusions, shooting distances and angles. The maximum positioning error at a distance of 250 mm is 2.8 mm. The convergence speed of the improved PBVS is 10% higher than that of the traditional PBVS when the base and target remain relatively stationary. When the base moves at a constant speed, the improved method eliminates steady-state error in dynamic tracking. When the base moves rapidly and intermittently, the maximum error of the improved method in the tracking process is 30% smaller than that of traditional PBVS.

Originality/value

This method enables real-time detection and positioning of bolts in an unstructured environment with tilt angles, variable lighting conditions and occlusion by anti-bird covers. An improved PBVS is proposed to enhance its capability in tracking dynamic targets.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 6 February 2023

Changle Li, Chong Yao, Shuo Xu, Leifeng Zhang, Yilun Fan and Jie Zhao

Abstract

Purpose

With the rapid development of the 3C industry, the problem of automating operations on 3C wires is becoming increasingly prominent. However, 3C wire is highly flexible, and its deformation is difficult to model and control. How to automate the handling of flexible wires in 3C products remains an important issue that restricts the development of the 3C industry. Therefore, this paper designs a system that aims to improve the automation level of the 3C industry.

Design/methodology/approach

This paper designs a visual servo control system. Based on perception of the flexible wire, a Jacobian matrix is used to relate the deformation of the wire to the motion of the robot end; by building and optimizing this Jacobian matrix, the robot can control the flexible wire.
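The Jacobian relation described above can be sketched in miniature: a small matrix maps end-effector motion to wire-feature deformation, and inverting it gives the motion that produces a desired deformation. The 2x2 dimensions and matrix values below are made up for illustration; the real deformation Jacobian would be estimated from vision data.

```python
# Minimal sketch, under assumed 2-D features: solve J @ u = delta for
# the end-effector motion u that drives the wire toward a desired shape.

def solve_2x2(J, d):
    """Solve J @ u = d (2x2 case, assuming J is invertible)."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [( J[1][1] * d[0] - J[0][1] * d[1]) / det,
            (-J[1][0] * d[0] + J[0][0] * d[1]) / det]

def wire_control_step(J, current_shape, desired_shape, gain=0.5):
    """End-effector motion that reduces the wire-shape error."""
    delta = [gain * (d - c) for c, d in zip(current_shape, desired_shape)]
    return solve_2x2(J, delta)
```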

Findings

By using the visual servo control system, the shape and deformation of the flexible wire are perceived, and on this basis the robot can control the deformation of the flexible wire well. An experimental environment was built to evaluate the accuracy and stability of the system in controlling the deformation of the flexible wire.

Originality/value

An image-based visual servo system is proposed to operate the flexible wire, including the vision system, visual controller and joint velocity controller. It is a scheme suitable for flexible wire operation and can help automate flexible-wire-related industries. Its core is to correlate the motion of the robot end with the deformation of the flexible wire through the Jacobian matrix.

Details

Robotic Intelligence and Automation, vol. 43 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 1 November 2006

Raul Wirz, Raul Marin and Pedro J. Sanz

Abstract

Purpose

The authors of this paper aim to describe the design of distributed architectures for the remote control of multirobot systems. A very good example of remote robot programming for validating these architectures is remote visual servoing control, which uses sequences of camera inputs to bring the robots to the desired position in an iterative way. In this paper, the authors enabled students and scientists at their university to experiment with their remote visual servoing algorithms in a remote real environment instead of using simulation tools.

Design/methodology/approach

Since 2001, the authors have been using the UJI‐TeleLab as a tool to allow students and scientists to program remotely several vision‐based network robots. During this period it has been learnt that multithreaded remote programming combined with a distributed multirobot architecture, as well as advanced multimedia user interfaces, is very convenient, flexible and profitable for the design of a Tele-Laboratory. The distributed system architecture permits any external algorithm to have access to almost every feature of several network robots.

Findings

Presents the multirobot system architecture and its performance by programming two closed-loop experiments using the Internet as the communication medium between the user algorithm and the remote robots (i.e. remote visual servoing). These experiments show which Internet latency and bandwidth conditions are appropriate for the visual servoing loop. Note that the real images are taken from the remote robot scenario while the experiment algorithm is executed on the client side at the user's location. Moreover, the distributed multirobot architecture is validated by performing a multirobot programming example using two manipulators and a mobile robot.

Research limitations/implications

Future work will pursue the development of more sophisticated visual servoing loops using external cameras, pan/tilt units and stereo cameras. Indeed, stereo camera control introduces an interesting difficulty related to synchronization during the loop, which creates the need for camera monitoring based on the Real Time Streaming Protocol (RTSP). Using camera servers that support RTSP (e.g. Helix Producer) means sending the differences between frames instead of the whole frame for every iteration.

Practical implications

The distributed multirobot architecture has been validated since 2003 within the education and training scenario. Students and researchers are able to use the system as a tool to rapidly implement complex algorithms in a simple manner. The distributed multirobot architecture is also being applied in the industrial robotics area to remotely program two synchronized robots.

Originality/value

This paper is an original contribution to the network robots field, since it presents a generic architecture to remotely program a set of heterogeneous robots. The concept of the network robot recently came up at the "Network Robots" workshop within the IEEE ICRA 2005 World Congress.

Details

Industrial Robot: An International Journal, vol. 33 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 3 June 2019

Hua Liu, Weidong Zhu, Huiyue Dong and Yinglin Ke

Abstract

Purpose

To gain accurate support for large aircraft structures by ball joints in aircraft digital assembly, this paper aims to propose a novel approach based on visual servoing such that the positioner’s ball-socket can automatically and adaptively approach the ball-head fixed on the aircraft structures.

Design/methodology/approach

Image moments of a circular marker labeled on the ball-head are selected as visual features to control the three translational degrees of freedom (DOFs) of the positioner, where the composite Jacobian matrix is full rank. A Kalman–Bucy filter is adopted for its online estimation, which makes the control scheme more flexible without system calibration. A combination of proportional control and sliding mode control is proposed to improve system stability and compensate for uncertainties in the system.
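The combined control idea above, a proportional term plus a sliding-mode correction, can be sketched per axis. The saturated sign function, gains and boundary-layer width below are illustrative assumptions (a common way to limit sliding-mode chattering), not the paper's exact design.

```python
# Hedged sketch: per-axis velocity command mixing proportional control
# with a sliding-mode term. sat() is a saturated sign function used
# here (an assumption) to soften chattering near zero error.

def sat(x, width=0.1):
    """Saturated sign: linear inside |x| < width, clipped to +/-1 outside."""
    return max(-1.0, min(1.0, x / width))

def control(error, kp=0.8, ks=0.05):
    """Velocity command for each translational DOF."""
    return [kp * e + ks * sat(e) for e in error]
```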

Findings

The ball-socket can accurately and smoothly reach its desired position in a finite time (50 s). Positional deviations between the spherical centers of ball-head and ball-socket in the X-Y plane can be controlled within 0.05 mm which meets the design requirement.

Practical implications

The proposed approach has been integrated into the pose alignment system. It has shown great potential to be widely applied in the leading support for large aircraft structures in aircraft digital assembly.

Originality/value

An adaptive approach for accurate support of large aircraft structures is proposed, which possesses characteristics of high precision, high efficiency and excellent stability.

Details

Assembly Automation, vol. 39 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 29 July 2020

Megha G. Krishnan, Abhilash T. Vijayan and Ashok S.

Abstract

Purpose

Real-time implementation of sophisticated algorithms on robotic systems demands a rewarding interface between hardware and software components. Individual robot manufacturers have dedicated controllers and languages; however, operating a robot would require either knowledge of additional software or expensive add-on installations for effective communication between the robot controller and the computation software. This paper aims to present a novel method of interfacing commercial robot controllers with a widely used simulation platform, MATLAB, in real time, with a demonstration of a visual predictive controller.

Design/methodology/approach

A remote personal computer (PC), running MATLAB, is connected with the IRC5 controller of an ABB robotic arm through the File Transfer Protocol (FTP). FTP server on the IRC5 responds to a request from an FTP client (MATLAB) on a remote computer. MATLAB provides the basic platform for programming and control algorithm development. The controlled output is transferred to the robot controller through Ethernet port as files and, thereby, the proposed scheme ensures connection and control of the robot using the control algorithms developed by the researchers without the additional cost of buying add-on packages or mastering vendor-specific programming languages.
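The file-based exchange described above can be sketched with standard FTP client calls: the control algorithm serializes its output to a file and uploads it to the FTP server running on the controller. The file name, CSV format and credentials below are placeholders, not a documented IRC5 protocol, and Python's `ftplib` stands in for the MATLAB side purely for illustration.

```python
# Illustrative sketch (assumed file name and CSV format) of pushing a
# computed command to a robot controller's FTP server. The controller-
# side details are placeholders, not ABB-documented behavior.
import io
from ftplib import FTP

def serialize_command(joint_targets):
    """Encode one joint target set as a CSV line, as a file-based protocol might."""
    return ",".join(f"{q:.4f}" for q in joint_targets) + "\n"

def send_command(host, user, password, joint_targets, remote_name="cmd.txt"):
    """Upload the command file to the controller's FTP server."""
    payload = io.BytesIO(serialize_command(joint_targets).encode())
    with FTP(host) as ftp:                      # FTP server on the controller
        ftp.login(user, password)
        ftp.storbinary(f"STOR {remote_name}", payload)
```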

Findings

New control strategies and contrivances can be developed with numerous conditions and constraints in simulation platforms. When the results are to be implemented in real-time systems, the proposed method helps to establish a simple, fast and cost-effective communication with commercial robot controllers for validating the real-time performance of the developed control algorithm.

Practical implications

The proposed method is used for real-time implementation of visual servo control with predictive controller, for accurate pick-and-place application with different initial conditions. The same strategy has been proven effective in supervisory control using two cameras and artificial neural network-based visual control of robotic manipulators.

Originality/value

This paper elaborates a real-time example using visual servoing for researchers working with industrial robots, enabling them to understand and explore the possibilities of robot communication.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 November 2019

Megha G. Krishnan, Abhilash T. Vijayan and Ashok Sankar

Abstract

Purpose

This paper aims to improve the performance of a two-camera robotic feedback system designed for automatic pick and place application by modifying its velocity profile during switching of control.

Design/methodology/approach

Cooperation of global and local vision sensors ensures visibility of the target for a two-camera robotic system. The master camera, monitoring the workspace, guides the robot such that image-based visual servoing (IBVS) by the eye-in-hand camera transcends its inherent shortcomings. A hybrid control law steers the robot until the system switches to IBVS in a region proven for its asymptotic stability and convergence through a qualitative overview of the scheme. Complementary gain factors ensure a smooth velocity transition during switching, considering the versatility and range of the workspace.
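The complementary-gain switching described above can be sketched as blending the two controllers' velocity commands over a short transition window, so the end-effector velocity stays continuous at the handover. The linear ramp and window length are assumptions; the paper's exact gain schedule may differ.

```python
# Hedged sketch: complementary gains lam(t) and 1 - lam(t) blend the
# master-camera command and the eye-in-hand IBVS command during the
# switch, avoiding a velocity jump. Linear ramp is an assumption.

def blend_gain(t, t_switch, window):
    """Ramp from 0 to 1 over [t_switch, t_switch + window]."""
    return max(0.0, min(1.0, (t - t_switch) / window))

def blended_velocity(v_master, v_ibvs, t, t_switch, window=1.0):
    """Velocity command during the controller handover."""
    lam = blend_gain(t, t_switch, window)
    return [(1 - lam) * a + lam * b for a, b in zip(v_master, v_ibvs)]
```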

Findings

The proposed strategy is verified through simulation studies and implemented on a 6-DOF industrial robot, the ABB IRB 1200, to validate the practicality of the adaptive gain approach while switching in a hybrid visual feedback system. This approach can be extended to any control problem with uneven switching surfaces or coarse/fine controllers subjected to discrete-time events.

Practical implications

In complex workspaces where robots operate in parallel with other robots or humans and share workspaces, the supervisory control scheme ensures convergence. This study proves that hybrid control laws are more effective than conventional approaches in unstructured environments and that visibility constraints can be overcome by integrating multiple vision sensors.

Originality/value

The supervisory control is designed to combine the visual feedback data from eye-in-hand and eye-to-hand sensors. A gain adaptive approach smoothens the velocity characteristics of the end-effector while switching the control from master camera to the end-effector camera.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 March 2019

Yu Qiu, Baoquan Li, Wuxi Shi and Yimei Chen

Abstract

Purpose

The purpose of this paper is to present a visual servo tracking strategy for the wheeled mobile robot, where the unknown feature depth information can be identified simultaneously in the visual servoing process.

Design/methodology/approach

By using reference, desired and current images, system errors are constructed from measurable signals obtained by decomposing Euclidean homographies. Subsequently, by taking advantage of the concurrent learning framework, both historical and current system data are used to construct an adaptive update mechanism for recovering the unknown feature depth. Then, a kinematic controller is designed for the mobile robot to achieve the visual servo trajectory tracking task. Lyapunov techniques and LaSalle's invariance principle are used to prove that the system errors and the depth estimation error converge to zero synchronously.
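The concurrent-learning idea above, correcting the depth estimate with both the current measurement and a memory of recorded past data, can be sketched for a scalar parameter. The model y = theta * phi, the gains and the gradient form are simplifying assumptions, not the paper's exact update law.

```python
# Rough sketch of a concurrent-learning update: one gradient step on
# the current residual plus replayed residuals from recorded history.
# The scalar model and gain values are illustrative assumptions.

def concurrent_learning_step(theta, current, history, g_now=0.5, g_hist=0.1):
    """Update the parameter estimate theta using current and stored data."""
    phi, y = current
    update = g_now * phi * (y - theta * phi)
    for phi_k, y_k in history:          # replayed recorded (phi, y) pairs
        update += g_hist * phi_k * (y_k - theta * phi_k)
    return theta + update
```

The replayed history is what lets the estimate converge without requiring persistent excitation of the current signal, which is the usual appeal of concurrent learning.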

Findings

The concurrent learning-based visual servo tracking and identification technology is found to be reliable, accurate and efficient with both simulation and comparative experimental results. Both trajectory tracking and depth estimation errors converge to zero successfully.

Originality/value

On the basis of the concurrent learning framework, an adaptive control strategy is developed for the mobile robot to successfully identify the unknown scene depth while accomplishing the visual servo trajectory tracking task.

Details

Assembly Automation, vol. 39 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 8 March 2011

Umer Khan, Ibrar Jan, Naeem Iqbal and Jian Dai

Abstract

Purpose

The purpose of this paper is to present the control of a six-degrees-of-freedom (PUMA560) robotic arm using visual servoing based on linear matrix inequalities (LMIs). The aim lies in developing a method that involves neither camera calibration parameters nor inverse kinematics. The approach adopted in this paper uses transpose Jacobian control; thus, the inverse of the Jacobian matrix is no longer required. By invoking Lyapunov's direct method, closed-loop stability of the system is ensured. Simulation results are shown for three different cases, which exhibit system stability and convergence even in the presence of large errors.
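The transpose Jacobian idea above can be sketched on a planar two-link arm: joint commands are computed as J(q)^T times the task-space error, so no matrix inversion (and no inverse kinematics) is needed. The two-link arm and link lengths are an illustrative stand-in for the PUMA560, not the paper's setup.

```python
# Minimal sketch of transpose-Jacobian control on a planar 2-link arm
# (a stand-in for PUMA560): u = gain * J(q)^T * x_err. No inverse of
# J is computed anywhere, which is the point of the scheme.
import math

def jacobian_2link(q1, q2, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar two-link arm."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def transpose_jacobian_step(q, x_err, gain=0.1):
    """Joint-space update: gain * J^T * task-space error."""
    J = jacobian_2link(*q)
    return [gain * (J[0][i] * x_err[0] + J[1][i] * x_err[1]) for i in range(2)]
```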

Design/methodology/approach

The paper presents LMI‐based visual servo control of PUMA560 robotic arm.

Findings

The proposed method can be implemented in dynamic environments owing to its independence from camera and object models.

Research limitations/implications

Visibility constraints are not included during servoing; this may cause features to leave the camera's field of view (FOV).

Originality/value

LMI optimization is employed for visual servo control in an uncalibrated environment. Lyapunov's direct method is utilized which ensures system stability and convergence.

Details

Industrial Robot: An International Journal, vol. 38 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 31 July 2009

Heping Chen, George Zhang, William Eakins and Thomas Fuhlbrigge

Abstract

Purpose

The purpose of this paper is to develop an intelligent robot assembly system for the moving production line. Moving production lines are widely used in many manufacturing factories, including the automotive and general industries. Industrial robots are hardly used to perform any tasks on moving production lines, mainly because it is difficult for conventional industrial robots to adjust to any sort of change. Therefore, more intelligent industrial robotic systems have to be developed to accommodate the random motion of moving production lines. This paper presents an intelligent robotic system that performs an assembly process while the object is moving, using a synergic combination of visual servoing and force control technology.

Design/methodology/approach

The developed intelligent robotic system includes rules to ensure the success of the assembly processes. Visual servoing and force control are used to deal with the random motion of the moving objects. Since objects on the moving production line move at random speed, visual servoing is adopted to track the motion of the moving object. Force control is also integrated to control the motion of the robot and keep the robotic system compliant with the moving objects, to avoid damaging the whole system.
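The synergy described above can be sketched as a simple superposition: a visual term tracks the moving object while a compliance term backs the robot off when contact force exceeds a reference. The additive structure, gains and force reference are assumptions for illustration, not the authors' controller.

```python
# Illustrative sketch of combining visual servoing with force control:
# velocity = visual tracking term + force compliance term. The simple
# superposition and all gain values are assumptions.

def combined_step(visual_error, force_measured, force_ref=5.0,
                  k_v=0.5, k_f=0.02):
    """One-axis velocity command during compliant assembly on a mover."""
    v_vision = k_v * visual_error                   # track the moving object
    v_force = -k_f * (force_measured - force_ref)   # retreat if force too high
    return v_vision + v_force
```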

Findings

The developed intelligent robotic technology has been successfully implemented, with the wheel-loading process used as an example.

Research limitations/implications

Since the developed technology is based on low-level motion control, safety has to be considered. Currently, this is done by motion supervision.

Practical implications

The developed technology can be used to perform assemblies on moving production lines. Since the developed platform is based on the synergic combination of visual servoing and force control technology, it can also be used in other areas, such as seam tracking and seat loading.

Originality/value

This paper provides a practical solution of performing assemblies on the moving production lines, which is not available on the current industrial robot market.

Details

Assembly Automation, vol. 29 no. 3
Type: Research Article
ISSN: 0144-5154