Search results

1 – 10 of 756
Article
Publication date: 20 October 2014

Haitao Yang, Minghe Jin, Zongwu Xie, Kui Sun and Hong Liu

Abstract

Purpose

The purpose of this paper is to develop a ground verification and test method for a space robot system that captures a target satellite using visual servoing with time delay, operating in three-dimensional space, before the space robot is launched.

Design/methodology/approach

To accomplish the approach-and-capture task, a motion planning method for visual servoing of the space manipulator to capture a moving target is presented. This mainly addresses the time-delay problem of the visual servoing control system and the motion uncertainty of the target satellite. To verify and test the feasibility and reliability of the method in three-dimensional (3D) operating space, a ground hardware-in-the-loop simulation verification system is developed, which adopts end-tip kinematics equivalence and a dynamics simulation method.

Findings

The results of the ground hardware-in-the-loop simulation experiment validate the reliability of the eye-in-hand visual system in the 3D operating space and prove the validity of the visual servoing motion planning method with time-delay compensation. At the same time, because a dynamics simulator of the space robot is included in the ground hardware-in-the-loop verification system, the base disturbance can be considered during the approach-and-capture procedure, which makes the ground verification system realistic and credible.

Originality/value

The ground verification experiment system includes the real controller of the space manipulator, the eye-in-hand camera and the dynamics simulator, and can therefore faithfully simulate the visual-servoing-based capturing process in space while accounting for the effects of time delay and the free-floating base disturbance.

Details

Industrial Robot: An International Journal, vol. 41 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 November 2006

Raul Wirz, Raul Marin and Pedro J. Sanz

Abstract

Purpose

This paper aims to describe the design of distributed architectures for the remote control of multirobot systems. Remote visual servoing control, which iteratively uses sequences of camera inputs to bring the robots to the desired position, is an excellent example of remote robot programming for validating these architectures. The work enabled students and scientists at the authors' university to experiment with their remote visual servoing algorithms in a real remote environment instead of relying on simulation tools.

Design/methodology/approach

Since 2001, the authors have been using the UJI-TeleLab as a tool that allows students and scientists to remotely program several vision-based network robots. During this period they have learned that multithread remote programming combined with a distributed multirobot architecture, together with advanced multimedia user interfaces, is very convenient, flexible and profitable for the design of a Tele-Laboratory. The distributed system architecture permits any external algorithm to access almost every feature of several network robots.

Findings

The paper presents the multirobot system architecture and evaluates its performance by programming two closed-loop experiments using the Internet as the communication medium between the user algorithm and the remote robots (i.e. remote visual servoing). The experiments show which Internet latency and bandwidth conditions are appropriate for the visual servoing loop. Note that the real images are taken from the remote robot scenario while the experiment algorithm executes on the client side at the user's location. Moreover, the distributed multirobot architecture is validated by a multirobot programming example using two manipulators and a mobile robot.

Research limitations/implications

Future work will pursue the development of more sophisticated visual servoing loops using external cameras, pan/tilt units and stereo cameras. Stereo camera control introduces an interesting difficulty related to synchronization during the loop, which calls for Real Time Streaming Protocol (RTSP)-based camera monitoring. Camera servers that support RTSP (e.g. Helix Producer) send the differences between frames instead of the whole frame information for every iteration.

Practical implications

The distributed multirobot architecture has been validated since 2003 within the education and training scenario. Students and researchers are able to use the system as a tool to rapidly implement complex algorithms in a simple manner. The distributed multirobot architecture is also being applied within the industrial robotics area to remotely program two synchronized robots.

Originality/value

This paper is an original contribution to the network robots field, since it presents a generic architecture for remotely programming a set of heterogeneous robots. The concept of the network robot recently emerged at the “Network Robots” workshop within the IEEE ICRA 2005 World Congress.

Details

Industrial Robot: An International Journal, vol. 33 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 March 2019

Yu Qiu, Baoquan Li, Wuxi Shi and Yimei Chen

Abstract

Purpose

The purpose of this paper is to present a visual servo tracking strategy for the wheeled mobile robot, where the unknown feature depth information can be identified simultaneously in the visual servoing process.

Design/methodology/approach

By using reference, desired and current images, system errors are constructed from measurable signals obtained by decomposing Euclidean homographies. Subsequently, by taking advantage of the concurrent learning framework, both historical and current system data are used to construct an adaptive updating mechanism for recovering the unknown feature depth. A kinematic controller is then designed for the mobile robot to achieve the visual servo trajectory tracking task. Lyapunov techniques and LaSalle's invariance principle are used to prove that the system errors and the depth estimation error converge to zero synchronously.
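
The adaptive updating mechanism can be illustrated with a minimal numerical sketch of the concurrent learning idea: a scalar unknown (standing in for the inverse feature depth) is driven by both the current regressor sample and a stack of recorded past samples. The names, gains and data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def concurrent_learning_update(theta_hat, phi_now, y_now, history, gain=0.1):
    """One Euler step of a concurrent-learning estimator for a scalar
    parameter theta: the update combines the instantaneous prediction
    error with errors replayed from a stored stack of (phi, y) samples,
    so the estimate converges without persistent excitation."""
    grad = phi_now * (y_now - phi_now * theta_hat)      # current data
    for phi_j, y_j in history:                          # recorded data
        grad += phi_j * (y_j - phi_j * theta_hat)
    return theta_hat + gain * grad

# Toy run: true parameter 0.25, noise-free regressor samples.
true_theta = 0.25
history = [(p, p * true_theta) for p in (0.8, 1.2, 1.5)]
theta = 0.0
for _ in range(200):
    theta = concurrent_learning_update(theta, 1.0, true_theta, history)
```

Because the recorded stack keeps the summed regressor energy positive, the estimation error contracts geometrically at every step even when the current sample alone is uninformative.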

Findings

The concurrent learning-based visual servo tracking and identification technology is found to be reliable, accurate and efficient with both simulation and comparative experimental results. Both trajectory tracking and depth estimation errors converge to zero successfully.

Originality/value

On the basis of the concurrent learning framework, an adaptive control strategy is developed for the mobile robot to successfully identify the unknown scene depth while accomplishing the visual servo trajectory tracking task.

Details

Assembly Automation, vol. 39 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 3 June 2019

Hua Liu, Weidong Zhu, Huiyue Dong and Yinglin Ke

Abstract

Purpose

To achieve accurate support of large aircraft structures by ball joints in aircraft digital assembly, this paper aims to propose a novel approach based on visual servoing such that the positioner's ball-socket can automatically and adaptively approach the ball-head fixed on the aircraft structure.

Design/methodology/approach

Image moments of a circular marker labeled on the ball-head are selected as visual features to control the three translational degrees of freedom (DOFs) of the positioner, for which the composite Jacobian matrix is full rank. A Kalman–Bucy filter is adopted for online estimation of this Jacobian, which makes the control scheme more flexible by removing the need for system calibration. A combination of proportional control and sliding mode control is proposed to improve system stability and compensate for system uncertainties.
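
The combined control law can be sketched as follows, assuming the image-moment Jacobian has already been estimated (the Kalman–Bucy filtering step is not reproduced); the gains, the boundary-layer tanh smoothing and the toy Jacobian are all illustrative assumptions:

```python
import numpy as np

def ps_smc_velocity(error, jacobian, kp=0.8, ks=0.05, eps=0.01):
    """Illustrative blend of a proportional term and a sliding-mode term
    for a 3-DOF translational visual servo on image-moment features.
    tanh(error/eps) is a boundary-layer approximation of sign() that
    limits chattering while still rejecting bounded uncertainty."""
    j_pinv = np.linalg.pinv(jacobian)
    return -j_pinv @ (kp * error + ks * np.tanh(error / eps))

# Toy closed loop with a constant (already-estimated) diagonal Jacobian.
J = np.diag([1.2, 0.9, 1.1])      # feature sensitivity to X, Y, Z motion
e = np.array([0.4, -0.3, 0.2])    # initial image-moment feature error
dt = 0.05
for _ in range(400):
    e = e + dt * J @ ps_smc_velocity(e, J)
```

The proportional term dominates far from the target, while the sliding-mode term keeps the error decreasing near zero despite small model mismatch.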

Findings

The ball-socket can accurately and smoothly reach its desired position in a finite time (50 s). Positional deviations between the spherical centers of ball-head and ball-socket in the X-Y plane can be controlled within 0.05 mm which meets the design requirement.

Practical implications

The proposed approach has been integrated into the pose alignment system. It has shown great potential to be widely applied in the leading support for large aircraft structures in aircraft digital assembly.

Originality/value

An adaptive approach for accurate support of large aircraft structures is proposed, which possesses characteristics of high precision, high efficiency and excellent stability.

Details

Assembly Automation, vol. 39 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 29 July 2020

Megha G. Krishnan, Abhilash T. Vijayan and Ashok S.

Abstract

Purpose

Real-time implementation of sophisticated algorithms on robotic systems demands a rewarding interface between hardware and software components. Individual robot manufacturers have dedicated controllers and languages. However, robot operation would require either knowledge of additional software or expensive add-on installations for effective communication between the robot controller and the computation software. This paper aims to present a novel method of interfacing commercial robot controllers in real time with MATLAB, one of the most widely used computation platforms, with a demonstration of a visual predictive controller.

Design/methodology/approach

A remote personal computer (PC) running MATLAB is connected with the IRC5 controller of an ABB robotic arm through the File Transfer Protocol (FTP). The FTP server on the IRC5 responds to requests from an FTP client (MATLAB) on the remote computer. MATLAB provides the basic platform for programming and control algorithm development. The controller output is transferred to the robot controller as files through the Ethernet port; the proposed scheme thereby ensures connection and control of the robot using control algorithms developed by researchers, without the additional cost of buying add-on packages or mastering vendor-specific programming languages.
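
A rough Python analogue of this file-based command path, using the standard-library ftplib client; the remote path, credentials and CSV payload format are hypothetical, since the actual layout depends on the RAPID task that polls the file on the IRC5:

```python
import io
from ftplib import FTP

def encode_setpoint(joint_values):
    """Serialize one joint-space setpoint as a single CSV line that a
    hypothetical routine on the controller could parse."""
    return ",".join(f"{v:.4f}" for v in joint_values) + "\n"

def send_setpoint(host, user, password, joint_values,
                  remote_path="HOME/setpoint.txt"):
    """Upload the controller output as a file to the robot's FTP server.
    Because the controller exposes its file system over plain FTP, no
    vendor add-on is needed on the PC side."""
    payload = encode_setpoint(joint_values).encode()
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        ftp.storbinary(f"STOR {remote_path}", io.BytesIO(payload))
```

In a real loop, the computation side would write a setpoint file each control period and read back a result file written by the robot task.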

Findings

New control strategies and contrivances can be developed with numerous conditions and constraints in simulation platforms. When the results are to be implemented in real-time systems, the proposed method helps to establish a simple, fast and cost-effective communication with commercial robot controllers for validating the real-time performance of the developed control algorithm.

Practical implications

The proposed method is used for real-time implementation of visual servo control with predictive controller, for accurate pick-and-place application with different initial conditions. The same strategy has been proven effective in supervisory control using two cameras and artificial neural network-based visual control of robotic manipulators.

Originality/value

This paper elaborates a real-time example using visual servoing for researchers working with industrial robots, enabling them to understand and explore the possibilities of robot communication.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 March 2011

Umer Khan, Ibrar Jan, Naeem Iqbal and Jian Dai

Abstract

Purpose

The purpose of this paper is to present the control of a six-degrees-of-freedom robotic arm (PUMA 560) using visual servoing based upon linear matrix inequalities (LMIs). The aim is to develop a method that involves neither camera calibration parameters nor inverse kinematics. The approach adopted in this paper uses transpose Jacobian control; thus, the inverse of the Jacobian matrix is no longer required. By invoking Lyapunov's direct method, closed-loop stability of the system is ensured. Simulation results are shown for three different cases, which exhibit system stability and convergence even in the presence of large errors.
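
A minimal sketch of the transpose-Jacobian idea, commanding the joints with the Jacobian transpose applied to the feature error so that no matrix inversion is needed; the constant identity Jacobian and the gains are illustrative assumptions, not the paper's LMI-tuned design:

```python
import numpy as np

def transpose_jacobian_step(q, error, jacobian, gain=0.5, dt=0.05):
    """One control step: the joint update is J^T applied to the negated
    feature error, so no Jacobian inversion is involved and singular
    configurations cannot blow up the command."""
    return q - dt * gain * jacobian.T @ error

# Toy servo: identity Jacobian, feature error taken as q - q_des.
q_des = np.array([0.3, -0.2])
q = np.zeros(2)
for _ in range(400):
    q = transpose_jacobian_step(q, q - q_des, np.eye(2))
```

With a Lyapunov function V = e^T e / 2, the transpose law gives V̇ = -gain · e^T J J^T e ≤ 0, which is the standard stability argument for this family of controllers.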

Design/methodology/approach

The paper presents LMI‐based visual servo control of PUMA560 robotic arm.

Findings

The proposed method is implementable in dynamic environments because it is independent of the camera and object models.

Research limitations/implications

A visibility constraint is not included during servoing; this may cause features to leave the camera's field of view (FOV).

Originality/value

LMI optimization is employed for visual servo control in an uncalibrated environment. Lyapunov's direct method is utilized which ensures system stability and convergence.

Details

Industrial Robot: An International Journal, vol. 38 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 May 2017

Zhenyu Li, Bin Wang, Haitao Yang and Hong Liu

Abstract

Purpose

Rapid satellite capture by a free-floating space robot is a challenging problem because of the lack of a fixed base and because of time-delay issues. This paper aims to present a modified target-capturing control scheme for improving the control performance.

Design/methodology/approach

To handle such a control problem involving time delay, the modified scheme adds a delay calibration algorithm to the visual servoing loop. To identify end-effector motions in real time, a motion predictor is developed by partly linearizing the space robot kinematics equation. In this way, only ground-fixed robot kinematics are involved in the prediction computation, excluding the complex space robot kinematics calculations. With the newly developed predictor, a delay compensator is designed that takes error control into account. For determining the compensation parameters, the asymptotic stability condition of the proposed compensation algorithm is also presented.
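
The predictor idea can be caricatured in a few lines: the delayed pose measurement is rolled forward by replaying the commanded velocities issued during the delay window. This toy version uses fixed-base kinematics only, echoing the paper's motivation, and omits the compensation error control term:

```python
from collections import deque
import numpy as np

class DelayCompensator:
    """Toy forward predictor: the camera pose measurement arrives
    delay_steps control periods late, so the current pose is
    reconstructed by integrating the last delay_steps commanded
    velocities on top of the delayed measurement."""

    def __init__(self, delay_steps, dt):
        self.buffer = deque(maxlen=delay_steps)  # velocity history
        self.dt = dt

    def record(self, velocity):
        self.buffer.append(np.asarray(velocity, dtype=float))

    def predict(self, delayed_pose):
        pose = np.asarray(delayed_pose, dtype=float)
        for v in self.buffer:
            pose = pose + self.dt * v              # simple Euler rollout
        return pose

# Constant 1 m/s motion along x, 0.1 s period, 5-step camera delay.
comp = DelayCompensator(delay_steps=5, dt=0.1)
for _ in range(5):
    comp.record([1.0, 0.0])
predicted = comp.predict([0.0, 0.0])   # delayed measurement at origin
```

The appeal of this scheme is that the rollout needs only the manipulator's own (ground-fixed) kinematics, not the full free-floating dynamics.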

Findings

The proposed method is validated on a credible three-dimensional ground experimental system, and the experimental results illustrate its effectiveness.

Practical implications

Because the delayed camera signals are compensated with only ground-fixed robot kinematics, this proposed satellite capturing scheme is particularly suitable for commercial on-orbit services with cheaper on-board computers.

Originality/value

This paper is original in attempting to compensate for the time delay by taking both space robot motion prediction and compensation error control into consideration, and it is valuable for rapid and accurate satellite capture tasks.

Details

Industrial Robot: An International Journal, vol. 44 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 June 1999

Mamoru Minami, Julien Agbanhan and Toshiyuki Asakura

Abstract

This paper presents the real-time visual servoing of a manipulator and its strategy for tracking a fish, employing a genetic algorithm (GA) and the unprocessed gray-scale image, termed here the “raw-image”. The raw-image is employed to shorten the control period, since it tolerates contrast variations occurring within an object and between one input image and the next. The GA is employed in a method called 1-step-GA evolution: for every generational step of the GA process, the found result, which expresses the deviation of the target in the camera frame, is output for control purposes. These results are then used to determine the control inputs of the PD-type controller. The proposed GA-based visual servoing has been implemented in a real system, and the results have shown its effectiveness by successfully tracking a moving target fish.
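
The 1-step-GA evolution scheme can be sketched as a loop that runs exactly one GA generation per control period; the fitness callable stands in for the raw-image correlation measure, and the population size, elitism and mutation scale are illustrative assumptions:

```python
import random

def one_step_ga(population, fitness, sigma=2.0, elite=4):
    """One GA generation per control period ('1-step-GA evolution'):
    rank candidate target positions by fitness, keep the elites, refill
    by Gaussian mutation, and output the current best candidate, i.e.
    the target deviation handed to the PD-type controller."""
    ranked = sorted(population, key=fitness, reverse=True)
    best, survivors = ranked[0], ranked[:elite]
    children = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
                for x, y in random.choices(survivors,
                                           k=len(population) - elite)]
    return best, survivors + children

# Toy target at (10, -5); the fitness stands in for image correlation.
def target_fitness(p):
    return -((p[0] - 10) ** 2 + (p[1] + 5) ** 2)

random.seed(0)
pop = [(random.uniform(-20, 20), random.uniform(-20, 20))
       for _ in range(20)]
best = pop[0]
for _ in range(300):
    best, pop = one_step_ga(pop, target_fitness)
```

Because elitism preserves the incumbent best, the output deviation never regresses between control cycles, which is what lets a single generation per period keep pace with a moving target.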

Details

Industrial Robot: An International Journal, vol. 26 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 26 February 2021

Juncheng Zou

Abstract

Purpose

The purpose of this paper is to propose a new video-prediction-based methodology to solve the occlusion problem in manufacturing, which causes loss of input images and uncertain controller parameters in robot visual servo control.

Design/methodology/approach

This paper puts forward a method that can simultaneously generate images and controller parameter increments. It also introduces target segmentation and designs a new comprehensive loss. Finally, it combines offline training to generate images with online training to generate controller parameter increments.

Findings

Data set experiments show that this method outperforms the other four methods and better restores occluded regions of the human body in six manufacturing scenarios. A simulation experiment shows that the method can simultaneously generate image and controller parameter variations to improve tracking position accuracy under occlusion in manufacturing.

Originality/value

The proposed method can effectively solve the occlusion problem in visual servo control.

Details

Assembly Automation, vol. 41 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 13 July 2023

Haolin Fei, Ziwei Wang, Stefano Tedeschi and Andrew Kennedy

Abstract

Purpose

This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.

Design/methodology/approach

The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments included different levels of complexity, including different numbers of distractors, varying lighting conditions and highly varied object geometry.

Findings

The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones.

Originality/value

This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969
