Search results

1 – 10 of 61
Article
Publication date: 6 February 2023

Changle Li, Chong Yao, Shuo Xu, Leifeng Zhang, Yilun Fan and Jie Zhao

With the rapid development of the 3C industry, the problem of automated operation of 3C wire is becoming increasingly prominent. However, the 3C wire has high flexibility, and its…

Abstract

Purpose

With the rapid development of the 3C industry, the problem of automated operation of 3C wire is becoming increasingly prominent. However, 3C wire is highly flexible, and its deformation is difficult to model and control. How to automate the operation of flexible wires in 3C products remains an important issue restricting the development of the 3C industry. Therefore, this paper designs a system that aims to improve the automation level of the 3C industry.

Design/methodology/approach

This paper designs a visual servo control system. Based on perception of the flexible wire, a Jacobian matrix is used to relate the deformation of the wire to the motion of the robot end; by building and optimizing this Jacobian matrix, the robot can control the flexible wire's deformation.
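
The Jacobian-based scheme described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: a square, invertible Jacobian is assumed to relate end-effector velocity to the change of the wire's deformation features, and all names, gains and values are made up.

```python
# Hedged sketch: one step of Jacobian-based visual servoing for a flexible
# wire, assuming a 2-DOF example with a square, invertible Jacobian that
# maps end-effector velocity to the rate of change of the deformation
# features. All names and numbers are illustrative.

def servo_step(jacobian, feature_error, gain=0.5):
    """Return the end-effector velocity v = -gain * J^-1 * e (2-DOF case)."""
    (a, b), (c, d) = jacobian
    det = a * d - b * c
    if abs(det) < 1e-9:
        raise ValueError("Jacobian is singular")
    inv = [[d / det, -b / det], [-c / det, a / det]]
    e1, e2 = feature_error
    return [-gain * (inv[0][0] * e1 + inv[0][1] * e2),
            -gain * (inv[1][0] * e1 + inv[1][1] * e2)]

# Driving the deformation error toward zero over a few iterations:
J = [[1.0, 0.0], [0.0, 2.0]]
e = [0.4, -0.2]
for _ in range(20):
    v = servo_step(J, e)
    # First-order model: the error changes by J * v per unit time step.
    e = [e[0] + J[0][0] * v[0] + J[0][1] * v[1],
         e[1] + J[1][0] * v[0] + J[1][1] * v[1]]
```

Each step multiplies the error by (1 - gain), so the deformation error decays geometrically under this toy model.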

Findings

By using the visual servo control system, the shape and deformation of the flexible wire are perceived, and on this basis the robot can control the deformation of the flexible wire well. An experimental environment was built to evaluate the accuracy and stability of the system in controlling the deformation of the flexible wire.

Originality/value

An image-based visual servo system is proposed to operate the flexible wire, including the vision system, visual controller and joint velocity controller. It is a scheme suitable for flexible wire operation, which can help automate flexible wire-related industries. Its core is to correlate the motion of the robot end with the deformation of the flexible wire through the Jacobian matrix.

Details

Robotic Intelligence and Automation, vol. 43 no. 1
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 20 December 2019

Chicheng Liu, Libin Song, Ken Chen and Jing Xu

This paper aims to present an image-based visual servoing algorithm for a multiple pin-in-hole assembly. This paper also aims to avoid the matching and tracking of image features…


Abstract

Purpose

This paper aims to present an image-based visual servoing algorithm for multiple pin-in-hole assembly. It also aims to avoid the matching and tracking of image features while remaining robust against image defects.

Design/methodology/approach

The authors derive a novel model in the set space and design three image errors to control the 3 degrees of freedom (DOF) of a single-lug workpiece in the alignment task. Analytic computations of the interaction matrix that link the time variations of the image errors to the single-lug workpiece motions are performed. The authors introduce two approximate hypotheses so that the interaction matrix has a decoupled form, and an auto-adaptive algorithm is designed to estimate the interaction matrix.
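
Under the decoupled form, the control law reduces to an element-wise relation in which each image error drives one degree of freedom. The sketch below is illustrative only; the error values, interaction terms and gain are assumptions, not figures from the paper.

```python
# Hedged sketch of an IBVS law with a decoupled (diagonal) interaction
# matrix, as the paper's two approximate hypotheses yield: each image error
# drives exactly one DOF via v_i = -gain * e_i / L_ii.

def ibvs_velocity(image_errors, diag_interaction, gain=1.0):
    """Velocity command for a diagonal interaction matrix."""
    return [-gain * e / l for e, l in zip(image_errors, diag_interaction)]

# Three image errors controlling the 3 DOF of the single-lug workpiece:
errors = [0.12, -0.05, 0.30]   # set-space image errors (illustrative)
L_diag = [0.8, 1.1, 0.6]       # diagonal interaction terms (illustrative)
v = ibvs_velocity(errors, L_diag, gain=0.5)
```

The paper's auto-adaptive algorithm would update the interaction terms online rather than keeping them static, which is what makes the control law more efficient than a fixed-matrix version.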

Findings

Image-based visual servoing in the set space avoids the matching and tracking of image features, and the method is therefore not sensitive to image defects. The control law using the auto-adaptive algorithm is more efficient than one using a static interaction matrix. Simulations and real-world experiments demonstrate the effectiveness of the proposed algorithm.

Originality/value

This paper proposes a new visual servoing method to achieve pin-in-hole assembly tasks. The main advantage of this new approach is that it does not require tracking or matching of the image features, and its supplementary advantage is that it is not sensitive to image defects.

Details

Assembly Automation, vol. 40 no. 6
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 5 October 2018

Fan Xu, Hesheng Wang, Weidong Chen and Jingchuan Wang

Soft robotics, regarded as a new research branch of robotics, has generated increasing interest in this decade and has demonstrated its advantages in addressing safety issues…

Abstract

Purpose

Soft robotics, regarded as a new research branch of robotics, has generated increasing interest in this decade and has demonstrated its advantages in addressing safety issues when cooperating with human beings. However, accurate closed-loop control is still lacking because of the difficulty of acquiring feedback information and accurately modeling the system, especially in interactive environments. To this end, this paper aims to improve the controllability of a soft robot working in a specific underwater environment. The system dynamics, which take complicated hydrodynamics into account, are solved using Kane's method. A dynamics-based adaptive visual servoing controller is proposed to realize accurate sensorimotor control.

Design/methodology/approach

This paper presents an image-based visual servoing control scheme for a cable-driven soft robot with a fixed camera observing its motion. The intrinsic and extrinsic parameters of the camera are adapted online, so tedious camera calibration can be eliminated. It is acknowledged that kinematics-based control can only be applied to tasks in free space and limits the achievable motion speed of robot arms. That is, one must consider the non-negligible interaction effects from the environment and target objects when operating soft robots in such interactive control tasks. To extend the application of soft robots to underwater environments, the study models the system dynamics including complicated hydrodynamic effects. With this prior knowledge of the external effects, the performance of the robot is further improved by adding a compensation term to the controller.
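
The online camera-parameter adaptation mentioned above can be illustrated with a deliberately simplified scalar model: one unknown parameter maps robot motion to image motion, and a gradient law drives its estimate using the prediction error. This is a loose sketch under that toy assumption, not the paper's adaptive law, which handles the full intrinsic and extrinsic parameter sets.

```python
# Hedged sketch of gradient-based online parameter adaptation. A single
# unknown scalar "camera parameter" theta maps a motion q to an image
# measurement theta * q; the estimate is corrected by the prediction error.
# The model, gain and motion sequence are illustrative assumptions.

def adapt_step(theta_hat, q, image_meas, rate=0.4):
    """One adaptation step: move theta_hat against the prediction error."""
    predicted = theta_hat * q
    return theta_hat + rate * (image_meas - predicted) * q

theta_true, theta_hat = 2.0, 0.0
# Persistently exciting motion sequence drives the estimate to theta_true:
for q in [0.5, -1.0, 0.8, 1.2, -0.6] * 10:
    theta_hat = adapt_step(theta_hat, q, theta_true * q)
```

The same structure, with vectors and a positive-definite gain matrix in place of scalars, underlies most adaptive visual servoing update laws.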

Findings

The proposed controller is theoretically proven, via Lyapunov analysis, to guarantee convergence of the image error and adaptive estimation error and stability of the dynamical system. The authors also validate the controller in a positioning control task in an underwater environment. In a physical experiment, the controller shows rapid convergence to, and accurate tracking of, a static image target.

Originality/value

To the best of the authors’ knowledge, there is no such research before that has developed dynamics-based visual servoing controller which takes into account the environment interactions. This work can thus improve the control accuracy and enhance the applicability of soft robotics when operating in complicated environments.

Details

Assembly Automation, vol. 38 no. 5
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 3 January 2017

Iryna Borshchova and Siu O’Young

The purpose of this paper is to develop a method for a vision-based automatic landing of a multi-rotor unmanned aerial vehicle (UAV) on a moving platform. The landing system must…

Abstract

Purpose

The purpose of this paper is to develop a method for a vision-based automatic landing of a multi-rotor unmanned aerial vehicle (UAV) on a moving platform. The landing system must be highly accurate and meet the size, weight and power restrictions of a small UAV.

Design/methodology/approach

The vision-based landing system consists of a pattern of red markers placed on the moving target, an image processing algorithm for pattern detection and servo-control for tracking. The suggested approach combines color-based object detection with image-based visual servoing.
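
The color-based detection stage can be sketched as a simple threshold-and-centroid computation. This is a minimal illustration assuming RGB input and a toy image; a real system would typically work in HSV and filter candidate blobs by size. The thresholds and image are invented.

```python
# Hedged sketch of red-marker detection: classify pixels by a threshold on
# the red channel, then return the centroid of the detected pixels. The
# thresholds and the 3x3 toy frame below are illustrative assumptions.

def detect_red_centroid(image, r_min=180, gb_max=100):
    """Return the (row, col) centroid of pixels classified as red, or None."""
    hits = [(r, c)
            for r, row in enumerate(image)
            for c, (red, green, blue) in enumerate(row)
            if red >= r_min and green <= gb_max and blue <= gb_max]
    if not hits:
        return None
    n = len(hits)
    return (sum(h[0] for h in hits) / n, sum(h[1] for h in hits) / n)

# 3x3 toy frame with one red pixel in the center:
frame = [[(0, 0, 0)] * 3 for _ in range(3)]
frame[1][1] = (255, 20, 20)
center = detect_red_centroid(frame)   # -> (1.0, 1.0)
```

The centroid's offset from the desired image point then feeds the image-based visual servoing loop as the tracking error.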

Findings

The developed prototype system has demonstrated the capability of landing within 25 cm of the desired point of touchdown. This auto-landing system is small (100×100 mm), light-weight (100 g), and consumes little power (under 2 W).

Originality/value

The novelty and main contribution of the suggested approach lie in a creative combination of work in two fields, image processing and controls, as applied to UAV landing. The developed image processing algorithm has low complexity compared with other known methods, which allows its implementation on general-purpose low-cost hardware. The theoretical design was verified systematically via simulations and then via outdoor field tests.

Details

International Journal of Intelligent Unmanned Systems, vol. 5 no. 1
Type: Research Article
ISSN: 2049-6427


Article
Publication date: 13 July 2023

Haolin Fei, Ziwei Wang, Stefano Tedeschi and Andrew Kennedy

This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.

Abstract

Purpose

This paper aims to evaluate and compare the performance of different computer vision algorithms in the context of visual servoing for augmented robot perception and autonomy.

Design/methodology/approach

The authors evaluated and compared three different approaches: a feature-based approach, a hybrid approach and a machine-learning-based approach. To evaluate the performance of the approaches, experiments were conducted in a simulated environment using the PyBullet physics simulator. The experiments included different levels of complexity, including different numbers of distractors, varying lighting conditions and highly varied object geometry.

Findings

The experimental results showed that the machine-learning-based approach outperformed the other two approaches in terms of accuracy and robustness. The approach could detect and locate objects in complex scenes with high accuracy, even in the presence of distractors and varying lighting conditions. The hybrid approach showed promising results but was less robust to changes in lighting and object appearance. The feature-based approach performed well in simple scenes but struggled in more complex ones.

Originality/value

This paper sheds light on the superiority of a hybrid algorithm that incorporates a deep neural network in a feature detector for image-based visual servoing, which demonstrates stronger robustness in object detection and location against distractors and lighting conditions.

Details

Robotic Intelligence and Automation, vol. 43 no. 4
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 7 November 2019

Megha G. Krishnan, Abhilash T. Vijayan and Ashok Sankar

This paper aims to improve the performance of a two-camera robotic feedback system designed for automatic pick and place application by modifying its velocity profile during…

Abstract

Purpose

This paper aims to improve the performance of a two-camera robotic feedback system designed for automatic pick and place application by modifying its velocity profile during switching of control.

Design/methodology/approach

Cooperation of global and local vision sensors ensures visibility of the target for the two-camera robotic system. The master camera, monitoring the workspace, guides the robot so that image-based visual servoing (IBVS) by the eye-in-hand camera overcomes its inherent shortcomings. A hybrid control law steers the robot until the system switches to IBVS in a region whose asymptotic stability and convergence are established through a qualitative analysis of the scheme. Complementary gain factors ensure a smooth velocity transition during switching, accommodating the versatility and range of the workspace.
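
The complementary-gain idea can be sketched as a blend of the two controllers' velocity commands, with a weight that ramps smoothly from the master-camera controller to IBVS. The ramp shape and timings below are illustrative assumptions, not the paper's exact switching law.

```python
# Hedged sketch: during the switch, the commanded velocity is
# v = (1 - s) * v_master + s * v_ibvs, with s ramping smoothly from 0 to 1
# so the end-effector velocity has no jump. Ramp shape/timing are made up.

def blend_gain(t, t_switch, ramp):
    """Smooth 0 -> 1 blending factor starting at the switching instant."""
    if t <= t_switch:
        return 0.0
    if t >= t_switch + ramp:
        return 1.0
    x = (t - t_switch) / ramp
    return 3 * x ** 2 - 2 * x ** 3          # smoothstep: C1-continuous

def blended_velocity(t, v_master, v_ibvs, t_switch=1.0, ramp=0.5):
    s = blend_gain(t, t_switch, ramp)
    return [(1 - s) * a + s * b for a, b in zip(v_master, v_ibvs)]
```

Because the blend and its derivative are continuous, the velocity profile stays smooth even when the two controllers disagree at the switching surface.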

Findings

The proposed strategy is verified through simulation studies and implemented on a 6-DOF industrial robot, an ABB IRB 1200, to validate the practicality of the adaptive gain approach when switching in a hybrid visual feedback system. The approach can be extended to any control problem with uneven switching surfaces or coarse/fine controllers subjected to discrete-time events.

Practical implications

In complex workspaces, where robots operate in parallel with other robots or humans and share workspaces, the supervisory control scheme ensures convergence. This study shows that hybrid control laws are more effective than conventional approaches in unstructured environments and that visibility constraints can be overcome by integrating multiple vision sensors.

Originality/value

The supervisory control is designed to combine visual feedback data from the eye-in-hand and eye-to-hand sensors. A gain-adaptive approach smooths the velocity characteristics of the end-effector while switching control from the master camera to the end-effector camera.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 1
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 8 March 2011

Umer Khan, Ibrar Jan, Naeem Iqbal and Jian Dai

The purpose of this paper is to present the control of six degrees of freedom (PUMA560) robotic arm using visual servoing, based upon linear matrix inequality (LMI). The aim lies…

Abstract

Purpose

The purpose of this paper is to present the control of a six degrees of freedom robotic arm (PUMA560) using visual servoing based upon linear matrix inequalities (LMIs). The aim is to develop a method that involves neither camera calibration parameters nor inverse kinematics. The approach adopted in this paper includes transpose Jacobian control; thus, the inverse of the Jacobian matrix is no longer required. By invoking Lyapunov's direct method, closed-loop stability of the system is ensured. Simulation results are shown for three different cases, which exhibit system stability and convergence even in the presence of large errors.
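
The transpose Jacobian idea mentioned above avoids matrix inversion entirely: the joint command is formed as J-transpose times a gained task-space error, so no singularity handling is needed. The dimensions, gains and numbers below are illustrative assumptions.

```python
# Hedged sketch of transpose-Jacobian control: tau_i = gain * (J^T e)_i,
# where J maps joint velocities to task-space velocities. No inverse of J
# is computed, which is the scheme's main attraction. Values are made up.

def transpose_jacobian_command(jacobian, error, gain=1.0):
    """tau_i = gain * sum_j J[j][i] * e[j] for each joint i."""
    n_joints = len(jacobian[0])
    return [gain * sum(row[i] * e for row, e in zip(jacobian, error))
            for i in range(n_joints)]

# 2 task-space errors, 3 joints (a redundant arm poses no problem here):
J = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5]]
e = [0.2, -0.1]
tau = transpose_jacobian_command(J, e, gain=2.0)
```

Since the command is well defined even when J is rank-deficient or non-square, the approach sidesteps the singularities that inverse-Jacobian schemes must guard against.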

Design/methodology/approach

The paper presents LMI-based visual servo control of the PUMA560 robotic arm.

Findings

The proposed method is implementable in dynamic environments owing to its independence from camera and object models.

Research limitations/implications

A visibility constraint is not included during servoing; this may cause features to leave the camera's field of view (FOV).

Originality/value

LMI optimization is employed for visual servo control in an uncalibrated environment. Lyapunov's direct method is utilized which ensures system stability and convergence.

Details

Industrial Robot: An International Journal, vol. 38 no. 2
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 3 June 2019

Hua Liu, Weidong Zhu, Huiyue Dong and Yinglin Ke

To gain accurate support for large aircraft structures by ball joints in aircraft digital assembly, this paper aims to propose a novel approach based on visual servoing such that…


Abstract

Purpose

To gain accurate support for large aircraft structures by ball joints in aircraft digital assembly, this paper aims to propose a novel approach based on visual servoing such that the positioner’s ball-socket can automatically and adaptively approach the ball-head fixed on the aircraft structures.

Design/methodology/approach

Image moments of a circular marker labeled on the ball-head are selected as visual features to control the three translational degrees of freedom (DOFs) of the positioner, for which the composite Jacobian matrix is full rank. A Kalman–Bucy filter is adopted for its online estimation, which makes the control scheme more flexible, with no system calibration required. A combination of proportional control and sliding mode control is proposed to improve system stability and compensate for uncertainties of the system.
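
The combination of proportional and sliding mode control can be sketched per error component as a proportional term plus a saturated switching term; the saturation (boundary layer) is a common way to limit chattering. The gains and boundary-layer width below are illustrative assumptions, not the paper's tuning.

```python
# Hedged sketch of proportional-plus-sliding-mode control:
# u = -Kp * e - Ks * sat(e / phi), where sat() replaces sign() inside a
# boundary layer of width phi to reduce chattering. Gains are made up.

def sat(x, phi):
    """Saturated sign function with boundary layer width phi."""
    return max(-1.0, min(1.0, x / phi))

def p_sliding_command(error, kp=1.0, ks=0.3, phi=0.05):
    """Per-axis command combining proportional and sliding-mode terms."""
    return [-kp * e - ks * sat(e, phi) for e in error]
```

Outside the boundary layer, the sliding term contributes a constant corrective magnitude that rejects bounded uncertainties; inside it, the law degrades gracefully to proportional-like behavior.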

Findings

The ball-socket can accurately and smoothly reach its desired position in finite time (50 s). Positional deviations between the spherical centers of the ball-head and ball-socket in the X-Y plane can be controlled within 0.05 mm, which meets the design requirement.

Practical implications

The proposed approach has been integrated into the pose alignment system. It has shown great potential to be widely applied in the leading support for large aircraft structures in aircraft digital assembly.

Originality/value

An adaptive approach for accurate support of large aircraft structures is proposed, which possesses characteristics of high precision, high efficiency and excellent stability.

Details

Assembly Automation, vol. 39 no. 2
Type: Research Article
ISSN: 0144-5154


Article
Publication date: 29 July 2020

Megha G. Krishnan, Abhilash T. Vijayan and Ashok S.

Real-time implementation of sophisticated algorithms on robotic systems demands an effective interface between hardware and software components. Individual robot manufacturers have…

Abstract

Purpose

Real-time implementation of sophisticated algorithms on robotic systems demands an effective interface between hardware and software components. Individual robot manufacturers have dedicated controllers and languages, so operating a robot requires either knowledge of additional software or expensive add-on installations for effective communication between the robot controller and the computation software. This paper aims to present a novel method of interfacing commercial robot controllers with a widely used computation platform, MATLAB, in real time, with a demonstration of a visual predictive controller.

Design/methodology/approach

A remote personal computer (PC) running MATLAB is connected to the IRC5 controller of an ABB robotic arm through the File Transfer Protocol (FTP). The FTP server on the IRC5 responds to requests from an FTP client (MATLAB) on the remote computer. MATLAB provides the platform for programming and control algorithm development. The controller output is transferred to the robot controller as files through the Ethernet port; the proposed scheme thereby enables connection to and control of the robot using the researchers' own control algorithms, without the additional cost of buying add-on packages or mastering vendor-specific programming languages.
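
The file-based exchange can be sketched as: format the computed command as a text file, then upload it over FTP to the controller. The paper's client is MATLAB; Python's standard-library `ftplib` is used here purely for illustration, and the host, credentials, remote path and the command file format are all invented assumptions.

```python
# Hedged sketch of the FTP file exchange between a control PC and a robot
# controller's FTP server. The command file format ("J1:...;") and all
# connection details are illustrative assumptions, not the IRC5's format.
from ftplib import FTP
from io import BytesIO

def format_command(joint_targets):
    """Serialize joint targets as one semicolon-terminated line per joint."""
    return "".join(f"J{i + 1}:{q:.4f};\n" for i, q in enumerate(joint_targets))

def upload_command(host, user, password, path, joint_targets):
    """Upload the command file to the controller's FTP server (not run here)."""
    payload = BytesIO(format_command(joint_targets).encode("ascii"))
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.storbinary(f"STOR {path}", payload)
```

A controller-side task would poll for the file, parse it and execute the motion; the latency of this polling loop bounds how fast the visual predictive controller can run.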

Findings

New control strategies and contrivances can be developed with numerous conditions and constraints in simulation platforms. When the results are to be implemented in real-time systems, the proposed method helps to establish a simple, fast and cost-effective communication with commercial robot controllers for validating the real-time performance of the developed control algorithm.

Practical implications

The proposed method is used for real-time implementation of visual servo control with a predictive controller, for accurate pick-and-place applications with different initial conditions. The same strategy has proven effective in supervisory control using two cameras and in artificial neural network-based visual control of robotic manipulators.

Originality/value

This paper elaborates a real-time example using visual servoing for researchers working with industrial robots, enabling them to understand and explore the possibilities of robot communication.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 1
Type: Research Article
ISSN: 0143-991X


Article
Publication date: 19 January 2015

Bingxi Jia, Shan Liu and Yi Liu

The purpose of this paper is to propose a more efficient strategy, which is easier to implement, i.e. the engineer can directly operate the target object without the robot to do a…

Abstract

Purpose

Applications of industrial robotic manipulators are generally based on a teach-and-playback strategy: an engineer directly operates the manipulator to perform a demonstration, and the manipulator then uses the recorded driving signals to perform repetitive tasks. The purpose of this paper is to propose a more efficient strategy that is easier to implement: the engineer directly operates the target object, without using the robot for the demonstration, and the manipulator is then regulated to track the trajectory repetitively using vision feedback.

Design/methodology/approach

In the teaching process, the engineer grasps the object, with a camera mounted on it, to perform a demonstration, during which a series of images is recorded. The desired trajectory is defined by the homography between the images captured at the current and final poses. The tracking error is defined directly from the homography matrix, without 3D reconstruction. A model-free, feedback-assisted iterative learning control strategy is used for repetitive tracking, where the feed-forward control signal is generated by iterative learning control and the feedback control signal by direct feedback control.
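
The feedback-assisted iterative learning idea can be sketched as follows: across repetitions, the feed-forward signal is updated from the previous trial's error, while a feedback term corrects within the current trial. The scalar plant, gains and signals below are illustrative assumptions, not the paper's homography-based errors.

```python
# Hedged sketch of feedback-assisted iterative learning control (ILC).
# Feed-forward update across trials: u_{k+1}(t) = u_k(t) + L * e_k(t);
# within each trial a feedback term reduces the error further. The crude
# scalar "plant" and all gains here are illustrative assumptions.

def ilc_update(u_ff, errors, l_gain=0.5):
    """Feed-forward update u_{k+1}(t) = u_k(t) + L * e_k(t)."""
    return [u + l_gain * e for u, e in zip(u_ff, errors)]

def run_trial(u_ff, reference, plant_gain=0.8, k_fb=0.3):
    """Simulate one repetition; a feedback term assists within the trial."""
    errors = []
    for u, r in zip(u_ff, reference):
        y_ff = plant_gain * u                  # response to feed-forward alone
        e_open = r - y_ff
        y = y_ff + plant_gain * k_fb * e_open  # feedback correction this trial
        errors.append(r - y)
    return errors

ref = [1.0, 0.5, -0.5]
u = [0.0] * len(ref)
for _ in range(25):
    e = run_trial(u, ref)
    u = ilc_update(u, e)
```

Because the learning update needs no plant model, only the recorded error, the scheme is model-free in the same sense as the paper's framework, and repetition drives the tracking error toward zero.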

Findings

The proposed framework performs precise trajectory tracking by iterative learning and is model-free, so the singularity problem that often occurs in conventional Jacobian-based visual servo systems is avoided. Besides, the framework is robust to image noise, as shown in simulations and experiments.

Originality/value

The proposed framework is model-free, so that it is more flexible for industrial use and easier to implement. Satisfactory tracking performance can be achieved in the presence of image noise. System convergence is analyzed and experiments are provided for evaluation.

Details

Industrial Robot: An International Journal, vol. 42 no. 1
Type: Research Article
ISSN: 0143-991X

