Search results
1 – 10 of over 83,000

J. Henry and C. Preston
A case study by IBM of machine vision implementation in the robotic assembly area of the Automated Logistic Production System used in manufacturing computers.
Abstract
Looks at the move towards integrating robots with high-performance, fully programmable vision systems. Outlines the problems of traditional vision‐aided robotics and the advantages of modern machine vision technology. The latest generation of machine vision systems combines the capabilities of the "C" programming language with graphic "point‐and‐click" application development environments based on Microsoft Windows: the Checkpoint system. Describes how the Checkpoint vision system works and the applications of the new vision‐guided robots. Concludes that the new systems now make it possible for users and system integrators to bring the advantages of vision‐guided robotics to general manufacturing.
Clive Loughlin looks at some of the systems available to industry, and analyses their strengths and weaknesses.
Chetan Jalendra, B.K. Rout and Amol Marathe
Abstract
Purpose
Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.
Design/methodology/approach
A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and to obtain its dimensions, tip deflection and tip velocity. The dynamic behaviour of the flexible beam is modelled with the finite element method (FEM). The measured dimensions, tip deflection and velocity are fed to the FEM model to predict the maximum deflection, and the difference between the maximum deflection and the static deflection of the beam gives the maximum error. This maximum error drives the proposed predictive maximum error-based second-stage controller, which sends the control signal for vibration suppression. The control signal, in the form of a trajectory, is communicated to the industrial robot controller and accommodates the various delays present in the system.
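The predictive error computation at the heart of such a second-stage controller can be sketched compactly. The fragment below assumes the FEM model has been reduced to the beam's first vibration mode, so the maximum reachable tip deflection follows from energy conservation in an oscillator; the gain, sign convention and all names are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical single-mode reduction of the beam's FEM model: for a beam
# vibrating in its first mode at natural frequency omega (rad/s), the largest
# tip deflection reachable from the current state (d_tip, v_tip) follows from
# energy conservation in an undamped oscillator.
def predict_max_deflection(d_tip, v_tip, omega):
    return np.sqrt(d_tip**2 + (v_tip / omega)**2)

def second_stage_command(d_tip, v_tip, d_static, omega, gain=0.5):
    """Maximum error and corrective gripper move for the second-stage controller.

    d_tip and v_tip come from the vision measurement; the returned displacement
    would be packaged as a short trajectory for the robot controller. The
    proportional gain and sign convention are illustrative assumptions.
    """
    e_max = predict_max_deflection(d_tip, v_tip, omega) - d_static
    return e_max, -gain * e_max  # move opposite to the predicted overshoot
```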
Findings
The effectiveness and robustness of the proposed controller have been validated in simulation and in experimental implementation on an Asea Brown Boveri (ABB) IRB 1410 industrial robot fitted with a standard low frame rate camera sensor. In the experiments, two metallic flexible beams of different dimensions but the same material properties were considered. The robot vision method measures the dimensions within an acceptable error limit of ±3%. The controller suppresses the vibration amplitude by approximately 97% in an average time of 4.2 s and reduces the settling time by approximately 93% relative to the uncontrolled case. The vibration suppression performance is also compared with the results of a classical control method and with recent results available in the literature.
Originality/value
The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any reference object such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and to measure its dimensions while accounting for the perspective view; the developed robot vision-based controller works with an FEM model of the flexible beam to predict the tip position, so it can handle different dimensions and material types; an approach is proposed to handle the various delays that arise in implementation, enabling effective vibration suppression; and the method uses a low frame rate, low-cost camera for the second-stage controller, which does not interfere with the internal controller of the industrial robot.
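The checkerboard-free calibration can be sketched as follows: the robot presents a single visible marker at several known tool positions, and the corresponding robot-frame points and pixel observations are enough to solve for the camera pose. The sketch below expresses that idea with OpenCV's solvePnP under assumed inputs; it is our reading of the approach, not the authors' code.

```python
import numpy as np
import cv2

def robot_assisted_extrinsics(tcp_points_robot, pixel_points, K, dist_coeffs):
    """Estimate camera extrinsics w.r.t. the robot base without a checkerboard.

    The robot moves a single visible marker to N >= 6 known, non-coplanar
    positions tcp_points_robot (N x 3, robot-base frame); pixel_points (N x 2)
    are the matching image observations. solvePnP then returns the pose of
    the robot base frame expressed in the camera frame.
    """
    obj = np.asarray(tcp_points_robot, dtype=np.float32)
    img = np.asarray(pixel_points, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, img, K, dist_coeffs)
    if not ok:
        raise RuntimeError("extrinsic pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix: robot base -> camera
    return R, tvec              # X_cam = R @ X_robot + tvec
```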
Nikolaos Papanikolopoulos and Christopher E. Smith
Abstract
Many research efforts have turned to sensing, and in particular computer vision, to create more flexible robotic systems. Computer vision is often required to provide data for the grasping of a target. Using a vision system for grasping static or moving objects presents several issues with respect to sensing, control and system configuration. This paper presents some of these issues, together with the options available to the researcher and the trade‐offs to be expected when integrating a vision system with a robotic system for the purpose of grasping objects. The paper includes a description of our experimental system and contains experimental results from a particular configuration that characterize the type and frequency of errors encountered while performing various vision‐guided grasping tasks. These error classes and their frequency of occurrence lend insight into the problems encountered during visual grasping and into possible solutions.
Abstract
Discusses the background of robot vision systems and examines why vision‐guided motion for robots has not lived up to its early promise. Outlines the different types of robot vision available and considers the limitations of "computer vision" in most commercial applications. Looks at the difficulties of making effective use of information from a two‐dimensional vision system to guide a robot working in a three‐dimensional environment, and at some of the possible solutions. Discusses future developments and concludes that, in the short term, it is probably the opening up of programming to a larger group of potential users, through graphical user interfaces, that will have the greatest impact on the uptake of vision for robots.
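One classical workaround for this 2-D/3-D gap is to constrain the task geometry: if the part is known to lie on a fixed work plane, a single image observation fixes its 3-D position by intersecting the camera ray with that plane. The sketch below illustrates the idea; the calibration inputs and all names are assumed for illustration.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, R, t, n, d):
    """Recover a 3-D point from a single 2-D observation on a known plane.

    K is the camera intrinsic matrix; R, t map world to camera coordinates
    (x_cam = R @ X + t); the work plane satisfies n . X = d in the world frame.
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing ray, camera frame
    ray_w = R.T @ ray_cam                               # same ray, world frame
    origin_w = -R.T @ t                                 # camera centre, world frame
    s = (d - n @ origin_w) / (n @ ray_w)                # ray-plane intersection
    return origin_w + s * ray_w
```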
Abstract
Purpose
Vision-guided robotics (VGR) is a fast-growing technology and a way to reduce manpower and retain production, especially in countries with high manufacturing overheads and labour costs. This paper aims to provide information on a new VGR system.
Design/methodology/approach
The paper describes the new automation system of the Swedish company SVIA.
Findings
Shows that the need to place components at a set pick‐up position is eliminated: the vision system determines the position of randomly fed products on a recirculating conveyor system. The vision system and control software give the robot the exact coordinates of the components, which are spread out randomly beneath the camera's field of view, enabling the robot arm to move to a selected component and pick it from the conveyor belt.
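Since the parts all lie on the flat belt, the pixel-to-robot mapping such a system needs reduces to a planar homography that can be taught once from a few pixel/robot correspondences. The OpenCV sketch below is illustrative only; names and calibration data are assumptions, not details of the SVIA product.

```python
import numpy as np
import cv2

def fit_pixel_to_robot(pixel_pts, robot_xy):
    """Fit the image-plane -> conveyor-plane (robot XY) homography.

    pixel_pts and robot_xy are matching N x 2 arrays (N >= 4), e.g. taught by
    jogging the robot to parts seen at known pixel positions.
    """
    H, _ = cv2.findHomography(np.float32(pixel_pts), np.float32(robot_xy))
    return H

def pick_coordinates(H, detections_px):
    """Map detected part centroids (pixels) to robot XY pick coordinates."""
    pts = np.float32(detections_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)
```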
Originality/value
Describes how easily the modules can be reused when products or production lines change.
Haibo Feng, Yanwu Zhai and Yili Fu
Abstract
Purpose
Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL surgery to effectively improve the visualization of surgical robot systems for relatively complex surgical procedures.
Design/methodology/approach
In this paper, a new master-slave magnetic anchoring vision robot system for SPL surgery was proposed. A lighting distribution analysis for the imaging unit of the vision robot was carried out to guarantee illumination uniformity in the workspace during SPL surgery. Moreover, the cleaning force applied to the camera lens was measured to assess safety for the abdominal wall, and a performance assessment of the system was carried out.
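To give a flavour of what a lighting-distribution analysis involves, the sketch below sums a simple Lambertian-type irradiance model over the light sources and reports a min/mean uniformity ratio across a sampled workspace plane. The model and every name in it are our assumptions, not the authors' formulation.

```python
import numpy as np

def illumination_uniformity(led_positions, led_axes, grid_xy, z_work, m=1):
    """Min/mean irradiance ratio over a planar workspace (1.0 = perfectly even).

    Each source is modelled as a Lambertian-type emitter of order m whose
    irradiance at a surface point falls off as cos(theta)^m / r^2.
    """
    E = np.zeros(len(grid_xy))
    for p, a in zip(led_positions, led_axes):
        a = np.asarray(a, dtype=float)
        a /= np.linalg.norm(a)
        for i, (x, y) in enumerate(grid_xy):
            d = np.array([x, y, z_work]) - np.asarray(p, dtype=float)
            r = np.linalg.norm(d)
            cos_t = max(d @ a / r, 0.0)  # angle off the emitter axis
            E[i] += cos_t**m / r**2
    return E.min() / E.mean()
```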
Findings
Extensive experimental results for illumination, control, cleaning force and functionality tests have indicated that the proposed system performs excellently in providing visual feedback.
Originality/value
The main contribution of this paper lies in the development of a magnetic anchoring vision robot system that successfully improves the ability to clean the lens and to avoid blind areas in the field of view.
Biao Mei, Weidong Zhu and Yinglin Ke
Abstract
Purpose
Aircraft assembly demands high position accuracy of drilled fastener holes, and automated drilling is a key technology for fulfilling this requirement. The purpose of the paper is to conduct positioning variation analysis and control for automated drilling so as to achieve high positioning accuracy.
Design/methodology/approach
The nominal and varied connective models of automated drilling are constructed for positioning variation analysis. The principle of a strategy for reducing positioning variation in drilling, which shortens the positioning variation chain with the aid of an industrial camera-based vision system, is explored. Moreover, additional strategies for positioning variation control are developed based on mathematical analysis to further reduce the position errors of the drilled fastener holes.
Findings
The propagation and accumulation of positioning variation in an automated drilling system are explored. The principle of reducing positioning variation in automated drilling with a monocular vision system is discussed from the viewpoint of the variation chain.
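The variation-chain argument can be made concrete with a toy calculation: treating the contributors as independent, their standard deviations combine by root-sum-square, and referencing the hole position directly with the camera drops the upstream links from the chain. All numbers below are invented purely for illustration.

```python
import numpy as np

# Standard deviations (mm) of hypothetical, independent contributors along the
# positioning chain. Vision-based referencing replaces the fixture, part and
# robot links with a single (smaller) vision-measurement term.
chain_without_vision = [0.08, 0.05, 0.06, 0.04]  # fixture, part, robot, tool
chain_with_vision = [0.02, 0.04]                 # vision measurement, tool

def rss(chain):
    """Root-sum-square accumulation of independent variation sources."""
    return float(np.sqrt(np.sum(np.square(chain))))

print(f"without vision: {rss(chain_without_vision):.3f} mm")
print(f"with vision:    {rss(chain_with_vision):.3f} mm")
```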
Practical implications
The strategies for reducing positioning variation, rooted in the constructed positioning variation models, have been applied to a machine-tool based automated drilling system. The system was developed for the wing assembly of an aircraft at the Aviation Industry Corporation of China.
Originality/value
The propagation, accumulation and control of positioning variation in automated drilling are comprehensively explored. On this basis, the positioning accuracy in automated drilling is controlled to below 0.13 mm, which meets the requirement for the assembly of the aircraft.
Bambang Rilanto Trilaksono, Ryan Triadhitama, Widyawardana Adiprawita, Artiko Wibowo and Anavatti Sreenatha
Abstract
Purpose
The purpose of this paper is to present the development of hardware‐in‐the‐loop simulation (HILS) for visual target tracking of an octorotor unmanned aerial vehicle (UAV) with onboard computer vision.
Design/methodology/approach
HILS for visual target tracking of an octorotor UAV is developed by integrating real embedded computer vision hardware and a camera with a software simulation of the UAV dynamics, flight control and navigation systems run on Simulink. Visualization of the visual target tracking is developed using FlightGear. The computer vision system is used to recognize and track a moving target using feature correlation between captured scene images and object images stored in a database. Features of the captured images are extracted using the speeded‐up robust features (SURF) algorithm and subsequently matched with features extracted from the object image using the fast library for approximate nearest neighbors (FLANN). A Kalman filter is applied to predict the position of the moving target on the image plane. The integrated HILS environment allows real‐time testing and evaluation of onboard embedded computer vision for the UAV's visual target tracking.
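The recognition-and-prediction pipeline described here maps naturally onto OpenCV, as in the sketch below. SURF lives in the contrib "nonfree" module, so an opencv-contrib build with nonfree enabled is required; all parameter values are illustrative defaults, not the authors' settings.

```python
import numpy as np
import cv2

# SURF detector (needs opencv-contrib built with OPENCV_ENABLE_NONFREE) and a
# FLANN matcher using a KD-tree index; thresholds are illustrative.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})

def match_target(scene_gray, obj_desc, ratio=0.7):
    """Return scene keypoint locations matched to the stored object features."""
    kp, desc = surf.detectAndCompute(scene_gray, None)
    pairs = [p for p in flann.knnMatch(obj_desc, desc, k=2) if len(p) == 2]
    good = [m for m, n in pairs if m.distance < ratio * n.distance]  # ratio test
    return np.float32([kp[m.trainIdx].pt for m in good])

# Constant-velocity Kalman filter predicting the target centre on the image
# plane between vision updates; state is (x, y, vx, vy), measurement (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.eye(2, 4, dtype=np.float32)
kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
```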
Findings
HILS is found to be useful for evaluating the functionality and performance of the real machine vision software and hardware prior to operation in a flight test. Integrating computer vision with the UAV enables the construction of an unmanned system capable of tracking a moving object.
Practical implications
HILS for visual target tracking of a UAV, as described in this paper, could be applied in practice to minimize trial and error in tuning the various parameters of the machine vision algorithm as well as of the autopilot and navigation system. It could also reduce development costs, in addition to reducing the risk of crashing the UAV in a flight test.
Originality/value
A HILS-integrated environment for an octorotor UAV's visual target tracking, enabling real‐time testing and evaluation of onboard computer vision, is proposed. Another contribution is the implementation of the SURF, FLANN and Kalman filter algorithms on an onboard embedded PC and their integration with the navigation and flight control systems, which enables the UAV to track a moving object.