Search results

1 – 10 of over 63000
Article
Publication date: 1 February 1988

J. Henry and C. Preston

Abstract

A case study by IBM of machine vision implementation in the robotic assembly area of the Automated Logistic Production System used in manufacturing computers.

Details

Sensor Review, vol. 8 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 December 1995

Justin Testa

Looks at the move towards integrating robots with high-performance, fully-programmable vision systems. Outlines the problems of traditional vision‐aided robotics and the advantage of…

Abstract

Looks at the move towards integrating robots with high-performance, fully programmable vision systems. Outlines the problems of traditional vision‐aided robotics and the advantages of modern machine vision technology. The latest generation of machine vision systems combines the capabilities of the "C" programming language with graphical "point‐and‐click" application development environments based on Microsoft Windows: the Checkpoint system. Describes how the Checkpoint vision system works and the applications of the new vision-guided robots. Concludes that the new systems now make it possible for users and system integrators to bring the advantages of vision‐guided robotics to general manufacturing.

Details

Industrial Robot: An International Journal, vol. 22 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 November 2022

Chetan Jalendra, B.K. Rout and Amol Marathe

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging…

Abstract

Purpose

Industrial robots are extensively used in the robotic assembly of rigid objects, whereas the assembly of flexible objects using the same robot becomes cumbersome and challenging due to transient disturbance. The transient disturbance causes vibration in the flexible object during robotic manipulation and assembly. This is an important problem as the quick suppression of undesired vibrations reduces the cycle time and increases the efficiency of the assembly process. Thus, this study aims to propose a contactless robot vision-based real-time active vibration suppression approach to handle such a scenario.

Design/methodology/approach

A robot-assisted camera calibration method is developed to determine the extrinsic camera parameters with respect to the robot position. Thereafter, an innovative robot vision method is proposed to identify a flexible beam grasped by the robot gripper using a virtual marker and to obtain its dimensions, tip deflection and velocity. To model the dynamic behaviour of the flexible beam, the finite element method (FEM) is used. The measured dimensions, tip deflection and velocity of the flexible beam are fed to the FEM model to predict the maximum deflection. The difference between the maximum deflection and the static deflection of the beam is used to compute the maximum error. Subsequently, the maximum error is used in the proposed predictive maximum error-based second-stage controller to send the control signal for vibration suppression. The control signal, in the form of a trajectory, is communicated to the industrial robot controller, which accommodates the various types of delays present in the system.
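
As a rough illustration of the maximum-error idea described above, the sketch below (a minimal, assumption-laden example, not the authors' implementation) passes vision-measured tip deflection and velocity to a stubbed prediction of maximum deflection and uses the difference from the static deflection to decide whether a corrective trajectory should be sent; the natural frequency, the prediction formula and the tolerance are all illustrative assumptions.

    # Minimal sketch (not the paper's code): compute the maximum-error signal that
    # drives the second-stage controller. The FEM prediction is replaced here by a
    # simple single-mode amplitude estimate; all numbers are illustrative.
    import numpy as np

    OMEGA = 25.0  # assumed first natural frequency of the beam, rad/s

    def predict_max_deflection(tip_deflection_m, tip_velocity_mps):
        """Stand-in for the FEM-based prediction of maximum tip deflection."""
        return float(np.hypot(tip_deflection_m, tip_velocity_mps / OMEGA))

    def max_error(tip_deflection_m, tip_velocity_mps, static_deflection_m):
        """Difference between predicted maximum deflection and static deflection."""
        return predict_max_deflection(tip_deflection_m, tip_velocity_mps) - static_deflection_m

    # Example: issue a corrective trajectory only when the error exceeds a tolerance.
    if max_error(0.004, 0.02, 0.001) > 0.0005:
        pass  # hand the control trajectory to the industrial robot controller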

Findings

The effectiveness and robustness of the proposed controller have been validated through simulation and experimental implementation on an Asea Brown Boveri (ABB) IRB 1410 industrial robot with a standard low-frame-rate camera sensor. In this experiment, two metallic flexible beams of different dimensions but with the same material properties have been considered. The robot vision method measures the dimensions within an acceptable error limit, i.e. ±3%. The controller can suppress the vibration amplitude by up to approximately 97% in an average time of 4.2 s and reduces the stabilisation time by up to approximately 93% compared with the suppression time without control. The vibration suppression performance is also compared with the results of a classical control method and with some recent results available in the literature.

Originality/value

The important contributions of the current work are the following: an innovative robot-assisted camera calibration method is proposed to determine the extrinsic camera parameters, eliminating the need for any reference such as a checkerboard; a robot vision method is developed to identify the object grasped by the robot gripper using a virtual marker and to measure its dimensions while accommodating the perspective view; the developed robot vision-based controller works along with the FEM model of the flexible beam to predict the tip position and helps in handling different dimensions and material types; an approach has been proposed to handle the different types of delays that are part of the implementation for effective suppression of vibration; and the proposed method uses a low-frame-rate, low-cost camera for the second-stage controller, which does not interfere with the internal controller of the industrial robot.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 3
Type: Research Article
ISSN: 0143-991X

Keywords

Robotic assembly, Vibration suppression, Second-stage controller, Camera calibration, Flexible beam, Robot vision

Article
Publication date: 1 March 1987

Abstract

Clive Loughlin looks at some of the systems available to industry, and analyses their strengths and weaknesses.

Details

Sensor Review, vol. 7 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 April 1998

Nikolaos Papanikolopoulos and Christopher E. Smith

Many research efforts have turned to sensing, and in particular computer vision, to create more flexible robotic systems. Computer vision is often required to provide data for the…

Abstract

Many research efforts have turned to sensing, and in particular computer vision, to create more flexible robotic systems. Computer vision is often required to provide data for the grasping of a target. Using a vision system for grasping static or moving objects presents several issues with respect to sensing, control, and system configuration. This paper presents some of these issues in concert with the options available to the researcher and the trade‐offs to be expected when integrating a vision system with a robotic system for the purpose of grasping objects. The paper includes a description of our experimental system and contains experimental results from a particular configuration that characterize the type and frequency of errors encountered while performing various vision‐guided grasping tasks. These error classes and their frequency of occurrence lend insight into the problems encountered during visual grasping and into possible solutions to these problems.
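
To make the sensing/control/configuration trade-offs concrete, here is a bare-bones position-based visual-servoing grasp loop of the kind such systems build on; all function names, gains and tolerances are assumptions for illustration and do not describe the authors' experimental system.

    # Illustrative position-based visual-servoing grasp loop (assumed interfaces).
    import numpy as np

    def visual_grasp_loop(camera_to_robot, detect_target, get_tool_position,
                          move_by, close_gripper, gain=0.5, tol_m=0.002, max_iters=100):
        """Servo the gripper toward a visually detected target, then grasp it."""
        for _ in range(max_iters):
            p_cam = np.append(detect_target(), 1.0)    # target in camera frame, homogeneous
            p_robot = (camera_to_robot @ p_cam)[:3]    # target in robot base frame
            error = p_robot - get_tool_position()      # remaining tool-to-target offset
            if np.linalg.norm(error) < tol_m:          # close enough to grasp
                close_gripper()
                return True
            move_by(gain * error)                      # proportional incremental move
        return False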

Details

Industrial Robot: An International Journal, vol. 25 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 December 1995

Don Braggins

Discusses the background of robot vision systems and examines why vision‐guided motion for robots hasn't lived up to the early promise. Outlines the different types of robot vision…

Abstract

Discusses the background of robot vision systems and examines why vision‐guided motion for robots hasn't lived up to the early promise. Outlines the different types of robot vision available and considers the limitations of "computer vision" in most commercial applications. Looks at the difficulties of making effective use of information from a two‐dimensional vision system to guide a robot working in a three‐dimensional environment, and at some of the possible solutions. Discusses future developments and concludes that, in the short term, it is probably the opening up of programming to a larger group of potential users, through the facility of a graphical user interface, that will have the greatest impact on the uptake of vision for robots.
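
The core 2-D-to-3-D difficulty the article refers to can be seen in a short calculation: a single calibrated camera turns a pixel only into a viewing ray, and a 3-D pick point requires an extra constraint, commonly the assumption that parts lie on a known plane. The sketch below shows that back-projection; the intrinsic matrix and plane values are illustrative assumptions.

    # Back-project a pixel to a 3-D point by intersecting its viewing ray with an
    # assumed working plane (all values illustrative).
    import numpy as np

    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])  # assumed pinhole intrinsics, in pixels

    def pixel_to_point_on_plane(u, v, plane_normal, plane_point):
        """Return the 3-D point (camera frame) where the ray through (u, v) meets the plane."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])            # viewing-ray direction
        t = (plane_normal @ plane_point) / (plane_normal @ ray)   # ray parameter at the plane
        return t * ray

    # Example: a conveyor plane 0.9 m below the camera, perpendicular to the optical axis.
    point = pixel_to_point_on_plane(400, 300, np.array([0.0, 0.0, 1.0]),
                                    np.array([0.0, 0.0, 0.9]))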

Details

Industrial Robot: An International Journal, vol. 22 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 July 2006

Andrew Perks

Vision guided robotics (VGR) is a fast growing technology and a way to reduce manpower and retain production, especially in countries with high manufacturing overheads and labour…

Abstract

Purpose

Vision guided robotics (VGR) is a fast growing technology and a way to reduce manpower and retain production, especially in countries with high manufacturing overheads and labour costs. This paper aims to provide information on a new VGR system.

Design/methodology/approach

The paper describes the new automation system of the Swedish company SVIA.

Findings

Shows that the need to position components at a set pick‐up position is eliminated: the vision system determines the position of randomly fed products on a recycling conveyor system. The vision system and control software give the robot the exact coordinates of the components, which are spread out randomly beneath the camera's field of vision, enabling the robot arm to move to a selected component and pick it from the conveyor belt.
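
As a rough sketch of the coordinate hand-off described above (not SVIA's software), detected part centres in the image can be mapped into robot coordinates on the belt plane with a previously calibrated 2-D transform and then ordered for picking; the calibration values and picking rule here are illustrative assumptions.

    # Map detected part centres (pixels) to robot XY on the belt plane (mm) and
    # choose a pick order. Calibration A, b and the ordering rule are assumed.
    import numpy as np

    A = np.array([[0.25, 0.00],
                  [0.00, 0.25]])       # assumed pixel-to-mm scaling/rotation
    b = np.array([150.0, -80.0])       # assumed offset of the image origin, mm

    def pixels_to_robot(parts_px):
        """Convert an (N, 2) array of part centres in pixels to robot XY in mm."""
        return parts_px @ A.T + b

    detections_px = np.array([[310.0, 205.0], [520.0, 410.0]])
    targets_mm = pixels_to_robot(detections_px)
    pick_order = np.argsort(np.linalg.norm(targets_mm, axis=1))  # nearest part first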

Originality/value

Describes how the modules are easy to utilise when products or production lines change.

Details

Assembly Automation, vol. 26 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 28 June 2018

Haibo Feng, Yanwu Zhai and Yili Fu

Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL surgery to…

Abstract

Purpose

Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL surgery to effectively improve the visualization of surgical robot systems for relatively complex surgical procedures.

Design/methodology/approach

In this paper, a new master-slave magnetic anchoring vision robotic system for SPL surgery is proposed. A lighting distribution analysis for the imaging unit of the vision robot was carried out to guarantee illumination uniformity in the workspace during SPL surgery. Moreover, the cleaning force applied to the lens of the camera was measured to assess safety for the abdominal wall, and a performance assessment of the system was carried out.
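
As a simple illustration of what a lighting-distribution check can involve (not the paper's analysis), the sketch below estimates relative illuminance over a planar patch of the workspace from point-like LEDs, using the inverse-square law with cosine falloff, and reports a min/max uniformity ratio; the LED layout and the metric are illustrative assumptions.

    # Relative illuminance over a workspace plane from assumed point-like LEDs.
    import numpy as np

    LEDS = np.array([[-0.02, 0.0, 0.08],
                     [ 0.02, 0.0, 0.08]])   # assumed LED positions above the plane, m

    def uniformity(half_width=0.05, n=41):
        """Min/max illuminance ratio over a square patch of the z = 0 plane."""
        xs = np.linspace(-half_width, half_width, n)
        X, Y = np.meshgrid(xs, xs)
        E = np.zeros_like(X)
        for lx, ly, lz in LEDS:
            r2 = (lx - X) ** 2 + (ly - Y) ** 2 + lz ** 2
            E += lz / r2 ** 1.5              # inverse-square falloff times cos(theta)
        return float(E.min() / E.max())

    print(f"illumination uniformity ratio: {uniformity():.2f}")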

Findings

Extensive experimental results for illumination, control, cleaning force and functionality tests have indicated that the proposed system shows excellent performance in providing visual feedback.

Originality/value

The main contribution of this paper lies in the development of a magnetic anchoring vision robot system that successfully improves the ability to clean the lens and to avoid blind areas in the field of view.

Details

Industrial Robot: An International Journal, vol. 45 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 March 1987

Bill Vogeley

Abstract

Many industrial applications could benefit from line imaging‐based edge sensors rather than full‐scale vision systems, a leading specialist argues.

Details

Sensor Review, vol. 7 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 26 October 2018

Biao Mei, Weidong Zhu and Yinglin Ke

Aircraft assembly demands high position accuracy of drilled fastener holes. Automated drilling is a key technology to fulfill the requirement. The purpose of the paper is to…

Abstract

Purpose

Aircraft assembly demands high position accuracy of drilled fastener holes. Automated drilling is a key technology for fulfilling this requirement. The purpose of the paper is to conduct positioning variation analysis and control for automated drilling to achieve high positioning accuracy.

Design/methodology/approach

The nominal and varied connective models of automated drilling are constructed for positioning variation analysis. The principle of a strategy for reducing positioning variation in drilling, which shortens the positioning variation chain with the aid of an industrial camera-based vision system, is explored. Moreover, other strategies for positioning variation control are developed based on mathematical analysis to further reduce the position errors of the drilled fastener holes.
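
A back-of-the-envelope stack-up shows why shortening the variation chain with a camera reference helps; the root-sum-square combination below and its standard deviations are illustrative assumptions, not the paper's connective models or measured data.

    # Root-sum-square stack-up of independent positioning variation sources (mm, 1-sigma).
    import numpy as np

    def rss(sigmas_mm):
        """Combine independent variation sources by root-sum-square."""
        return float(np.sqrt(np.sum(np.square(sigmas_mm))))

    long_chain = rss([0.05, 0.08, 0.03])   # assumed chain: fixture + part location + machine
    short_chain = rss([0.02, 0.03])        # assumed chain with camera referencing
    print(f"without vision ~ {long_chain:.3f} mm, with vision ~ {short_chain:.3f} mm (1-sigma)")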

Findings

The propagation and accumulation of an automated drilling system's positioning variation are explored. The principle of reducing positioning variation in automated drilling using a monocular vision system is discussed from the viewpoint of the variation chain.

Practical implications

The strategies for reducing positioning variation, rooted in the constructed positioning variation models, have been applied to a machine-tool-based automated drilling system. The system was developed for the wing assembly of an aircraft at the Aviation Industry Corporation of China.

Originality/value

Propagation, accumulation and control of positioning variation in automated drilling are comprehensively explored. On this basis, the positioning accuracy in automated drilling is controlled to below 0.13 mm, which meets the requirement for the assembly of the aircraft.
