Search results

1 – 10 of over 4000
Article
Publication date: 1 December 1995

Don Braggins

Discusses the background of robot vision systems and examines why vision‐guided motion for robots hasn’t lived up to the early promise. Outlines the different types of robot…

Abstract

Discusses the background of robot vision systems and examines why vision‐guided motion for robots hasn’t lived up to the early promise. Outlines the different types of robot vision available and considers the limitations of “computer vision” in most commercial applications. Looks at the difficulties of making effective use of information from a two‐dimensional vision system to guide a robot working in a three‐dimensional environment, and at some of the possible solutions. Discusses future developments and concludes that, in the short term, it is probably the opening up of programming to a larger group of potential users, with the facility of a graphical user interface, which will have the greatest impact on the uptake of vision for robots.
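
A common workaround for the 2D‐to‐3D guidance problem described above is to constrain the part to a known plane (a conveyor or work table), so that a calibrated camera's 2D detection can be back‐projected onto that plane to recover a 3D grasp point. The sketch below illustrates the idea only; it is not drawn from the article, and the intrinsics and camera pose are hypothetical.

```python
import numpy as np

def pixel_to_plane_point(u, v, K, R, t, plane_n, plane_d):
    """Back-project pixel (u, v) onto the world plane n.X + d = 0.

    K    : 3x3 camera intrinsic matrix
    R, t : world-to-camera rotation (3x3) and translation (3,)
    """
    # Viewing ray for the pixel, expressed in camera coordinates
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Same ray and the camera centre, expressed in world coordinates
    ray_world = R.T @ ray_cam
    cam_centre = -R.T @ t
    # Intersect cam_centre + s * ray_world with the plane n.X + d = 0
    s = -(plane_n @ cam_centre + plane_d) / (plane_n @ ray_world)
    return cam_centre + s * ray_world

# Hypothetical calibration: camera looking straight at a table 1 m away
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 1.0])
table = (np.array([0.0, 0.0, 1.0]), 0.0)    # the plane z = 0 in the world frame
grasp_point = pixel_to_plane_point(350, 260, K, R, t, *table)
print(grasp_point)                          # 3-D point to send to the robot
```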

Details

Industrial Robot: An International Journal, vol. 22 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 January 1991

D.F.H. Wolfe, S.W. Wijesoma and R.J. Richards

Tasks in automated manufacturing and assembly increasingly involve robot operations guided by vision systems. The traditional “look‐and‐move” approach to linking machine…

Abstract

Tasks in automated manufacturing and assembly increasingly involve robot operations guided by vision systems. The traditional “look‐and‐move” approach to linking machine vision systems and robot manipulators which is generally used in these operations relies heavily on accurate camera to real‐world calibration processes and on highly accurate robot arms with well‐known kinematics. As a consequence, the cost of robot automation has not been justifiable in many applications. This article describes a novel real‐time vision control strategy giving “eye‐to‐hand co‐ordination” which offers good performance even in the presence of significant vision system miscalibrations and kinematic model parametric errors. This strategy offers the potential for low cost vision‐guided robots.
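
The article's “eye‐to‐hand co‐ordination” strategy is not detailed in this abstract. As a general illustration of why closed‐loop visual control tolerates miscalibration better than open‐loop “look‐and‐move”, here is a minimal sketch of a classical image‐based visual servoing step (not the authors' method): the camera velocity is recomputed from the current image error at each cycle, so modest errors in the interaction matrix or in the depth estimates usually still let the error converge. All numbers below are hypothetical.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian for a point feature at normalised image
    coordinates (x, y) and estimated depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_step(features, targets, depths, gain=0.5):
    """One image-based visual-servoing step: a camera twist that reduces
    the current image error, even with only approximate depth estimates."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(targets)).ravel()
    # Pseudo-inverse of the stacked Jacobian maps image error to camera motion
    return -gain * np.linalg.pinv(L) @ error   # [vx, vy, vz, wx, wy, wz]

# Two tracked point features, their desired image positions, rough depths
twist = ibvs_step(features=[(0.10, 0.05), (-0.08, 0.02)],
                  targets=[(0.00, 0.00), (-0.10, 0.00)],
                  depths=[0.6, 0.6])
print(twist)
```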

Details

Assembly Automation, vol. 11 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 28 June 2018

Haibo Feng, Yanwu Zhai and Yili Fu

Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL…

Abstract

Purpose

Surgical robot systems have been used in single-port laparoscopy (SPL) surgery to improve patient outcomes. This study aims to develop a vision robot system for SPL surgery to effectively improve the visualization of surgical robot systems for relatively complex surgical procedures.

Design/methodology/approach

In this paper, a new master-slave magnetic anchoring vision robot system for SPL surgery is proposed. A lighting distribution analysis for the imaging unit of the vision robot was carried out to guarantee illumination uniformity in the workspace during SPL surgery. Moreover, the cleaning force applied to the camera lens was measured to assess safety for the abdominal wall, and a performance assessment of the system was carried out.
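
The paper's lighting distribution analysis is not reproduced in the abstract. As a rough, hypothetical sketch of how illumination uniformity over a workspace plane can be checked, the snippet below treats each LED as a Lambertian point source aimed at the plane and sums cosine‐weighted, inverse‐square contributions over a grid; the geometry and source powers are assumptions, not values from the paper.

```python
import numpy as np

def irradiance_map(led_positions, plane_z, half_size=0.05, n=61, power=1.0):
    """Approximate irradiance on a square patch of the plane z = plane_z,
    treating each LED as a Lambertian point source aimed along -z."""
    xs = np.linspace(-half_size, half_size, n)
    X, Y = np.meshgrid(xs, xs)
    E = np.zeros_like(X)
    for lx, ly, lz in led_positions:
        dx, dy, dz = X - lx, Y - ly, plane_z - lz
        r2 = dx * dx + dy * dy + dz * dz
        cos_theta = np.abs(dz) / np.sqrt(r2)
        # Emission cosine x incidence cosine (equal here), inverse-square falloff
        E += power * cos_theta ** 2 / r2
    return E

# Hypothetical ring of four LEDs, 40 mm above the illuminated plane
leds = [(0.02, 0.0, 0.0), (-0.02, 0.0, 0.0), (0.0, 0.02, 0.0), (0.0, -0.02, 0.0)]
E = irradiance_map(leds, plane_z=-0.04)
print(f"uniformity ratio: {E.min() / E.max():.2f}")   # 1.0 = perfectly uniform
```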

Findings

Extensive experimental results for the illumination, control, cleaning force and functionality tests indicate that the proposed system performs excellently in providing visual feedback.

Originality/value

The main contribution of this paper lies in the development of a magnetic anchoring vision robot system that successfully improves the ability to clean the lens and to avoid blind areas in the field of view.

Details

Industrial Robot: An International Journal, vol. 45 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 December 1995

Justin Testa

Looks at the move towards integrating robots with high‐performance, fully programmable vision systems. Outlines the problems of traditional vision‐aided robotics and the…

Downloads
161

Abstract

Looks at the move towards integrating robots with high‐performance, fully programmable vision systems. Outlines the problems of traditional vision‐aided robotics and the advantage of modern machine vision technology. The latest generation of machine vision systems combines the capabilities of the “C” programming language with a graphic “point‐and‐click” application development environment based on Microsoft Windows: the Checkpoint system. Describes how the Checkpoint vision system works and the applications of the new vision‐guided robots. Concludes that the new systems now make it possible for users and system integrators to bring the advantages of vision‐guided robotics to general manufacturing.

Details

Industrial Robot: An International Journal, vol. 22 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 April 1994

John Pretlove

Describes research into the use of external sensors for robot systems to allow them to react intelligently to unforeseen events in the production process and irregularities…

Downloads
214

Abstract

Describes research into the use of external sensors for robot systems to allow them to react intelligently to unforeseen events in the production process and irregularities in products. Examines the use of active vision systems with robot controllers and the integration of the two systems. Concludes that this enhances the ability of an industrial robot system to cope with variations and unforeseen circumstances in the workcell or the workpiece.

Details

Industrial Robot: An International Journal, vol. 21 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 15 June 2012

Xi‐Zhang Chen, Yu‐Ming Huang and Shan‐ben Chen

The stereo vision technique simulates the function of the human eyes to observe the world, and can be used to compute the spatial information of the weld seam in the robot…

Abstract

Purpose

The stereo vision technique simulates the function of the human eyes to observe the world, and can be used to compute the spatial information of the weld seam in the robot welding field. Fixing two cameras on the end effector of the robot is a typical configuration when stereo vision is used in intelligent robot welding. In order to analyse the effect of the vision system configuration on vision computing, an accuracy analysis model of vision computing is constructed, which provides a good guide for the construction and application of stereo vision systems in the welding robot field.
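
The accuracy analysis model itself is not given in the abstract. As a minimal, generic sketch of the underlying computation, the spatial position of a seam point seen by two calibrated cameras can be triangulated as the midpoint of the closest approach of the two back‐projected viewing rays; cam1 and cam2 below are hypothetical (K, R, t) calibration tuples, not the paper's configuration.

```python
import numpy as np

def backproject_ray(u, v, K, R, t):
    """Viewing ray (origin, unit direction) in the world frame for pixel (u, v),
    given intrinsics K and a world-to-camera pose (R, t)."""
    d = R.T @ (np.linalg.inv(K) @ np.array([u, v, 1.0]))
    return -R.T @ t, d / np.linalg.norm(d)

def triangulate_midpoint(uv1, uv2, cam1, cam2):
    """Seam point as the midpoint of the shortest segment between the two
    viewing rays of the left and right cameras (each cam = (K, R, t))."""
    o1, d1 = backproject_ray(*uv1, *cam1)
    o2, d2 = backproject_ray(*uv2, *cam2)
    # Closest points o1 + s*d1 and o2 + r*d2: solve [d1, -d2] @ [s, r] = o2 - o1
    A = np.stack([d1, -d2], axis=1)
    s, r = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + r * d2))
```

Because the result depends directly on the two camera poses and on the pixel measurements, errors in the sensor's structural parameters, in robot positioning and in image extraction all propagate into the computed seam point, which is the kind of effect the paper's model and experiments quantify.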

Design/methodology/approach

A typical stereo vision system fixed on a welding robot is designed and constructed to compute the position information of the spatial seam. A simplified error analysis model for two arbitrarily placed cameras is built to analyse the effect of the sensor's structural parameters on vision computing accuracy. The research combines model analysis with experimental verification, and experiments on image extraction and robot movement accuracy are also designed to analyse the effect of equipment accuracy and the related processing procedures in vision technology.

Findings

The effects of the welding robot's repeatability (positioning accuracy) and TCP calibration error on visual computing are also analysed and tested. The results show that the effect of repeatability on computing accuracy is no larger than 0.3 mm. TCP calibration error, however, affects the computing accuracy greatly: when it is larger than 0.5, re‐calibration is necessary. The accuracy analysis and experimental techniques in this paper can guide research on three‐dimensional information computing by stereo vision and improve computing accuracy.

Originality/value

The accuracy of the seam position information is affected by many interacting factors. Systematic experiments and a simplified error analysis model are designed and established; the main factors, such as the sensor's configuration parameters, the accuracy of the arc welding robot and the accuracy of image recognition, are included in the model and experiments. The model and experimental method are significant for the design of visual sensors and the improvement of computing accuracy.

Details

Industrial Robot: An International Journal, vol. 39 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 December 2005

Christine Connolly

Examines a recently launched integration of smart cameras into industrial robots to make them responsive to a changing environment.

Abstract

Purpose

Examines a recently launched integration of smart cameras into industrial robots to make them responsive to a changing environment.

Design/methodology/approach

Reviews the capabilities of the vision‐enabled robot, citing installations in Sweden and the UK, then describes the robot and vision programming procedure.

Findings

Vision integration opens up a range of new possibilities such as simultaneous product handling and inspection, as well as providing real‐time robot guidance. Standardisation plays an extremely valuable role in building integrated systems from disparate technological elements. Here ActiveX web standards, Ethernet connectivity, a standard interchangeable family of cameras and a common controller for a whole range of robots are the keys to the synthesis of a powerful new combination of robot and machine vision.

Originality/value

Draws to the attention of industrial engineers the availability of a family of robots with integrated machine vision.

Details

Industrial Robot: An International Journal, vol. 32 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 January 2008

Richard Bloss

This paper aims to review the 2007 Bi‐Annual International Robotics and Vision Show in Chicago.

Downloads
1510

Abstract

Purpose

This paper aims to review the 2007 Bi‐Annual International Robotics and Vision Show in Chicago.

Design/methodology/approach

The paper presents in‐depth interviews with exhibitors of robots and robotic vision systems.

Findings

The marriage of vision and robotics is changing the nature of robotics. In the past, a robot was a “dumb” mechanism which followed pre‐programmed directions. With the addition of vision and added computing power, the robot is starting to “find its way” in the application. Innovations in grippers, robotic design and simulation also shared the stage.

Practical implications

Applications, which previously did not lend themselves to a robotic solution because the requirement could not be completely pre‐programmed, may now be possible. With vision, the robot can adjust for uncertainties in real‐time and perform other tasks such as inspection.

Originality/value

This paper reviews the 2007 Bi‐Annual International Robotics and Vision Show, Chicago.

Details

Industrial Robot: An International Journal, vol. 35 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 January 1984

ASEA, the Swedish robot builder, has introduced a robust robot vision system which is easy to program by the person on the shopfloor. John Mortimer reports.

Abstract

ASEA, the Swedish robot builder, has introduced a robust robot vision system which is easy to program by the person on the shopfloor. John Mortimer reports.

Details

Sensor Review, vol. 4 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 March 1999

Ulrich Nehmzow

Robot learning, be it unsupervised, supervised or self‐supervised, is one method of dealing with noisy, inconsistent, or contradictory data that has proven useful in…

Abstract

Robot learning, be it unsupervised, supervised or self‐supervised, is one method of dealing with noisy, inconsistent, or contradictory data that has proven useful in mobile robotics. In all but the simplest cases of robot learning, raw sensor data cannot be used directly as input to the learning process. Instead, some “meaningful” preprocessing has to be applied to the raw data before the learning controller can use the sensory perceptions as input. In this paper, two instances of supervised and unsupervised robot learning experiments using vision input are presented. The vision sensor signal preprocessing necessary to achieve successful learning is also discussed.
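
The paper's specific preprocessing is not described in this abstract. As a minimal, hypothetical example of the kind of “meaningful” preprocessing referred to, the sketch below reduces a raw grey‐level frame to a coarse, normalised feature vector that a supervised or unsupervised learner could take as input; the grid size and the normalisation scheme are assumptions.

```python
import numpy as np

def preprocess_frame(frame, grid=(4, 4)):
    """Reduce a raw grey-level image to a coarse, normalised feature vector
    so a learning controller sees a low-dimensional input."""
    h, w = frame.shape
    gh, gw = grid
    # Average grey level in each cell of a gh x gw grid
    cells = frame[:h - h % gh, :w - w % gw].reshape(gh, h // gh, gw, w // gw)
    features = cells.mean(axis=(1, 3)).ravel().astype(float)
    # Zero mean, unit variance so global lighting changes matter less
    features -= features.mean()
    std = features.std()
    return features / std if std > 0 else features

# Illustration: a fake 64x64 frame becomes a 16-dimensional training input
frame = np.random.randint(0, 256, size=(64, 64))
x = preprocess_frame(frame)
print(x.shape)   # (16,)
```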

Details

Industrial Robot: An International Journal, vol. 26 no. 2
Type: Research Article
ISSN: 0143-991X

1 – 10 of over 4000