Search results

21 – 30 of over 80,000
Article
Publication date: 1 January 1983


Abstract

The first 2½D vision system in an industrial application in the UK went into use last month when the UK's 600 Group officially opened its highly sophisticated flexible manufacturing system, known as SCAMP (Six hundred group Computer Aided Manufacturing Project) at its new company SCAMP Systems Ltd. in Colchester. Anna Kochan talks to British Robotics Systems Ltd., the company which developed SCAMP's vision system.

Details

Sensor Review, vol. 3 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 September 2003

Paul G. Ranky



Abstract

According to a recent study by Active‐Media Research, USA, total sales in advanced vision systems, including vision‐guided mobile devices/robots, are expected to soar from US $665 million in the year 2000 to more than US $17 billion by 2005. The key technologies contributing to this massive growth include major advancements in vision imaging science and control, integrated with increasingly “machine intelligent” and robust industrial robotics. This article gives an overview of key machine vision advancements and demonstrates some advanced vision application examples.

Details

Sensor Review, vol. 23 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 1 June 2002

Anna Kochan



Abstract

Outlines the factors causing the automotive industry to increase machine vision application; reviews new developments in vision technology that are targeted at expanding and improving its use in the automotive industry; and reports on an innovative application of vision‐guided robotics at DaimlerChrysler.

Details

Sensor Review, vol. 22 no. 2
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 1 October 2005

B.P. Amavasai, F. Caparrelli, A. Selvan, M. Boissenin, J.R. Travis and S. Meikle


Abstract

Purpose

To develop customised machine vision methods for closed‐loop micro‐robotic control systems. The micro‐robots have applications in areas that require micro‐manipulation and micro‐assembly in the micron and sub‐micron range.

Design/methodology/approach

Several novel techniques have been developed to perform calibration, object recognition and object tracking in real time under a customised high‐magnification camera system. These new methods combine statistical, neural and morphological approaches.

Findings

An in‐depth view of the machine vision sub‐system that was designed for the European MiCRoN project (project no. IST‐2001‐33567) is provided. The issue of cooperation arises when several robots with a variety of on‐board tools are placed in the working environment. By combining multiple vision methods, the information obtained can be used effectively to guide the robots in achieving the pre‐planned tasks.

Research limitations/implications

Some of these techniques were developed for micro‐vision but could be extended to macro‐vision. The techniques developed here are robust to noise and occlusion so they can be applied to a variety of macro‐vision areas suffering from similar limitations.

Practical implications

The work here will expand the use of micro‐robots as tools to manipulate and assemble objects and devices in the micron range. It is foreseen that, as the requirement for micro‐manufacturing increases, techniques like those developed in this paper will play an important role for industrial automation.

Originality/value

This paper extends the use of machine vision methods into the micron range.

Details

Kybernetes, vol. 34 no. 9/10
Type: Research Article
ISSN: 0368-492X

Keywords
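
As a rough illustration of the tracking step described in the abstract above (a hypothetical sketch only, not the MiCRoN project's statistical, neural and morphological methods, which the abstract does not detail), the following Python snippet thresholds each frame and follows the intensity centroid of a single bright micro-object. The frame data, threshold value and blob size are invented for the example.

```python
# Minimal tracking sketch (assumed, not the authors' implementation): follow a
# bright object between frames by thresholding and taking its intensity centroid.
import numpy as np

def centroid(frame, threshold):
    """Return the (row, col) centroid of pixels above `threshold`, or None."""
    mask = frame > threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

def track(frames, threshold=0.5):
    """Yield successive centroid positions; a real closed-loop system would add
    calibration, object recognition and occlusion handling, as the paper does."""
    for frame in frames:
        pos = centroid(frame, threshold)
        if pos is not None:
            yield pos

# Usage with synthetic frames: a bright 3x3 blob drifting one pixel per frame.
frames = []
for t in range(5):
    f = np.zeros((64, 64))
    f[10 + t:13 + t, 20:23] = 1.0
    frames.append(f)
print(list(track(frames)))  # centroids move from (11.0, 21.0) to (15.0, 21.0)
```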

Article
Publication date: 1 February 1988


Abstract

A leading consultant offers advice on the biggest question facing would‐be vision system users: how to choose a system in the first place.

Details

Sensor Review, vol. 8 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 September 2005

Mario Peña‐Cabrera, Ismael Lopez‐Juarez, Reyes Rios‐Cabrera and Jorge Corona‐Castuera



Abstract

Purpose

To present a novel methodology for online recognition and classification of pieces in robotic assembly tasks and its application in an intelligent manufacturing cell.

Design/methodology/approach

The performance of industrial robots working in unstructured environments can be improved using visual perception and learning techniques. Object recognition is accomplished using an artificial neural network (ANN) architecture which receives a descriptive vector, called CFD&POSE, as its input. Experiments were carried out within a manufacturing cell using assembly parts.

Findings

This vector represents an innovative methodology for classification and identification of pieces in robotic tasks, providing fast recognition and pose estimation information in real time. The vector compresses 3D object data from assembly parts; it is invariant to scale, rotation and orientation, and supports a wide range of illumination levels.

Research limitations/implications

Provides vision guidance in assembly tasks. Current work addresses the use of ANNs for assembly and object recognition separately; future work is oriented towards using the same neural controller for all the different sensorial modes.

Practical implications

Intelligent manufacturing cells developed with multimodal sensor capabilities might use this methodology for future industrial applications, including robotic fixtureless assembly. The approach, in combination with the fast learning capability of ART networks, indicates its suitability for industrial robot applications, as demonstrated through experimental results.

Originality/value

This paper introduces a novel method which uses collections of 2D images to obtain very fast feature data – the “current frame descriptor vector” – of an object, using image projections and canonical-form geometry grouping for invariant object recognition.

Details

Assembly Automation, vol. 25 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords
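
The abstract above describes a general recipe: an invariant descriptor vector (CFD&POSE) fed to a neural classifier. The sketch below illustrates that recipe only in a hypothetical form; the descriptor here is a simple translation-, rotation- and scale-invariant distance histogram, and a nearest-prototype rule stands in for the paper's ANN/ART network, neither of which is reproduced from the paper.

```python
# Hypothetical sketch of the "invariant descriptor -> classifier" idea; not the
# paper's CFD&POSE vector or ART network.
import numpy as np

def shape_descriptor(mask, bins=16):
    """Histogram of pixel distances to the shape centroid, normalised by the
    mean distance: invariant to translation, rotation and scale."""
    rows, cols = np.nonzero(mask)
    pts = np.stack([rows, cols], axis=1).astype(float)
    centred = pts - pts.mean(axis=0)
    dist = np.linalg.norm(centred, axis=1)
    dist /= dist.mean() + 1e-9
    hist, _ = np.histogram(dist, bins=bins, range=(0, 3), density=True)
    return hist

def classify(descriptor, prototypes):
    """Nearest-prototype stand-in for the paper's neural classifier."""
    labels = list(prototypes)
    dists = [np.linalg.norm(descriptor - prototypes[k]) for k in labels]
    return labels[int(np.argmin(dists))]

# Usage: distinguish a square part from an elongated bar, regardless of position.
square = np.zeros((64, 64), bool); square[10:30, 10:30] = True
bar = np.zeros((64, 64), bool); bar[5:10, 5:55] = True
protos = {"square": shape_descriptor(square), "bar": shape_descriptor(bar)}
probe = np.zeros((64, 64), bool); probe[30:50, 34:54] = True  # shifted square
print(classify(shape_descriptor(probe), protos))              # -> "square"
```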

Article
Publication date: 1 January 1983

Clive Loughlin and Ed Hudson


Abstract

The advent of low cost miniature solid state cameras now makes eye‐in‐hand robot vision a practical possibility. This paper discusses the advantages of eye‐in‐hand vision and shows that with the Unimation VAL operating system it is easier to use than is possible with static overhead cameras.

Details

Sensor Review, vol. 3 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 28 August 2007

Hai Chao Li, Hong Ming Gao and Lin Wu


Abstract

Purpose

This paper aims to develop an approach for performing telerobotic arc welding in an unstructured environment.

Design/methodology/approach

A teleteaching approach is presented for an arc welding telerobotic system in an unstructured environment. An improved laser vision sensor enhances the precision of teleteaching the welding seam. A stereoscopic vision display system is developed to provide perception information about the remote environment, increasing the dexterity of the teleteaching process. The operator interacts with the system through a welding multi‐modal human‐machine interface, which integrates the teleteaching operation window, a status display window and a space mouse.

Findings

The sensor‐based teleteaching approach, which integrates laser vision sensing and a stereoscopic vision display, can perform arc welding of most welding seam trajectories in an unstructured environment. The approach reduces the workload of the human operator and improves the adaptability of the arc welding system.

Research limitations/implications

The paper provides a remote welding telerobotic approach that is suited to most unstructured environments.

Practical implications

The sensor‐based teleteaching approach provides the capability for a telerobotic system to be used in the remote welding field, which can shorten the incident response time and maintenance period in nuclear plant, space and underwater applications.

Originality/value

This paper introduces the sensor‐based teleteaching concept and the procedure for performing remote telerobotic arc welding.

Details

Industrial Robot: An International Journal, vol. 34 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords
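
To make the laser-vision step in the abstract above concrete, here is a hypothetical sketch of how a laser-stripe sensor can locate a weld seam: the stripe is extracted as the brightest row in each image column, and the seam is taken where the stripe deviates most from a straight-line fit. The image, groove shape and geometry are invented for the example; the paper's actual sensor, calibration and seam model are not described in the abstract.

```python
# Illustrative laser-stripe seam detection; assumptions, not the paper's sensor.
import numpy as np

def stripe_profile(img):
    """For each column, return the row of maximum brightness (the laser line)."""
    return img.argmax(axis=0).astype(float)

def seam_column(profile):
    """A groove bends the stripe; take the column where the profile deviates
    most from a straight-line fit as the seam centre."""
    cols = np.arange(profile.size)
    slope, intercept = np.polyfit(cols, profile, 1)
    residual = np.abs(profile - (slope * cols + intercept))
    return int(residual.argmax())

# Synthetic 100x200 image: a horizontal laser line at row 50 with a V-shaped
# bend centred at column 120, standing in for the weld groove.
img = np.zeros((100, 200))
line = 50 + np.clip(20 - np.abs(np.arange(200) - 120), 0, None)
img[line.astype(int), np.arange(200)] = 1.0
print(seam_column(stripe_profile(img)))  # ~120
```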

Article
Publication date: 1 February 1993

Kathy McWalter


Abstract

Visual feedback is integral to the PCB assembly process for a variety of tasks, including board alignment, component identification, and guidance of components onto boards. Most of those visually oriented tasks, however, have become increasingly challenging due to trends towards faster production lines, increased integration of fine‐pitch SMDs, and greater reliance on factory automation.

Details

Assembly Automation, vol. 13 no. 2
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 16 February 2021

Elena Villaespesa and Seth Crider


Abstract

Purpose

Based on the highlights of The Metropolitan Museum of Art's collection, the purpose of this paper is to examine the similarities and differences between the subject keywords tags assigned by the museum and those produced by three computer vision systems.

Design/methodology/approach

This paper uses computer vision tools to generate the data and the Getty Research Institute's Art and Architecture Thesaurus (AAT) to compare the subject keyword tags.

Findings

This paper finds that there are clear opportunities to use computer vision technologies to automatically generate tags that expand the terms used by the museum. This brings a new perspective to the collection that is different from the traditional art historical one. However, the study also surfaces challenges about the accuracy and lack of context within the computer vision results.

Practical implications

This finding has important implications for how these machine-generated tags complement the current taxonomies and vocabularies used in the collection database. Consequently, the museum needs to consider the selection process for choosing which computer vision system to apply to its collection, and to think critically about the kinds of tags it wishes to use, such as colors, materials or objects.

Originality/value

The study results add to the rapidly evolving field of computer vision within the art information context and provide recommendations of aspects to consider before selecting and implementing these technologies.

Details

Journal of Documentation, vol. 77 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords
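
As a hypothetical illustration of the comparison described in the abstract above, the snippet below computes the overlap between a museum's subject keywords and tags from a computer-vision service. The tag lists and the naive lowercase normalisation are invented; the study additionally maps terms through the Getty AAT, which is not reproduced here.

```python
# Sketch of tag-set comparison between museum keywords and CV-generated tags.
def normalise(tags):
    return {t.strip().lower() for t in tags}

def compare(museum_tags, cv_tags):
    m, c = normalise(museum_tags), normalise(cv_tags)
    shared = m & c
    return {
        "shared": sorted(shared),
        "museum_only": sorted(m - c),
        "cv_only": sorted(c - m),
        "jaccard": len(shared) / len(m | c) if m | c else 0.0,
    }

# Example with made-up tags for a single painting record.
museum = ["Portraits", "Men", "Oil paint"]
vision = ["portrait", "man", "beard", "painting"]
print(compare(museum, vision))
```

In this made-up example the naive matching finds no overlap at all ("Portraits" versus "portrait", "Men" versus "man"), which is exactly the kind of vocabulary gap that mapping both tag sets to a shared thesaurus such as the AAT is meant to close.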
