Search results

11 – 20 of over 83,000
Article
Publication date: 1 March 1987

Bill Vogeley

Abstract

Many industrial applications could benefit from line imaging‐based edge sensors rather than full‐scale vision systems, a leading specialist argues.

Details

Sensor Review, vol. 7 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 September 2003

Leigh Simpson

Abstract

The recent introduction of low‐cost vision sensors has greatly increased the range of applications for vision. Within the arena of automated assembly there are a number of tasks to which vision is suited, and these are outlined. The idea of distributing vision throughout the assembly process, networked via Ethernet, is also examined.

Details

Assembly Automation, vol. 23 no. 3
Type: Research Article
ISSN: 0144-5154

Keywords

Article
Publication date: 1 January 1984

Abstract

ASEA, the Swedish robot builder, has introduced a robust robot vision system which is easy to program by the person on the shopfloor. John Mortimer reports.

Details

Sensor Review, vol. 4 no. 1
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 12 May 2020

Jing Bai, Yuchang Zhang, Xiansheng Qin, Zhanxi Wang and Chen Zheng

Abstract

Purpose

The purpose of this paper is to present a visual detection approach to predict the poses of target objects placed in arbitrary positions before completing the corresponding tasks in mobile robotic manufacturing systems.

Design/methodology/approach

A hybrid visual detection approach that combines monocular vision and laser ranging is proposed based on an eye-in-hand vision system. The laser displacement sensor is adopted to achieve normal alignment for an arbitrary plane and obtain depth information. The monocular camera measures the two-dimensional image information. In addition, a robot hand-eye relationship calibration method is presented in this paper.
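A minimal sketch of how such a hybrid measurement might fuse the two sensors, assuming a pinhole camera model and an already-calibrated hand-eye transform (R, t). All function names and numbers here are illustrative, not taken from the paper:

```python
def pixel_to_camera(u, v, z, fx, fy, cx, cy):
    """Back-project a pixel to a camera-frame 3D point with the pinhole
    model; the depth z is supplied by the laser displacement sensor."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

def apply_hand_eye(p, R, t):
    """Map a camera-frame point into the end-effector frame via the
    calibrated hand-eye transform (R, t)."""
    x, y, z = p
    return tuple(R[i][0]*x + R[i][1]*y + R[i][2]*z + t[i] for i in range(3))

# Example: camera axes aligned with the end-effector, offset 50 mm in x.
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [50.0, 0.0, 0.0]
p_cam = pixel_to_camera(u=400, v=300, z=200.0, fx=800.0, fy=800.0, cx=320.0, cy=240.0)
p_ee = apply_hand_eye(p_cam, R, t)
print(p_cam)  # (20.0, 15.0, 200.0)
print(p_ee)   # (70.0, 15.0, 200.0)
```

The laser sensor supplies the one quantity a single camera cannot measure (depth), after which the hand-eye transform is a fixed rigid motion.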

Findings

First, a hybrid visual detection approach for mobile robotic manufacturing systems is proposed. This detection approach is based on an eye-in-hand vision system consisting of one monocular camera and three laser displacement sensors and it can achieve normal alignment for an arbitrary plane and spatial positioning of the workpiece. Second, based on this vision system, a robot hand-eye relationship calibration method is presented and it was successfully applied to a mobile robotic manufacturing system designed by the authors’ team. As a result, the relationship between the workpiece coordinate system and the end-effector coordinate system could be established accurately.

Practical implications

This approach can quickly and accurately establish the relationship between the coordinate system of the workpiece and that of the end-effector. The normal alignment accuracy of the hand-eye vision system was less than 0.5° and the spatial positioning accuracy could reach 0.5 mm.

Originality/value

This approach can achieve normal alignment for arbitrary planes and spatial positioning of the workpiece and it can quickly establish the pose relationship between the workpiece and end-effector coordinate systems. Moreover, the proposed approach can significantly improve the work efficiency, flexibility and intelligence of mobile robotic manufacturing systems.

Details

Industrial Robot: the international journal of robotics research and application, vol. 47 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 April 1994

John Pretlove

Abstract

Describes research into the use of external sensors for robot systems to allow them to react intelligently to unforeseen events in the production process and irregularities in products. Examines the use of active vision systems with robot controllers and the integration of the two systems. Concludes that this enhances the ability of an industrial robot system to cope with variations and unforeseen circumstances in the workcell or the workpiece.

Details

Industrial Robot: An International Journal, vol. 21 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 1 March 1995

Kevin Duarte and Steven LeBlanc

Abstract

Describes how a computer disk and storage media company [KAO Infosystems of the USA] uses machine vision technology to maintain the quality of its products by isolating problems and identifying ways of improving the manufacturing process. Emphasises the need to fully define applications and evaluate the technology before introducing a new element to an automation process, and stresses the need to integrate the vision system hardware with the plant’s existing manufacturing equipment.

Details

Sensor Review, vol. 15 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 28 June 2011

Caixia Yan and Qiang Zhan

Abstract

Purpose

The purpose of this paper is to describe how the authors designed a small satellite formation ground test bed in order to study the small satellite formation flying technologies, such as autonomous formation control and network communication. As one of the subsystems, the vision detection system is responsible for the pose (position and orientation) detection of the three small satellite simulators, each of which is composed of a wheeled mobile robot and an on‐board micro control unit. In this paper, the rapid vision locating of the three small satellite simulators in the wide field is discussed.

Design/methodology/approach

The scene size required by the test bed exceeds the field of view of a single camera, so obtaining the complete scene becomes a difficulty. Based on image mosaicking, a vision system composed of two cameras is designed to capture the scene simultaneously; after the two overlapping images are rapidly stitched, a real‐time view of the whole scene is obtained. In addition, a new color tag representing the pose of the small satellite simulators is designed, which can be easily identified.
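In the simplest case, two rigidly mounted cameras whose horizontal overlap is fixed and known in advance, the rapid stitching step can be sketched as follows. This is a hypothetical simplification; the paper's actual mosaic algorithm is not detailed in the abstract:

```python
def stitch_rows(left, right, overlap):
    """Stitch two same-height images (lists of pixel rows) whose fixed
    horizontal overlap is known in advance, averaging the shared band.
    With rigidly mounted cameras the overlap never changes, so no
    per-frame feature matching is needed."""
    out = []
    for lrow, rrow in zip(left, right):
        blend = [(a + b) // 2 for a, b in zip(lrow[-overlap:], rrow[:overlap])]
        out.append(lrow[:-overlap] + blend + rrow[overlap:])
    return out

left = [[10, 20, 30, 40]]   # one-row "image" from the left camera
right = [[40, 50, 60, 70]]  # the right camera repeats the last column
print(stitch_rows(left, right, overlap=1))  # [[10, 20, 30, 40, 50, 60, 70]]
```

Precomputing the overlap once is what makes the per-frame stitch cheap enough for real-time use.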

Findings

A real‐time visual locating system for multiple mobile robots is introduced, in which a global search algorithm and a track search algorithm are combined to identify the real‐time poses of the robots. A switching strategy between the two algorithms is given to ensure accuracy and improve retrieval speed.

Originality/value

The paper shows how, without camera calibration, the pose of each small satellite simulator in the world coordinate system can be calculated directly by a coordinate transformation from the image coordinate system to the world coordinate system, based on relative measurement. The accuracy and real‐time performance of the vision detection system have been validated by experiments on locating static tags and dynamically tracking the three small satellite simulators.
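The relative-measurement idea, mapping image coordinates straight to world coordinates from reference tags with no camera calibration, can be sketched as a two-point similarity fit. This is a hedged simplification; the paper's actual transformation is not given in the abstract:

```python
def fit_similarity(img_pts, world_pts):
    """Fit a scale + rotation + translation map from two reference
    points whose world coordinates are known, using complex numbers
    (multiplication by a complex number is a scaled rotation)."""
    a0, a1 = (complex(*p) for p in img_pts)
    b0, b1 = (complex(*p) for p in world_pts)
    s = (b1 - b0) / (a1 - a0)   # combined scale and rotation
    t = b0 - s * a0             # translation
    return lambda p: s * complex(*p) + t

# Two tags seen at image (0, 0) and (100, 0) lie at world (10, 10)
# and (10, 110): the view is rotated 90 degrees at unit scale.
to_world = fit_similarity([(0, 0), (100, 0)], [(10, 10), (10, 110)])
w = to_world((50, 0))
print((round(w.real, 3), round(w.imag, 3)))  # (10.0, 60.0)
```

This works only if the scene is planar and the camera views it fronto-parallel; a full calibration would otherwise be needed to remove perspective distortion.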

Details

Sensor Review, vol. 31 no. 3
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 30 January 2020

Guoyang Wan, Fudong Li, Wenjun Zhu and Guofeng Wang

Abstract

Purpose

The positioning and grasping of large-size objects have always suffered from low positioning accuracy, slow grasping speed and high application cost compared with ordinary small-part tasks. This paper aims to propose and implement a binocular vision-guided grasping system for large-size objects with an industrial robot.

Design/methodology/approach

To guide the industrial robot to grasp the object with high position and pose accuracy, this study measures the pose of the object by extracting and reconstructing three non-collinear feature points on it. To improve the precision and robustness of the pose measurement, a coarse-to-fine positioning strategy is proposed. First, a coarse but stable feature is chosen to locate the object in the image and provide initial regions for the fine features. Second, three circular holes are chosen as the fine features; their centers are extracted with a robust ellipse-fitting strategy and thus determine the precise pose and position of the object.
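At the fine stage, the problem reduces to recovering hole centers from edge points. As a hedged illustration — the paper uses a robust ellipse fit, but a least-squares circle fit (the Kasa method) shows the same center-extraction step in its simplest form:

```python
def fit_circle(points):
    """Least-squares circle fit (Kasa method): solve
    x^2 + y^2 + D*x + E*y + F = 0 for (D, E, F) via the normal
    equations; the center is (-D/2, -E/2)."""
    # Accumulate A^T A and A^T b for rows [x, y, 1], b = -(x^2 + y^2).
    M = [[0.0] * 3 for _ in range(3)]
    v = [0.0] * 3
    for x, y in points:
        row = (x, y, 1.0)
        b = -(x * x + y * y)
        for i in range(3):
            v[i] += row[i] * b
            for j in range(3):
                M[i][j] += row[i] * row[j]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, 3):
            f = M[r][i] / M[i][i]
            for c in range(i, 3):
                M[r][c] -= f * M[i][c]
            v[r] -= f * v[i]
    sol = [0.0] * 3
    for i in range(2, -1, -1):
        sol[i] = (v[i] - sum(M[i][j] * sol[j] for j in range(i + 1, 3))) / M[i][i]
    D, E, _ = sol
    return (-D / 2, -E / 2)

# Four edge points on a hole centered at (3, 4) with radius 5.
pts = [(8, 4), (-2, 4), (3, 9), (3, -1)]
print(tuple(round(c, 6) for c in fit_circle(pts)))  # (3.0, 4.0)
```

A robust variant would iterate this fit while discarding points with large residuals, which is roughly what "robust ellipse fitting" buys when hole edges are partially occluded or noisy.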

Findings

Experimental results show that the proposed system achieves high robustness, with positioning accuracy of ±1 mm and pose accuracy of ±0.5 degree.

Originality/value

It is a high-accuracy method that can be used for industrial robot vision guidance and grasp location.

Details

Sensor Review, vol. 40 no. 1
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 1 February 1986

Sarah Gardner

Abstract

‘Machine intelligence and vision systems in practice’ was the subject of a recent seminar held by the Institution of Production Engineering.

Details

Sensor Review, vol. 6 no. 2
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 15 June 2012

Xi‐Zhang Chen, Yu‐Ming Huang and Shan‐ben Chen

Abstract

Purpose

Stereo vision simulates the function of the human eyes to observe the world and can be used to compute the spatial information of a weld seam in the robot welding field. A typical configuration fixes two cameras on the end effector of the robot when stereo vision is used in intelligent robot welding. To analyse the effect of the vision system's configuration on vision computing, an accuracy analysis model of vision computing is constructed, which is a good guide for the construction and application of stereo vision systems in the welding robot field.

Design/methodology/approach

A typical stereo vision system fixed on a welding robot is designed and constructed to compute the position of the spatial seam. A simplified error analysis model for two arbitrarily placed cameras is built to analyse the effect of the sensor's structural parameters on vision computing accuracy. The methodology combines model analysis with experimental verification, and experiments on image extraction and robot movement accuracy are also designed to analyse the effect of equipment accuracy and the related processing procedures on the vision results.
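For the special case of a rectified, parallel-axis stereo pair (the paper treats arbitrarily placed cameras, so this is a deliberately simplified sketch with illustrative numbers), the seam-point computation and its sensitivity to the structural parameters look like this:

```python
def triangulate(u_left, u_right, v, f, B, cx, cy):
    """Triangulate a seam point from a rectified stereo pair:
    depth z = f*B/d, with disparity d = u_left - u_right,
    focal length f (pixels) and baseline B (mm)."""
    d = u_left - u_right
    z = f * B / d
    x = (u_left - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)

# f = 800 px, baseline B = 100 mm, principal point (320, 240).
p = triangulate(u_left=400, u_right=360, v=240, f=800.0, B=100.0, cx=320.0, cy=240.0)
print(p)  # (200.0, 0.0, 2000.0)
```

Differentiating z = f·B/d gives |dz/dd| = f·B/d², so at this working distance a 1-px disparity error moves the computed depth by about 50 mm, which illustrates why an error model over the configurable parameters (f, B, camera placement) matters as much as image-extraction accuracy.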

Findings

The effects of the welding robot's repeatability and TCP calibration error on visual computing are also analysed and tested. The results show that the effect of repeatability on computing accuracy is no larger than 0.3 mm. However, TCP calibration error affects the computing accuracy greatly: when the calibrated error of the TCP is larger than 0.5, re‐calibration is necessary. The accuracy analysis and experimental technique in this paper can guide research on three‐dimensional information computing by stereo vision and improve computing accuracy.

Originality/value

The accuracy of seam position information is affected by many interacting factors. Systematic experiments and a simplified error analysis model are designed and established; the main factors, such as the sensor's configuration parameters, the accuracy of the arc welding robot and the accuracy of image recognition, are included in the model and experiments. The model and experimental method are significant for the design of visual sensors and the improvement of computing accuracy.

Details

Industrial Robot: An International Journal, vol. 39 no. 4
Type: Research Article
ISSN: 0143-991X

Keywords
