Search results

1 – 10 of over 1000
Details

Industrial Robot: An International Journal, vol. 33 no. 4
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 April 2005

Details

Industrial Robot: An International Journal, vol. 32 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 7 August 2007

Heping Chen, George Zhang, Hui Zhang and Thomas A. Fuhlbrigge

Abstract

Purpose

This paper aims to develop a strategy for high‐precision assembly in a semi‐structured environment based on vision and force control.

Design/methodology/approach

The position and orientation of the part are identified using the vision system. A force/torque control algorithm is then applied to perform the tight‐tolerance (i.e. high‐precision) assembly.
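The two-stage strategy (vision for coarse localisation, then force/torque feedback for final alignment) can be sketched in a toy one-dimensional form. This is an illustration, not the authors' code; the contact model, gains and tolerances below are invented:

```python
import random

def vision_pose_estimate(true_x, sigma=0.5):
    """Stand-in for the vision stage: a coarse, noisy estimate (in mm)
    of where the mating hole is."""
    return true_x + random.gauss(0.0, sigma)

def force_guided_insertion(true_x, est_x, k=10.0, gain=0.08,
                           f_tol=0.2, max_steps=200):
    """Admittance-style correction: the simulated sensor reads a contact
    force f = -k * (tool - hole), and the tool complies with it until
    the force nearly vanishes, i.e. the parts are aligned."""
    tool_x = est_x                      # start where vision says the hole is
    for step in range(max_steps):
        f = -k * (tool_x - true_x)      # simulated force/torque reading
        if abs(f) < f_tol:
            return tool_x, step         # within tolerance: insert the part
        tool_x += gain * f              # move along the measured force
    raise RuntimeError("failed to align within max_steps")

random.seed(1)
hole = 100.0                            # true (unknown) hole position, mm
coarse = vision_pose_estimate(hole)     # vision gets within ~0.5 mm
final_x, steps = force_guided_insertion(hole, coarse)
print(round(abs(final_x - hole), 3))    # residual misalignment, below f_tol/k
```

The same pattern generalises to six axes: the vision estimate seeds the nominal pose, and the force/torque loop absorbs whatever residual error exceeds the assembly clearance.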

Findings

The tight tolerance assembly in a semi‐structured environment is successfully implemented using vision guidance and force/torque‐control strategy.

Practical implications

The developed methodology can be applied to tight‐tolerance assemblies such as forward‐clutch and torque‐converter assembly.

Originality/value

An industrial assembly methodology has been developed and implemented for high‐precision assembly in a semi‐structured environment. This innovation has many potential applications in automotive manufacturing.

Details

Assembly Automation, vol. 27 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 21 August 2009

Details

Industrial Robot: An International Journal, vol. 36 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 December 2005

Christine Connolly

Abstract

Purpose

Examines a recently launched integration of smart cameras into industrial robots to make them responsive to a changing environment.

Design/methodology/approach

Reviews the capabilities of the vision‐enabled robot, citing installations in Sweden and the UK, then describes the robot and vision programming procedure.

Findings

Vision integration opens up a range of new possibilities, such as simultaneous product handling and inspection, as well as providing real‐time robot guidance. Standardisation plays an extremely valuable role in building integrated systems from disparate technological elements. Here, ActiveX web standards, Ethernet connectivity, a standard interchangeable family of cameras and a common controller for a whole range of robots are the keys to the synthesis of a powerful new combination of robot and machine vision.

Originality/value

Draws to the attention of industrial engineers the availability of a family of robots with integrated machine vision.

Details

Industrial Robot: An International Journal, vol. 32 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 11 September 2009

Details

Sensor Review, vol. 29 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 October 2004

John Fulton and Joanne Pransky

Abstract

For the first time in its 45‐year history, the Defense Advanced Research Projects Agency (DARPA) reached beyond its standard defense contractors and out to the public, and in 2004 held the DARPA Grand Challenge in an effort to attract innovation towards a military mandate of having one‐third of America's ground combat vehicles unmanned by the year 2015. DARPA offered a cash prize of US$1 million to an autonomous robotic vehicle that could navigate a 142‐mile course in the Mojave desert in less than 10 h. Over 100 applications were submitted, and after further evaluations DARPA narrowed the field to 25 finalists. After qualifying trials, 15 vehicles confronted the starting line on 13 March 2004. Though the farthest any vehicle got was 7.4 miles, the event was viewed as a technological breakthrough. This paper describes the systems that were set up to monitor and control the event, and features of the various robots.

Details

Industrial Robot: An International Journal, vol. 31 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 12 January 2010

M. Habibi and S.M. Sayedi

Abstract

Purpose

The purpose of this paper is to present a novel image‐labeling CMOS sensor for modulated marker detection.

Design/methodology/approach

An image scene with multiple objects, each identified by a flashing light‐emitting diode (LED), is captured by the sensor. The LED's flashing frequency encodes the object's ID tag. The sensor detects and labels the objects by identifying the signal frequencies. The processing is performed in‐pixel and, since the object detection task is simplified, power dissipation is reduced. A 64 × 64 pixel sensor is designed in 0.6 μm CMOS technology.
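As a rough illustration of the frequency‐labelling idea, a pixel can recover a marker's ID by counting rising edges in its intensity samples. This is a sketch, not the authors' circuit: the blink frequencies, ID table and edge‐counting logic are invented, though the 250 fps frame rate matches the paper.

```python
FPS = 250                                   # frame rate reported in the paper

def led_signal(freq_hz, n_frames, fps=FPS):
    """Square-wave intensity samples of a marker LED blinking at freq_hz."""
    period = fps / freq_hz                  # frames per blink cycle
    return [1 if (t % period) < period / 2 else 0 for t in range(n_frames)]

def estimate_frequency(samples, fps=FPS):
    """Count rising edges (the kind of operation simple in-pixel logic
    can do) and convert the count to Hz: one rising edge per period."""
    edges = sum(1 for a, b in zip(samples, samples[1:]) if a == 0 and b == 1)
    return edges / (len(samples) / fps)

ID_TABLE = {10.0: "part A", 25.0: "part B", 50.0: "part C"}  # hypothetical tags

def label_pixel(samples, tol=2.0):
    """Map the measured frequency to the nearest registered ID tag."""
    f = estimate_frequency(samples)
    for tag_f, label in ID_TABLE.items():
        if abs(f - tag_f) <= tol:
            return label
    return None

frames = led_signal(25.0, n_frames=500)     # 2 s of video of a 25 Hz marker
print(label_pixel(frames))                  # prints: part B
```

Because the per-pixel work reduces to edge counting and a comparison, it is cheap enough to run inside the pixel, which is where the power savings the paper reports come from.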

Findings

Simulation results show successful object identification. At a frame rate of 250 fps the measured power consumption is 11 mW, which is less than that of previously reported object detection solutions. Applications of the presented sensor are shown in several robotic fields, such as unmanned aerial vehicle (UAV) vision, household robots and industrial robots. It is also explained how the sensor can be used for low‐power localization and position detection of robot vehicles.

Originality/value

The paper shows that the sensor is a suitable solution for low‐power landmark detection and robot localization.

Details

Industrial Robot: An International Journal, vol. 37 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 June 2000

Details

Industrial Robot: An International Journal, vol. 27 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 February 2005

E. Boivin and I. Sharf

Abstract

Purpose

The capability to perform dexterous operations in an autonomous manner would greatly enhance the productivity of robotic operations. In this paper, we present a new methodology for vision‐based grasping of objects or parts using a three‐finger hand as a gripper of a robotic manipulator.

Design/methodology/approach

The hand employed in our work, called SARAH, was designed for robotic operations on the space station; however, the main steps of our procedure can be applied to tasks in a manufacturing environment. Our methodology involves two principal stages: automatic synthesis of grasps for planar and revolute objects with SARAH, and vision‐based pose estimation of the object to be grasped. For both stages, we assume that a model of the object is available off‐line.
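A heavily simplified two-dimensional sketch of the two stages follows: grasps are synthesised offline on the object model, and the chosen grasp is then mapped through the vision-estimated pose into the robot frame. All geometry, scores and names here are our invention, not the SARAH implementation.

```python
import math

def candidate_grasps(polygon):
    """Offline stage: midpoints and outward normals of each edge of a
    counter-clockwise polygon, as candidate finger contact sites."""
    sites = []
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        ex, ey = x2 - x1, y2 - y1
        length = math.hypot(ex, ey)
        sites.append((((x1 + x2) / 2, (y1 + y2) / 2),
                      (ey / length, -ex / length)))   # outward normal (CCW)
    return sites

def best_antipodal_grasp(polygon):
    """Rank contact pairs by how directly their normals oppose each
    other (1.0 = perfectly opposed), a crude force-closure proxy."""
    sites = candidate_grasps(polygon)
    best = None
    for i in range(len(sites)):
        for j in range(i + 1, len(sites)):
            (m1, n1), (m2, n2) = sites[i], sites[j]
            opposition = -(n1[0] * n2[0] + n1[1] * n2[1])
            if best is None or opposition > best[0]:
                best = (opposition, m1, m2)
    return best

def to_robot_frame(point, pose):
    """Online stage: apply the vision-estimated pose (x, y, theta)
    to a model-frame point."""
    x, y, th = pose
    c, s = math.cos(th), math.sin(th)
    return (x + c * point[0] - s * point[1],
            y + s * point[0] + c * point[1])

square = [(0, 0), (40, 0), (40, 40), (0, 40)]   # 40 mm square part, CCW
score, g1, g2 = best_antipodal_grasp(square)    # offline, from the model
pose = (120.0, 80.0, math.pi / 6)               # from the vision stage
fingers = (to_robot_frame(g1, pose), to_robot_frame(g2, pose))
print(score)                                    # prints: 1.0
```

A real implementation ranks grasps against the hand's kinematics and contact model; the point here is only the split between the offline model-based stage and the online pose-driven stage.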

Findings

In the paper, numerical results are presented for grasp synthesis of several objects with SARAH to demonstrate the feasibility and optimality of the synthesized grasps. Experimental results are also obtained with SARAH as the end‐effector of a seven‐degree‐of‐freedom robotic arm, demonstrating the feasibility of the integrated vision‐based grasping.

Research limitations/implications

The methodology described in the paper, although it represents a substantial step towards automated grasping with a robotic manipulator, still requires some decision making from the user. Further work can improve the pose identification aspects of the algorithm to make them more robust and free of human intervention. In addition, the grasp synthesis procedure can be extended to handle more complex and possibly moving objects, and to allow for grasp types other than those considered here.

Practical implications

The work demonstrates the feasibility of autonomous grasp execution in an industrial setting by using a three‐finger hand as a robotic gripper.

Originality/value

The results presented in the paper demonstrate the feasibility of synthesising optimised grasps which take into account the kinematics of the gripper. We also demonstrate a real implementation of vision‐based grasping by using a robotic manipulator with a three‐finger hand.

Details

Industrial Robot: An International Journal, vol. 32 no. 1
Type: Research Article
ISSN: 0143-991X
