Computer vision in interactive robotics

Roberto Cipolla (University of Cambridge, Department of Engineering, Trumpington Street, Cambridge)
Nicholas Hollinghurst (University of Cambridge, Department of Engineering, Trumpington Street, Cambridge)
Andrew Gee (University of Cambridge, Department of Engineering, Trumpington Street, Cambridge)
Robert Dowland (University of Cambridge, Department of Engineering, Trumpington Street, Cambridge)

Assembly Automation

ISSN: 0144-5154

Article publication date: 1 March 1996

Abstract

Computer vision provides many opportunities for novel man‐machine interfaces. Pointing and face gestures offer a simple, passive means of interacting with computers and robots. We describe two novel algorithms that track the position and orientation of the user’s hand or face in video images and use this information to determine where the hand or face is pointing. In interactive robotics, this allows a user with manipulation disabilities, or one working in a hazardous environment, to guide a robot manipulator to pick up a simple object of interest.
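
As an illustration of the underlying geometry (a minimal sketch, not the authors’ published method), the Python fragment below shows one way a tracked pointing direction could be converted into a target position for the manipulator: cast a ray from the fingertip along the estimated pointing vector and intersect it with the worktable plane. The function name, coordinates and the flat-table assumption are all hypothetical.

import numpy as np

def pointing_target(fingertip, direction, plane_point, plane_normal):
    # Intersect the pointing ray with the table plane.
    # Returns the 3-D intersection, or None if the ray is parallel to
    # the table or points away from it (both are illustrative choices).
    p0 = np.asarray(fingertip, dtype=float)
    d = np.asarray(direction, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    denom = n.dot(d)
    if abs(denom) < 1e-9:        # ray parallel to the table plane
        return None
    t = n.dot(np.asarray(plane_point, dtype=float) - p0) / denom
    if t < 0:                    # pointing away from the table
        return None
    return p0 + t * d

# Example: fingertip 0.4 m above the table, pointing forwards and down.
target = pointing_target(fingertip=[0.0, 0.0, 0.4],
                         direction=[0.5, 0.0, -1.0],
                         plane_point=[0.0, 0.0, 0.0],
                         plane_normal=[0.0, 0.0, 1.0])
print(target)  # approximately [0.2, 0.0, 0.0]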

Citation

Cipolla, R., Hollinghurst, N., Gee, A. and Dowland, R. (1996), "Computer vision in interactive robotics", Assembly Automation, Vol. 16 No. 1, pp. 18-24. https://doi.org/10.1108/01445159610110642

Publisher

MCB UP Ltd

Copyright © 1996, MCB UP Limited
