Pick‐and‐place application development using voice and visual commands

Sebastian van Delden (Division of Mathematics and Computer Science, University of South Carolina Upstate, Spartanburg, South Carolina, USA)
Michael Umrysh (Livingston and Haven Manufacturing, Charlotte, North Carolina, USA)
Carlos Rosario (Division of Mathematics and Computer Science, University of South Carolina Upstate, Spartanburg, South Carolina, USA)
Gregory Hess (Division of Mathematics and Computer Science, University of South Carolina Upstate, Spartanburg, South Carolina, USA)

Industrial Robot

ISSN: 0143-991x

Article publication date: 12 October 2012

Abstract

Purpose

The purpose of this paper is to design an interactive industrial robotic system which can be used to assist a “layperson” in re‐casting a generic pick‐and‐place application. A user can program a pick‐and‐place application simply by pointing to objects in the work area and speaking simple and intuitive natural language commands.

Design/methodology/approach

The system was implemented in C# using the EMGU wrapper classes for OpenCV as well as the MS Speech Recognition API. The target language to be recognized was modelled using traditional augmented transition networks, which were implemented as XML grammars. The authors developed an original finger-pointing algorithm using a unique combination of standard morphological and image-processing techniques. Recognized voice commands trigger the vision component to capture what a user is pointing at. If the specified action requires robot movement, the required information is sent to the robot control component of the system, which then transmits the commands to the robot controller for execution.
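As a minimal illustrative sketch of this pipeline (not the authors' actual code), the following C# fragment shows how a command grammar expressed as an SRGS XML file might be loaded into the MS Speech Recognition API so that a recognized utterance can trigger downstream processing. The grammar file name and the vision/robot hand-off in the comments are hypothetical placeholders.

using System;
using System.Speech.Recognition;

class CommandListener
{
    static void Main()
    {
        using (var recognizer = new SpeechRecognitionEngine())
        {
            // Load the command language (e.g. "pick that up", "put it there"),
            // modelled as an XML grammar as the paper describes.
            // "PickAndPlaceCommands.xml" is an assumed file name.
            recognizer.LoadGrammar(new Grammar("PickAndPlaceCommands.xml"));
            recognizer.SetInputToDefaultAudioDevice();

            recognizer.SpeechRecognized += (sender, e) =>
            {
                Console.WriteLine("Heard: " + e.Result.Text);
                // In the system described, a recognized command would trigger
                // the vision component to capture what the user is pointing at
                // and, if movement is required, forward target information to
                // the robot control component.
            };

            recognizer.RecognizeAsync(RecognizeMode.Multiple);
            Console.WriteLine("Listening... press Enter to quit.");
            Console.ReadLine();
        }
    }
}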

Findings

The voice portion of the system was tested on the factory floor in a "typical" manufacturing environment, right at the maximum allowable average decibel level specified by OSHA. The findings show that a modern, standard MS Speech API voice recognition system can achieve 100 per cent accuracy on simple commands, although at noise levels averaging 89 decibels roughly one in six commands had to be repeated. The vision component was tested on 72 subjects who had no prior knowledge of this work. The system accurately recognized what the test subjects were pointing at 95 per cent of the time, within five seconds of hand readjustment.

Research limitations/implications

The vision component suffers from the "typical" machine-vision problems: very shiny surfaces, poor contrast between the pointing hand and the background, and occlusions. Currently the system can handle only a limited amount of depth recovery, using a spring-mounted gripper. A second camera (future work) needs to be incorporated in order to handle large depth variations in the work area.

Practical implications

This system could have a huge impact on how factory floor workers interact with robotic equipment.

Originality/value

The testing of the voice system on a factory floor, although simple, is very important: it proves the viability of this component of the system and debunks arguments that factories are simply too noisy for current voice technology. The unique finger-pointing algorithm developed by the authors, in particular the manner in which the pointing vector is constructed, is also an important contribution to the field. Furthermore, very few papers report results of non-experts using their pointing algorithms; this paper reports concrete results showing that the system is intuitive and user friendly to "laypersons".

Citation

van Delden, S., Umrysh, M., Rosario, C. and Hess, G. (2012), "Pick‐and‐place application development using voice and visual commands", Industrial Robot, Vol. 39 No. 6, pp. 592-600. https://doi.org/10.1108/01439911211268796

Publisher

Emerald Group Publishing Limited

Copyright © 2012, Emerald Group Publishing Limited