Integrated vision-based system for efficient, semi-automated control of a robotic manipulator

Hairong Jiang (School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA)
Juan P. Wachs (School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA)
Bradley S. Duerstock (Weldon School of Biomedical Engineering, School of Industrial Engineering, Purdue University, West Lafayette, Indiana, USA)

International Journal of Intelligent Computing and Cybernetics

ISSN: 1756-378X

Article publication date: 5 August 2014

Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface incorporating object tracking and face recognition was developed specifically for individuals with upper-level spinal cord injuries to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret the hand gestures and locate the operator's face for object positioning, and then send these as commands to control the WMRM. The other sensor was used to automatically recognize different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as commands for "coarse positioning" of the robotic arm near the selected object. Automatic face detection was provided as a shortcut, enabling the positioning of objects close to the subject's face.
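As a rough illustration of the SURF-based recognition step described above, the sketch below matches a stored object template against a camera frame and returns an approximate image location that could drive "coarse positioning". This is not the authors' implementation; it assumes OpenCV with the contrib `cv2.xfeatures2d` module, and the function name, threshold values, and template-matching workflow are illustrative only.

```python
# Minimal sketch (not the authors' code): SURF-based object recognition with
# OpenCV, assuming an opencv-contrib build that provides cv2.xfeatures2d.
import cv2

def recognize_object(template_path, scene_path, min_matches=10):
    """Match a stored object template against a camera frame; return a rough
    (x, y) location of the object in the scene, or None if not found."""
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    # Detect SURF keypoints and descriptors in both images
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    kp_t, des_t = surf.detectAndCompute(template, None)
    kp_s, des_s = surf.detectAndCompute(scene, None)

    # Ratio-test matching between template and scene descriptors
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des_t, des_s, k=2)
    good = [m for m, n in matches if m.distance < 0.7 * n.distance]

    if len(good) < min_matches:
        return None  # object not recognized in this frame

    # Average matched keypoint location as a rough target for coarse positioning
    xs = [kp_s[m.trainIdx].pt[0] for m in good]
    ys = [kp_s[m.trainIdx].pt[1] for m in good]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

In the system described by the paper, a recognized location of this kind would be translated into a command that moves the robotic arm near the selected object before finer, gesture-driven control takes over.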

Findings

The gesture recognition interface incorporated hand detection, tracking, and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Acknowledgements

This work was performed at the Institute for Accessible Science through the NIH Director's Pathfinder Award to Promote Diversity in the Scientific Workforce, funded by the American Recovery and Reinvestment Act and administered by the National Institute of General Medical Sciences (Grant No. 1DP4GM096842-01). The authors are grateful for the assistance of Jamie Nolan from the Institute for Accessible Science and Mithun Jacob from the Intelligent System and Assistive Technology (ISAT) lab at Purdue University.

Citation

Jiang, H., Wachs, J.P. and Duerstock, B.S. (2014), "Integrated vision-based system for efficient, semi-automated control of a robotic manipulator", International Journal of Intelligent Computing and Cybernetics, Vol. 7 No. 3, pp. 253-266. https://doi.org/10.1108/IJICC-09-2013-0042

Publisher

Emerald Group Publishing Limited

Copyright © 2014, Emerald Group Publishing Limited