Achieving natural interaction between humans and robots by means of vision and speech is a major goal that many researchers are working toward. This paper describes a gesture‐based human‐robot interaction (HRI) system built on a knowledge‐based software platform.
A frame‐based knowledge model is defined for gesture interpretation and HRI. In this knowledge model, frames are defined for known users, robots, poses, gestures and robot behaviors. First, the system identifies the user with the eigenface method. Then, face and hand poses are segmented from the camera frame buffer using the person's specific skin‐color information and are classified by the subspace method.
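The eigenface step above is a standard PCA‐based recognition technique: training faces are projected onto their principal components, and a query face is matched to the nearest known user in that subspace. The sketch below is a minimal, generic illustration of that idea, not the authors' implementation; the function names, the nearest‐neighbour matching rule, and the flattened 1‑D face vectors are all assumptions for illustration.

```python
import numpy as np

def train_eigenfaces(faces, n_components):
    """faces: (n_samples, n_pixels) array of flattened face images.
    Returns the mean face and the top principal directions (eigenfaces)."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    # Rows of vt are the principal directions of the centered data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(face, mean, eigenfaces):
    """Coefficients of a face in the eigenface subspace."""
    return eigenfaces @ (face - mean)

def identify(face, mean, eigenfaces, gallery):
    """gallery: dict mapping user name -> stored subspace coefficients.
    Returns the name whose coefficients are nearest to the query's."""
    q = project(face, mean, eigenfaces)
    return min(gallery, key=lambda name: np.linalg.norm(gallery[name] - q))
```

In a real system the gallery would hold projections of enrolled users' face images, and a distance threshold would reject unknown faces rather than always returning the nearest match.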
The system is capable of recognizing static gestures composed of face and hand poses, as well as dynamic gestures based on face motion. It combines computer vision and knowledge‐based approaches to improve adaptability to different people.
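Conceptually, the frame‐based interpretation maps a combination of recognized poses to a gesture frame, and the gesture frame to a robot behavior frame. The sketch below illustrates that two‐stage lookup with plain dictionaries; all pose, gesture, and behavior names here are hypothetical placeholders, not frames from the paper.

```python
# Hypothetical gesture frames: a (face pose, hand pose) pair names a gesture.
GESTURE_FRAMES = {
    ("face_front", "hand_open"): "hello",
    ("face_front", "hand_thumbs_up"): "ok",
    ("face_turned_left", "hand_none"): "look_left",
}

# Hypothetical behavior frames: each gesture triggers a robot behavior.
BEHAVIOR_FRAMES = {
    "hello": "wave_paw",
    "ok": "sit_down",
    "look_left": "turn_left",
}

def interpret(face_pose, hand_pose):
    """Return the robot behavior for a recognized pose pair, or None."""
    gesture = GESTURE_FRAMES.get((face_pose, hand_pose))
    return BEHAVIOR_FRAMES.get(gesture) if gesture else None
```

A frame‐based platform adds more than a flat table (slots, defaults, inheritance between frames), but the lookup above captures the basic pose‑to‑gesture‑to‑behavior chain the abstract describes.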
The paper provides information on an experimental HRI system implemented on the frame‐based software platform for agent and knowledge management, using the AIBO entertainment robot; the system has been demonstrated to be useful and efficient within a limited range of situations.
Hasanuzzaman, ., Zhang, T., Ampornaramveth, V. and Ueno, H. (2006), "Gesture‐based human‐robot interaction using a knowledge‐based software platform", Industrial Robot, Vol. 33 No. 1, pp. 37-49. https://doi.org/10.1108/01439910610638216
Copyright © 2006, Emerald Group Publishing Limited