Gesture‐based human‐robot interaction using a knowledge‐based software platform
Abstract
Purpose
Achieving natural interactions by means of vision and speech between humans and robots is one of the major goals that many researchers are working on. This paper aims to describe a gesture‐based human‐robot interaction (HRI) system using a knowledge‐based software platform.
Design/methodology/approach
A frame‐based knowledge model is defined for gesture interpretation and HRI. In this knowledge model, the necessary frames are defined for the known users, robots, poses, gestures and robot behaviors. First, the system identifies the user using the eigenface method. Then, face and hand poses are segmented from the camera frame buffer using person‐specific skin color information and classified by the subspace method.
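The eigenface identification step described above can be sketched as standard PCA-based nearest-neighbor matching. This is a minimal illustration of the general technique, not the paper's implementation; the function names and parameters are assumptions.

```python
import numpy as np

def train_eigenfaces(faces, n_components=4):
    """Build an eigenface subspace from flattened training faces (one per row).

    Returns the mean face, the principal axes (eigenfaces), and the
    projections of the training faces onto those axes.
    """
    mean = faces.mean(axis=0)
    centered = faces - mean
    # SVD of the centered data gives the principal components directly
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]        # eigenfaces
    weights = centered @ basis.T     # training-face projections
    return mean, basis, weights

def identify(face, mean, basis, weights, labels):
    """Project a probe face into the subspace and return the nearest user's label."""
    w = (face - mean) @ basis.T
    dists = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(dists))]
```

In practice the recognized identity is then used to look up person-specific parameters (such as the skin color model mentioned above) before pose segmentation.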
Findings
The system is capable of recognizing static gestures composed of face and hand poses, as well as dynamic gestures of the face in motion. It combines computer vision and knowledge‐based approaches in order to improve adaptability to different people.
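The combination of vision and knowledge-based reasoning amounts to mapping a (recognized gesture, identified user) pair onto a robot behavior through frames. The sketch below uses plain dictionaries as stand-in frames; the gesture names, slot names, and behavior names are hypothetical, not the platform's actual schema.

```python
# Hypothetical frame definitions modeled on the abstract's description:
# frames for gestures and for user-specific robot behaviors.
GESTURE_FRAMES = {
    "TwoHandsUp": {"type": "static", "components": ["face", "left_palm", "right_palm"]},
    "HeadShake":  {"type": "dynamic", "components": ["face_motion"]},
}

# Behavior frames keyed by (gesture, user): the same gesture may trigger
# different behaviors for different known users.
BEHAVIOR_FRAMES = {
    ("TwoHandsUp", "alice"): "StandUp",
    ("HeadShake", "alice"):  "Refuse",
}

def interpret(gesture, user):
    """Return the robot behavior for a recognized gesture and identified user,
    or None if the gesture is unknown or no behavior frame matches."""
    if gesture not in GESTURE_FRAMES:
        return None
    return BEHAVIOR_FRAMES.get((gesture, user))
```

Keeping the mapping in editable frames, rather than hard-coding it in the vision pipeline, is what lets the system adapt behaviors per user without retraining the recognizers.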
Originality/value
Provides information on an experimental HRI system implemented on a frame‐based software platform for agent and knowledge management, using the AIBO entertainment robot; the system has been demonstrated to be useful and efficient within a limited range of situations.
Citation
Hasanuzzaman, Zhang, T., Ampornaramveth, V. and Ueno, H. (2006), "Gesture‐based human‐robot interaction using a knowledge‐based software platform", Industrial Robot, Vol. 33 No. 1, pp. 37-49. https://doi.org/10.1108/01439910610638216
Publisher
Emerald Group Publishing Limited
Copyright © 2006, Emerald Group Publishing Limited