Search results

1 – 10 of over 2000
Article
Publication date: 7 September 2015

Ryo Izuta, Kazuya Murao, Tsutomu Terada and Masahiko Tsukamoto

Abstract

Purpose

This paper aims to propose a gesture recognition method that works at an early stage of a gesture. An accelerometer is installed in most current mobile phones, such as iPhones and Android-powered devices, as well as in video game controllers for the Wii or PS3, and enables easy and intuitive operations. Therefore, many gesture-based user interfaces that use accelerometers are expected to appear in the future. Gesture recognition systems with an accelerometer generally have to construct models from the user's gesture data before use and then recognize unknown gestures by comparing them with those models. Because the recognition process generally starts only after the gesture has finished, the recognition result and any feedback are delayed, which may cause users to retry gestures and degrades the usability of the interface.

Design/methodology/approach

The simplest way to achieve early recognition is to start it at a fixed time after a gesture begins. However, accuracy would decrease if a gesture resembled others in its early stage. Moreover, the recognition timing would have to be capped by the length of the shortest gesture, which may be too early for longer gestures; conversely, delaying the recognition timing would exceed the length of the shorter gestures. In addition, a proper length of training data has to be found, since full-length training data does not match input data that is still only partially observed. To recognize gestures at an early stage, therefore, both a proper recognition timing and a proper length of training data have to be determined. This paper proposes an early-stage gesture recognition method that sequentially calculates the distance between the input data and the training data. The proposed method outputs the recognition result as soon as one candidate is sufficiently more likely than the other candidates, so that similar but incorrect gestures are not output.
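
The core mechanism described here — sequentially accumulating a distance between the incoming data and each training template, and emitting a result as soon as one candidate is clearly more likely than the rest — can be illustrated with a minimal sketch. This is not the authors' implementation: the frame-aligned Euclidean distance and the `margin` confidence knob are simplifying assumptions.

```python
import numpy as np

def early_recognize(stream, templates, margin=1.5):
    """Early gesture recognition sketch: accumulate a per-template prefix
    distance frame by frame and emit a label as soon as the best candidate
    is clearly ahead of the runner-up. Assumes a non-empty stream and at
    least two templates; `margin` is an illustrative confidence knob."""
    dists = {label: 0.0 for label in templates}
    for t, frame in enumerate(stream):
        for label, tpl in templates.items():
            if t < len(tpl):  # compare only while this template has frames left
                dists[label] += np.linalg.norm(np.asarray(frame) - tpl[t])
        ranked = sorted(dists.items(), key=lambda kv: kv[1])
        (best, d1), (_, d2) = ranked[0], ranked[1]
        if t > 0 and d2 > margin * max(d1, 1e-9):
            return best, t  # confident early decision, before the gesture ends
    return ranked[0][0], t  # no early decision: answer at the end of the input
```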

Findings

The proposed method was experimentally evaluated on 27 kinds of gestures, and it was confirmed that the recognition process finished 1,000 ms before the end of the gestures on average without deteriorating accuracy. Gestures were recognized at an early stage of the motion, which would improve interface usability and reduce the number of incorrect operations such as retried gestures. Moreover, a gesture-based photo viewer was implemented as a practical application of the proposed method; the early gesture recognition system was used in a live, unscripted performance, confirming its effectiveness.

Originality/value

Gesture recognition methods with accelerometers generally learn a given user's gesture data before the system is used and then recognize unknown gestures by comparing them with the training data. The recognition process starts only after a gesture has finished, and therefore any interaction or feedback that depends on the recognition result is delayed. For example, an image on a smartphone screen rotates a few seconds after the device has been tilted, which may cause the user to tilt the smartphone again even though the first gesture was correctly recognized. Although many studies on gesture recognition using accelerometers have been conducted, to the best of the authors' knowledge, none has taken these potential delays in output into consideration.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 7 November 2016

Ing-Jr Ding and Zong-Gui Wu

Abstract

Purpose

The Kinect sensor released by Microsoft is well known for its effectiveness in human gesture recognition. Gesture recognition by Kinect has proved to be an efficient means of command operation and provides a human-computer interface in addition to traditional speech recognition. When Kinect gesture recognition is applied to gesture command operations, recognizing the active user who is issuing gesture commands to the Kinect becomes a crucial problem. The purpose of this paper is to propose a method for recognizing the identity of an active user that combines an eigenspace model and a Gaussian mixture model (GMM) with Kinect-extracted action gesture features.

Design/methodology/approach

Several Kinect-derived gesture features are explored to determine effective pattern features for the active user recognition task. In this work, a separate Kinect-derived feature design for eigenspace recognition and for GMM classification is presented, so that each individual classifier achieves its optimal performance. In addition to these feature designs, the study develops a combined recognition method, called combined eigenspace-GMM, which hybridizes the decision information of both the eigenspace and the GMM to produce a more reliable user recognition result.
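
As a rough illustration of such a combination, the following sketch fuses an eigenspace (PCA) distance score with a GMM log-likelihood using scikit-learn stand-ins. The fusion weight `alpha` and the absence of score normalization are assumptions; the abstract does not state the actual combination rule.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

class EigenspaceGMM:
    """Sketch of a combined eigenspace/GMM user recognizer. The score-fusion
    weight `alpha` is an assumption, not a detail from the abstract."""

    def __init__(self, n_components=10, n_mixtures=4, alpha=0.5):
        self.alpha = alpha
        self.pca = PCA(n_components=n_components)
        self.n_mixtures = n_mixtures
        self.gmms = {}       # user id -> fitted GaussianMixture
        self.centroids = {}  # user id -> mean point in eigenspace

    def fit(self, features_by_user):
        """features_by_user: dict user id -> (n_samples, n_features) array."""
        X = np.vstack(list(features_by_user.values()))
        self.pca.fit(X)  # shared eigenspace over all users' gesture features
        for user, feats in features_by_user.items():
            self.centroids[user] = self.pca.transform(feats).mean(axis=0)
            self.gmms[user] = GaussianMixture(self.n_mixtures).fit(feats)

    def predict(self, feats):
        proj = self.pca.transform(feats).mean(axis=0)
        scores = {}
        for user in self.gmms:
            eig = -np.linalg.norm(proj - self.centroids[user])  # eigenspace score
            gmm = self.gmms[user].score(feats)  # average log-likelihood
            # NOTE: in practice these two scores would need normalization
            # to a common scale before weighting.
            scores[user] = self.alpha * eig + (1 - self.alpha) * gmm
        return max(scores, key=scores.get)
```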

Findings

Active user recognition that combines eigenspace and GMM classifiers with well-designed Kinect-extracted gesture features achieves outstanding recognition accuracy. The presented system has the additional benefit of recognizing both the gesture command itself and the identity of the user issuing it.

Originality/value

A hybridized scheme of eigenspace and GMM performs better on active user recognition accuracy than either eigenspace or GMM alone; a separate Kinect-derived feature design for eigenspace recognition and GMM classification is presented so that each individual classifier achieves optimal performance; and the combined eigenspace-GMM approach, being a model-based design, can readily be extended to increase the recognition rate by adjusting the recognition models.

Article
Publication date: 5 August 2014

Hairong Jiang, Juan P. Wachs and Bradley S. Duerstock

Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface incorporating object tracking and face recognition was developed specifically for individuals with upper-level spinal cord injuries, to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret the hand gestures and locate the operator's face for object positioning, and to send these as commands to control the WMRM. The other sensor was used to automatically recognize different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as commands for "coarse positioning" of the robotic arm near the selected object. Automatic face detection was provided as a shortcut, enabling objects to be positioned close to the subject's face.
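
For readers unfamiliar with SURF-based matching, a minimal sketch of the kind of object recognition step described above follows. The ratio test and the match-count threshold are illustrative assumptions, and SURF itself requires an OpenCV contrib build with the nonfree modules enabled.

```python
import cv2

# Illustrative SURF-based object matching. The abstract names the Speeded Up
# Robust Features algorithm; this matching/threshold logic is an assumption.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def match_object(scene_gray, template_gray, ratio=0.75, min_matches=10):
    """Return True if the template object appears in the scene image."""
    _, des_t = surf.detectAndCompute(template_gray, None)
    _, des_s = surf.detectAndCompute(scene_gray, None)
    if des_t is None or des_s is None:
        return False  # no descriptors found in one of the images
    # Lowe's ratio test keeps only distinctive correspondences
    pairs = matcher.knnMatch(des_t, des_s, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < ratio * n.distance]
    return len(good) >= min_matches
```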

Findings

The gesture recognition interface incorporated hand detection, tracking and recognition algorithms and yielded a recognition accuracy of 97.5 per cent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 January 2014

Ognjan Luzanin and Miroslav Plancak

Abstract

Purpose

The main purpose is to present a methodology that allows efficient hand gesture recognition using a low-budget, 5-sensor data glove. To allow widespread use of low-budget data gloves in engineering virtual reality (VR) applications, gesture dictionaries must be enhanced with more ergonomic and symbolically meaningful hand gestures, while providing high gesture recognition rates for both seen and unseen users.

Design/methodology/approach

The simple boundary-value gesture recognition methodology was replaced by a probabilistic neural network (PNN)-based gesture recognition system able to process simple and complex static gestures. To overcome the problems inherent to PNNs – primarily, slow execution with large training data sets – the proposed system uses a clustering ensemble to reduce the training data set without significantly deteriorating the quality of training. The reduction is performed efficiently using three types of clustering algorithms, yielding a small number of input vectors that represent the original population well.
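
A simplified sketch of the reduction-plus-PNN pipeline follows, using plain k-means as a stand-in for the paper's three-algorithm clustering ensemble; the cluster counts and kernel width are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_training_set(X, y, clusters_per_class=20, seed=0):
    """Compress each gesture class to its k-means centroids. The paper uses
    a clustering ensemble of three algorithms; plain k-means is a simplified
    stand-in here."""
    centers, labels = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = min(clusters_per_class, len(Xc))
        km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(Xc)
        centers.append(km.cluster_centers_)
        labels.extend([c] * k)
    return np.vstack(centers), np.array(labels)

def pnn_classify(x, centers, labels, sigma=0.1):
    """PNN decision: sum Gaussian kernel activations per class and pick
    the class with the largest total (sigma is an assumed kernel width)."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    activations = np.exp(-d2 / (2 * sigma ** 2))
    classes = np.unique(labels)
    sums = [activations[labels == c].sum() for c in classes]
    return classes[int(np.argmax(sums))]
```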

Findings

The proposed methodology provides efficient recognition of simple and complex static gestures and was also successfully tested with gestures of an unseen user, i.e. a person who took no part in the training phase.

Practical implications

The hand gesture recognition system based on the proposed methodology enables the use of affordable data gloves with a small number of sensors in VR engineering applications which require complex static gestures, including assembly and maintenance simulations.

Originality/value

According to the literature, there are no similar solutions that allow efficient recognition of simple and complex static hand gestures based on a 5-sensor data glove.

Details

Assembly Automation, vol. 34 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 8 March 2010

Pedro Neto, J. Norberto Pires and A. Paulo Moreira

Abstract

Purpose

Most industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. This is a tedious and time-consuming task that requires some technical expertise, and hence new approaches to robot programming are required. The purpose of this paper is to present a robotic system that allows users to instruct and program a robot with a high level of abstraction from the robot language.

Design/methodology/approach

The paper presents in detail a robotic system that allows users, especially non-expert programmers, to instruct and program a robot simply by showing it what it should do, in an intuitive way. This is done using the two most natural human interfaces (gestures and speech), a force control system and several code generation techniques. Special attention is given to the recognition of gestures, where data extracted from a motion sensor (a three-axis accelerometer) embedded in the Wii remote controller is used to capture human hand behaviour. Gestures (dynamic hand positions) as well as manual postures (static hand positions) are recognized using a statistical approach and artificial neural networks.
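
As an illustration of feeding accelerometer statistics into a neural network, the following sketch extracts simple per-axis statistics from a three-axis accelerometer window and trains a small multilayer perceptron. The paper's exact feature set and network are not given in the abstract, so these choices are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def accel_features(window):
    """Simple statistical features from a (T, 3) accelerometer window.
    These are common, illustrative choices, not the paper's feature set."""
    return np.concatenate([
        window.mean(axis=0),  # per-axis mean
        window.std(axis=0),   # per-axis spread
        window.min(axis=0),
        window.max(axis=0),
        [np.linalg.norm(window, axis=1).mean()],  # mean acceleration magnitude
    ])

# Usage sketch: X_raw is a list of (T, 3) gesture recordings, y their labels.
# X = np.array([accel_features(w) for w in X_raw])
# clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)
# label = clf.predict(accel_features(new_window).reshape(1, -1))
```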

Findings

It is shown that the presented robotic system enables users without programming expertise to rapidly create robot programs. The experimental tests showed that the developed system can be customized for different users and robotic platforms.

Research limitations/implications

The proposed system is tested on two different robotic platforms. Since the options adopted are mainly based on standards, it can be implemented with other robot controllers without significant changes. Future work will focus on improving the recognition rate of gestures and continuous gesture recognition.

Practical implications

The key contribution of this paper is that it offers a practical method to program robots by means of gestures and speech, improving work efficiency and saving time.

Originality/value

This paper presents an alternative to the typical robot teaching process, extending the concept of human‐robot interaction and co‐worker scenario. Since most companies do not have engineering resources to make changes or add new functionalities to their robotic manufacturing systems, this system constitutes a major advantage for small‐ to medium‐sized enterprises.

Details

Industrial Robot: An International Journal, vol. 37 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 16 October 2017

Jiajun Li, Jianguo Tao, Liang Ding, Haibo Gao, Zongquan Deng, Yang Luo and Zhandong Li

Abstract

Purpose

The purpose of this paper is to extend the usage of stroke gestures to manipulation tasks, making the interaction between humans and robots more efficient.

Design/methodology/approach

In this paper, a set of stroke gestures is designed for typical manipulation tasks. A gesture recognition and parameter extraction system is proposed to exploit the information in stroke gestures drawn by the users.

Findings

The results show that the designed gesture recognition subsystem can reach a recognition accuracy of 99.00 per cent. The parameter extraction subsystem can successfully extract the parameters needed for typical manipulation tasks with a success rate of about 86.30 per cent. The system showed acceptable performance in the experiments.

Practical implications

Using stroke gestures in manipulation tasks can make the transmission of human intentions to robots more efficient. The proposed gesture recognition subsystem is based on a convolutional neural network, which is robust to variations in the input. The parameter extraction subsystem can extract the spatial information encoded in stroke gestures.
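
A minimal sketch of a CNN over rasterized stroke gestures follows, to make the classification step concrete; the layer sizes, input resolution and use of PyTorch are assumptions, as the abstract does not describe the network architecture.

```python
import torch
import torch.nn as nn

class StrokeGestureCNN(nn.Module):
    """Minimal CNN over rasterized stroke-gesture images. Architecture sizes
    are assumptions; the paper's exact network is not given in the abstract."""

    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, n_classes)

    def forward(self, x):  # x: (batch, 1, 64, 64) binary stroke image
        return self.classifier(self.features(x).flatten(1))

# Usage sketch: logits = StrokeGestureCNN(n_classes=10)(torch.rand(1, 1, 64, 64))
```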

Originality/value

The authors design stroke gestures for manipulation tasks, which extends the usage of stroke gestures. The proposed gesture recognition and parameter extraction system can use stroke gestures to obtain both the type of the task and the important parameters for that task simultaneously.

Details

Industrial Robot: An International Journal, vol. 44 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 17 August 2015

Gilbert Tang, Seemal Asif and Phil Webb

Abstract

Purpose

The purpose of this paper is to describe the integration of a gesture control system for an industrial collaborative robot. Human-robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely.

Design/methodology/approach

The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system has been tested in isolation before integration and demonstration of a sample task.

Findings

It is demonstrated that the combination of multiple gesture control methods can increase the potential applications of gesture control for industrial robots.

Originality/value

The novelty of the system is the combination of dual gesture control methods, which allows operators to command an industrial robot by posing hand gestures as well as to control the robot's motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and it also allows permission levels to be assigned to different users.

Details

Industrial Robot: An International Journal, vol. 42 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 April 2016

Ediz Saykol, Halit Talha Türe, Ahmet Mert Sirvanci and Mert Turan

Abstract

Purpose

The purpose of this paper is to classify a set of Turkish sign language (TSL) gestures using posture-labeling-based finite-state automata (FSA) that utilize depth values in location-based features. Gesture classification/recognition is crucial not only for communicating with visually impaired people but also for educational purposes. The paper also demonstrates the practical use of these techniques for TSL.

Design/methodology/approach

Gesture classification is based on the sequence of posture labels that are assigned by location-based features, which are invariant under rotation and scale. A grid-based signing-space clustering scheme is proposed to guide the feature extraction step. Gestures are then recognized by FSA that process the temporally ordered posture labels.
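
The following sketch shows how an FSA over temporally ordered posture labels might be expressed; the states, posture label names and the example gesture are hypothetical, since the abstract does not list the actual automata.

```python
def make_fsa(transitions, accepting):
    """Tiny finite-state automaton over posture labels. `transitions` maps
    (state, posture_label) -> next state; any unlisted pair rejects."""
    def accepts(posture_sequence, start="q0"):
        state = start
        for label in posture_sequence:
            state = transitions.get((state, label))
            if state is None:
                return False  # posture label not allowed in this state
        return state in accepting
    return accepts

# Hypothetical gesture "wave": posture labels must occur in temporal order.
wave = make_fsa(
    transitions={("q0", "hand_up"): "q1",
                 ("q1", "hand_left"): "q2",
                 ("q2", "hand_right"): "q1"},
    accepting={"q1", "q2"},
)
print(wave(["hand_up", "hand_left", "hand_right"]))  # True
```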

Findings

Gesture classification accuracies and posture labeling performance were compared to a k-nearest neighbor classifier to show that the technique provides a reasonable framework for the recognition of TSL gestures. A challenging set of gestures was tested; however, the technique is extensible, and enlarging the training set will increase performance.

Practical implications

The outcomes can be utilized in a system for educational purposes, especially for visually impaired children. In addition, a communication system could be designed based on this framework.

Originality/value

The posture labeling scheme, which is inspired by the keyframe labeling concept in video processing, is the original part of the proposed gesture classification framework. The search space is reduced to a single dimension instead of the 3D signing space, which also facilitates the design of recognition schemes. The grid-based clustering scheme and the location-based features are also new, and the depth values are obtained from a Kinect. The paper is of interest to researchers in pattern recognition and computer vision.

Details

Kybernetes, vol. 45 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 1 December 2010

Caroline Langensiepen, Ahmad Lotfi and Scott Higgins

Abstract

The world has an ageing population who want to stay at home, many of whom are unable to care for themselves without help. As demand increasingly outstrips the number of available carers, research is being carried out into how technology could assist elderly people in the home. A barrier preventing wide adoption is that this audience can find controlling assistive technology difficult, as they may be less dexterous and less computer literate. This paper explores the use of gestures to control home automation, hoping to provide a more natural and intuitive interface that helps bridge the gap between technology and older users. A prototype was created and then trialled with a small panel of older users. Using the Nintendo Wii Remote (Wiimote), gestures performed in the air were captured with an infrared camera. Computational intelligence techniques were then used to learn and recognise the gestures, and the resulting commands were sent to standard X10 home automation units to control a number of attached electrical devices. It was found that although older people could readily use gestures to control devices, configuration of a home system might remain a task for carers or technicians.
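
To make the final step concrete, a minimal sketch of dispatching a recognized gesture to an X10 command follows; the gesture names, house/unit codes and the `send` callback are hypothetical, since the article does not specify them.

```python
# Illustrative dispatch from a recognized gesture to an X10 command.
# Gesture names and house/unit codes are hypothetical; real X10 control
# requires a powerline interface module behind the `send` callback.
GESTURE_TO_X10 = {
    "circle":     ("A1", "ON"),   # e.g. lamp on
    "cross":      ("A1", "OFF"),
    "swipe_up":   ("A2", "ON"),   # e.g. fan on
    "swipe_down": ("A2", "OFF"),
}

def dispatch(gesture, send):
    """Look up the X10 unit/command for a recognized gesture and send it."""
    action = GESTURE_TO_X10.get(gesture)
    if action is None:
        return False  # unrecognized gesture: do nothing
    unit, command = action
    send(unit, command)
    return True

# Usage sketch: dispatch("circle", send=lambda unit, cmd: print(unit, cmd))
```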

Details

Journal of Assistive Technologies, vol. 4 no. 4
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 1 January 2006

Hasanuzzaman, T. Zhang, V. Ampornaramveth and H. Ueno

Abstract

Purpose

Achieving natural interaction between humans and robots by means of vision and speech is a major goal that many researchers are working toward. This paper aims to describe a gesture-based human-robot interaction (HRI) system that uses a knowledge-based software platform.

Design/methodology/approach

A frame-based knowledge model is defined for gesture interpretation and HRI. In this knowledge model, the necessary frames are defined for the known users, robots, poses, gestures and robot behaviours. First, the system identifies the user using the eigenface method. Then, face and hand poses are segmented from the camera frame buffer using the person's specific skin color information and classified by the subspace method.
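
The skin-color segmentation step can be illustrated with a short OpenCV sketch; note that the paper builds person-specific skin color models, whereas the fixed HSV bounds below are generic, illustrative values.

```python
import cv2
import numpy as np

def skin_mask(frame_bgr, lower=(0, 40, 60), upper=(25, 255, 255)):
    """Segment candidate face/hand regions by skin color in HSV space.
    The fixed HSV bounds are generic, illustrative values; the paper
    uses person-specific skin color information instead."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower), np.array(upper))
    # Morphological opening removes speckle before pose extraction
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    return mask

# Usage sketch: contours of the mask give face/hand candidate regions that
# can then be classified by the subspace (eigen) method described above.
```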

Findings

The system is capable of recognizing static gestures composed of face and hand poses, and dynamic gestures of the face in motion. It combines computer vision and knowledge-based approaches to improve its adaptability to different people.

Originality/value

The paper provides information on an experimental HRI system that has been implemented on the frame-based software platform for agent and knowledge management using the AIBO entertainment robot, and it has been demonstrated to be useful and efficient within a limited range of situations.

Details

Industrial Robot: An International Journal, vol. 33 no. 1
Type: Research Article
ISSN: 0143-991X
