Search results

1 – 10 of over 7000
Article
Publication date: 1 December 2010

Caroline Langensiepen, Ahmad Lotfi and Scott Higgins

Abstract

The world has an ageing population who want to stay at home, many of whom are unable to care for themselves without help. As demand saturates the number of available carers, research is being carried out into how technology could assist elderly people in the home. A barrier preventing wide adoption is that this audience can find controlling assistive technology difficult, as they may be less dexterous and computer literate. This paper explores the use of gestures to control home automation, aiming to provide a more natural and intuitive interface that helps bridge the gap between technology and older users. A prototype was created and then trialled with a small panel of older users. Using the Nintendo Wii Remote (Wiimote) technology, gestures performed in the air were captured with an infrared camera. Computational intelligence techniques were then used to learn and recognise the gestures, and recognised gestures were issued as commands to standard X10 home automation units controlling a number of attached electrical devices. It was found that although older people could readily use gestures to control devices, configuration of a home system might remain a task for carers or technicians.
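
As an illustration of the capture, recognise and command chain the abstract describes, here is a hypothetical sketch: a gesture trace is normalised, matched to the nearest stored template, and mapped to an X10-style command string. The gesture names, templates and "A1 ON"/"A1 OFF" commands are invented for illustration; the paper's computational intelligence techniques are more capable than this simple matcher.

```python
# Hypothetical sketch: nearest-template matching of 2D gesture traces,
# mapped to X10-style command strings.

def normalise(trace):
    """Scale a list of (x, y) points into the unit square."""
    xs = [p[0] for p in trace]
    ys = [p[1] for p in trace]
    x0, y0 = min(xs), min(ys)
    w = (max(xs) - x0) or 1.0   # avoid division by zero on flat traces
    h = (max(ys) - y0) or 1.0
    return [((x - x0) / w, (y - y0) / h) for x, y in trace]

def mean_distance(a, b):
    """Mean point-wise distance between two equal-length traces."""
    return sum(((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
               for (ax, ay), (bx, by) in zip(a, b)) / len(a)

def recognise(trace, templates):
    """Return the command of the closest stored template."""
    t = normalise(trace)
    best = min(templates, key=lambda g: mean_distance(t, templates[g][0]))
    return templates[best][1]

# Hypothetical two-gesture dictionary (traces already normalised).
TEMPLATES = {
    "swipe_right": ([(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)], "A1 ON"),
    "swipe_up":    ([(0.5, 1.0), (0.5, 0.5), (0.5, 0.0)], "A1 OFF"),
}
```

A real system would also resample traces to a common length before matching; the templates here happen to match the test traces point for point.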

Details

Journal of Assistive Technologies, vol. 4 no. 4
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 6 November 2017

Mustafa S. Aljumaily and Ghaida A. Al-Suhail

Downloads: 2160

Abstract

Purpose

Recently, much research has been devoted to studying the possibility of using the wireless signals of Wi-Fi networks for human-gesture recognition. Most of it focuses on classifying gestures regardless of who is performing them, and only a few previous works make use of the wireless channel state information to identify humans. This paper aims to recognize different humans and their multiple gestures in an indoor environment.

Design/methodology/approach

The authors designed a gesture recognition system consisting of channel state information (CSI) data collection, preprocessing, feature extraction and classification stages to identify both the person and the gesture performed in the vicinity of a Wi-Fi-enabled device. A modified Wi-Fi device driver collects the CSI so that it can be processed in real time.
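
A hypothetical, minimal sketch of that four-stage pipeline on plain amplitude lists: a moving average stands in for preprocessing, mean and variance for feature extraction, and a nearest-centroid rule for the classifier. None of these are the paper's actual choices.

```python
# Toy CSI pipeline: smooth -> extract features -> nearest centroid.

def smooth(samples, k=3):
    """Moving-average filter as a stand-in for CSI denoising."""
    out = []
    for i in range(len(samples)):
        window = samples[max(0, i - k + 1):i + 1]
        out.append(sum(window) / len(window))
    return out

def extract_features(samples):
    """Mean and variance of the smoothed amplitude stream."""
    s = smooth(samples)
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / len(s)
    return (mean, var)

def classify(samples, centroids):
    """Nearest-centroid guess of the (person, gesture) label."""
    f = extract_features(samples)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2
                                   for a, b in zip(f, centroids[lbl])))

# Illustrative (person, gesture) centroids in the toy feature space.
CENTROIDS = {
    ("alice", "wave"): (1.0, 0.0),
    ("bob", "push"):   (5.0, 0.5),
}
```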

Findings

The proposed system proved to work well for different humans and different gestures, with an accuracy ranging from 87 per cent for multiple humans performing multiple gestures to 98 per cent for individual humans' gesture recognition.

Originality/value

This paper used new preprocessing and filtering techniques, proposed new features to be extracted from the data and introduced a new classification method, none of which had been used in this field before.

Details

International Journal of Pervasive Computing and Communications, vol. 13 no. 4
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 8 March 2021

Neethu P.S., Suguna R. and Palanivel Rajan S.

Downloads: 182

Abstract

Purpose

This paper aims to propose a novel methodology for classifying gestures using the support vector machine (SVM) classification method. Initially, the red-green-blue (RGB) color hand gesture image is converted into a YCbCr image in the preprocessing stage, and the palm-with-fingers region is then segmented by a threshold process. Next, the distance transformation method is applied to the segmented image. Further, the center point (centroid) of the palm region is detected, and the fingertips are detected using the SVM classification algorithm based on the detected centroids of the palm region.
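
The RGB-to-YCbCr step follows the standard ITU-R BT.601 relations; a minimal sketch, in which the chroma thresholds in `is_skin()` are a commonly cited skin-color model, not necessarily the thresholds the paper uses:

```python
# BT.601 full-range RGB -> YCbCr conversion plus a chroma-plane
# threshold test of the kind used to segment the palm region.

def rgb_to_ycbcr(r, g, b):
    """Convert one full-range RGB pixel to (Y, Cb, Cr)."""
    y  =       0.299    * r + 0.587    * g + 0.114    * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5      * b
    cr = 128 + 0.5      * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def is_skin(r, g, b, cb_range=(77, 127), cr_range=(133, 173)):
    """Threshold test in the CbCr plane (illustrative skin model)."""
    _, cb, cr = rgb_to_ycbcr(r, g, b)
    return (cb_range[0] <= cb <= cb_range[1]
            and cr_range[0] <= cr <= cr_range[1])
```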

Design/methodology/approach

A gesture is a physical indication of the body used to convey information. Though any bodily movement can be considered a gesture, it generally originates from the movement of the hand or face or a combination of both. Combined gestures are quite complex and difficult for a machine to classify. This paper proposes a novel methodology for classifying gestures using the SVM classification method. Initially, the color hand gesture image is converted into a YCbCr image in the preprocessing stage, and the palm-with-fingers region is then segmented by a threshold process. Then, the distance transformation method is applied to the segmented image. Further, the center point of the palm region is detected and the fingertips are detected using the SVM classification algorithm. The proposed hand gesture image classification system is applied and tested on the “Jochen Triesch,” “Sebastien Marcel” and “11Khands” hand gesture data sets to evaluate its efficiency. The performance of the proposed system is analyzed with respect to sensitivity, specificity, accuracy and recognition rate. The simulation results of the proposed method on these data sets are compared with those of conventional methods.

Findings

This paper proposes a novel methodology for classifying gestures using the SVM classification method, with the distance transform method used to detect the center point of the segmented palm region. The proposed hand gesture detection methodology achieves 96.5% sensitivity, 97.1% specificity, 96.9% accuracy and a 99.3% recognition rate on the “Jochen Triesch” data set; 94.6% sensitivity, 95.4% specificity, 95.3% accuracy and a 97.8% recognition rate on the “Sebastien Marcel” data set; and 97% sensitivity, 98% specificity, 98.1% accuracy and a 98.8% recognition rate on the “11Khands” data set. Its recognition time is 0.52 s on “Jochen Triesch” images, 0.71 s on “Sebastien Marcel” images and 0.22 s on “11Khands” images. The proposed methodology therefore requires the least recognition time on the “11Khands” data set, which makes that data set well suited to real-time hand gesture applications with multi-background environments.
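
The four measures reported above derive from confusion counts in the usual way; a minimal sketch (the counts in the test are illustrative, not the paper's data):

```python
# Sensitivity, specificity and accuracy from a binary confusion matrix.

def metrics(tp, fp, tn, fn):
    """Return (sensitivity, specificity, accuracy) from counts."""
    sensitivity = tp / (tp + fn)                # true-positive rate
    specificity = tn / (tn + fp)                # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)  # overall hit rate
    return sensitivity, specificity, accuracy
```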

Originality/value

The modern world requires more automated systems to improve our daily routine activities efficiently. Present-day technology offers touch-screen methods for operating many devices or machines, with or without wired connections. This also has an impact on automated vehicles, which can be operated without any direct interfacing with the driver, made possible through hand gesture recognition. A hand gesture recognition system captures real-time hand gestures, the physical movements of a human hand, as digital images and recognizes them against a pre-stored set of hand gestures.

Details

Circuit World, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0305-6120

Article
Publication date: 21 September 2015

Linda Wulf, Markus Garschall, Michael Klein and Manfred Tscheligi

Abstract

Purpose

The purpose of this paper is to gain deeper insights into performance differences of younger and older users when performing touch gestures, as well as the influence of tablet device orientation (portrait vs landscape).

Design/methodology/approach

The authors performed a comparative study involving 20 younger (25-45 years) and 20 older participants (65-85 years). Each participant executed six gestures with each device orientation. Age was set as a between-subject factor. The dependent variables were task completion time and error rates (missed target rate and finger lift rate). To measure various performance characteristics, the authors implemented an application for the iPad that logged completion time and error rates of the participants when performing six gestural tasks – tap, drag, pinch, pinch-pan, rotate left and rotate right – for both device orientations.
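
The per-task logging described above can be sketched as follows; the record fields mirror the study's dependent variables, but the class and its API are invented for illustration, not the iPad app's design:

```python
import time

class TrialLogger:
    """Hypothetical stand-in for the study app's per-task logging:
    completion time plus the two error counts (missed targets and
    finger lifts)."""

    def __init__(self):
        self.records = []

    def run_trial(self, gesture, orientation, perform):
        """Time one gestural task; `perform` returns its error counts."""
        start = time.monotonic()
        missed, lifts = perform()
        self.records.append({
            "gesture": gesture,
            "orientation": orientation,
            "time_s": time.monotonic() - start,
            "missed_targets": missed,
            "finger_lifts": lifts,
        })
```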

Findings

The results show a significant effect of age on completion time and error rates. Means reveal faster completion times and lower error rates for younger users than for older users. In addition, a significant effect of device orientation on error rates was found: means show higher error rates for portrait orientation than for landscape orientation. Qualitative results reveal a clear preference for landscape orientation in both age groups and a lower acceptance of rotation gestures among older participants.

Originality/value

In this study the authors were able to show the importance of device orientation as an influencing factor on touch interaction performance, indicating that age is not the exclusive influencing factor.

Details

Journal of Assistive Technologies, vol. 9 no. 3
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 23 August 2019

Yiqun Kuang, Hong Cheng, Yali Zheng, Fang Cui and Rui Huang

Abstract

Purpose

This paper aims to present a one-shot gesture recognition approach which can be a high-efficient communication channel in human–robot collaboration systems.

Design/methodology/approach

This paper applies dynamic time warping (DTW) to align two gesture sequences in the temporal domain with a novel frame-wise distance measure that matches local features in the spatial domain. Furthermore, a novel and robust bidirectional attention region extraction method is proposed to retain information in both the movement and hold phases of a gesture.
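
The DTW alignment at the core of the approach can be sketched in a few lines. The frame-wise distance is left pluggable so that a spatial local-feature matcher, as in the paper, could be substituted for the plain absolute difference used here as a default:

```python
# Dynamic time warping with a pluggable frame-wise distance.

def dtw(seq_a, seq_b, dist=lambda a, b: abs(a - b)):
    """Minimum cumulative alignment cost between two sequences."""
    n, m = len(seq_a), len(seq_b)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]
```

Because DTW allows one frame to align with several, a repeated sample costs nothing, which is what makes it suitable for gestures performed at varying speeds.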

Findings

The proposed approach is capable of providing efficient one-shot gesture recognition without elaborately designed features. The experiments on a social robot (JiaJia) demonstrate that the proposed approach can be used in a human–robot collaboration system flexibly.

Originality/value

According to the previous literature, there are no similar solutions that achieve efficient gesture recognition with a simple local feature descriptor and combine the advantages of local features with DTW.

Details

Assembly Automation, vol. 40 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 7 September 2015

Ryo Izuta, Kazuya Murao, Tsutomu Terada and Masahiko Tsukamoto

Downloads: 287

Abstract

Purpose

This paper aims to propose a gesture recognition method that works at an early stage of the gesture. An accelerometer is installed in most current mobile phones, such as iPhones, Android-powered devices and video game controllers for the Wii or PS3, and enables easy and intuitive operations. Therefore, many gesture-based user interfaces that use accelerometers are expected to appear in the future. Gesture recognition systems with an accelerometer generally have to construct models from a user's gesture data before use and recognize unknown gestures by comparing them with the models. Because the recognition process generally starts only after the gesture has finished, the recognition result and any feedback are delayed, which may cause users to retry gestures and degrades the interface's usability.

Design/methodology/approach

The simplest way to achieve early recognition is to start it at a fixed time after a gesture begins. However, accuracy would decrease if a gesture in its early stage were similar to others. Moreover, the recognition timing has to be capped by the length of the shortest gesture, which may be too early for longer gestures; conversely, delaying the recognition timing would exceed the length of shorter gestures. In addition, a proper length of training data has to be found, because full-length training data does not match input data that is still only partway through. To recognize gestures at an early stage, proper recognition timing and a proper length of training data have to be determined. This paper proposes an early-stage gesture recognition method that sequentially calculates the distance between the input and the training data. The proposed method outputs the recognition result only when one candidate has a sufficiently stronger likelihood than the other candidates, so that similar but incorrect gestures are not output.
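
A hypothetical sketch of that early-output rule: per-candidate distances accumulate sample by sample, and a label is emitted as soon as the leader beats the runner-up by a margin. The gesture templates and the margin value are invented for illustration; the paper's likelihood measure is more elaborate than this prefix distance:

```python
# Early gesture recognition by accumulating prefix distances and
# emitting a result once one candidate clearly leads.

GESTURES = {
    "up":   [0, 1, 2, 3, 4],
    "down": [0, -1, -2, -3, -4],
}

def early_recognise(stream, templates, margin=2.0):
    """Return (label, samples consumed), or (None, len(stream))."""
    dist = {name: 0.0 for name in templates}
    for t, x in enumerate(stream, start=1):
        for name, tmpl in templates.items():
            if t <= len(tmpl):               # compare against the prefix
                dist[name] += abs(x - tmpl[t - 1])
        ranked = sorted(dist, key=dist.get)
        if dist[ranked[1]] - dist[ranked[0]] >= margin:
            return ranked[0], t              # confident early output
    return None, len(stream)
```

In the toy dictionary above the two gestures diverge from the second sample onward, so a decision falls well before either template ends.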

Findings

The proposed method was experimentally evaluated on 27 kinds of gestures, and it was confirmed that the recognition process finished 1,000 ms before the end of the gestures on average without deteriorating accuracy. Gestures were recognized at an early stage of motion, which would improve interface usability and reduce the number of incorrect operations such as retried gestures. Moreover, a gesture-based photo viewer was implemented as an application of the proposed method; the early gesture recognition system was used in a live unscripted performance, confirming its effectiveness.

Originality/value

Gesture recognition methods with accelerometers generally learn a given user's gesture data before the system is used and then recognize unknown gestures by comparing them with the training data. The recognition process starts after a gesture has finished; therefore, any interaction or feedback depending on the recognition result is delayed. For example, an image on a smartphone screen rotates a few seconds after the device has been tilted, which may cause the user to retry tilting the smartphone even if the first attempt was correctly recognized. Although many studies on gesture recognition using accelerometers have been done, to the best of the authors' knowledge, none has taken these potential delays in output into consideration.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 5 August 2014

Hairong Jiang, Juan P. Wachs and Bradley S. Duerstock

Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system, incorporating object tracking and face recognition, was developed specially for individuals with upper-level spinal cord injuries to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures and locate the operator's face for object positioning, and to send these as commands to control the WMRM. The other sensor was used to automatically recognize the different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut enabling objects to be positioned close to the subject's face.
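
Descriptor matching of the kind SURF-based recognition relies on is commonly done with Lowe's ratio test; a hypothetical sketch, in which the two-dimensional "descriptors" are toys, whereas real SURF descriptors are 64- or 128-dimensional:

```python
# Nearest-neighbour descriptor matching with Lowe's ratio test.

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_match(query, reference, ratio=0.75):
    """(query index, reference index) pairs passing the ratio test:
    a match is kept only if its best distance is clearly smaller
    than the second-best, which suppresses ambiguous matches."""
    matches = []
    for qi, q in enumerate(query):
        ds = sorted((l2(q, r), ri) for ri, r in enumerate(reference))
        if len(ds) >= 2 and ds[0][0] < ratio * ds[1][0]:
            matches.append((qi, ds[0][1]))
    return matches
```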

Findings

The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 per cent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 17 August 2015

Gilbert Tang, Seemal Asif and Phil Webb

Abstract

Purpose

The purpose of this paper is to describe the integration of a gesture control system for an industrial collaborative robot. Human-robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely.

Design/methodology/approach

The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system has been tested in isolation before integration and demonstration of a sample task.

Findings

It is demonstrated that the combination of multiple gesture control methods can increase its potential applications for industrial robots.

Originality/value

The novelty of the system is its combination of dual gesture control methods, which allows operators to command an industrial robot by posing hand gestures as well as to control the robot's motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and also allows permission levels to be assigned to different users.

Details

Industrial Robot: An International Journal, vol. 42 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 January 2006

Hasanuzzaman, T. Zhang, V. Ampornaramveth and H. Ueno

Abstract

Purpose

Achieving natural interactions by means of vision and speech between humans and robots is one of the major goals that many researchers are working on. This paper aims to describe a gesture‐based human‐robot interaction (HRI) system using a knowledge‐based software platform.

Design/methodology/approach

A frame‐based knowledge model is defined for the gesture interpretation and HRI. In this knowledge model, necessary frames are defined for the known users, robots, poses, gestures and robot behaviors. First, the system identifies the user using the eigenface method. Then, face and hand poses are segmented from the camera frame buffer using the person's specific skin color information and classified by the subspace method.
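
The eigenface step used for user identification can be sketched compactly: training faces are mean-centred, the principal axes are taken from an SVD, and a probe face is assigned to the nearest gallery face in the projected subspace. The four-pixel "faces" below are toy data, not a real image set, and the paper's full pipeline also involves the subspace method for pose classification:

```python
import numpy as np

# Toy eigenface identification: PCA basis via SVD, nearest neighbour
# in the projected subspace.

TRAIN = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
GALLERY = {"alice": [1, 0, 0, 0], "bob": [0, 0, 0, 1]}

def eigenfaces(train, k=3):
    """Mean face and top-k principal axes of flattened face vectors."""
    x = np.asarray(train, dtype=float)
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    return mean, vt[:k]

def project(face, mean, basis):
    """Coordinates of a face in the eigenface subspace."""
    return basis @ (np.asarray(face, dtype=float) - mean)

def identify(face, mean, basis, gallery):
    """Name of the gallery face nearest to the probe in the subspace."""
    q = project(face, mean, basis)
    return min(gallery,
               key=lambda n: np.linalg.norm(q - project(gallery[n],
                                                        mean, basis)))
```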

Findings

The system is capable of recognizing static gestures comprised of the face and hand poses, and dynamic gestures of face in motion. The system combines computer vision and knowledge‐based approaches in order to improve the adaptability to different people.

Originality/value

This paper provides information on an experimental HRI system implemented on the frame-based software platform for agent and knowledge management using the AIBO entertainment robot, which has been demonstrated to be useful and efficient within a limited setting.

Details

Industrial Robot: An International Journal, vol. 33 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 28 January 2014

Ognjan Luzanin and Miroslav Plancak

Abstract

Purpose

The main purpose is to present a methodology that allows efficient hand gesture recognition using a low-budget, five-sensor data glove. To allow widespread use of low-budget data gloves in engineering virtual reality (VR) applications, gesture dictionaries must be enhanced with more ergonomic and symbolically meaningful hand gestures while providing high gesture recognition rates for different seen and unseen users.

Design/methodology/approach

The simple boundary-value gesture recognition methodology was replaced by a probabilistic neural network (PNN)-based gesture recognition system able to process simple and complex static gestures. In order to overcome a problem inherent to PNNs, namely slow execution with large training data sets, the proposed system uses a clustering ensemble to reduce the training data set without significant deterioration in the quality of training. The reduction is performed efficiently using three types of clustering algorithms, yielding a small number of input vectors that represent the original population very well.
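
A hypothetical sketch of the PNN classification step: each class score is a sum of Gaussian kernels centred on that class's (already reduced) training vectors. The cluster-ensemble reduction itself is represented here only by pre-supplied cluster centres, and real glove data would have five sensor values rather than two:

```python
import math

# Toy PNN: class score = average Gaussian kernel over cluster centres.

CLASSES = {
    "fist": [(0.0, 0.0), (0.1, 0.1)],   # illustrative centres
    "open": [(1.0, 1.0), (0.9, 1.1)],
}

def pnn_classify(x, classes, sigma=0.5):
    """Return the class with the largest kernel-density score at x."""
    def score(vectors):
        return sum(
            math.exp(-sum((a - b) ** 2 for a, b in zip(x, v))
                     / (2 * sigma ** 2))
            for v in vectors) / len(vectors)
    return max(classes, key=lambda c: score(classes[c]))
```

The smoothing parameter `sigma` plays the same role as the PNN spread parameter: smaller values make the decision surface hug the training vectors more tightly.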

Findings

The proposed methodology is capable of providing efficient recognition of simple and complex static gestures and was also successfully tested with gestures of an unseen user, i.e. a person who took no part in the training phase.

Practical implications

The hand gesture recognition system based on the proposed methodology enables the use of affordable data gloves with a small number of sensors in VR engineering applications which require complex static gestures, including assembly and maintenance simulations.

Originality/value

According to the literature, there are no similar solutions that allow efficient recognition of simple and complex static hand gestures based on a five-sensor data glove.

Details

Assembly Automation, vol. 34 no. 1
Type: Research Article
ISSN: 0144-5154
