Search results

1 – 10 of over 2000
Article
Publication date: 8 March 2021

Neethu P.S., Suguna R. and Palanivel Rajan S.

Abstract

Purpose

This paper aims to propose a novel methodology for classifying hand gestures using the support vector machine (SVM) classification method. Initially, the Red Green Blue (RGB) hand gesture image is converted into a YCbCr image in the preprocessing stage, and the palm-with-fingers region is then segmented by a thresholding process. The distance transformation method is applied to the segmented image. Finally, the center point (centroid) of the palm region is detected, and the fingertips are detected using the SVM classification algorithm based on the detected centroid of the palm region.

Design/methodology/approach

A gesture is a physical indication of the body used to convey information. Although any bodily movement can be considered a gesture, gestures generally originate from the movement of the hand, the face, or a combination of both, and such combined gestures are quite complex and difficult for a machine to classify. This paper proposes a novel methodology for classifying gestures using the SVM classification method. Initially, the color hand gesture image is converted into a YCbCr image in the preprocessing stage, and the palm-with-fingers region is then segmented by a thresholding process. The distance transformation method is applied to the segmented image, the center point of the palm region is detected, and the fingertips are detected using the SVM classification algorithm. The proposed hand gesture image classification system is applied and tested on the “Jochen Triesch,” “Sebastien Marcel” and “11Khands” hand gesture data sets to evaluate its efficiency. The performance of the proposed system is analyzed with respect to sensitivity, specificity, accuracy and recognition rate, and the simulation results on these data sets are compared with those of conventional methods.
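The preprocessing chain described above maps naturally onto standard image-processing primitives. The sketch below shows one plausible reading of it in Python with OpenCV; the Cb/Cr skin-threshold bounds and the input file name are illustrative assumptions, not values from the paper.

```python
# Sketch of the described pipeline: RGB -> YCbCr conversion, skin
# thresholding, distance transform, palm-centroid detection.
import cv2
import numpy as np

def palm_centroid(bgr_image: np.ndarray):
    # OpenCV stores the YCbCr space in Y/Cr/Cb channel order.
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)

    # Threshold the chrominance channels to segment the palm-with-fingers
    # region (assumed skin bounds, in [Y, Cr, Cb] order).
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)

    # Distance transform: each foreground pixel gets its distance to the
    # nearest background pixel.
    dist = cv2.distanceTransform(mask, cv2.DIST_L2, 5)

    # The palm center is the point farthest from any background pixel.
    _, _, _, max_loc = cv2.minMaxLoc(dist)
    return max_loc  # (x, y) centroid of the palm region

image = cv2.imread("hand.png")  # hypothetical input image
print("palm centroid:", palm_centroid(image))
```

In the paper, this centroid then seeds the SVM-based fingertip detection step.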

Findings

This paper proposes a novel methodology for classifying gestures using the SVM classification method, with the distance transform method used to detect the center point of the segmented palm region. The proposed methodology achieves 96.5% sensitivity, 97.1% specificity, 96.9% accuracy and a 99.3% recognition rate on the “Jochen Triesch” data set; 94.6% sensitivity, 95.4% specificity, 95.3% accuracy and a 97.8% recognition rate on the “Sebastien Marcel” data set; and 97% sensitivity, 98% specificity, 98.1% accuracy and a 98.8% recognition rate on the “11Khands” data set. Its recognition time is 0.52 s on “Jochen Triesch” images, 0.71 s on “Sebastien Marcel” images and 0.22 s on “11Khands” images. The proposed methodology clearly requires the least recognition time on the “11Khands” data set, which makes that data set well suited to real-time hand gesture applications with multi-background environments.

Originality/value

The modern world requires more automated systems to make our daily routine activities more efficient. Present-day technology has introduced touch screen methods for operating many devices and machines, with or without wired connections. This also has an impact on automated vehicles, which can be operated without any direct interaction with the driver; this is made possible by a hand gesture recognition system. Such a system captures real-time hand gestures, the physical movements of a human hand, as digital images and recognizes them against a pre-stored set of hand gestures.

Details

Circuit World, vol. 48 no. 2
Type: Research Article
ISSN: 0305-6120

Article
Publication date: 17 August 2015

Gilbert Tang, Seemal Asif and Phil Webb

Abstract

Purpose

The purpose of this paper is to describe the integration of a gesture control system for an industrial collaborative robot. Human-robot collaborative systems can be a viable manufacturing solution, but efficient control and communication are required for operations to be carried out effectively and safely.

Design/methodology/approach

The integrated system consists of facial recognition, static pose recognition and dynamic hand motion tracking. Each sub-system was tested in isolation before the integrated system was demonstrated on a sample task.

Findings

It is demonstrated that combining multiple gesture control methods can broaden the potential applications of gesture control for industrial robots.

Originality/value

The novelty of the system is the combination of a dual gesture-control method, which allows operators to command an industrial robot by posing hand gestures as well as to control the robot's motion by moving one of their hands in front of the sensor. A facial verification system is integrated to improve the robustness, reliability and security of the control system, and it also allows permission levels to be assigned to different users.
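One way to realize the permission-level idea is to have the verified identity gate which gesture commands are accepted. The sketch below is a minimal illustration of that gating logic; the operator names, pose labels, commands and levels are all hypothetical, not details from the paper.

```python
# Hypothetical sketch: a verified operator's permission level decides
# which recognized hand poses may trigger robot commands.
from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    permission_level: int  # higher = more privileges (assumed scheme)

# Assumed mapping: static pose label -> (command, required level).
COMMANDS = {
    "open_palm": ("stop_robot", 1),
    "thumbs_up": ("resume_program", 2),
    "fist": ("enter_jog_mode", 3),
}

def dispatch(operator: Operator, pose_label: str) -> str:
    if pose_label not in COMMANDS:
        return "unknown pose: ignored"
    command, required_level = COMMANDS[pose_label]
    if operator.permission_level >= required_level:
        return f"executing {command}"
    return f"{command} denied for {operator.name}"

print(dispatch(Operator("alice", 2), "thumbs_up"))  # executing resume_program
print(dispatch(Operator("bob", 1), "fist"))         # enter_jog_mode denied for bob
```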

Details

Industrial Robot: An International Journal, vol. 42 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 August 2014

Hairong Jiang, Juan P. Wachs and Bradley S. Duerstock

Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system, including object tracking and face recognition, was developed specially for individuals with upper-level spinal cord injuries to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret hand gestures and locate the operator's face for object positioning, and to send these as commands to control the WMRM. The other sensor was used to automatically recognize the different daily living objects selected by the subjects. An object recognition module employing the Speeded-Up Robust Features (SURF) algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut, enabling objects to be positioned close to the subject's face.
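The object recognition step follows the usual detect-describe-match pattern of local-feature methods. SURF itself ships only in non-free OpenCV contrib builds, so the sketch below substitutes ORB, a freely available detector with the same matching flow; the match-count threshold and file names are illustrative assumptions.

```python
# Feature-based object recognition sketch. The paper uses SURF; ORB is
# used here because SURF requires a non-free OpenCV build.
import cv2

def object_present(template_path: str, scene_path: str, min_matches: int = 25) -> bool:
    template = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    _, des_template = orb.detectAndCompute(template, None)
    _, des_scene = orb.detectAndCompute(scene, None)
    if des_template is None or des_scene is None:
        return False

    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually best matches.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_template, des_scene)
    return len(matches) >= min_matches

# Hypothetical usage: a hit would trigger coarse positioning of the arm.
if object_present("mug_template.png", "table_scene.png"):
    print("object recognized: send coarse-positioning command")
```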

Findings

The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare the manual (gestures only) and semi-manual (gestures, automatic face detection and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 January 2014

Ognjan Luzanin and Miroslav Plancak

Abstract

Purpose

The main purpose is to present a methodology that allows efficient hand gesture recognition using a low-budget, five-sensor data glove. To allow widespread use of low-budget data gloves in engineering virtual reality (VR) applications, gesture dictionaries must be enhanced with more ergonomic and symbolically meaningful hand gestures, while providing high gesture recognition rates for different seen and unseen users.

Design/methodology/approach

The simple boundary-value gesture recognition methodology was replaced by a probabilistic neural network (PNN)-based gesture recognition system able to process simple and complex static gestures. To overcome a problem inherent to PNNs, namely slow execution with large training data sets, the proposed system uses a clustering ensemble to reduce the training data set without significant deterioration in training quality. The reduction is performed efficiently using three types of clustering algorithms, yielding a small number of input vectors that represent the original population very well.
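The idea is that a PNN's cost grows with the number of stored training vectors, so clustering each class down to a few centroids keeps it fast. The sketch below illustrates this with a single k-means pass standing in for the paper's three-algorithm clustering ensemble; the smoothing parameter, cluster counts and synthetic data are illustrative assumptions.

```python
# Shrink the training set with clustering, then classify with a PNN
# (a Gaussian-kernel density estimate per class).
import numpy as np
from sklearn.cluster import KMeans

def reduce_per_class(X, y, k=10):
    """Replace each class's samples with at most k cluster centroids."""
    Xr, yr = [], []
    for label in np.unique(y):
        pts = X[y == label]
        km = KMeans(n_clusters=min(k, len(pts)), n_init=10).fit(pts)
        Xr.append(km.cluster_centers_)
        yr.extend([label] * len(km.cluster_centers_))
    return np.vstack(Xr), np.array(yr)

def pnn_predict(X_train, y_train, X_test, sigma=0.5):
    preds = []
    for x in X_test:
        # Gaussian kernel response of x to every stored training vector.
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2 * sigma**2))
        # Average response per class; predict the strongest class.
        scores = {c: k[y_train == c].mean() for c in np.unique(y_train)}
        preds.append(max(scores, key=scores.get))
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))            # stand-in for 5-sensor glove readings
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic two-gesture labels
Xr, yr = reduce_per_class(X, y, k=15)    # 600 vectors -> 30 centroids
print(pnn_predict(Xr, yr, X[:5]))
```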

Findings

The proposed methodology is capable of efficiently recognizing simple and complex static gestures and was also successfully tested with the gestures of an unseen user, i.e. a person who took no part in the training phase.

Practical implications

The hand gesture recognition system based on the proposed methodology enables the use of affordable data gloves with a small number of sensors in VR engineering applications which require complex static gestures, including assembly and maintenance simulations.

Originality/value

According to the literature, there are no similar solutions that allow efficient recognition of simple and complex static hand gestures based on a five-sensor data glove.

Details

Assembly Automation, vol. 34 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 8 March 2010

Pedro Neto, J. Norberto Pires and A. Paulo Moreira

Abstract

Purpose

Most industrial robots are still programmed using the typical teaching process through the robot teach pendant. This is a tedious and time-consuming task that requires some technical expertise, and hence new approaches to robot programming are required. The purpose of this paper is to present a robotic system that allows users to instruct and program a robot with a high level of abstraction from the robot language.

Design/methodology/approach

The paper presents in detail a robotic system that allows users, especially non-expert programmers, to instruct and program a robot intuitively, just by showing it what it should do. This is done using the two most natural human interfaces (gestures and speech), a force control system and several code generation techniques. Special attention is given to the recognition of gestures, where data extracted from a motion sensor (a three-axis accelerometer) embedded in the Wii remote controller were used to capture human hand behaviours. Gestures (dynamic hand positions) as well as manual postures (static hand positions) are recognized using a statistical approach and artificial neural networks.
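As a rough illustration of this kind of accelerometer-based recognition, the sketch below extracts simple per-axis statistics from fixed-length acceleration windows and trains a small neural network on them. The window length, features, network size and synthetic data are assumptions for illustration; the paper's own statistical approach and network design are not reproduced here.

```python
# Accelerometer-window features plus a small neural-network classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(accel: np.ndarray) -> np.ndarray:
    """accel: (n_samples, 3) window of x/y/z accelerations."""
    return np.concatenate([
        accel.mean(axis=0),                      # gravity/tilt (posture cue)
        accel.std(axis=0),                       # motion energy (gesture cue)
        accel.max(axis=0) - accel.min(axis=0),   # per-axis range
    ])

rng = np.random.default_rng(1)
# Synthetic stand-in: 200 windows of 50 samples each, two gesture classes.
windows = rng.normal(size=(200, 50, 3))
labels = rng.integers(0, 2, size=200)
X = np.array([window_features(w) for w in windows])

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict(X[:5]))
```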

Findings

It is shown that the robotic system presented enables users without programming expertise to rapidly create robot programs. The experimental tests showed that the developed system can be customized for different users and robotic platforms.

Research limitations/implications

The proposed system is tested on two different robotic platforms. Since the options adopted are mainly based on standards, it can be implemented with other robot controllers without significant changes. Future work will focus on improving the recognition rate of gestures and continuous gesture recognition.

Practical implications

The key contribution of this paper is that it offers a practical method to program robots by means of gestures and speech, improving work efficiency and saving time.

Originality/value

This paper presents an alternative to the typical robot teaching process, extending the concept of human‐robot interaction and co‐worker scenario. Since most companies do not have engineering resources to make changes or add new functionalities to their robotic manufacturing systems, this system constitutes a major advantage for small‐ to medium‐sized enterprises.

Details

Industrial Robot: An International Journal, vol. 37 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 1 January 2006

Hasanuzzaman, T. Zhang, V. Ampornaramveth and H. Ueno

Abstract

Purpose

Achieving natural interaction between humans and robots by means of vision and speech is one of the major goals that many researchers are working toward. This paper aims to describe a gesture-based human-robot interaction (HRI) system using a knowledge-based software platform.

Design/methodology/approach

A frame-based knowledge model is defined for gesture interpretation and HRI. In this model, the necessary frames are defined for the known users, robots, poses, gestures and robot behaviors. First, the system identifies the user using the eigenface method. Then, face and hand poses are segmented from the camera frame buffer using the person's specific skin color information and classified by the subspace method.
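Eigenface identification projects face images into a low-dimensional PCA subspace and matches by distance in that subspace. The sketch below shows that core idea; the image size, component count, nearest-neighbor matching rule and synthetic gallery are illustrative assumptions, not the paper's configuration.

```python
# Eigenface-style identification: PCA projection + nearest neighbor.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
# Stand-in gallery: 40 flattened 32x32 face images from 4 known users.
gallery = rng.normal(size=(40, 32 * 32))
user_ids = np.repeat(np.arange(4), 10)

pca = PCA(n_components=20)
gallery_coeffs = pca.fit_transform(gallery)  # eigenface coefficients

def identify(face: np.ndarray) -> int:
    """Return the user whose gallery image is nearest in eigenface space."""
    coeffs = pca.transform(face.reshape(1, -1))
    distances = np.linalg.norm(gallery_coeffs - coeffs, axis=1)
    return int(user_ids[np.argmin(distances)])

probe = gallery[13] + 0.1 * rng.normal(size=32 * 32)  # noisy copy of user 1
print("identified user:", identify(probe))
```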

Findings

The system is capable of recognizing static gestures comprising face and hand poses, and dynamic gestures of the face in motion. The system combines computer vision and knowledge-based approaches in order to improve its adaptability to different people.

Originality/value

The paper provides information on an experimental HRI system that has been implemented on the frame-based software platform for agent and knowledge management, using the AIBO entertainment robot, and this has been demonstrated to be useful and efficient within a limited range of situations.

Details

Industrial Robot: An International Journal, vol. 33 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 2 September 2019

Bo Zhang, Guanglong Du, Wenming Shen and Fang Li

Abstract

Purpose

The purpose of this paper is to investigate a novel gesture-based dual-robot collaborative interaction interface that achieves gesture recognition even when both hands overlap. The paper designs a hybrid-sensor gesture recognition platform that detects both-hand data for dual-robot control.

Design/methodology/approach

This paper uses a combination of a Leap Motion and a PrimeSense sensor arranged in the vertical direction, which detects both-hand data in real time. When there is occlusion between the hands, each hand is detected by one of the sensors, and a quaternion-based algorithm is used to convert between the two sensors' different coordinate systems. When there is no occlusion, the data are fused by a self-adaptive weight fusion algorithm. A collision detection algorithm is then used to detect collisions between the robots to ensure safety. Finally, the data are transmitted to the two robots.
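Two steps of that pipeline lend themselves to a short illustration: rotating a point from one sensor's frame into the other's with a unit quaternion, and blending overlapping measurements with confidence-driven weights. In the sketch below, the calibration quaternion, translation offset and confidence values are illustrative assumptions, not the paper's calibration.

```python
# Quaternion frame conversion plus confidence-weighted fusion sketch.
import numpy as np

def quat_rotate(q: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Rotate vector v by unit quaternion q = [w, x, y, z]."""
    w, x, y, z = q
    u = np.array([x, y, z])
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

# Assumed extrinsic calibration: rotation + translation taking points
# from the PrimeSense frame into the Leap Motion frame.
q_calib = np.array([np.cos(np.pi / 4), 0.0, np.sin(np.pi / 4), 0.0])  # 90 deg about y
t_calib = np.array([0.0, 0.25, 0.0])  # meters

def to_leap_frame(p_primesense: np.ndarray) -> np.ndarray:
    return quat_rotate(q_calib, p_primesense) + t_calib

def fuse(p_leap, p_primesense, conf_leap, conf_primesense):
    """Self-adaptive weighted fusion: weights track sensor confidence."""
    w = conf_leap / (conf_leap + conf_primesense)
    return w * p_leap + (1.0 - w) * to_leap_frame(p_primesense)

p_leap = np.array([0.10, 0.20, 0.30])     # hand position seen by Leap
p_prime = np.array([0.30, -0.05, -0.10])  # same hand seen by PrimeSense
print(fuse(p_leap, p_prime, conf_leap=0.9, conf_primesense=0.6))
```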

Findings

This interface is implemented on a dual-robot system consisting of two 6-DOF robots. A dual-robot cooperative experiment indicates that the proposed interface is feasible and effective: it takes less time to operate and has higher interaction efficiency.

Originality/value

A novel gesture-based dual-robot collaborative interface is proposed. It overcomes the problem of gesture occlusion in two-hand interaction with low computational complexity and low equipment cost. The proposed interface can track two-hand gestures stably over long periods even when there is occlusion between the hands, and it reduces the number of hand resets, which shortens operation time. The interface achieves natural and safe interaction between the human and the dual robots.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 September 2016

JingRong Li, YuHua Xu, JianLong Ni and QingHui Wang

Abstract

Purpose

Hand gesture-based interaction can provide far more intuitive, natural and immersive feelings for users manipulating 3D objects in virtual assembly (VA). A mechanical assembly consists mostly of general-purpose machine elements or mechanical parts, which can be divided into four types based on their geometric features and functionalities. For the different types of machine elements, engineers formulate corresponding grasping gestures based on their domain knowledge or customs for ease of assembly. This paper therefore aims to support a virtual hand in assembling mechanical parts.

Design/methodology/approach

It proposes a novel glove-based virtual hand grasping approach for virtual mechanical assembly. The kinematic model of the virtual hand is first set up by analyzing the hand structure and its possible movements; four types of grasping gestures are then defined by the joint angles of the fingers for connectors and for the three types of parts, respectively. Recognition of virtual hand grasping is developed based on collision detection and gesture matching, and stable grasping conditions are discussed.
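Gesture matching against predefined grasp types can be read as a nearest-template test over finger joint angles. The sketch below illustrates that reading; the template angles, grasp names and tolerance are illustrative assumptions rather than the paper's definitions.

```python
# Joint-angle gesture matching: pick the closest predefined grasp type.
import numpy as np

# Assumed templates: mean flexion angle (degrees) per finger
# (thumb, index, middle, ring, little) for four grasp types.
TEMPLATES = {
    "cylindrical_grasp": np.array([40, 70, 70, 70, 70]),
    "pinch_grasp":       np.array([35, 60, 10, 10, 10]),
    "spherical_grasp":   np.array([45, 55, 55, 55, 55]),
    "lateral_grasp":     np.array([50, 80, 80, 80, 80]),
}

def match_gesture(joint_angles: np.ndarray, tolerance: float = 15.0):
    """Return the best-matching grasp, or None if nothing is close enough."""
    best_name, best_err = None, np.inf
    for name, template in TEMPLATES.items():
        err = np.abs(joint_angles - template).max()  # worst-finger error
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tolerance else None

measured = np.array([38, 68, 72, 66, 71])  # stand-in data-glove reading
print(match_gesture(measured))             # cylindrical_grasp
```

In the proposed system, a match like this would be combined with collision detection between the virtual hand and the part before the grasp is accepted.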

Findings

A prototype system was designed and developed to implement the proposed approach. A case study on the VA of a two-stage gear reducer demonstrates the functionality of the system, and users' feedback indicates that more natural and stable hand grasping interaction can be achieved for the VA of mechanical parts.

Originality/value

It proposes a novel glove-based virtual hand grasping approach for virtual mechanical assembly.

Details

Assembly Automation, vol. 36 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 4 April 2016

Ediz Saykol, Halit Talha Türe, Ahmet Mert Sirvanci and Mert Turan

Abstract

Purpose

The purpose of this paper is to classify a set of Turkish sign language (TSL) gestures by posture-labeling-based finite-state automata (FSA) that utilize depth values in location-based features. Gesture classification/recognition is crucial not only for communicating with hearing-impaired people but also for educational purposes. The paper also demonstrates the practical use of the techniques for TSL.

Design/methodology/approach

Gesture classification is based on sequences of posture labels that are assigned using location-based features, which are invariant under rotation and scale. A grid-based signing-space clustering scheme is proposed to guide the feature extraction step. Gestures are then recognized by FSA that process the temporally ordered posture labels.
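An FSA over posture labels advances one state per frame label and accepts once the full temporal pattern has been seen. The sketch below shows that mechanism for a single made-up gesture; the posture alphabet and transition table are illustrative assumptions, not the paper's TSL labels.

```python
# Recognizing one gesture with a finite-state automaton over
# temporally ordered posture labels.
ACCEPT = "accept"

# Assumed FSA for a gesture in which the hand rises from waist to head.
# TRANSITIONS[state][posture_label] -> next state; anything else resets.
TRANSITIONS = {
    "start": {"hand_at_waist": "low"},
    "low":   {"hand_at_waist": "low", "hand_at_chest": "mid"},
    "mid":   {"hand_at_chest": "mid", "hand_at_head": ACCEPT},
}

def recognize(posture_sequence) -> bool:
    state = "start"
    for label in posture_sequence:
        state = TRANSITIONS.get(state, {}).get(label, "start")
        if state == ACCEPT:
            return True
    return False

frames = ["hand_at_waist", "hand_at_waist", "hand_at_chest", "hand_at_head"]
print(recognize(frames))  # True
```

In a full system, one such automaton per gesture could be run over the label stream, with the accepting automaton giving the classification.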

Findings

Gesture classification accuracies and posture labeling performance are compared with those of k-nearest neighbors to show that the technique provides a reasonable framework for recognizing TSL gestures. A challenging set of gestures was tested; the technique is extensible, however, and enlarging the training set will increase performance.

Practical implications

The outcomes can be utilized in a system for educational purposes, especially for hearing-impaired children. A communication system could also be designed based on this framework.

Originality/value

The posture labeling scheme, inspired by the keyframe labeling concept from video processing, is the original part of the proposed gesture classification framework. The search space is reduced to a single dimension instead of the 3D signing space, which also facilitates the design of recognition schemes. The grid-based clustering scheme and the location-based features are also new, and the depth values are obtained from a Kinect. The paper is of interest to researchers in pattern recognition and computer vision.

Details

Kybernetes, vol. 45 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 16 October 2017

Jiajun Li, Jianguo Tao, Liang Ding, Haibo Gao, Zongquan Deng, Yang Luo and Zhandong Li

Abstract

Purpose

The purpose of this paper is to extend the usage of stroke gestures to manipulation tasks, making the interaction between humans and robots more efficient.

Design/methodology/approach

In this paper, a set of stroke gestures is designed for typical manipulation tasks. A gesture recognition and parameter extraction system is proposed to exploit the information in the stroke gestures drawn by users.

Findings

The results show that the designed gesture recognition subsystem reaches a recognition accuracy of 99.00 per cent. The parameter extraction subsystem successfully extracts the parameters needed for typical manipulation tasks with a success rate of about 86.30 per cent. The system shows acceptable performance in the experiments.

Practical implications

Using stroke gestures in manipulation tasks can make the transmission of human intentions to robots more efficient. The proposed gesture recognition subsystem is based on a convolutional neural network, which is robust to varied input. The parameter extraction subsystem can extract the spatial information encoded in stroke gestures.
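Since the recognition subsystem is convolutional, a drawn stroke can be rasterized to a small binary image and classified end to end. The sketch below is a minimal PyTorch illustration of that idea; the input size, layer layout and gesture count are assumptions, not the paper's architecture.

```python
# Minimal CNN classifier for rasterized stroke-gesture images.
import torch
import torch.nn as nn

class StrokeGestureCNN(nn.Module):
    def __init__(self, num_gestures: int = 8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_gestures)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 64, 64) rasterized stroke images.
        return self.classifier(self.features(x).flatten(1))

model = StrokeGestureCNN()
stroke_image = torch.zeros(1, 1, 64, 64)  # stand-in rasterized stroke
logits = model(stroke_image)
print("predicted gesture id:", logits.argmax(dim=1).item())
```

Task parameters such as target positions would then be recovered separately from the stroke's spatial layout by the parameter extraction subsystem.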

Originality/value

The authors design stroke gestures for manipulation tasks, which extends the usage of stroke gestures. The proposed gesture recognition and parameter extraction system can use stroke gestures to obtain both the type of the task and its important parameters simultaneously.

Details

Industrial Robot: An International Journal, vol. 44 no. 6
Type: Research Article
ISSN: 0143-991X
