Search results

1 – 10 of over 7000
Article
Publication date: 1 January 2006

Hasanuzzaman, T. Zhang, V. Ampornaramveth and H. Ueno

Achieving natural interactions by means of vision and speech between humans and robots is one of the major goals that many researchers are working on. This paper aims to describe…

Abstract

Purpose

Achieving natural interactions by means of vision and speech between humans and robots is one of the major goals that many researchers are working on. This paper aims to describe a gesture-based human‐robot interaction (HRI) system using a knowledge‐based software platform.

Design/methodology/approach

A frame‐based knowledge model is defined for the gesture interpretation and HRI. In this knowledge model, necessary frames are defined for the known users, robots, poses, gestures and robot behaviors. First, the system identifies the user using the eigenface method. Then, face and hand poses are segmented from the camera frame buffer using the person's specific skin color information and classified by the subspace method.
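
The eigenface step mentioned above is essentially PCA over flattened face images followed by nearest-neighbour matching in the reduced subspace. The following sketch illustrates that idea only; the array names, component count and matching rule are assumptions, not the authors' implementation.

```python
# Hypothetical eigenface-style user identification: PCA on flattened grayscale
# face images, then nearest-neighbour matching in the reduced subspace.
import numpy as np

def train_eigenfaces(faces, n_components=20):
    """faces: (n_samples, h*w) matrix of flattened training face images."""
    mean = faces.mean(axis=0)
    centered = faces - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenfaces = vt[:n_components]              # principal directions ("eigenfaces")
    weights = centered @ eigenfaces.T           # training projections
    return mean, eigenfaces, weights

def identify(face, mean, eigenfaces, weights, labels):
    """Return the label of the training face closest in eigenface space."""
    w = (face - mean) @ eigenfaces.T
    distances = np.linalg.norm(weights - w, axis=1)
    return labels[int(np.argmin(distances))]
```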

Findings

The system is capable of recognizing static gestures comprised of the face and hand poses, and dynamic gestures of face in motion. The system combines computer vision and knowledge‐based approaches in order to improve the adaptability to different people.

Originality/value

Provides information on an experimental HRI system that has been implemented in the frame‐based software platform for agent and knowledge management using the AIBO entertainment robot, and this has been demonstrated to be useful and efficient within a limited situation.

Details

Industrial Robot: An International Journal, vol. 33 no. 1
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 4 April 2016

Ediz Saykol, Halit Talha Türe, Ahmet Mert Sirvanci and Mert Turan

The purpose of this paper is to classify a set of Turkish sign language (TSL) gestures by posture-labeling-based finite-state automata (FSA) that utilize depth values in location…

Abstract

Purpose

The purpose of this paper is to classify a set of Turkish sign language (TSL) gestures by posture-labeling-based finite-state automata (FSA) that utilize depth values in location-based features. Gesture classification/recognition is crucial not only for communicating with visually impaired people but also for educational purposes. The paper also demonstrates the practical use of the techniques for TSL.

Design/methodology/approach

Gesture classification is based on the sequence of posture labels that are assigned by location-based features, which are invariant under rotation and scale. A grid-based signing-space clustering scheme is proposed to guide the feature extraction step. Gestures are then recognized by FSA that process temporally ordered posture labels.
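
The FSA stage can be pictured as a state machine that consumes the temporally ordered posture labels one by one and accepts when a gesture-specific final state is reached. The sketch below is a minimal illustration with invented posture labels and transitions, not the TSL gesture definitions from the paper.

```python
# Minimal finite-state automaton over posture-label sequences.
class GestureFSA:
    def __init__(self, transitions, accepting):
        self.transitions = transitions      # {(state, posture_label): next_state}
        self.accepting = accepting          # set of accepting states

    def accepts(self, posture_sequence):
        state = "start"
        for label in posture_sequence:
            state = self.transitions.get((state, label))
            if state is None:               # no valid transition: reject early
                return False
        return state in self.accepting

# Illustrative two-posture gesture: "hand at chest, then hand raised".
fsa = GestureFSA(
    transitions={("start", "chest"): "s1", ("s1", "chest"): "s1",
                 ("s1", "raised"): "s2", ("s2", "raised"): "s2"},
    accepting={"s2"},
)
print(fsa.accepts(["chest", "chest", "raised"]))   # True
```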

Findings

Gesture classification accuracies and posture labeling performance are compared to k-nearest neighbor to show that the technique provides a reasonable framework for recognition of TSL gestures. A challenging set of gestures was tested; however, the technique is extensible, and extending the training set will increase the performance.

Practical implications

The outcomes can be utilized as a system for educational purposes, especially for visually impaired children. In addition, a communication system could be designed based on this framework.

Originality/value

The posture labeling scheme, which is inspired by the keyframe labeling concept in video processing, is the original part of the proposed gesture classification framework. The search space is reduced to a single dimension instead of the 3D signing space, which also facilitates the design of recognition schemes. The grid-based clustering scheme and location-based features are also new, and the depth values are obtained from Kinect. The paper is of interest for researchers in pattern recognition and computer vision.

Details

Kybernetes, vol. 45 no. 4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 2 September 2019

Bo Zhang, Guanglong Du, Wenming Shen and Fang Li

The purpose of this paper is to present a novel gesture-based dual-robot collaborative interaction interface that achieves gesture recognition even when both hands overlap…

Abstract

Purpose

The purpose of this paper is to present a novel gesture-based dual-robot collaborative interaction interface that achieves gesture recognition even when both hands overlap. This paper designs a hybrid-sensor gesture recognition platform to detect both-hand data for dual-robot control.

Design/methodology/approach

This paper uses a combination of Leap Motion and PrimeSense in the vertical direction, which detects both-hand data in real time. When there is occlusion between the hands, each hand is detected by one of the sensors, and a quaternion-based algorithm is used to convert the data between the two sensors' different coordinate systems. When there is no occlusion, the data are fused by a self-adaptive weight fusion algorithm. Then a collision detection algorithm is used to detect collisions between the robots to ensure safety. Finally, the data are transmitted to the dual robots.
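
Two of the ingredients described above lend themselves to a compact illustration: rotating a point from one sensor's frame into the other's with a unit quaternion, and fusing the two estimates with confidence-dependent weights. The quaternion value and the weighting rule below are placeholders, not the calibration or the self-adaptive scheme used in the paper.

```python
# Quaternion-based frame conversion plus a simple confidence-weighted fusion.
import numpy as np

def quat_mul(a, b):
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(point, q):
    """Rotate a 3D point by the unit quaternion q = (w, x, y, z)."""
    p = np.concatenate(([0.0], np.asarray(point, dtype=float)))
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mul(quat_mul(q, p), q_conj)[1:]

def fuse(p_leap, p_primesense, conf_leap, conf_primesense):
    """Blend the two sensors' estimates by their normalised confidences."""
    w = conf_leap / (conf_leap + conf_primesense)
    return w * p_leap + (1.0 - w) * p_primesense
```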

Findings

This interface is implemented on a dual-robot system consisting of two 6-DOF robots. The dual-robot cooperative experiment indicates that the proposed interface is feasible and effective, and it takes less time to operate and has higher interaction efficiency.

Originality/value

A novel gesture-based dual-robot collaborative interface is proposed. It overcomes the problem of gesture occlusion in two-hand interaction with low computational complexity and low equipment cost. The proposed interface can perform long-term stable tracking of two-hand gestures even if there is occlusion between the hands. Meanwhile, it reduces the number of hand resets, thereby reducing operation time. The proposed interface achieves natural and safe interaction between the human and the dual-robot system.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 5 September 2016

JingRong Li, YuHua Xu, JianLong Ni and QingHui Wang

Hand gesture-based interaction can provide far more intuitive, natural and immersive feelings for users to manipulate 3D objects for virtual assembly (VA). A mechanical assembly…

Abstract

Purpose

Hand gesture-based interaction can provide far more intuitive, natural and immersive feelings for users to manipulate 3D objects for virtual assembly (VA). A mechanical assembly consists mostly of general-purpose machine elements or mechanical parts that can be classified into four types based on their geometric features and functionalities. For different types of machine elements, engineers formulate corresponding grasping gestures based on their domain knowledge or customs for ease of assembly. Therefore, this paper aims to support a virtual hand in assembling mechanical parts.

Design/methodology/approach

It proposes a novel glove-based virtual hand grasping approach for virtual mechanical assembly. The kinematic model of the virtual hand is set up first by analyzing the hand structure and possible movements, and then four types of grasping gestures are defined with joint angles of fingers for connectors and three types of parts, respectively. The recognition of virtual hand grasping is developed based on collision detection and gesture matching. Moreover, stable grasping conditions are discussed.
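
One way to picture the gesture-matching step is to store each grasp type as a template vector of finger joint angles and match the measured glove reading to the nearest template within a tolerance. The template values and tolerance below are invented for illustration and are not the joint-angle definitions from the paper.

```python
# Nearest-template matching of measured finger joint angles to grasp types.
import numpy as np

GRASP_TEMPLATES = {                     # degrees per finger, illustrative only
    "connector_pinch": np.array([60, 70, 10, 10, 10]),
    "shaft_wrap":      np.array([80, 80, 80, 80, 80]),
    "plate_hold":      np.array([40, 45, 45, 45, 20]),
    "open_hand":       np.array([5, 5, 5, 5, 5]),
}

def match_grasp(joint_angles, tolerance=15.0):
    """Return the best-matching grasp label, or None if nothing is close enough."""
    best, best_err = None, np.inf
    for label, template in GRASP_TEMPLATES.items():
        err = np.abs(np.asarray(joint_angles) - template).mean()
        if err < best_err:
            best, best_err = label, err
    return best if best_err <= tolerance else None
```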

Findings

A prototype system is designed and developed to implement the proposed approach. The case study on VA of a two-stage gear reducer demonstrates the functionality of the system. From the users’ feedback, it is found that more natural and stable hand grasping interaction for VA of mechanical parts can be achieved.

Originality/value

It proposes a novel glove-based virtual hand grasping approach for virtual mechanical assembly.

Details

Assembly Automation, vol. 36 no. 4
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 5 August 2014

Hairong Jiang, Juan P. Wachs and Bradley S. Duerstock

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture

Abstract

Purpose

The purpose of this paper is to develop an integrated, computer vision-based system to operate a commercial wheelchair-mounted robotic manipulator (WMRM). In addition, a gesture recognition interface system was developed specially for individuals with upper-level spinal cord injuries including object tracking and face recognition to function as an efficient, hands-free WMRM controller.

Design/methodology/approach

Two Kinect® cameras were used synergistically to perform a variety of simple object retrieval tasks. One camera was used to interpret the hand gestures and locate the operator's face for object positioning, and then send those as commands to control the WMRM. The other sensor was used to automatically recognize different daily living objects selected by the subjects. An object recognition module employing the Speeded Up Robust Features algorithm was implemented, and recognition results were sent as commands for “coarse positioning” of the robotic arm near the selected object. Automatic face detection was provided as a shortcut, enabling objects to be positioned close to the subject's face.
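
A rough idea of what a SURF-based recognition module can look like is sketched below with OpenCV: descriptors from an object template are matched against the scene and filtered with Lowe's ratio test. This is not the authors' implementation; SURF lives in the opencv-contrib package, and the threshold, ratio and file paths are assumptions.

```python
# SURF feature matching between an object template and the current scene.
import cv2

def count_good_matches(template_path, scene_path, ratio=0.75):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
    tmpl = cv2.imread(template_path, cv2.IMREAD_GRAYSCALE)
    scene = cv2.imread(scene_path, cv2.IMREAD_GRAYSCALE)
    _, desc_t = surf.detectAndCompute(tmpl, None)
    _, desc_s = surf.detectAndCompute(scene, None)
    matches = cv2.BFMatcher().knnMatch(desc_t, desc_s, k=2)
    # Lowe's ratio test keeps only distinctive matches.
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return len(good)

# The template that accumulates the most good matches would be taken as the
# recognised daily-living object before issuing the "coarse positioning" command.
```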

Findings

The gesture recognition interface incorporated hand detection, tracking and recognition algorithms, and yielded a recognition accuracy of 97.5 percent for an eight-gesture lexicon. Task completion times were measured to compare manual (gestures only) and semi-manual (gestures, automatic face detection, and object recognition) WMRM control modes. The use of automatic face and object detection significantly reduced the completion times for retrieving a variety of daily living objects.

Originality/value

Three computer vision modules were integrated to construct an effective, hands-free interface for individuals with upper-limb mobility impairments to control a WMRM.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 28 January 2014

Ognjan Luzanin and Miroslav Plancak

The main purpose is to present a methodology that allows efficient hand gesture recognition using a low-budget, five-sensor data glove. To allow widespread use of low-budget data gloves in…

Abstract

Purpose

The main purpose is to present a methodology that allows efficient hand gesture recognition using a low-budget, five-sensor data glove. To allow widespread use of low-budget data gloves in engineering virtual reality (VR) applications, gesture dictionaries must be enhanced with more ergonomic and symbolically meaningful hand gestures, while providing high gesture recognition rates when used by different seen and unseen users.

Design/methodology/approach

The simple boundary-value gesture recognition methodology was replaced by a probabilistic neural network (PNN)-based gesture recognition system able to process simple and complex static gestures. In order to overcome problems inherent to PNN – primarily, slow execution with large training data sets – the proposed gesture recognition system uses a clustering ensemble to reduce the training data set without significant deterioration of the quality of training. The reduction of the training data set is efficiently performed using three types of clustering algorithms, yielding a small number of input vectors that represent the original population very well.
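
The two-step idea reads as: compress each gesture class to a handful of prototypes with a clustering algorithm, then classify new glove readings with a probabilistic neural network, i.e. a Gaussian kernel density evaluated per class. The sketch below uses plain k-means as the single clustering algorithm and assumed values for the cluster count and smoothing parameter, so it simplifies the clustering-ensemble scheme described in the paper.

```python
# Training-set reduction by per-class k-means, then PNN (Parzen-window) scoring.
import numpy as np
from sklearn.cluster import KMeans

def reduce_training_set(X, y, clusters_per_class=10):
    centers, labels = [], []
    for c in np.unique(y):
        km = KMeans(n_clusters=clusters_per_class, n_init=10).fit(X[y == c])
        centers.append(km.cluster_centers_)
        labels.extend([c] * clusters_per_class)
    return np.vstack(centers), np.array(labels)

def pnn_predict(x, centers, labels, sigma=0.1):
    """Pick the class whose prototypes give the largest Gaussian kernel sum."""
    scores = {}
    for c in np.unique(labels):
        d2 = np.sum((centers[labels == c] - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma ** 2)))
    return max(scores, key=scores.get)
```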

Findings

The proposed methodology is capable of providing efficient recognition of simple and complex static gestures and was also successfully tested with gestures of an unseen user, i.e. a person who took no part in the training phase.

Practical implications

The hand gesture recognition system based on the proposed methodology enables the use of affordable data gloves with a small number of sensors in VR engineering applications which require complex static gestures, including assembly and maintenance simulations.

Originality/value

According to the literature, there are no similar solutions that allow efficient recognition of simple and complex static hand gestures based on a five-sensor data glove.

Details

Assembly Automation, vol. 34 no. 1
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 16 March 2015

Marco Porta

The purpose of this paper is to consider the two main existing text input techniques based on “eye gestures” – namely EyeWrite and Eye-S – and compare them to each other and to…

Abstract

Purpose

The purpose of this paper is to consider the two main existing text input techniques based on “eye gestures” – namely EyeWrite and Eye-S – and compare them to each other and to the traditional “virtual keyboard” approach.

Design/methodology/approach

The study primarily aims to assess user performance at the very beginning of the learning process. However, a partial longitudinal evaluation is also provided. Two kinds of experiments were conducted, involving 14 testers.

Findings

Results show that while the virtual keyboard is faster, EyeWrite and Eye-S are also appreciated and can be viable alternatives (after a proper training period).

Practical implications

Writing methods based on eye gestures deserve special attention, as they require less screen space and need limited tracking precision. This study highlights the fact that gesture-based techniques imply a greater initial effort and require proper training, not only to gain knowledge of eye interaction per se, but also to learn the gesture alphabet. The author thinks that the investigation can drive the designers of gaze-controlled, gesture-based writing techniques to give more consideration to the intuitiveness of the gestures themselves, as they may greatly influence user performance in the first stages of the learning process.

Originality/value

This is the first study comparing EyeWrite and Eye-S. Moreover, unlike other analyses, the investigation is mainly aimed at assessing user performance with the three text entry methods at the inception of the learning procedure.

Details

Journal of Assistive Technologies, vol. 9 no. 1
Type: Research Article
ISSN: 1754-9450

Article
Publication date: 7 November 2016

Ing-Jr Ding and Zong-Gui Wu

The Kinect sensor released by Microsoft is well-known for its effectiveness on human gesture recognition. Gesture recognition by Kinect has been proved to be an efficient command…

Abstract

Purpose

The Kinect sensor released by Microsoft is well known for its effectiveness in human gesture recognition. Gesture recognition by Kinect has been proved to be an efficient command operation and provides a human-computer interface in addition to traditional speech recognition. For Kinect gesture recognition in the application of gesture command operations, recognizing the active user who issues the gesture command to Kinect is a crucial problem. The purpose of this paper is to propose a method for recognizing the identity of an active user using a combined eigenspace and Gaussian mixture model (GMM) with Kinect-extracted action gesture features.

Design/methodology/approach

Several Kinect-derived gesture features are explored to determine effective pattern features for the active user recognition task. A separate Kinect-derived feature design for eigenspace recognition and for GMM classification is presented to achieve the optimal performance of each individual classifier. In addition to these feature designs, the study develops a combined recognition method, called combined eigenspace-GMM, which hybridizes the decision information of both the eigenspace and the GMM to produce a more reliable user recognition result.
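
A score-level combination of the two classifiers can be pictured as below: an eigenspace (PCA) distance per enrolled user and a per-user GMM log-likelihood are blended with a weight. The fusion weight, model sizes and the omission of score normalisation are simplifying assumptions, not details taken from the paper.

```python
# Sketch of combining an eigenspace score with a per-user GMM score.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def train(features_by_user, n_components=10, n_mixtures=4):
    """features_by_user: {user: (n_samples, d) array of gesture features}."""
    all_feats = np.vstack(list(features_by_user.values()))
    pca = PCA(n_components=n_components).fit(all_feats)
    protos = {u: pca.transform(f).mean(axis=0) for u, f in features_by_user.items()}
    gmms = {u: GaussianMixture(n_components=n_mixtures).fit(f)
            for u, f in features_by_user.items()}
    return pca, protos, gmms

def recognise(x, pca, protos, gmms, alpha=0.5):
    z = pca.transform(x.reshape(1, -1))[0]
    scores = {}
    for user in protos:
        eig_score = -np.linalg.norm(z - protos[user])      # closer is better
        gmm_score = gmms[user].score(x.reshape(1, -1))      # mean log-likelihood
        # In practice the two scores would be normalised to a common range first.
        scores[user] = alpha * eig_score + (1.0 - alpha) * gmm_score
    return max(scores, key=scores.get)
```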

Findings

Active user recognition using an effective combination of eigenspace and GMM with well-designed gesture features achieves outstanding recognition accuracy. The presented Kinect-based user recognition system has the further competitive benefit of recognizing both the gesture commands and the users who issue them.

Originality/value

A hybridized scheme of eigenspace and GMM performs better than eigenspace alone or GMM alone on the recognition accuracy of active user recognition; a separate Kinect-derived feature design for eigenspace recognition and GMM classification is presented to achieve the optimal performance of each individual classifier; combined eigenspace-GMM active user recognition, which belongs to model-based active user recognition design, can be readily extended to increase the recognition rate by adjusting the recognition models.

Article
Publication date: 21 August 2017

Yassine Bouteraa and Ismail Ben Abdallah

The idea is to exploit the natural stability and performance of the human arm during movement, execution and manipulation. The purpose of this paper is to remotely control a…

Abstract

Purpose

The idea is to exploit the natural stability and performance of the human arm during movement, execution and manipulation. The purpose of this paper is to remotely control a handling robot with a low cost but effective solution.

Design/methodology/approach

The developed approach is based on three different techniques to ensure movement and pattern recognition of the operator's arm as well as effective control of the object manipulation task. First, the methodology relies on Kinect-based gesture recognition of the operator's arm. However, using only a vision-based approach for hand posture recognition is not a suitable solution, mainly when the hand is occluded. The proposed approach therefore supplements the vision-based system with an electromyography (EMG)-based biofeedback system for posture recognition. Moreover, the approach adds force feedback to the vision-based gesture control and the EMG-based posture recognition to inform the operator of the real grasping state.

Findings

The main finding is a robust method for gesture-based control of a robot manipulator during movement, manipulation and grasping. The proposed approach uses a real-time gesture control technique based on a Kinect camera that can provide the exact position of each joint of the operator's arm. The developed solution also integrates EMG biofeedback and force feedback in its control loop. In addition, the authors propose a user-friendly human-machine interface (HMI) which allows the user to control a robotic arm in real time. The robust trajectory-tracking challenge has been solved by the implementation of a sliding mode controller. A fuzzy logic controller has been implemented to manage the grasping task based on the EMG signal. Experimental results have shown the high efficiency of the proposed approach.
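
The grasping side of the loop can be illustrated with a tiny fuzzy-style mapping from a rectified EMG activation level to a gripper closing command, in the spirit of the fuzzy logic controller mentioned above. The membership functions and rule outputs are illustrative assumptions only, not the controller from the paper.

```python
# Fuzzy-style mapping from normalised EMG activation to a gripper closing speed.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def grasp_command(emg_level):
    """emg_level in [0, 1]; returns a closing speed in [0, 1]."""
    mu_low = tri(emg_level, -0.4, 0.0, 0.4)
    mu_med = tri(emg_level, 0.2, 0.5, 0.8)
    mu_high = tri(emg_level, 0.6, 1.0, 1.4)
    # Rules: low activation -> hold, medium -> close slowly, high -> close fast.
    num = mu_low * 0.0 + mu_med * 0.4 + mu_high * 1.0
    den = mu_low + mu_med + mu_high
    return num / den if den > 0 else 0.0

print(grasp_command(0.5))   # 0.4 (close slowly)
```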

Research limitations/implications

There are some constraints when applying the proposed method, such as the sensitivity of the desired trajectory generated by the human arm to random and unwanted movements. These can damage the manipulated object during the teleoperation process; in such cases, operator skill is highly required.

Practical implications

The developed control approach can be used in all applications that require real-time human-robot cooperation.

Originality/value

The main advantage of the developed approach is that it benefits simultaneously from three different techniques: EMG biofeedback, a vision-based system and haptic feedback. In such situations, using only vision-based approaches for hand posture recognition is not effective; therefore, the recognition should be based on the biofeedback naturally generated by the muscles responsible for each posture. Moreover, the use of a force sensor in a closed-loop control scheme without operator intervention is ineffective in special cases in which the manipulated objects vary over a wide range with different metallic characteristics. Therefore, the use of a human-in-the-loop technique can imitate the natural human postures in the grasping task.

Details

Industrial Robot: An International Journal, vol. 44 no. 5
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 8 March 2010

Pedro Neto, J. Norberto Pires and A. Paulo Moreira

Most industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. This is a tedious and time‐consuming task that requires…

Abstract

Purpose

Most industrial robots are still programmed using the typical teaching process, through the use of the robot teach pendant. This is a tedious and time‐consuming task that requires some technical expertise, and hence new approaches to robot programming are needed. The purpose of this paper is to present a robotic system that allows users to instruct and program a robot with a high level of abstraction from the robot language.

Design/methodology/approach

The paper presents in detail a robotic system that allows users, especially non‐expert programmers, to instruct and program a robot simply by showing it what it should do, in an intuitive way. This is done using the two most natural human interfaces (gestures and speech), a force control system and several code generation techniques. Special attention is given to the recognition of gestures, where the data extracted from a motion sensor (three‐axis accelerometer) embedded in the Wii remote controller were used to capture human hand behaviours. Gestures (dynamic hand positions) as well as manual postures (static hand positions) are recognized using a statistical approach and artificial neural networks.
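
One plausible reading of the accelerometer-based recognition is that fixed-length windows of three-axis acceleration samples are flattened into feature vectors and fed to a small neural network. The window length, network size and training arrays below are assumptions, not the statistical/ANN setup reported in the paper.

```python
# Classifying flattened accelerometer windows with a small neural network.
import numpy as np
from sklearn.neural_network import MLPClassifier

WINDOW = 50                                   # samples per gesture window (assumed)

def to_feature(window_xyz):
    """window_xyz: (WINDOW, 3) array of accelerometer readings -> flat vector."""
    return np.asarray(window_xyz).reshape(-1)

def train_classifier(X_train, y_train):
    """X_train: (n_gestures, WINDOW*3) flattened windows; y_train: gesture labels."""
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000)
    clf.fit(X_train, y_train)
    return clf
```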

Findings

It is shown that the robotic system presented is suitable to enable users without programming expertise to rapidly create robot programs. The experimental tests showed that the developed system can be customized for different users and robotic platforms.

Research limitations/implications

The proposed system is tested on two different robotic platforms. Since the options adopted are mainly based on standards, it can be implemented with other robot controllers without significant changes. Future work will focus on improving the recognition rate of gestures and continuous gesture recognition.

Practical implications

The key contribution of this paper is that it offers a practical method to program robots by means of gestures and speech, improving work efficiency and saving time.

Originality/value

This paper presents an alternative to the typical robot teaching process, extending the concept of human‐robot interaction and co‐worker scenario. Since most companies do not have engineering resources to make changes or add new functionalities to their robotic manufacturing systems, this system constitutes a major advantage for small‐ to medium‐sized enterprises.

Details

Industrial Robot: An International Journal, vol. 37 no. 2
Type: Research Article
ISSN: 0143-991X
