Search results
1 – 10 of over 3000
Damien Brun, Susan M. Ferreira, Charles Gouin-Vallerand and Sébastien George
Abstract
Purpose
Smart eyewear, such as augmented or virtual reality headsets, allows the projection of virtual content through a display worn on the user’s head. This paper aims to present a mobile platform, named “CARTON”, which transforms a smartphone into smart eyewear, following a do-it-yourself (DIY) approach. The platform comprises three main components: a blueprint to build the hardware prototype with very simple materials and regular tools; a software development kit (SDK) to help with the development of new applications (e.g. augmented reality apps); and a second SDK (ControlWear) to interact with mobile applications through a smartwatch.
Design/methodology/approach
User experiments were conducted, in which participants were asked to create, by themselves, the CARTON’s hardware part and perform usability tests with their own creation. A second round of experimentation was conducted to evaluate three different interaction modalities.
Findings
Qualitative user feedback and quantitative results show that CARTON is functional and can be built by anyone, without specific skills. The results also showed that ControlWear obtained the most positive results compared with the other interaction modalities, and that users’ interaction preferences vary depending on the task.
Originality/value
The authors describe a novel way to make smart eyewear available to a wide audience around the world. By providing everything open source and open hardware, they intend to improve the accessibility of smart eyewear technologies and aim to accelerate research around them.
Ziyu Liao, Bai Chen, Tianzuo Chang, Qian Zheng, Keming Liu and Junnan Lv
Abstract
Purpose
Supernumerary robotic limbs (SRLs) are a new type of wearable robot that augments the user’s operating ability and perception of the environment through extra robotic limbs. There are some literature reviews of SRLs’ key technologies and development trends, but the design of SRLs has not been fully discussed and summarized. This paper aims to focus on the design of SRLs and provides a comprehensive review of their ontological structure design.
Design/methodology/approach
In this paper, the related literature on SRLs is summarized and analyzed with VOSviewer. The structural features of different types of SRLs are extracted, and the design approaches and characteristics that distinguish SRLs from typical wearable robots are then discussed.
Findings
The design concept of SRLs differs from that of conventional wearable robots. SRLs admit various configurations and installation positions, which influence their safety and cooperative performance.
Originality/value
This paper discusses the structural design of SRLs through a literature review. The review will help researchers understand the structural features and the key points of the ontological design of SRLs, and can serve as a reference for designing them.
Pei Jia, Huosheng H. Hu, Tao Lu and Kui Yuan
Abstract
Purpose
This paper presents a novel hands‐free control system for intelligent wheelchairs (IWs) based on visual recognition of head gestures.
Design/methodology/approach
A robust head gesture‐based interface (HGI) is designed for head gesture recognition of the RoboChair user. The recognised gestures generate motion control commands for the low‐level DSP motion controller, which drives the RoboChair according to the user’s intention. The Adaboost face detection algorithm and the Camshift object tracking algorithm are combined in our system to achieve accurate face detection, tracking and gesture recognition in real time. The system is intended as a human‐friendly interface that lets elderly and disabled people operate our intelligent wheelchair with head gestures rather than their hands.
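The gesture-to-command step described above can be pictured as a simple lookup from recognised gestures to low-level motion commands. The gesture names and command encoding below are illustrative assumptions, not the paper’s actual protocol:

```python
# Hypothetical mapping from recognised head gestures to motion commands.
# Gesture labels and (direction, speed) encoding are assumptions for illustration.
GESTURE_COMMANDS = {
    "nod_forward": ("FORWARD", 0.5),  # (direction, speed in m/s)
    "tilt_back":   ("STOP",    0.0),
    "turn_left":   ("LEFT",    0.3),
    "turn_right":  ("RIGHT",   0.3),
}

def gesture_to_command(gesture: str) -> tuple[str, float]:
    """Translate a recognised head gesture into a low-level motion command.
    Unrecognised gestures default to STOP, a sensible fail-safe for a wheelchair."""
    return GESTURE_COMMANDS.get(gesture, ("STOP", 0.0))
```

Defaulting unknown input to STOP reflects the safety-first behaviour one would expect of an assistive wheelchair controller.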
Findings
This is an extremely useful system for users whose limb movements are restricted by conditions such as Parkinson’s disease or quadriplegia.
Practical implications
In this paper, a novel integrated approach to real‐time face detection, tracking and gesture recognition is proposed, namely HGI.
Originality/value
It is a useful human‐robot interface for IWs.
Abstract
Purpose
The purpose of this paper is to present a novel non-contact method of using head movement to control software without the need for wearable devices.
Design/methodology/approach
A webcam and software are used to track head position. When the head is moved through a virtual target, a keystroke is simulated. The system was assessed by participants with impaired mobility using Sensory Software’s Grid 2 software as a test platform.
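The trigger mechanism described above (head moves through a virtual target, a keystroke is simulated) can be sketched as an edge-triggered switch. The class and field names below are assumptions for illustration, not the system’s actual code:

```python
from dataclasses import dataclass

@dataclass
class Target:
    """A virtual target rectangle in camera coordinates (assumed representation)."""
    x: int
    y: int
    w: int
    h: int

def head_in_target(head_xy: tuple[int, int], target: Target) -> bool:
    """True if the tracked head position lies inside the virtual target."""
    x, y = head_xy
    return target.x <= x < target.x + target.w and target.y <= y < target.y + target.h

class SwitchEmulator:
    """Emit one simulated keystroke each time the head *enters* the target.
    Edge-triggered: dwelling inside the target does not repeat the key."""
    def __init__(self, target: Target, key: str = " "):
        self.target, self.key = target, key
        self.inside = False
        self.pressed: list[str] = []  # a real system would inject OS key events

    def update(self, head_xy: tuple[int, int]) -> None:
        now = head_in_target(head_xy, self.target)
        if now and not self.inside:
            self.pressed.append(self.key)
        self.inside = now
```

Edge-triggering matters here: without it, a user resting inside the target would flood the switchable software with repeated keystrokes.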
Findings
The target user group could effectively use this system to interact with switchable software.
Practical implications
Physical head switches could be replaced with virtual devices, reducing fatigue and dissatisfaction.
Originality/value
A webcam is used to control software through head gestures, with no need for the participant to wear any specialised technology or markers. The system is shown to benefit motor-impaired participants operating switchable software.
Younghwan Kim and Hyunseung Lee
Abstract
Purpose
This study aims to develop a safe, wearable clothing system that combines visibility-enhancing and emergency–accident-responding functions for two-wheeled vehicle (TWV) users' safety assistance.
Design/methodology/approach
First, a wearable system (WS) was developed that allows users to control turn signals, brake lights and the emergency flasher using only head movements. Second, multiconnected systems linking the WSs with a smartphone application (AS) were developed, providing accident recognition, driving-photo capture and storage, and emergency notification functions. Third, usability testing of each function was performed to assess the operability of the systems.
Findings
The intuitive interface, which uses head movement as gesture commands, was effectively operated for controlling turn signals, brake lights and emergency flasher when driving, despite differences in user physique and boarding structure among TWVs. In addition, using Bluetooth low energy and Wi-Fi protocols simultaneously can establish automatic accident recognition–notification and driving photo capture–storage–display functions by linking two WSs with one AS.
Research limitations/implications
This study presents a case of using relatively accessible technologies within the fashion industry to improve users’ safety, and it provides fundamental data for convergence education on smart fashion products, highlighting the study’s significance in this era of convergence.
Originality/value
The WSs and the AS of a TWV user draw the visual attention of other drivers and pedestrians, reducing the risk of accidents. By autonomously reporting emergencies, the system lets users receive emergency medical treatment quickly when an accident occurs, offering a social contribution to public safety.
Heon‐Hui Kim, Yun‐Su Ha, Zeungnam Bien and Kwang‐Hyun Park
Abstract
Purpose
The purpose of this paper is to deal with a method for gesture encoding and reproduction, particularly aiming at a text‐to‐gesture (TTG) system that enables robotic agents to generate proper gestures automatically and naturally in human‐robot interaction.
Design/methodology/approach
Reproducing proper gestures, naturally synchronized with speech, is important under the TTG concept. The authors first introduce a gesture model that is effective for abstracting and describing a variety of human gestures. Based on the model, a gesture encoding/decoding scheme is proposed to encode observed gestures symbolically and parametrically and to reproduce robot gestures from the codes. In particular, this paper mainly addresses a gesture scheduling method that deals with the alignment and refinement of gestural motions, in order to reproduce robotic gesticulation in a human‐like, natural fashion.
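The scheduling idea above, aligning gestural motions so they do not collide, can be sketched minimally. The strategy below (delay a gesture until the previous one finishes) is a crude stand-in assumed for illustration, not the paper’s actual refinement algorithm:

```python
def schedule_gestures(gestures: list[tuple[str, float, float]]) -> list[tuple[str, float, float]]:
    """Align a list of (name, start, end) gesture intervals so none overlap.
    Assumed policy: when two gestures collide, the later one is delayed to
    begin when the earlier one ends, preserving each gesture's duration."""
    out: list[tuple[str, float, float]] = []
    cursor = 0.0  # earliest time the next gesture may start
    for name, start, end in sorted(gestures, key=lambda g: g[1]):
        duration = end - start
        start = max(start, cursor)
        out.append((name, start, start + duration))
        cursor = start + duration
    return out
```

A real TTG scheduler would also compress or blend overlapping motions to stay synchronized with speech, rather than only delaying them.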
Findings
The proposed method has been evaluated through a series of questionnaire surveys, and it was found that reproduced gestures by a robotic agent could appeal satisfactorily to human beings.
Originality/value
This paper provides a series of algorithms to treat overlapped motions and to refine the timing parameters for the motions, so that robotic agents reproduce human‐like, natural gestures.
Mary Dawson, Juan M. Madera and Jack A. Neal
Abstract
Purpose
One out of four foodservice employees speaks a foreign language at home. Furthermore, 37 percent of those employees speak limited English. Given this, hospitality managers must find ways to effectively communicate with their employees. This paper seeks to address these issues.
Design/methodology/approach
The methodology employed a perspective‐taking manipulation. Participants were placed in the role of an individual who does not speak the native language used in the workplace. Groups were measured on performance, quality and accuracy, and were videotaped to measure the frequency of non‐verbal behaviors. Participants were surveyed to measure their levels of positivity.
Findings
The results of this study identified effective non‐verbal communication strategies for managers (a combination of gestures, demonstrating and pointing). When leaders used these strategies, groups completed the recipes faster. Managers who spoke another language expressed more positive behavior towards the group, and group members expressed more positive behaviors towards each other when they had a second‐language leader.
Research limitations/implications
A limitation is that data were collected from students and the methodology simulated an environment of limited language proficiency. Although this method has been shown to be effective, the true experiences of non‐English‐speaking workers involve more complex processes.
Practical implications
This research suggests that non‐verbal tools are effective when communication barriers exist. Managers who are multiculturally competent are more efficient in leading employees. Positive feedback must be given even if it is non‐verbal.
Originality/value
This research offers valuable strategies for hospitality managers to communicate with those employees who speak limited English.
Michael Winkler, Kai Michael Höver and Max Mühlhäuser
Abstract
Purpose
The purpose of this study is to present a depth information-based solution for automatic camera control, depending on the presenter’s moving positions. Talks, presentations and lectures are often captured on video to give a broad audience the possibility to (re-)access the content. As presenters are often moving around during a talk, it is necessary to steer recording cameras.
Design/methodology/approach
We use depth information from Kinect to implement a prototypical application to automatically steer multiple cameras for recording a talk.
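The networking idea, a tracker publishing presenter positions to multiple subscribed camera controllers, can be sketched as follows. The class names and the pan-angle geometry are assumptions for illustration, not the system’s actual API:

```python
import math

class CameraBus:
    """Minimal in-process publish/subscribe hub (assumed stand-in for the
    paper's networked messaging layer): the tracker publishes presenter
    positions; each camera controller subscribes and reacts independently."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, callback) -> None:
        self.subscribers.append(callback)

    def publish(self, position: tuple[float, float]) -> None:
        for callback in self.subscribers:
            callback(position)

def pan_angle(camera_xy: tuple[float, float], presenter_xy: tuple[float, float]) -> float:
    """Pan angle in degrees a camera at camera_xy must turn to face the presenter,
    measured in a shared 2D floor-plane coordinate frame."""
    dx = presenter_xy[0] - camera_xy[0]
    dy = presenter_xy[1] - camera_xy[1]
    return math.degrees(math.atan2(dy, dx))
```

Because each camera subscribes independently, adding a recording camera requires no change to the tracker, which is the scalability benefit publish/subscribe buys here.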
Findings
We present our experiences with the system during actual lectures at a university. We found that the Kinect can robustly track a presenter during a talk. Nevertheless, our prototypical solution reveals potential for improvement, which we discuss in our future work section.
Originality/value
Tracking a presenter is based on a skeleton model extracted from depth information instead of using two-dimensional (2D) motion- or brightness-based image processing techniques. The solution uses a scalable networking architecture based on publish/subscribe messaging for controlling multiple video cameras.
Abstract
Gives reports and surveys of selected current research and developments in systems and cybernetics. They include Biocybernetics, Innovative systems, Focus on the brain, Computer survey and Communications in the twenty‐first century.
Abstract
Purpose
The purpose of this paper is to investigate the variety of affective emotions that are evoked in extant project management (PM) practitioners by various PM artefacts.
Design/methodology/approach
A phenomenological methodology is used for eliciting, through self‐reporting and observation of gesture, the affective responses and consequential emotions experienced by PM practitioners as they interact or recount previous interactions with various artefacts of PM.
Findings
This paper suggests that PM is prevalent in the Western corporate environment because project managers obtain an emotional affect from aspects of the PM experience, and project managers utilise various PM artefacts to emotionally manipulate their environment to their own advantage.
Practical implications
The paper argues for a PM environment which is founded on evidence‐based practices. It suggests that future research should explore the links between PM, social architecture and flow theory.
Originality/value
This paper advances the evolutionary framework for PM research.