Search results

1 – 6 of 6
Article
Publication date: 4 April 2016

Ruiwei Shen, Tsutomu Terada and Masahiko Tsukamoto

Abstract

Purpose

This paper aims to control crowd flow naturally by presenting appropriate information.

Design/methodology/approach

The authors developed a navigation application for an event held in Osaka called “Osaka Mizube Bar” and divided users into three groups, each shown different information in the restaurant list view.

Findings

The results of the experiment confirmed that users focus on an item's position in the ranked list, regardless of the information shown for each item.

Originality/value

This paper applies persuasive technology to information presentation in an event application.

Details

International Journal of Pervasive Computing and Communications, vol. 12 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 3 April 2017

Shuhei Tsuchida, Tatsuya Takemori, Tsutomu Terada and Masahiko Tsukamoto

Abstract

Purpose

When designing a performance involving people and mobile robots, the required functions and shape of the robot must be considered. However, it can be difficult to account for all of the requirements. The purpose of this paper is to discuss a mobile robot in the shape of a ball that is used in theatrical performances.

Design/methodology/approach

The paper proposes a mobile robot that can give the audience the optical illusion of the unique movements of a sphere by mounting a spherical light-emitting diode (LED) display on a high-agility wheeled robot.

Findings

It was found that movements that are difficult to implement with existing mechanisms can nonetheless be visualized through the use of light.

Originality/value

The paper proposes the concept of using pseudo-physical movements in performances with robots. The authors built a robot that visually reproduces the movements of a rolling sphere and is capable of faster movement and easier position estimation than previous spherical robots.
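The geometry behind the rolling-sphere illusion can be sketched as follows. This is an illustrative example with assumed parameters, not the authors' implementation: for a wheeled robot carrying a spherical display to look like a rolling ball, the displayed texture must rotate through the angle a real sphere of the same radius would roll through while covering the same distance (rolling without slipping).

```python
import math

def rolling_angle_deg(distance_m, radius_m):
    """Rotation angle (degrees) of a sphere rolling without slipping
    while its centre translates by distance_m."""
    return math.degrees(distance_m / radius_m)

# A sphere of radius 0.2 m translating 0.1 m rolls through 0.5 rad.
print(round(rolling_angle_deg(0.1, 0.2), 1))  # → 28.6
```

Driving the spherical LED display with this angle, synchronized to the wheeled base's odometry, is one way the visual impression of rolling could be reproduced without a physically rolling mechanism.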

Details

International Journal of Pervasive Computing and Communications, vol. 13 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 7 September 2015

Kazuya Murao, Hayami Tobise, Tsutomu Terada, Toshiki Iso, Masahiko Tsukamoto and Tsutomu Horikoshi

Abstract

Purpose

User authentication is generally used to protect personal information such as phone numbers, photos and account information stored in a mobile device by limiting use to a specific person, e.g. the owner of the device. Authentication methods using a password, PIN, face recognition or fingerprint identification have been widely used; however, these methods suffer from difficulty of one-handed operation, vulnerability to shoulder surfing and illegal access using a fingerprint copied with super glue or a facial portrait. From the viewpoints of usability and safety, a strong yet uncomplicated method is required.

Design/methodology/approach

In this paper, a user authentication method is proposed based on grip gestures, using pressure sensors mounted on the lateral and back sides of a mobile phone. A grip gesture is the act of grasping a mobile phone, intended to replace the conventional unlock procedure, and can be performed with one hand. Moreover, grip gestures are hard to imitate, as the finger movements and grip force during a gesture are barely visible to others.

Findings

The feature values of grip force are experimentally investigated and the proposed method is evaluated in terms of error rate. The method achieved an equal error rate of 0.02, which is comparable to face recognition.
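The equal error rate (EER) reported above is a standard biometric metric. The sketch below shows how it is typically computed from match scores; the score data and threshold sweep are illustrative assumptions, not the paper's evaluation code. Here a lower score means a closer match to the enrolled grip template.

```python
def far_frr(genuine, impostor, threshold):
    """False accept rate and false reject rate at a given threshold."""
    far = sum(s <= threshold for s in impostor) / len(impostor)
    frr = sum(s > threshold for s in genuine) / len(genuine)
    return far, frr

def equal_error_rate(genuine, impostor):
    """Sweep thresholds over all observed scores and return the rate
    at the point where FAR and FRR are closest (the EER estimate)."""
    best = None
    for t in sorted(set(genuine) | set(impostor)):
        far, frr = far_frr(genuine, impostor, t)
        gap = abs(far - frr)
        if best is None or gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

# Toy scores: genuine attempts cluster low, impostor attempts high.
genuine = [0.10, 0.12, 0.15, 0.20, 0.35]
impostor = [0.30, 0.55, 0.60, 0.70, 0.80]
print(equal_error_rate(genuine, impostor))  # → 0.2
```

An EER of 0.02, as reported, would mean that at the operating threshold both the false accept and false reject rates are about 2 per cent.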

Originality/value

Many studies using pressure sensors to recognize grip patterns have been proposed; however, conventional works either recognize grip patterns without identifying users, or require long pressure-data sequences for confident authentication. The proposed method authenticates users with a short grip gesture.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 30 August 2013

Ruiwei Shen, Tsutomu Terada and Masahiko Tsukamoto

Abstract

Purpose

The purpose of this paper is to design and propose a new interface for hearing-impaired users who have difficulty perceiving environmental sounds.

Design/methodology/approach

The authors propose the use of an augmented reality (AR) system with sound source recognition to augment human vision. In this system, the sound source and its position are detected using acoustic processing.

Findings

The authors confirmed that the source and direction of sound could be effectively recognized and visualized through AR, allowing the user to recognize environmental sounds. When there was only a single sound source in the surrounding environment, such as at home or during simple work, and especially when the source was near the user, the system provided information on the sound source and visualized it to satisfy the user's need.

Originality/value

The system can recognize environmental sounds in real time and inform the user of the type of sound by showing a virtual object in the user's sight. Furthermore, the user can find the direction of a sound source using a microphone array and locate it through an AR marker attached to the object.
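One common way a microphone array yields sound direction is time difference of arrival (TDOA): the same sound reaches each microphone at a slightly different time, and the delay fixes the bearing. The sketch below is a hypothetical two-microphone example with assumed sample rate and mic spacing; the paper's actual acoustic processing is not specified here.

```python
import math

def tdoa_samples(left, right):
    """Lag (in samples) of the right channel relative to the left,
    found by maximizing the cross-correlation of the two channels."""
    n = len(left)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        val = sum(left[i] * right[i + lag]
                  for i in range(max(0, -lag), min(n, n - lag)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

def direction_deg(lag, rate=16000, spacing=0.2, c=343.0):
    """Bearing from broadside (degrees), from the inter-mic delay."""
    delay = lag / rate
    s = max(-1.0, min(1.0, delay * c / spacing))
    return math.degrees(math.asin(s))

# A pulse reaches the right mic 3 samples after the left mic.
left  = [0, 0, 0, 1, 0, 0, 0, 0, 0, 0]
right = [0, 0, 0, 0, 0, 0, 1, 0, 0, 0]
lag = tdoa_samples(left, right)
print(lag, round(direction_deg(lag), 1))
```

In practice the estimated bearing would then drive where the virtual object is rendered in the user's field of view.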

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 7 September 2015

Ryo Izuta, Kazuya Murao, Tsutomu Terada and Masahiko Tsukamoto

Abstract

Purpose

This paper aims to propose a gesture recognition method that works at an early stage of the gesture. An accelerometer is installed in most current mobile phones, such as iPhones and Android-powered devices, and in video game controllers for the Wii or PS3, enabling easy and intuitive operations. Therefore, many gesture-based user interfaces that use accelerometers are expected to appear in the future. Gesture recognition systems with an accelerometer generally have to construct models from a user's gesture data before use and recognize unknown gestures by comparing them with the models. Because the recognition process generally starts only after the gesture has finished, the recognition result and feedback are delayed, which may cause users to retry gestures and degrades the interface usability.

Design/methodology/approach

The simplest way to achieve early recognition is to start it at a fixed time after a gesture begins. However, accuracy would decrease if a gesture in its early stage resembles others. Moreover, the recognition timing has to be capped by the length of the shortest gesture, which may be too early for longer gestures; conversely, delayed recognition timing will exceed the length of shorter gestures. In addition, a proper length of training data has to be found, as full-length training data does not match input data that is only partway through. To recognize gestures at an early stage, both a proper recognition timing and a proper length of training data have to be decided. This paper proposes an early-stage gesture recognition method that sequentially calculates the distance between the input and training data. The proposed method outputs the recognition result only when one candidate is markedly more likely than the others, so that similar but incorrect gestures are not output.
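The decision rule described above can be sketched as follows. This is a minimal illustration under assumed details (a mean-absolute-difference prefix distance and a fixed margin), not the paper's algorithm: the input prefix is compared against each gesture template as samples arrive, and a label is emitted as soon as the best candidate beats the runner-up by a margin.

```python
def prefix_distance(template, prefix):
    """Mean absolute difference over the overlapping prefix."""
    n = min(len(template), len(prefix))
    return sum(abs(template[i] - prefix[i]) for i in range(n)) / n

def early_recognize(templates, stream, margin=0.5):
    """templates: {label: [samples]}. Returns (label, samples_used)
    as soon as one candidate is clearly closest, else (None, len)."""
    prefix = []
    for sample in stream:
        prefix.append(sample)
        dists = sorted((prefix_distance(t, prefix), label)
                       for label, t in templates.items())
        if len(dists) > 1 and dists[1][0] - dists[0][0] >= margin:
            return dists[0][1], len(prefix)
    return None, len(prefix)

templates = {
    "shake": [0, 2, -2, 2, -2, 0],
    "tilt":  [0, 1, 2, 3, 3, 3],
}
label, used = early_recognize(templates, [0, 2, -2, 2, -2, 0])
print(label, used)  # → shake 2
```

Here the "shake" label is emitted after only 2 of the 6 input samples, illustrating how recognition can finish well before the gesture ends while ambiguous early prefixes defer the decision.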

Findings

The proposed method was experimentally evaluated on 27 kinds of gestures, and it was confirmed that the recognition process finished 1,000 msec before the end of the gestures on average without deteriorating accuracy. Gestures were recognized at an early stage of motion, which would improve interface usability and reduce the number of incorrect operations such as retried gestures. Moreover, a gesture-based photo viewer was implemented as an application of the proposed method; the early gesture recognition system was used in a live unscripted performance and its effectiveness was confirmed.

Originality/value

Gesture recognition methods with accelerometers generally learn a given user's gesture data before the system is used, then recognize unknown gestures by comparing them with the training data. The recognition process starts after a gesture has finished, and therefore any interaction or feedback depending on the recognition result is delayed. For example, an image on a smartphone screen rotates a few seconds after the device has been tilted, which may cause the user to retry tilting even if the first gesture was correctly recognized. Although many studies on gesture recognition using accelerometers have been done, to the best of the authors' knowledge, none has taken these potential delays in output into consideration.

Details

International Journal of Pervasive Computing and Communications, vol. 11 no. 3
Type: Research Article
ISSN: 1742-7371

Content available
Article
Publication date: 30 August 2013

Ismail Khalil

Details

International Journal of Pervasive Computing and Communications, vol. 9 no. 3
Type: Research Article
ISSN: 1742-7371
