Search results

1 – 10 of 825
Article
Publication date: 25 March 2024

Boyang Hu, Ling Weng, Kaile Liu, Yang Liu, Zhuolin Li and Yuxin Chen

Abstract

Purpose

Gesture recognition plays an important role in many fields such as human–computer interaction, medical rehabilitation, and virtual and augmented reality. Gesture recognition using wearable devices is a common and effective recognition method. This study aims to combine the inverse magnetostrictive effect and the tunneling magnetoresistance effect to propose a novel wearable sensing glove for the field of gesture recognition.

Design/methodology/approach

A magnetostrictive sensing glove with a gesture recognition function is proposed based on Fe-Ni alloy, tunneling magnetoresistive elements, an Agilus30 base and square permanent magnets. The sensing glove consists of five sensing units that measure the bending angle of each finger joint. The optimal structure of the sensing units is determined through experimentation and simulation. The output voltage model of the sensing units is established, and their output characteristics are tested on the experimental platform. Fifteen gestures are selected for recognition, and the corresponding output voltages are collected to construct the data set; the data are processed using a back-propagation neural network.
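
The abstract does not give network details, so the following is a minimal sketch of how five-channel voltage samples might be classified into 15 gestures with a back-propagation-trained network, here using scikit-learn's MLPClassifier; the array shapes, layer sizes and placeholder data are assumptions for illustration only, not values from the paper.

```python
# Sketch: back-propagation neural network classifying glove gestures from
# per-joint voltage readings. Shapes, layer sizes and data are assumed.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

N_UNITS = 5        # one sensing unit per finger (assumed feature dimension)
N_GESTURES = 15    # gesture classes reported in the abstract

# Placeholder data set: rows are voltage samples, labels are gesture indices.
X = np.random.rand(1500, N_UNITS)           # stand-in for collected voltages
y = np.random.randint(0, N_GESTURES, 1500)  # stand-in for gesture labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feed-forward network trained with back-propagation.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"recognition accuracy: {clf.score(X_test, y_test):.2%}")
```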

Findings

The sensing units can detect changes in the bending angle of finger joints from 0 to 105 degrees, with a maximum error of 4.69% between the experimental and theoretical values. The average recognition accuracy of the back-propagation neural network is 97.53% for the 15 gestures.

Research limitations/implications

The sensing glove can only recognize static gestures at present, and further research is still needed to recognize dynamic gestures.

Practical implications

This study offers a new approach to gesture recognition using wearable devices.

Social implications

This study has a broad application prospect in the field of human–computer interaction.

Originality/value

The sensing glove can collect voltage signals under different gestures and recognize them with good repeatability, which gives it broad application prospects in the field of human–computer interaction.

Details

Sensor Review, vol. 44 no. 2
Type: Research Article
ISSN: 0260-2288

Keywords

Article
Publication date: 2 February 2023

Ahmed Eslam Salman and Magdy Raouf Roman

Abstract

Purpose

The study proposed a human–robot interaction (HRI) framework to enable operators to communicate remotely with robots in a simple and intuitive way. The study focused on situations in which operators with no programming skills must accomplish teleoperated tasks dealing with randomly localized, different-sized objects in an unstructured environment. The purpose of this study is to reduce stress on operators, increase accuracy and reduce task completion time. A particular application of the proposed system is in radioactive isotope production factories. The proposed approach combines the reactivity of the operator’s direct control with the powerful tools of vision-based object classification and localization.

Design/methodology/approach

Perceptive real-time gesture control based on a Kinect sensor is formulated by information fusion between human intuitiveness and an augmented reality-based vision algorithm. Objects are localized using a developed feature-based vision algorithm, in which the homography is estimated and the Perspective-n-Point problem is solved. The 3D object position and orientation are stored in the robot end-effector memory for the final mission adjustment, awaiting a gesture control signal to autonomously pick/place an object. Object classification is performed using a one-shot Siamese neural network (NN) to train a proposed deep NN; other well-known models are also included in a comparison. The system was contextualized in one of the nuclear industry applications, radioactive isotope production, and its validation was performed through a user study involving 10 participants from different backgrounds.
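
The localization steps named above (feature matching, homography estimation, Perspective-n-Point solution) can be illustrated with a short OpenCV sketch. The image paths, object dimensions and camera intrinsics below are assumptions for illustration, not values from the paper.

```python
# Sketch: match features between a reference image of the object and the
# camera frame, estimate a homography, project the object's corners into the
# frame and solve PnP for the 3D pose handed to the robot.
import cv2
import numpy as np

ref = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)  # assumed path
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)   # assumed path

orb = cv2.ORB_create(2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_frm, des_frm = orb.detectAndCompute(frame, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_ref, des_frm)
matches = sorted(matches, key=lambda m: m.distance)[:100]

src = np.float32([kp_ref[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_frm[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Project the template corners into the camera frame via the homography.
h, w = ref.shape
corners_2d = cv2.perspectiveTransform(
    np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2), H)

# Solve Perspective-n-Point with an assumed planar 0.1 x 0.1 m object and
# assumed pinhole intrinsics to recover the object pose.
corners_3d = np.float32([[0, 0, 0], [0.1, 0, 0], [0.1, 0.1, 0], [0, 0.1, 0]])
K = np.float32([[800, 0, 320], [0, 800, 240], [0, 0, 1]])  # assumed intrinsics
ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d, K, None)
print("object translation (m):", tvec.ravel())
```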

Findings

The results revealed the effectiveness of the proposed teleoperation system and demonstrated its potential for use by users with no robotics experience to effectively accomplish remote robot tasks.

Social implications

The proposed system reduces risk and increases the level of safety when applied in hazardous environments such as the nuclear industry.

Originality/value

The contribution and uniqueness of the presented study lie in the development of a well-integrated HRI system that can tackle the four aforementioned circumstances in an effective and user-friendly way. High operator–robot reactivity is maintained by using the direct control method, while much of the cognitive stress is removed by using the elective autonomous mode to manipulate randomly localized objects of different configurations. This necessitates building an effective deep learning algorithm (compared with well-known methods) to recognize objects under different conditions: illumination levels, shadows and different postures.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 5
Type: Research Article
ISSN: 0143-991X

Keywords

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal, and multimodality is a representation of many modes chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform, to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for one person to comprehend all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications means that readily adapts to virtual training will greatly enhance planning and mission rehearsals.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting with multiple options, suggestions and recommendations will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and the possibilities of extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments. At this time, there is no integrated approach to multimodal human–robot interaction that proposes flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.

Article
Publication date: 30 March 2023

Tseng-Lung Huang and Henry F.L. Chung

Abstract

Purpose

Drawing on embodied cognition theory, this study examined the impact of midair, gesture-based somatosensory augmented reality (AR) experience on consumer delight and stickiness intention. The mediating effects of three psychological states for body schema (i.e. natural symbol sets, vivid memory and human touch) on the relationships between somatosensory AR and consumer delight/stickiness intention are determined. By filling gaps in the research, we hope to provide guidance on how to drive delightful somatosensory AR marketing.

Design/methodology/approach

Two experiments were conducted (Study 1 and Study 2) to test the research model and hypotheses. These experiments compared the effects of the “presence” (midair, gesture-based) and “absence” (mouse-based traditional website) conditions in somatosensory AR on consumer body schema and the creation of a delightful virtual shopping experience (i.e. consumer delight and stickiness intention).

Findings

The consumer delight and stickiness intention created in the presence condition were much higher than those in the absence condition. Consumers appeared to prefer engaging in a midair, gesture-based somatosensory AR experience and exploring an augmented metaverse reality to interacting with a mouse-based traditional website. We also found that giving online consumers more somatosensory activities and kinesthetic experiences effectively inspired the three psychological states of body schema in online consumers.

Originality/value

The results contribute to the AR experience and somatosensory marketing literature by revealing the role of natural symbol sets, vivid memory and the sense of human touch. This research breaks through the long-developed research paradigm on consumer delight, which has been limited to traditional entities and web contexts. We also extend embodied cognition theory to the study of somatosensory AR marketing.

Details

Journal of Research in Interactive Marketing, vol. 18 no. 1
Type: Research Article
ISSN: 2040-7122

Keywords

Open Access
Article
Publication date: 22 September 2023

Linh Duong and Malin Brännback

Abstract

Purpose

This study aims to explore gender performance in entrepreneurial pitching. Understanding pitching as a social practice, the authors argue that pitch content and body gestures contain gender-based norms and practices. The authors focus on early-stage ventures and the hegemonic masculinities and femininities that are performed in entrepreneurial pitches. The main research question is as follows: How is gender performed in entrepreneurial pitching?

Design/methodology/approach

The authors carried out the study using a post-structuralist feminist approach. They collected and analyzed nine online pitches with the reflexive thematic method to depict the hegemonic masculinities and femininities performed in the pitch.

Findings

The authors found that heroic and breadwinner masculinities are dominant in pitching. Both male and female founders perform hegemonic masculinities. Entrepreneurs are expected to be assertive but empathetic people. Finally, there are connections between what entrepreneurs do and what investors ask, indicating the iteration of gender performance and expectations.

Research limitations/implications

While the online setting helped the authors collect data during the pandemic, it limited observation of the place, space and interactions between the judges/investors and the entrepreneurs. As a result, the linguistic and gestural communication of the investors in the pitch is not discussed at full length in this paper. Also, as the authors observed, people come to the pitch knowing what they should perform and how they should interact. Therefore, the preparation of the pitch as a study context could provide rich details on how gender norms and stereotypes influence people's interactions and their entrepreneurial identity. Lastly, the study has a methodological limitation: the authors did not include aspects of space in the analysis, mainly because of the variety of settings of the pitching sessions in the data set.

Practical implications

For social practices and policies, the results indicate barriers to finance for women entrepreneurs. Women entrepreneurs are rewarded when they perform entrepreneurial hegemonic masculinities with a touch of emphasized femininities. Ultimately, if women entrepreneurs do not perform as investors expect them to, they will face barriers to acquiring finance. It is important to acknowledge how certain gendered biases might be (re)constructed and (re)produced through entrepreneurial activities, of which pitching is one.

Social implications

Practitioners could utilize the research findings to understand how gender stereotypes exist not only on the pitch stage but also before and after the pitch, such as in the choice of business idea and pitch training. In other words, it is necessary to create a more enabling environment for women entrepreneurs, such as customizing accelerator programs so that all business ideas receive relevant support from experts. On a macro level, the study has shown that seemingly gender-equal societies do not necessarily translate into higher participation of women in entrepreneurship.

Originality/value

For theoretical contributions, the study enhances the discussion that entrepreneurship is gendered; women and men entrepreneurs need to perform certain hegemonic traits to be legitimated as founders. The authors also address various pitching practices that shape pitch performance by including both textual and semiotic data in the study. This study provides social implications on the awareness of gendered norms and the design of entrepreneurial pitching.

Details

International Journal of Gender and Entrepreneurship, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1756-6266

Keywords

Article
Publication date: 29 December 2023

Younghwan Kim and Hyunseung Lee

Abstract

Purpose

This study aims to develop a safe, wearable clothing system that combines visibility-enhancing and emergency–accident-responding functions for two-wheeled vehicle (TWV) users' safety assistance.

Design/methodology/approach

First, the wearable system (WS), which allows users to control turn signals, brake lights and the emergency flasher with head movements alone, was developed. Second, multiconnected systems were developed between the WSs and a smartphone application (AS), providing accident-occurrence recognition, driving photo capture–storage and emergency notification functions. Third, usability testing of each function was performed to assess the operability of the systems.
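
The abstract does not describe the head-movement interface in detail, so the following is a hypothetical sketch of the general idea: classifying head pitch/roll angles into signalling commands. The thresholds and command names are invented for illustration and do not reflect the actual WS firmware or its BLE/Wi-Fi protocol.

```python
# Sketch: map head-movement angles to signalling commands (hypothetical).
from dataclasses import dataclass

@dataclass
class HeadPose:
    pitch_deg: float   # nod forward/backward
    roll_deg: float    # tilt left/right

def command_from_pose(pose: HeadPose, threshold: float = 20.0) -> str:
    """Map a head pose to a signalling command (assumed mapping)."""
    if pose.roll_deg <= -threshold:
        return "LEFT_TURN_SIGNAL"
    if pose.roll_deg >= threshold:
        return "RIGHT_TURN_SIGNAL"
    if pose.pitch_deg >= threshold:
        return "BRAKE_LIGHT"
    if pose.pitch_deg <= -threshold:
        return "EMERGENCY_FLASHER"
    return "NONE"

print(command_from_pose(HeadPose(pitch_deg=5.0, roll_deg=-28.0)))  # LEFT_TURN_SIGNAL
```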

Findings

The intuitive interface, which uses head movements as gesture commands, effectively controlled the turn signals, brake lights and emergency flasher while driving, despite differences in user physique and boarding structure among TWVs. In addition, using Bluetooth Low Energy and Wi-Fi protocols simultaneously can establish automatic accident recognition–notification and driving photo capture–storage–display functions by linking two WSs with one AS.

Research limitations/implications

This study presents a case of using relatively accessible technologies within the fashion industry to improve users' safety, and it provides fundamental data for convergence education on smart fashion products, highlighting the study's significance in this era of convergence.

Originality/value

The WSs and the AS of a TWV user visually attract the attention of other drivers and pedestrians, reducing the risk of accidents; a social contribution to public safety is possible by allowing the system to autonomously report emergencies so that the user can receive emergency medical treatment quickly when an accident occurs.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 1
Type: Research Article
ISSN: 0955-6222

Keywords

Abstract

Details

Understanding Intercultural Interaction: An Analysis of Key Concepts, 2nd Edition
Type: Book
ISBN: 978-1-83753-438-8

Executive summary
Publication date: 21 September 2023

SYRIA: President’s China invite is a symbolic gesture

Details

DOI: 10.1108/OXAN-ES282126

ISSN: 2633-304X

Keywords

Executive summary
Publication date: 16 May 2023

SYRIA: Gestures to the Arab League face limits

Article
Publication date: 25 October 2022

Jeya Amantha Kumar, Paula Alexandra Silva, Sharifah Osman and Brandford Bervell

Abstract

Purpose

The selfie is a popular self-expression platform for visually communicating and representing individual thoughts, beliefs and creativity. However, not much has been investigated about the pedagogical impact of selfies when used as an educational tool. Therefore, the authors seek to explore students' perceptions, emotions and behaviour when using selfies for a classroom activity.

Design/methodology/approach

A triangulated qualitative approach using thematic, sentiment and selfie visual analysis was used to investigate selfie perception, behaviour and creativity among 203 undergraduates. Sentiment analyses (SAs) were conducted using Azure Machine Learning and the International Business Machines (IBM) Tone Analyzer (TA) to validate the thematic analysis outcomes, whilst the visual analysis reflected the cues of behaviour and creativity portrayed.
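
As a rough illustration of the triangulation step (automated sentiment validating manual thematic codes), the sketch below swaps in the Hugging Face transformers sentiment pipeline in place of the Azure Machine Learning and IBM Tone Analyzer services named in the abstract; the sample responses and thematic codes are invented.

```python
# Sketch: check agreement between automated sentiment labels and manual
# thematic codes. Uses the transformers sentiment pipeline as a stand-in
# for the cloud services named in the abstract; data below are invented.
from transformers import pipeline

responses = [
    "Taking selfies made the activity fun and easy to join.",
    "I felt awkward posing in front of my classmates.",
]
manual_codes = ["positive experience", "negative experience"]  # assumed codes

sentiment = pipeline("sentiment-analysis")
auto_labels = [r["label"] for r in sentiment(responses)]

# Agreement between automated sentiment and manual coding supports the themes.
agree = sum(
    (auto == "POSITIVE") == code.startswith("positive")
    for auto, code in zip(auto_labels, manual_codes)
)
print(f"agreement: {agree}/{len(responses)}")
```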

Findings

Respondents indicated positive experiences and described selfies as an engaging, effortless and practical activity that improves classroom dynamics. Emotions such as joy, with analytical and confident tones, were observed in their responses, further validating these outcomes. Subsequently, the visual cue analysis indicated overall positive emotions reflecting openness towards the experience, yet it also revealed a gender-based clique tendency, with modest use of popular selfie gestures such as the “peace sign” and “chin shelf”. Furthermore, respondents preferred to mainly manipulate text colours, frames and colour blocks as a form of creative output.

Originality/value

The study's findings contribute to the limited research on using selfies for teaching and learning by offering insights from thematic analysis, SA and visual cue analysis to reflect perception, emotions and behaviour.

Peer review

The peer review history for this article is available at: https://publons.com/publon/10.1108/OIR-11-2021-0608/

Details

Online Information Review, vol. 47 no. 5
Type: Research Article
ISSN: 1468-4527

Keywords
