Search results

1 – 10 of 191
Article
Publication date: 1 April 2014

Charlotte Travis and Pietro Murano


Abstract

Purpose

This paper is about an investigation into the usability of touch-based user interfaces. Currently, not enough knowledge is available to guide user interface designers and developers concerning the appropriate use of touch-based technology. The paper aims to discuss these issues.

Design/methodology/approach

The authors adopt an empirical approach, using an experiment to test the effectiveness and user satisfaction of touch-based interaction compared with equivalent mouse-based interaction. The authors had two abstract-type tasks and one contextualised task for the two methods of interaction, and measured errors, task time and user satisfaction.

Findings

The data were statistically analysed, and the statistically significant results show that, overall, the mouse-based interaction was faster, caused fewer errors and was preferred by the participants.

Originality/value

These results are of interest to all user interface designers and developers, and the authors make some design suggestions based on the empirical results. The results also add to current knowledge regarding interaction with touch interfaces. Further, the authors propose ways forward to enrich this research area with further knowledge.

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 1
Type: Research Article
ISSN: 1742-7371


Article
Publication date: 26 August 2014

Werner Kurschl, Mirjam Augstein, Thomas Burger and Claudia Pointner


Abstract

Purpose

The purpose of this paper is to present an approach in which a novel user modeling wizard for people with motor impairments is used to gain a deeper understanding of very specific (touch-based and touchless) interaction patterns. The findings are used to set up and populate a user model from which an application- and user-specific configuration for natural user interfaces can be derived automatically.

Design/methodology/approach

Based on expert knowledge in the domain of software/user interfaces for people with special needs, a test-case-based user modeling tool was developed. Task-based user tests were conducted with seven users for the touch-based interaction scenario and with five users for the touchless interaction scenario. The participants were all people with different motor and/or cognitive impairments.

Findings

The paper describes the results of different test cases that were designed to model users’ touch-based and touchless interaction capabilities. To evaluate the tool’s findings, experts additionally judged the participants’ performance, and their opinions were compared with the tool’s findings. The results suggest that the user modeling tool captured users’ capabilities quite well.

Social implications

The paper presents a tool that can be used to model users’ interaction capabilities. The approach aims at taking over some of the (very time-consuming) configuration tasks that consultants must perform to configure software according to the needs of people with disabilities. This can lead to broader accessibility of software, especially in the area of gesture-based user interaction.

Originality/value

Part of the approach has been published in the proceedings of the International Conference on Advances in Mobile Computing and Multimedia 2014. Significant additions have been made since (e.g. all of the touchless interaction part of the approach and the related user study).

Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 3
Type: Research Article
ISSN: 1742-7371


Book part
Publication date: 17 September 2014

Naresh Kumar Agarwal


Abstract

Purpose

Through observing a child’s use of an iPhone and iPad between the ages of two and four-and-a-half years, this study presents accounts of the child’s use of and interaction with these devices, as well as her interaction with the physical environment.

Design/methodology/approach

Unstructured, naturalistic observation was employed in this study. The study is grounded in theories of user engagement with digital and physical objects.

Findings

A child’s interaction with touch-based devices does not deter the child from engaging effectively with the physical environment or from activities centered on creativity and interpersonal engagement. A child is able to move back and forth seamlessly between the physical and digital environments.

Practical implications

Findings from this study could help parents, educators, and system designers understand why and how toddlers and preschoolers use and engage with touch-based devices, as well as the kind of tasks they perform.

Originality/value

Studies of toddlers’ or preschoolers’ information behavior and interaction with touch-based devices are scarce. Children born toward the end of the first decade of the twenty-first century are growing up with a propensity to use touch-based devices. This study provides a framework for effective usage of such devices while ensuring the all-round cognitive and physical development of the child.

Details

New Directions in Children’s and Adolescents’ Information Behavior Research
Type: Book
ISBN: 978-1-78350-814-3


Article
Publication date: 20 November 2023

Kesha K. Coker and Ramendra Thakur


Abstract

Purpose

Powered by artificial intelligence, voice assistants (VAs), such as Alexa, Siri and Cortana, are at early-stage adoption rates in service contexts. Customers express hesitance in using the technology. Furthermore, the effect of a relevant variable (VA empathy) as a determinant of VA adoption has not been widely researched. This study aims to extend the unified theory of acceptance and use of technology (UTAUT) and social response theory (SRT) to propose and test a conceptual model of the role of customer perceptions of VA empathy and risk in VA adoption and usage intensity.

Design/methodology/approach

In this study, data were collected from 387 VA users in the USA using a survey administered through Amazon MTurk. Data cleaning retained a final n = 318 for structural equation modeling analysis.

Findings

Findings show that perceived VA empathy enhances customers’ attitude toward VA and drives adoption, thereby increasing VA usage intensity. Perceived risk is a moderator; users with high perceptions of VA empathy have greater VA adoption rates when they have high (vs low) risk perceptions of using VA.

Originality/value

This research is one of the first known studies to provide empirical evidence of the role of customer perceptions of VA empathy and risk on VA adoption in service delivery. It goes beyond VA adoption research to provide empirical evidence of the impact of VA adoption on actual usage intensity. By extending the UTAUT and SRT, this research adds to the theoretical foundation for research on VA adoption, offering practical insights for firms regarding empathetic VA design to enhance customer service delivery.

Details

Journal of Services Marketing, vol. 38 no. 3
Type: Research Article
ISSN: 0887-6045


Article
Publication date: 1 April 2014

Liming Luke Chen, Rene Mayrhofer and Matthias Steinbauer



Details

International Journal of Pervasive Computing and Communications, vol. 10 no. 1
Type: Research Article
ISSN: 1742-7371

Article
Publication date: 16 March 2012

Antti Konttila, Marja Harjumaa, Salla Muuraiskangas, Mikko Jokela and Minna Isomursu

This article aims to explore the possibilities and use of a mobile technology‐supported audio annotation system that can be used for attaching free‐formatted audio annotations to…

Abstract

Purpose

This article aims to explore the possibilities and use of a mobile technology‐supported audio annotation system that can be used for attaching free‐formatted audio annotations to physical objects. The solution can help visually impaired people to identify objects and associate additional information with these objects.

Design/methodology/approach

A human‐centred design approach was adopted in the system's development and potential end‐users were involved in the development process. In order to evaluate the emerging use cases, as well as the usefulness and usability of the application, a qualitative field trial was conducted with ten visually impaired or blind users.

Findings

The findings show that visually impaired users learned to use the application quickly and found it easy and robust to use. Most users responded positively to the idea of tagging items with their own voice messages. Some users found the technology very useful and saw many possibilities for using it in the future. The most common targets for tagging were food items; however, some users had difficulty integrating the solution with their everyday practices.

Originality/value

This paper presents an innovative mobile phone application with a touch and audio user interface. The actual use cases describe the everyday needs of visually impaired people and this information might be valuable to service providers and technology developers. Also, the experiences gained from these trials can be used when developing software for the visually impaired on other platforms.

Details

Journal of Assistive Technologies, vol. 6 no. 1
Type: Research Article
ISSN: 1754-9450


Article
Publication date: 3 October 2016

Donghee Shin, Myunggoon Choi, Jang Hyun Kim and Jae-gil Lee



Abstract

Purpose

The purpose of this paper is to examine the effects of interaction techniques (e.g. swiping and tapping) and the range of thumb movement on interactivity, engagement, attitude, and behavioral intention in single-handed interaction with smartphones.

Design/methodology/approach

A 2×2 between-participants experiment (interaction technique: swiping vs tapping × range of thumb movement: wide vs narrow) was conducted to study the effects of interaction techniques and thumb movement ranges.

Findings

The results showed that the range of thumb movement had significant effects on perceived interactivity, engagement, attitude, and behavioral intention, whereas no effects were observed for interaction techniques. A narrow range of thumb movement had more influence on the interactivity outcomes in comparison to a wide range of thumb movement.

Practical implications

While the subject of actual and perceived interactivity has been discussed, the issue has not been applied to smartphones. Based on the research results, the mobile industry may develop a design strategy that balances feature- and perception-based interactivity.

Originality/value

This study adopted the perspective of a hybrid definition of interactivity, which includes both actual and perceived interactivity. Interactivity effect outcomes were mediated by perceived interactivity.

Details

Internet Research, vol. 26 no. 5
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 19 June 2021

Naresh Kumar Agarwal and Wenqing Lu


Abstract

Purpose

The purpose of this paper is to study smartphone use and its positive and negative effects and to provide recommendations for balanced use.

Design/methodology/approach

To study phone use, this paper applies uses and gratifications theory, gathering interview data from 24 participants on their frequency of use, mode of communication, people contacted and reasons for using their phones. This paper analyzes the pros and cons of using smartphones through the Yin-Yang worldview.

Findings

This paper finds that people use their smartphones for communication, entertainment and other specific functions. Ease of communication and multitasking are the key benefits, and overuse and disconnect from the real world are the detriments in smartphone use.

Research limitations/implications

The findings can enable future researchers and practitioners to view smartphones and their effects more holistically, rather than only through a negative or a positive lens.

Practical implications

The proposed framework can help the reader to consider their daily use of smartphones and their ways of balancing their presence in the virtual and the real worlds.

Originality/value

This paper proposes the Yin-Yang framework of smartphone use and provides recommendations for effective usage.

Details

Global Knowledge, Memory and Communication, vol. 71 no. 6/7
Type: Research Article
ISSN: 2514-9342


Article
Publication date: 3 August 2015

Po-Yao Chao and Chia-Ching Lin


Abstract

Purpose

The purpose of this paper is to explore how young children interact with a visualized search interface to search for storybooks by assembling the provided visual search items, and to explore the differences in visual search behaviours and strategies exhibited by pre-schoolers and second-graders.

Design/methodology/approach

The visualized search interface helped young children search for storybooks by dragging and dropping story characters, scene objects and colour icons to form search queries. Twenty pre-schoolers and 20 second-graders were asked to complete a search task using the visualized search interface. Their activities and successes in performing visual searches were logged for later analysis. In-depth interviews were also conducted to examine the cognitive strategies they exhibited while formulating visual search queries.

Findings

Young children in different grades adopted different cognitive strategies for visual searching. In contrast to the pre-schoolers, who performed visual searching according to personal preference, the second-graders could exercise visual searching accompanied by relatively high-order thinking. Young children may also place different foci on storybook structure when dealing with conditional storybook queries. The pre-schoolers tended to address the characters in the story, whereas the second-graders paid more attention to scene and colour.

Originality/value

This paper describes a new visual search approach allowing young children to search for storybooks by describing an intended storybook in terms of its characters, scenes or the background colours, which provides valuable indicators to inform researchers of how pre-schoolers and second-graders formulate concepts to search for storybooks.

Details

The Electronic Library, vol. 33 no. 4
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 13 November 2023

Sheuli Paul



Abstract

Purpose

This paper presents a survey of research into interactive robotic systems, with the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal; multimodality combines many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

Designing a multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for any one person to command all of the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is well positioned to exploit opportunities for human–machine teaming (HMT). Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible means of communication that readily adapts to virtual training will greatly enhance planning and mission rehearsals.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art, what exists and what is missing, what can be done, and possibilities for extension that would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualizations for situational awareness, and virtual environments. At this time, there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot for military communication.
