Katherine M. Tsui, Eric McCann, Amelia McHugh, Mikhail Medvedev, Holly A. Yanco, David Kontak and Jill L. Drury
Abstract
Purpose
The authors believe that people with cognitive and motor impairments may benefit from using telepresence robots to engage in social activities. To date, these systems have not been designed for use by people with disabilities as the robot operators. The paper aims to discuss these issues.
Design/methodology/approach
The authors conducted two formative evaluations using a participatory action design process. First, the authors conducted a focus group (n=5) to investigate how members of the target audience would want to direct a telepresence robot in a remote environment using speech. The authors then conducted a follow-on experiment in which participants (n=12) used a telepresence robot or directed a human in a scavenger hunt task.
Findings
The authors collected a corpus of 312 utterances (first-hand, as opposed to speculative) relating to spatial navigation. Overall, the analysis of the corpus supported several speculations put forth during the focus group. Further, it showed few statistically significant differences between the speech used in the human and robot agent conditions; thus, the authors believe that, for the task of directing a telepresence robot's movements in a remote environment, people will speak to the robot in a manner similar to speaking to another person.
Practical implications
Based upon the two formative evaluations, the authors present four guidelines for designing speech-based interfaces for telepresence robots.
Originality/value
Robot systems designed for general use do not typically consider people with disabilities. The work is a first step towards having our target population take the active role of the telepresence robot operator.
Abstract
The chapter describes the basics of the high-level spatial grasp technology (SGT) and its spatial grasp language (SGL), which allow the creation and management of very large distributed systems in physical, virtual and executive domains in a highly parallel manner, without any centralized resources. The main features of SGT, including its self-evolving and self-spreading spatial intelligence, the recursive nature of SGL and the organization of its networked interpreter, are briefly presented. Numerous interpreter copies can be installed worldwide and integrated with other systems, or operate autonomously and collectively in critical situations. The relation of SGT, with its capability for holistic solutions in distributed systems, to gestalt psychology and theory, which highlights the unique ability of the human mind and brain to directly grasp the whole of different phenomena, is also explained, with SGT serving as an attempt to implement the notion of gestalt for distributed applications.
Juan Chen, Nannan Xi, Vilma Pohjonen and Juho Hamari
Abstract
Purpose
The metaverse, that is, extended reality (XR)-based technologies such as augmented reality (AR) and virtual reality (VR), is increasingly believed to facilitate fundamental human practice in the future. One of the vanguards of this development has been the consumption domain, where multi-modal and multi-sensory technology-mediated immersion is expected to enrich consumers' experience. However, it remains unclear whether these expectations have been warranted in reality and whether metaverse technologies, rather than enhancing the experience, inhibit functioning, such as cognitive functioning.
Design/methodology/approach
This study utilizes a 2 (VR: yes vs no) × 2 (AR: yes vs no) between-subjects laboratory experiment. A total of 159 student participants are randomly assigned to one condition — a brick-and-mortar store, a VR store, an AR store and an augmented virtuality (AV) store — to complete a typical shopping task. Four spatial attention indicators — visit shift, duration shift, visit variation and duration variation — are compared based on attention allocation data converted from head movements extracted from recorded videos during the experiments.
Findings
This study identifies three essential effects of XR technologies on consumers' spatial attention allocation: the inattention effect, acceleration effect and imbalance effect. Specifically, the inattention effect (the attentional visit shift from showcased products to the environmental periphery) appears when VR or AR technology is applied to virtualize the store and disappears when AR and VR are used together. The acceleration effect (the attentional duration shift from showcased products to the environmental periphery) exists in the VR store. Additionally, AR causes an imbalance effect (the attentional duration variation increases horizontally among the showcased products).
Originality/value
This study provides valuable empirical evidence of how VR and AR influence consumers' spatial bias in attention allocation, filling the research gap on cognitive function in the metaverse. This study also provides practical guidelines for retailers and XR designers and developers.
Wendell H. Chun, Thomas Spura, Frank C. Alvidrez and Randy J. Stiles
Abstract
Lockheed Martin has been a premier builder and developer of manned aircraft and fighter jets since 1909. Since then, aircraft design has evolved drastically in many areas, including the evolution of manual linkages into fly-by-wire systems and of mechanical gauges into glass cockpits. Lockheed Martin's knowledge of manned aircraft has produced a variety of Unmanned Aerial Vehicles (UAVs) based on size/wingspan, ranging from a micro-UAV (MicroStar) to a hand-launched UAV (Desert Hawk) and up to larger platforms such as the DarkStar. Their control systems vary anywhere between remotely piloted and fully autonomous systems. Remotely piloted control is equivalent to full human involvement, with an operator controlling all the decisions of the aircraft. Conversely, fully autonomous operation describes a situation in which the human has minimal contact with the platform. Flight path control relies on a set of waypoints for the vehicle to fly through. This is the most common mode of UAV navigation, and GPS has made this form of navigation practical.
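The waypoint mode described above can be illustrated with a minimal sketch: the vehicle holds a list of GPS coordinates and advances to the next one once it comes within an acceptance radius of the current one. The haversine distance, the 50 m radius and the switching rule below are illustrative assumptions, not details of the Lockheed Martin systems.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres (haversine formula)."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def next_waypoint(position, waypoints, index, acceptance_radius=50.0):
    """Advance to the next waypoint once the vehicle is within the
    acceptance radius of the current one; return the active index.
    The 50 m default is an illustrative assumption."""
    lat, lon = position
    wlat, wlon = waypoints[index]
    if distance_m(lat, lon, wlat, wlon) < acceptance_radius and index < len(waypoints) - 1:
        return index + 1
    return index
```

In practice the acceptance radius is tuned to the platform's turn capability, so the vehicle can begin banking toward the next leg before overflying the waypoint exactly.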
Orly Lahav, David Schloerb, Siddarth Kumar and Mandyam Srinivasan
Abstract
Purpose
This research is based on the hypothesis that the supply of appropriate perceptual and conceptual information through compensatory sensorial channels may assist people who are blind with anticipatory exploration. The two main goals of the research are: evaluation of different modalities (haptic and audio) and navigation tools; and evaluation of spatial cognitive mapping employed by people who are blind.
Design/methodology/approach
In this research the BlindAid system, which allows the user to explore a virtual environment, was developed and tested. The research included four participants who are totally blind.
Findings
The preliminary findings confirm that the system enabled participants to develop comprehensive cognitive maps by exploring the virtual environment. The BlindAid system could be used as a training simulator for orientation and mobility (O&M) rehabilitation, as an O&M diagnostic tool, and to support people who are blind in exploring and collecting spatial information in advance.
Originality/value
This preliminary study aims to highlight which virtual environment (VE) properties could provide perceptual and conceptual spatial information and allow users who are blind to gather and expand their spatial information.
Abstract
Purpose
The purpose of this paper is to present an integrated guidance and control design for a formation flight of four unmanned aerial vehicles that follows a moving ground target.
Design/methodology/approach
The guidance law is based on line-of-sight. It is integrated with an optimal control law and applied to a linear dynamic model.
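The line-of-sight component of such a guidance law can be illustrated with a minimal sketch under simple assumptions: a planar kinematic UAV whose turn rate is commanded in proportion to the angle between its heading and the line of sight to the target. The gain value and the angle-wrapping step are illustrative choices, not the paper's integrated optimal control law.

```python
import math

def los_heading_command(uav_xy, target_xy, heading, gain=1.5):
    """Proportional turn-rate command (rad/s) steering a planar UAV
    onto the line of sight to the target. The gain is an illustrative
    assumption, not a value from the paper."""
    dx = target_xy[0] - uav_xy[0]
    dy = target_xy[1] - uav_xy[1]
    los_angle = math.atan2(dy, dx)
    # Wrap the heading error to (-pi, pi] so the UAV always takes the short turn.
    error = math.atan2(math.sin(los_angle - heading), math.cos(los_angle - heading))
    return gain * error
```

For the encirclement task, each vehicle would additionally hold a commanded angular offset around the target so that the four UAVs space themselves evenly on the circle.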
Findings
The theoretical results are supported by the numerical simulations that illustrate a coordinated encirclement of a ground maneuvering target.
Research limitations/implications
A linear dynamic UAV model and a linear engine model were employed.
Practical implications
The work is expected to provide the efficient coordination technique required by many civilian circular-formation UAV applications; the technique can also be used to provide the safe operating environment that such civil applications require.
Social implications
The research will facilitate the deployment of autonomous unmanned aircraft systems in various civilian applications such as border monitoring.
Originality/value
The research addresses the challenges of coordination of multiple unmanned aerial vehicles in a circular formation using an integrated optimal control technique with line‐of‐sight guidance.
John M. Carroll, Mary Beth Rosson, Philip L. Isenhour, Christina Van Metre, Wendy A. Schafer and Craig H. Ganoe
Abstract
MOOsburg is a community-oriented multi-user domain. It was created to enrich the Blacksburg Electronic Village by providing real-time, situated interaction and a place-based information model for community information. We are experimenting with an implementation fundamentally different from classic object-oriented multi-user domains (MOOs), supporting distributed system development and management and a direct-manipulation approach to navigation. To guide the development of MOOsburg, we are focusing on a set of community-oriented applications, including a virtual science fair.
Michael Göller, Florian Steinhardt, Thilo Kerscher, J. Marius Zöllner and Rüdiger Dillmann
Abstract
Purpose
The purpose of this paper is to present a navigation system designed for highly dynamic environments which is independent from a metrically exact global map.
Design/methodology/approach
A navigation system is developed to cope with highly dynamic environments. Here, this refers especially to changes in the environment itself, such as the daily deployment or removal of advertisements or special offers in a supermarket. The navigation system is split into a global part, relying on non-concealable artificial landmarks, and a local part containing a behavior-based control using a dynamic potential field approach. The only information required is the definitively static structure of the environment and the current sensor data.
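A potential field of the kind named above can be sketched as the sum of an attractive force toward the goal and repulsive forces from obstacles within an influence radius. The gains, influence radius and repulsion profile below are conventional textbook choices, not the authors' implementation.

```python
import math

def potential_field_step(pos, goal, obstacles, k_att=1.0, k_rep=100.0, influence=2.0):
    """One steering step of an artificial potential field: attractive
    pull toward the goal plus a repulsive push from each obstacle
    inside its influence radius. Returns a unit direction vector.
    All gains are illustrative assumptions."""
    fx = k_att * (goal[0] - pos[0])
    fy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d = math.hypot(dx, dy)
        if 0.0 < d < influence:
            # Repulsion grows sharply as the robot approaches the obstacle.
            mag = k_rep * (1.0 / d - 1.0 / influence) / d ** 2
            fx += mag * dx / d
            fy += mag * dy / d
    norm = math.hypot(fx, fy)
    return (fx / norm, fy / norm) if norm > 0 else (0.0, 0.0)
```

Following this direction gives purely local, reactive collision avoidance from current sensor data, which is why it pairs well with behavior-based control; its known weakness, local minima, is what a global layer such as the landmark-based part would have to resolve.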
Findings
The system proved to be useful in environments that change frequently and where the presence of many people complicates the perception of landmarks.
Practical implications
The presented navigation system is robust against changes in the environment and provides reliable collision avoidance capabilities.
Originality/value
It is a useful navigation system for autonomous robots dedicated to frequently changing and populated environments.
Catherine Todd, Swati Mallya, Sara Majeed, Jude Rojas and Katy Naylor
Abstract
Purpose
VirtuNav is a haptic- and audio-enabled virtual reality simulator that enables persons with visual impairment to explore a 3D computer model of a real-life indoor location, such as a room or building. The purpose of this paper is to aid pre-planning and spatial awareness, allowing a user to become more familiar with the environment prior to experiencing it in reality.
Design/methodology/approach
The system offers two unique interfaces: a free-roam interface where the user can navigate, and an edit mode where the administrator can manage test users and maps and retrieve test data.
Findings
System testing reveals that spatial awareness and memory mapping improve with user iterations within VirtuNav.
Research limitations/implications
VirtuNav is a research tool for investigation of user familiarity developed after repeated exposure to the simulator, to determine the extent to which haptic and/or sound cues improve a visually impaired user’s ability to navigate a room or building with or without occlusion.
Social implications
The application may prove useful for greater real world engagement: to build confidence in real world experiences, enabling persons with sight impairment to more comfortably and readily explore and interact with environments formerly unfamiliar or unattainable to them.
Originality/value
VirtuNav is developed as a practical application offering several unique features, including map design, semi-automatic 3D map reconstruction and object classification from 2D map data. Visual and haptic rendering of real-time 3D map navigation is provided, as well as automated administrative functions for shortest-path determination, actual-path comparison, and performance-indicator assessment: exploration time taken and collision data.
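The abstract does not specify which algorithm performs the shortest-path determination; on a 2D occupancy grid derived from the map data, a breadth-first search is one standard way to compute it, sketched here under that assumption.

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid (0 = free,
    1 = wall); returns the list of cells from start to goal, or
    None if the goal is unreachable. The grid encoding is an
    illustrative assumption, not VirtuNav's actual map format."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}  # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:  # walk back to the start
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None
```

Comparing the user's actual path against this optimum gives exactly the kind of performance indicator the abstract mentions, for example the ratio of travelled distance to shortest-path length.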