Search results

1 – 10 of over 16,000
Article
Publication date: 1 May 1994

Norman M Fraser

Abstract

The ESPRIT SUNDIAL project ran for five years, concluding in August 1993. The objective of the project was to design and build telephone‐access spoken language interfaces to computer databases. After introducing the aims and objectives of the project, the problems of specifying an interactive system are outlined and the Wizard‐of‐Oz simulation method described. The architecture of the resulting system is introduced, and system transaction success results of up to 96.6% are reported. In the final section, some implications for machine translation — particularly interpretive telephony — are identified.

Details

Aslib Proceedings, vol. 46 no. 5
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 3 October 2017

Ibrahim Motawa

Abstract

Purpose

With the rapid development of internet technologies, applications of big data in construction have received considerable attention. Currently, there are many input/output modes for capturing construction knowledge across all construction stages. On the other hand, building information modelling (BIM) systems have been developed to store various structured data about buildings. However, these systems cannot fully capture the knowledge and unstructured data used in the operation of building systems in a usable format that exploits the intelligent capabilities of BIM systems. Therefore, this research aims to adopt the concept of big data and develop a spoken dialogue BIM system to capture building operation knowledge, particularly for building maintenance and refurbishment.

Design/methodology/approach

The proposed system integrates a cloud-based spoken dialogue system with a case-based reasoning BIM system.

Findings

The system acts as an interactive expert agent that asks the user questions specific to building maintenance problems and helps search for solutions among previously stored knowledge cases. Monitoring and maintaining building performance becomes more efficient when relevant solutions to new problems are retrieved from the captured knowledge during the maintenance of building components. The developed system makes it easier to capture knowledge and to search for solutions to new problems, with more comprehensive retrieval of information.
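The retrieval behaviour described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration of case-based retrieval behind a spoken-dialogue front end; all names (`CaseBase`, `retrieve`) are illustrative and not taken from the paper, whose actual system is cloud-based and integrated with BIM:

```python
# Minimal sketch of case-based retrieval behind a spoken-dialogue front end.
# All names are illustrative and not taken from the paper; the cloud and BIM
# integration described in the abstract is not reproduced here.

def tokenize(text):
    """Lowercase bag-of-words tokenizer."""
    return set(text.lower().split())

class CaseBase:
    """Stores past maintenance problems with their recorded solutions."""
    def __init__(self):
        self.cases = []  # list of (problem_description, solution) pairs

    def add_case(self, problem, solution):
        self.cases.append((problem, solution))

    def retrieve(self, query):
        """Return the solution of the most similar stored case,
        using Jaccard similarity over word sets."""
        q = tokenize(query)
        def score(case):
            p = tokenize(case[0])
            return len(q & p) / len(q | p) if q | p else 0.0
        best = max(self.cases, key=score, default=None)
        return best[1] if best else None

cb = CaseBase()
cb.add_case("radiator leaking in plant room", "replace radiator valve seal")
cb.add_case("air handling unit fan vibration", "rebalance fan and check bearings")

# A transcribed spoken query is matched against the stored cases:
answer = cb.retrieve("leaking radiator valve")  # -> "replace radiator valve seal"
```

A production system would replace the word-overlap score with semantic matching, but the retrieve-by-similarity structure is the core of case-based reasoning.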

Originality/value

Capturing multi-mode data into BIM systems using cloud-based spoken dialogue systems will help construction teams use the high volume of data generated over the building lifecycle and search for the most suitable solutions to maintenance problems. This new area of research also contributes to current BIM systems by advancing their capabilities to instantly capture and retrieve knowledge of operations rather than information alone.

Details

Facilities, vol. 35 no. 13/14
Type: Research Article
ISSN: 0263-2772

Article
Publication date: 4 September 2009

Michael Schuricht, Zachary Davis, Michael Hu, Shreyas Prasad, Peter M. Melliar‐Smith and Louise E. Moser

Abstract

Purpose

Mobile handheld devices, such as cellular phones and personal digital assistants, are inherently small and lack an intuitive and natural user interface. Speech recognition and synthesis technology can be used in mobile handheld devices to improve the user experience. The purpose of this paper is to describe a prototype system that supports multiple speech‐enabled applications in a mobile handheld device.

Design/methodology/approach

The main component of the system, the Program Manager, coordinates and controls the speech‐enabled applications. Human speech requests to, and responses from, these applications are processed in the mobile handheld device, to achieve the goal of human‐like interactions between the human and the device. In addition to speech, the system also supports graphics and text, i.e., multimodal input and output, for greater usability, flexibility, adaptivity, accuracy, and robustness. The paper presents a qualitative and quantitative evaluation of the prototype system. The Program Manager is currently designed to handle the specific speech‐enabled applications that we developed.
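The coordinating role described above can be illustrated with a hypothetical sketch of a Program Manager that routes spoken requests to registered speech-enabled applications by keyword overlap. All names are assumptions for illustration; the paper's actual component also coordinates graphics and text I/O, which is omitted here:

```python
# Hypothetical sketch of a Program Manager routing spoken requests to
# registered speech-enabled applications by keyword overlap. Names are
# illustrative, not taken from the paper.

class ProgramManager:
    def __init__(self):
        self.apps = []  # list of (keyword_set, handler) pairs

    def register(self, keywords, handler):
        self.apps.append((set(keywords), handler))

    def dispatch(self, utterance):
        """Route an utterance to the application whose keywords best match."""
        words = set(utterance.lower().split())
        best, hits = None, 0
        for keywords, handler in self.apps:
            n = len(keywords & words)
            if n > hits:
                best, hits = handler, n
        if best is None:
            return "no application can handle that"
        return best(utterance)

pm = ProgramManager()
pm.register({"call", "dial", "phone"}, lambda u: "dialer: placing call")
pm.register({"play", "music", "song"}, lambda u: "player: starting music")

reply = pm.dispatch("please play my favourite song")  # -> "player: starting music"
```

The interesting design question the paper raises is what happens when a request spans several applications at once, which a single-winner dispatcher like this one cannot express.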

Findings

The paper determines that many human interactions involve not single applications but multiple applications working together in possibly unanticipated ways.

Research limitations/implications

Future work includes generalization of the Program Manager so that it supports arbitrary applications and the addition of new applications dynamically. Future work also includes deployment of the Program Manager and the applications on cellular phones running the Android Platform or the Openmoko Framework.

Originality/value

This paper presents a first step towards a future human interface for mobile handheld devices and for speech‐enabled applications operating on those devices.

Details

International Journal of Pervasive Computing and Communications, vol. 5 no. 3
Type: Research Article
ISSN: 1742-7371

Book part
Publication date: 2 May 2006

Wendell H. Chun, Thomas Spura, Frank C. Alvidrez and Randy J. Stiles

Abstract

Lockheed Martin has been a premier builder and developer of manned aircraft and fighter jets since 1909. Since then, aircraft design has evolved drastically in many areas, including the move from manual linkages to fly-by-wire systems and from mechanical gauges to glass cockpits. Lockheed Martin's knowledge of manned aircraft has produced a variety of Unmanned Aerial Vehicles (UAVs) of varying size/wingspan, ranging from a micro-UAV (MicroStar) to a hand-launched UAV (Desert Hawk) and up to larger platforms such as the DarkStar. Their control systems vary anywhere between remotely piloted and fully autonomous. Remotely piloted control entails full human involvement, with an operator making all the decisions for the aircraft. Conversely, fully autonomous operation describes a situation in which the human has minimal contact with the platform. Flight path control relies on a set of waypoints for the vehicle to fly through; this is the most common mode of UAV navigation, and GPS has made it practical.

Details

Human Factors of Remotely Operated Vehicles
Type: Book
ISBN: 978-0-76231-247-4

Article
Publication date: 17 October 2008

Hartwig Holzapfel

Abstract

Purpose

This paper aims to give an overview of a dialogue manager and recent experiments with multimodal human‐robot dialogues.

Design/methodology/approach

The paper identifies requirements and solutions in the design of a human‐robot interface. It presents essential techniques for a humanoid robot in a household environment and describes their application to representative interaction scenarios based on standard household situations. The presented dialogue manager has been developed within the German collaborative research center SFB‐588 on "Humanoid Robots – Learning and Cooperating Multimodal Robots". The dialogue system is embedded in the multimodal perceptual system of the humanoid robot developed within this project. The implementation of the dialogue manager is geared to requirements found in the explored scenarios. The algorithms include multimodal fusion, reinforcement learning, knowledge acquisition and tight coupling of the dialogue manager and speech recognition.
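Of the algorithms listed, multimodal fusion lends itself to a compact illustration: a deictic word in the speech hypothesis can be resolved against a concurrent pointing gesture. The sketch below is hypothetical; its data structures are assumptions for illustration and are not taken from the SFB‐588 system:

```python
# Hypothetical sketch of late multimodal fusion: a deictic word in the
# speech hypothesis is resolved against a concurrent pointing gesture.
# Structures are illustrative, not taken from the SFB-588 system.

def fuse(speech, gesture, objects):
    """Replace deictic references ('that', 'this') with the name of the
    object closest to the gesture's pointing coordinate."""
    if gesture is None:
        return speech
    target = min(objects, key=lambda o: abs(o["x"] - gesture["x"]))
    resolved = speech.replace("that", "the " + target["name"])
    return resolved.replace("this", "the " + target["name"])

# Two objects the robot's perception has localized along one axis:
scene = [{"name": "cup", "x": 0.8}, {"name": "plate", "x": 0.2}]
command = fuse("bring me that", {"x": 0.75}, scene)  # -> "bring me the cup"
```

Real fusion engines weigh recognizer confidences and time-align the modalities, but this shows the essential step: grounding an ambiguous utterance in perceptual context.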

Findings

Within the presented scenarios, several algorithms have been implemented and show improvements in the interactions. Results are reported for scenarios that model typical household situations.

Research limitations/implications

Additional scenarios need to be explored, especially in real‐world (out-of-the-lab) experiments.

Practical implications

The paper includes implications for the development of humanoid robots and human‐robot interaction.

Originality/value

This paper explores human‐robot interaction scenarios and describes solutions for dialogue systems.

Details

Industrial Robot: An International Journal, vol. 35 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems, with the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal; multimodality draws on many modes, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities for multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set conditions for robotic platforms to function autonomously. After surveying key aspects of Human Robot Interaction (HRI), Unmanned Autonomous Systems (UAS), visualization, Virtual Environments (VE) and prediction, the paper describes the gaps in these application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of sources from government, academic and commercial entities that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for any one person to command all the expert knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is well placed to exploit the opportunities for human–machine teaming (HMT). Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsal.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. The ability to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done, and the possibilities for extension that would support the military in maintaining effective communication using multiple modalities. There are several separate strands of ongoing progress, such as machine-enabled speech, image recognition, tracking, visualizations for situational awareness, and virtual environments. At this time, there is no integrated approach to multimodal human robot interaction that provides flexible and agile communication. The report briefly introduces a research proposal for a multimodal interactive robot in military communication.

Article
Publication date: 5 August 2014

Katherine M. Tsui, Eric McCann, Amelia McHugh, Mikhail Medvedev, Holly A. Yanco, David Kontak and Jill L. Drury

Abstract

Purpose

The authors believe that people with cognitive and motor impairments may benefit from using telepresence robots to engage in social activities. To date, these systems have not been designed for use by people with disabilities as the robot operators. The paper aims to discuss these issues.

Design/methodology/approach

The authors conducted two formative evaluations using a participatory action design process. First, the authors conducted a focus group (n=5) to investigate how members of the target audience would want to direct a telepresence robot in a remote environment using speech. The authors then conducted a follow-on experiment in which participants (n=12) used a telepresence robot or directed a human in a scavenger hunt task.

Findings

The authors collected a corpus of 312 utterances (first-hand, as opposed to speculative) relating to spatial navigation. Overall, the analysis of the corpus supported several speculations put forth during the focus group. Further, it showed few statistically significant differences between the speech used in the human and robot agent conditions; thus, the authors believe that, for the task of directing a telepresence robot's movements in a remote environment, people will speak to the robot in a manner similar to speaking to another person.
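The kind of condition comparison reported here can be illustrated with a 2x2 chi-square test of independence. The counts below are hypothetical (they merely sum to the corpus size of 312) and are not taken from the paper:

```python
# Illustrative only: the counts below are hypothetical and not taken from
# the paper. A 2x2 chi-square test of independence, as one might use to
# check whether an utterance type occurs at different rates in the human
# and robot agent conditions.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Hypothetical counts of directional vs other utterances, by condition,
# summing to the corpus size of 312:
stat = chi_square_2x2(40, 110, 45, 117)
significant = stat > 3.841  # critical value at alpha = 0.05, df = 1
```

With near-proportional counts like these, the statistic falls well below the critical value, matching the paper's finding of few significant differences between conditions.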

Practical implications

Based upon the two formative evaluations, the authors present four guidelines for designing speech-based interfaces for telepresence robots.

Originality/value

Robot systems designed for general use do not typically consider people with disabilities. The work is a first step towards having the target population take the active role of telepresence robot operator.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 7 no. 3
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 1 December 2005

G. Bugmann, J.C. Wolf and P. Robinson

Abstract

Purpose

Service robots need to be programmable by their users, who are in general unskilled in the art of robot programming. We have explored the use of spoken language for programming robots.

Design/methodology/approach

Two application domains were studied: route instructions and game instructions. The latter is work in progress. In both cases, work started by recording verbal instructions representative of how human users would naturally address their robot.

Findings

The analysis of these instructions reveals references to high‐level functions that are natural to humans but challenging for designers of robots. The instruction structure reflects assumptions about the cognitive abilities of the listener, and it is likely that some human capabilities for rational thinking will be required in service robots.

Research limitations/implications

Some of the high‐level functions called for by natural communication stretch current capabilities, and there is a clear case for more effort being devoted to certain areas. Instruction analysis provides pointers to such research topics.

Practical implications

It is proposed that service robot design should start by investigating the way end‐users will communicate with the robot. This is encapsulated in the "corpus‐based" approach to robot design illustrated in this paper, which results in more functional service robots.

Originality/value

The paper stresses the importance of considering human‐robot communication early in the robot design process.

Details

Industrial Robot: An International Journal, vol. 32 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 23 November 2010

Victoria L. Rubin, Yimin Chen and Lynne Marie Thorimbert

Abstract

Purpose

Conversational agents are natural language interaction interfaces designed to simulate conversation with a real person. This paper investigates current development and applications of these systems worldwide, focusing on their availability in Canadian libraries. It argues that it is both timely and conceivable for Canadian libraries to consider adopting conversational agents to enhance – not replace – face‐to‐face human interaction. Potential applications include library web site tour guides, automated virtual reference and readers' advisory librarians, and virtual story‐tellers. To provide background and justification for this argument, the paper reviews agents from classic implementations to state‐of‐the‐art prototypes: how they interact with users, produce language, and control conversational behaviors.

Design/methodology/approach

The web sites of the 20 largest Canadian libraries were surveyed to assess the extent to which specific language‐related technologies, including conversational agents, are offered in Canada. An exemplified taxonomy of four pragmatic purposes that conversational agents currently serve outside libraries – educational, informational, assistive, and socially interactive – is proposed and translated into library settings.

Findings

As of early 2010, artificially intelligent conversational systems have been found to be virtually non‐existent in Canadian libraries, while other innovative technologies proliferate (e.g. social media tools). These findings motivate the need for a broader awareness and discussion within the LIS community of these systems' applicability and potential for library purposes.

Originality/value

This paper is intended for reflective information professionals who seek a greater understanding of the issues related to adopting conversational agents in libraries, as this topic is scarcely covered in the LIS literature. The pros and cons are discussed, and insights offered into perceptions of intelligence (artificial or not) as well as the fundamentally social nature of human‐computer interaction.

Details

Library Hi Tech, vol. 28 no. 4
Type: Research Article
ISSN: 0737-8831
