Search results
1 – 10 of 71
Abstract
Purpose
Increasing demand for rail transport is accelerating the introduction of new technical systems to optimize rail traffic and increase competitiveness. Remote control of trains is seen as a potential layer of resilience in railway operations: it allows operating and controlling automated trains and communicating and coordinating with other stakeholders of the railway system. This paper aims to present the first results of a multi-phase simulator study on the development and optimization of remote train driving concepts from the operators’ point of view.
Design/methodology/approach
The presented concept was developed by benchmarking good practices. Two phases of iterative user tests were conducted to evaluate the user experience and preferences for the developed human-machine interface (HMI) concept. Basic training requirements were identified and evaluated.
Findings
Results indicate positive feedback on the overall system as a fallback solution. The HMI elicited positive emotions in terms of pleasure and dominance, but low arousal levels. Train drivers held more conservative views of the system than signalers and students. The training activities increased awareness and understanding of the system among future operators. Including potential users in the development of future systems can improve user acceptance. The iterative user experiments were useful in eliciting some of the needs and preferences of different user groups.
Originality/value
Multi-phase user tests were conducted to identify and evaluate the requirements and preferences of remote operators using a simplified HMI. The training analysis highlights important aspects to consider in the training of future users.
Abstract
Purpose
This paper presents a survey of research into interactive robotic systems for the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: multimodality combines multiple modes of expression, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.
Design/methodology/approach
This review begins by presenting key developments in the robotic interaction field, with the objective of identifying the essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying key aspects of human-robot interaction (HRI), unmanned autonomous systems (UAS), visualization, virtual environments (VE) and prediction, the paper describes the gaps in these application areas that will require extension and integration to enable prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.
Findings
Drawing on insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS (MIRS) for military communication is introduced. An MIRS for military communication has yet to be deployed.
Research limitations/implications
A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for a single researcher to command all the expert knowledge and skills needed to design and develop such an interface. In this brief preliminary survey, the author discusses extant AI, robotics, NLP, computer vision (CV), VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.
Practical implications
A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is well positioned to exploit the opportunities for human-machine teaming (HMT). Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. A flexible communications capability that readily adapts to virtual training will greatly enhance planning and mission rehearsals.
Social implications
A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple suggestions and recommendations, will enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.
Originality/value
The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and what can be extended to support the military in maintaining effective multimodal communication. Separate efforts are ongoing in machine-enabled speech, image recognition, tracking, visualization for situational awareness, and virtual environments. At this time, however, there is no integrated approach to multimodal human-robot interaction that provides flexible and agile communication. The paper concludes by briefly introducing the research proposal for a multimodal interactive robot in military communication.
Wenbin Xu, Xudong Li, Liang Gong, Yixiang Huang, Zeyuan Zheng, Zelin Zhao, Lujie Zhao, Binhao Chen, Haozhe Yang, Li Cao and Chengliang Liu
Abstract
Purpose
This paper aims to present a human-in-the-loop natural teaching paradigm based on scene-motion cross-modal perception, which facilitates manipulation intelligence and robot teleoperation.
Design/methodology/approach
The proposed natural teaching paradigm is used to telemanipulate a life-size humanoid robot in a complex working scenario. First, a vision sensor projects mission scenes onto virtual reality glasses for human-in-the-loop reactions. Second, a motion capture system retargets eye-body synergic movements onto a skeletal model. Third, real-time data transfer is realized through the publish-subscribe messaging mechanism of the Robot Operating System (ROS). Next, joint angles are computed with a fast mapping algorithm and sent to a slave controller over a serial port. Finally, visualization terminals make it convenient to compare the two motion systems.
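The data-transfer step of this pipeline can be sketched in miniature. The following Python is a simplified stand-in, not actual ROS code: a tiny in-process broker plays the role of ROS topics, and `map_to_joint_angles` is a hypothetical placeholder for the paper's fast mapping algorithm, here reduced to clamping retargeted skeletal angles to assumed joint limits.

```python
from collections import defaultdict

class Broker:
    """Minimal in-process stand-in for ROS's topic-based publish-subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        # Register a callback, analogous to creating a ROS subscriber.
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Deliver the message to every subscriber of the topic.
        for callback in self._subscribers[topic]:
            callback(message)

def map_to_joint_angles(skeleton_angles, limits):
    """Hypothetical 'fast mapping': clamp retargeted angles to joint limits."""
    return [max(lo, min(hi, a)) for a, (lo, hi) in zip(skeleton_angles, limits)]

# Motion capture side publishes mapped angles; the slave-controller side,
# which would forward them over a serial port, just records them here.
received = []
broker = Broker()
limits = [(-1.57, 1.57)] * 3  # assumed joint limits in radians
broker.subscribe("/joint_angles", received.append)
broker.publish("/joint_angles", map_to_joint_angles([0.4, 2.0, -3.0], limits))
print(received)  # [[0.4, 1.57, -1.57]]
```

In a real implementation the broker would be replaced by `rospy` publishers and subscribers, and the mapping would account for the robot's actual kinematics rather than simple limit clamping.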
Findings
Experimentation in various industrial mission scenes, such as approaching flanges, demonstrates the advantages of natural teaching, including real-time performance, high accuracy, repeatability and dexterity.
Originality/value
The proposed paradigm realizes the natural cross-modal combination of perception information and enhances the working capacity and flexibility of industrial robots, paving a new way for effective robot teaching and autonomous learning.