Search results

Article
Publication date: 9 January 2024

Zhuoyu Zhang, Lijia Zhong, Mingwei Lin, Ri Lin and Dejun Li

Abstract

Purpose

Docking technology plays a crucial role in enabling long-duration operations of autonomous underwater vehicles (AUVs). Visual positioning solutions alone are susceptible to abnormal drift values due to the challenging underwater optical imaging environment. When an AUV approaches the docking station, the absolute positioning method fails if the AUV captures an insufficient number of tracers. This study aims to provide a more stable absolute visual positioning method for underwater terminal visual docking.

Design/methodology/approach

This paper presents a six-degree-of-freedom positioning method for AUV terminal visual docking, which uses lights and triangle codes. The authors use an extended Kalman filter to fuse the visual calculation results with inertial measurement unit data. Moreover, this paper proposes a triangle code recognition and positioning algorithm.
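
The abstract gives no implementation details for this fusion step. As a rough, minimal sketch (not the authors' code), the following shows how an extended Kalman filter can fuse an IMU-propagated position/velocity state with an absolute visual fix under an assumed constant-velocity model; the state layout, noise values, outlier gate and the class name PoseEKF are illustrative assumptions.

```python
import numpy as np

# Minimal EKF sketch: fuse an IMU-propagated position/velocity state with an
# absolute visual position fix. Illustrative assumptions throughout; this is
# not the paper's algorithm.
class PoseEKF:
    def __init__(self, dt=0.05):
        self.dt = dt
        self.x = np.zeros(6)                       # state: [px, py, pz, vx, vy, vz]
        self.P = np.eye(6)                         # state covariance
        self.Q = np.eye(6) * 1e-3                  # process noise (assumed)
        self.R = np.eye(3) * 5e-2                  # visual-fix noise (assumed)
        self.F = np.eye(6)                         # constant-velocity transition
        self.F[:3, 3:] = np.eye(3) * dt
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only

    def predict(self, accel_world):
        """Propagate the state with IMU acceleration already rotated to the world frame."""
        B = np.vstack([0.5 * self.dt ** 2 * np.eye(3), self.dt * np.eye(3)])
        self.x = self.F @ self.x + B @ accel_world
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, visual_pos, max_innovation=1.0):
        """Fuse an absolute visual fix; a simple gate rejects abnormal drift values."""
        y = visual_pos - self.H @ self.x           # innovation
        if np.linalg.norm(y) > max_innovation:     # crude outlier rejection (assumption)
            return
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

A full implementation would fuse the complete six-degree-of-freedom pose, including the orientation recovered from the lights and triangle code, and would also use the IMU gyroscope; the sketch above covers only the position channel.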

Findings

The authors conducted a simulation experiment to compare the underwater positioning performance of the triangle code, AprilTag and ArUco. The results demonstrate that the implemented triangle code reduces running time by over 70% compared with the other two codes and exhibits a longer recognition distance in turbid environments. Subsequent experiments in Qingjiang Lake, Hubei Province, China, further confirmed the effectiveness of the proposed positioning algorithm.
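
The abstract does not describe the simulation setup. As a rough illustration of how such a running-time comparison can be arranged, the sketch below times OpenCV's built-in ArUco and AprilTag detectors on a single frame (assuming OpenCV 4.7+ and its cv2.aruco module); the authors' triangle code detector is not publicly available, so only the two reference marker families are shown, and the image file name is a placeholder.

```python
import time
import cv2

# Rough running-time comparison of ArUco and AprilTag detection with OpenCV's
# cv2.aruco module (OpenCV >= 4.7). The authors' triangle code detector and
# simulation environment are not public, so they are omitted here.
def time_detector(dictionary_id, image, repeats=100):
    dictionary = cv2.aruco.getPredefinedDictionary(dictionary_id)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    start = time.perf_counter()
    for _ in range(repeats):
        corners, ids, _rejected = detector.detectMarkers(image)
    return (time.perf_counter() - start) / repeats, ids

if __name__ == "__main__":
    frame = cv2.imread("underwater_frame.png", cv2.IMREAD_GRAYSCALE)  # placeholder frame
    for name, dict_id in [("ArUco 4x4", cv2.aruco.DICT_4X4_50),
                          ("AprilTag 36h11", cv2.aruco.DICT_APRILTAG_36h11)]:
        mean_dt, ids = time_detector(dict_id, frame)
        print(f"{name}: {mean_dt * 1e3:.2f} ms per frame, detected ids: {ids}")
```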

Originality/value

This fusion approach effectively mitigates abnormal drift errors stemming from visual positioning and cumulative errors resulting from inertial navigation. The authors also propose a triangle code recognition and positioning algorithm as a supplementary approach to overcome the limitations of tracer light positioning beacons.
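
The triangle code recognition and positioning algorithm itself is not given in the abstract. As a generic illustration of how a planar marker with known geometry yields an absolute camera pose, the sketch below solves a perspective-n-point problem with OpenCV; the marker side length, corner ordering, camera intrinsics and pixel coordinates are all placeholder assumptions, not the paper's method.

```python
import numpy as np
import cv2

# Generic marker-based pose sketch: recover the camera pose from the known 3D
# geometry of a triangular marker and its detected image corners via PnP.
# All numbers below are placeholder assumptions, not the paper's triangle code.
SIDE = 0.30  # assumed marker side length in metres
object_points = np.array([
    [0.00, 0.0000, 0.0],                          # vertex A (marker frame)
    [SIDE, 0.0000, 0.0],                          # vertex B
    [SIDE / 2.0, SIDE * np.sqrt(3) / 2.0, 0.0],   # vertex C
    [SIDE / 2.0, SIDE * np.sqrt(3) / 6.0, 0.0],   # centroid as a fourth reference point
], dtype=np.float64)

image_points = np.array([                         # detected corners (placeholder pixels)
    [412.0, 300.0],
    [520.0, 305.0],
    [468.0, 212.0],
    [466.0, 272.0],
], dtype=np.float64)

K = np.array([[800.0, 0.0, 640.0],                # assumed pinhole intrinsics
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                                # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
if ok:
    R, _ = cv2.Rodrigues(rvec)                    # rotation: marker frame -> camera frame
    camera_pos_in_marker = (-R.T @ tvec).ravel()
    print("camera position in marker frame (m):", camera_pos_in_marker)
```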

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 13 November 2023

Sheuli Paul

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems, with the purpose of identifying state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal, and multimodality is a representation of many modes, chosen from rhetorical aspects for their communication potential. The author seeks to define the available automation capabilities for communication using multimodalities that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision-making.

Design/methodology/approach

This review begins by presenting key developments in the robotic interaction field, with the objective of identifying essential technological developments that set the conditions for robotic platforms to function autonomously. After surveying key aspects of Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper describes the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set the conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) for military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour; it is not realistic for one person to command all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot for military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. Possession of a flexible means of communication that readily adapts to virtual training will greatly enhance planning and mission rehearsal.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple choices, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and what possibilities for extension would support the military in maintaining effective communication using multimodalities. There is separate ongoing progress in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments. At this time, there is no integrated approach to multimodal human–robot interaction that offers flexible and agile communication. The report briefly introduces the research proposal for a multimodal interactive robot in military communication.
