Search results

1 – 10 of 64
Article
Publication date: 6 March 2024

Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of…

Abstract

Purpose

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.

Design/methodology/approach

The research initiates by providing context to the physical training environment in today’s technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion on the advancements in sensor fusion technology and its application in VR systems for physical training. A systematic review of literature then follows, examining VR’s application in various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.

Findings

Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.

Originality/value

The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 9 July 2024

Zengrui Zheng, Kainan Su, Shifeng Lin, Zhiquan Fu and Chenguang Yang

Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information…

Abstract

Purpose

Visual simultaneous localization and mapping (SLAM) has limitations such as sensitivity to lighting changes and lower measurement accuracy. The effective fusion of information from multiple modalities to address these limitations has emerged as a key research focus. This study aims to provide a comprehensive review of the development of vision-based SLAM (including visual SLAM) for navigation and pose estimation, with a specific focus on techniques for integrating multiple modalities.

Design/methodology/approach

This paper initially introduces the mathematical models and framework development of visual SLAM. Subsequently, this paper presents various methods for improving accuracy in visual SLAM by fusing different spatial and semantic features. This paper also examines the research advancements in vision-based SLAM with respect to multi-sensor fusion in both loosely coupled and tightly coupled approaches. Finally, this paper analyzes the limitations of current vision-based SLAM and provides predictions for future advancements.

Findings

The combination of vision-based SLAM and deep learning has significant potential for development. There are advantages and disadvantages to both loosely coupled and tightly coupled approaches in multi-sensor fusion, and the most suitable algorithm should be chosen based on the specific application scenario. In the future, vision-based SLAM is evolving toward better addressing challenges such as resource-limited platforms and long-term mapping.
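The distinction drawn here between loosely and tightly coupled fusion can be illustrated with a minimal loosely coupled sketch: each sensor pipeline (say, visual odometry and IMU dead reckoning) produces its own state estimate, and only those final estimates are combined, here by inverse-covariance weighting. The function and numbers below are illustrative assumptions, not taken from any surveyed system:

```python
import numpy as np

def loosely_coupled_fuse(pose_vo, cov_vo, pose_imu, cov_imu):
    # Loosely coupled fusion: combine two *independent* state estimates
    # by inverse-covariance (information) weighting. A tightly coupled
    # system would instead feed raw measurements into one joint estimator.
    info_vo = np.linalg.inv(cov_vo)
    info_imu = np.linalg.inv(cov_imu)
    cov = np.linalg.inv(info_vo + info_imu)                 # fused covariance
    pose = cov @ (info_vo @ pose_vo + info_imu @ pose_imu)  # fused state
    return pose, cov

# Hypothetical 2D position estimates with different confidences
vo = np.array([1.0, 2.0]);  cov_vo = np.diag([0.04, 0.04])    # camera: sharp
imu = np.array([1.2, 2.2]); cov_imu = np.diag([0.36, 0.36])   # IMU: drifted
fused, fused_cov = loosely_coupled_fuse(vo, cov_vo, imu, cov_imu)
# The fused estimate leans toward the lower-variance sensor, and the
# fused covariance is smaller than either input's.
```

The appeal of the loose scheme is exactly this modularity; a tightly coupled scheme trades that modularity for accuracy by optimizing jointly over the raw measurements, which is why the review recommends choosing per application scenario.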

Originality/value

This review introduces the development of vision-based SLAM and focuses on the advancements in multimodal fusion. It allows readers to quickly understand the progress and current status of research in this field.

Details

Robotic Intelligence and Automation, vol. 44 no. 4
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 4 June 2024

Dan Zhang, Junji Yuan, Haibin Meng, Wei Wang, Rui He and Sen Li

In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific…

Abstract

Purpose

In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific types of data, achieving deep data correlation among multiple sensors poses challenges. To address this issue, this study aims to explore a fusion approach integrating thermal imaging cameras and LiDAR sensors to enhance the perception capabilities of firefighting robots in fire environments.

Design/methodology/approach

Prior to sensor fusion, accurate calibration of the sensors is essential. This paper proposes an extrinsic calibration method based on rigid-body transformation. The collected data are optimized using the Ceres solver to obtain precise calibration parameters. Building upon this calibration, a sensor fusion method based on coordinate projection transformation is proposed, enabling real-time mapping between images and point clouds. In addition, the effectiveness of data collection with the proposed fusion device is validated in experimental smoke-filled fire environments.

Findings

The average reprojection error obtained by the extrinsic calibration method based on rigid body transformation is 1.02 pixels, indicating good accuracy. The fused data combines the advantages of thermal imaging cameras and LiDAR, overcoming the limitations of individual sensors.
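The reported reprojection error can be understood with a minimal sketch of the underlying projection: LiDAR points are mapped into the camera frame by the calibrated rigid-body extrinsics, projected through a pinhole intrinsic model, and the error is the mean pixel distance to the observed features. The intrinsic matrix and point values below are hypothetical, not the paper's calibration data:

```python
import numpy as np

def project_points(points_lidar, R, t, K):
    # Rigid-body extrinsics (R, t) map LiDAR points into the camera
    # frame; the pinhole intrinsic matrix K then maps them to pixels.
    pts_cam = R @ points_lidar.T + t.reshape(3, 1)
    uv = K @ pts_cam
    return (uv[:2] / uv[2]).T  # perspective divide: (N, 2) pixel coords

def mean_reprojection_error(points_lidar, pixels_observed, R, t, K):
    # Mean Euclidean distance between projected and observed pixels,
    # the quantity the calibration reports (1.02 px on average).
    pred = project_points(points_lidar, R, t, K)
    return np.linalg.norm(pred - pixels_observed, axis=1).mean()

# Illustrative values only: identity extrinsics and a made-up K
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [0.5, 0.2, 4.0]])
obs = project_points(pts, R, t, K)  # perfect observations give zero error
```

In the paper's setting, an optimizer (Ceres) would adjust R and t to minimize this error over many corresponding point/pixel pairs.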

Originality/value

This paper introduces an extrinsic calibration method based on rigid body transformation, along with a sensor fusion approach based on coordinate projection transformation. The effectiveness of this fusion strategy is validated in simulated fire environments.

Details

Sensor Review, vol. 44 no. 4
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 15 September 2023

Kaushal Jani

This article takes into account object identification, enhanced visual feature optimization, cost effectiveness and speed selection in response to terrain conditions. Neither…

Abstract

Purpose

This article takes into account object identification, enhanced visual feature optimization, cost effectiveness and speed selection in response to terrain conditions. Neither supervised machine learning nor manual engineering is used in this work. Instead, the OTV educates itself without human instruction or labeling. Beyond its link to stopping distance and lateral mobility, choosing the right speed is crucial. One of the biggest problems with autonomous operations is accurate perception. Perception technology typically focuses on obstacle avoidance. At high speeds, however, the shock experienced by the vehicle is governed by the roughness of the terrain. The precision needed to recognize difficult terrain is far higher than the accuracy needed to avoid obstacles.

Design/methodology/approach

The Orbital Transfer Vehicle (OTV) intended for the clearance of space debris should be built on robots that can drive unattended in an unfamiliar environment. In recent years, OTV research has attracted more attention and revealed several insights for robot systems in various applications. Improvements to advanced assistance systems such as lane departure warning and intelligent speed adaptation are eagerly sought by industry, particularly space enterprises. From a computer science perspective, the OTV serves as a research basis for advancements in machine learning, computer vision, sensor data fusion, path planning, decision making and intelligent autonomous behavior. In the framework of the autonomous OTV, this study offers a few perceptual technologies for autonomous driving.

Findings

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Originality/value

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 18 July 2024

Zhiyu Li, Hongguang Li, Yang Liu, Lingyun Jin and Congqing Wang

Autonomous flight of unmanned aerial vehicles (UAVs) in global positioning system (GPS)-denied environments has become an increasing research hotspot. This paper aims to realize the…

Abstract

Purpose

Autonomous flight of unmanned aerial vehicles (UAVs) in global positioning system (GPS)-denied environments has become an increasing research hotspot. This paper aims to realize indoor fixed-point hovering control and autonomous flight for UAVs based on visual-inertial simultaneous localization and mapping (SLAM) and a sensor fusion algorithm based on the extended Kalman filter.

Design/methodology/approach

The fundamental idea of the proposed method is to use visual-inertial SLAM to estimate the position of the UAV and a position-speed double-loop controller to control it. The motion and observation models of the UAV and the fusion algorithm are given. Finally, experiments are performed to test the proposed algorithms.

Findings

A position-speed double-loop controller is proposed, fusing the position information obtained by visual-inertial SLAM with the data of airborne sensors. The experimental results of indoor fixed-point hovering show that UAV flight control can be realized based on visual-inertial SLAM in the absence of GPS.
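The position-speed double-loop structure can be sketched as two nested proportional loops on a toy double-integrator model: the outer loop converts position error into a velocity setpoint (saturated to a speed limit), and the inner loop converts velocity error into an acceleration command. Gains, limits and the simulation are illustrative assumptions, not the authors' controller:

```python
def double_loop_step(pos, vel, pos_sp, kp_pos=1.0, kp_vel=2.0, v_max=1.0):
    # Outer (position) loop: position error -> saturated velocity setpoint
    vel_sp = max(-v_max, min(v_max, kp_pos * (pos_sp - pos)))
    # Inner (speed) loop: velocity error -> acceleration command
    return kp_vel * (vel_sp - vel)

# Toy hover simulation: drive a double integrator to a 1 m setpoint
pos, vel, dt = 0.0, 0.0, 0.02
for _ in range(1000):
    acc = double_loop_step(pos, vel, 1.0)
    vel += acc * dt
    pos += vel * dt
# After 20 simulated seconds the state has settled at the setpoint.
```

Saturating the outer loop is what makes the cascade practical for hovering: the vehicle approaches a distant setpoint at a bounded speed instead of commanding an arbitrarily large velocity.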

Originality/value

A position-speed double-loop controller for the UAV is designed and tested, which provides more stable position estimation and enables the UAV to fly autonomously and hover in GPS-denied environments.

Details

Robotic Intelligence and Automation, vol. 44 no. 5
Type: Research Article
ISSN: 2754-6969

Content available
Article
Publication date: 13 November 2023

Sheuli Paul

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state of the art capabilities as well as the extant gaps in this…

Abstract

Purpose

This paper presents a survey of research into interactive robotic systems for the purpose of identifying the state-of-the-art capabilities as well as the extant gaps in this emerging field. Communication is multimodal: it draws on many modes, chosen for their rhetorical and communicative potential. The author seeks to define the available automation capabilities in multimodal communication that will support a proposed Interactive Robot System (IRS), an AI-mounted robotic platform intended to advance the speed and quality of military operational and tactical decision making.

Design/methodology/approach

This review will begin by presenting key developments in the robotic interaction field with the objective of identifying essential technological developments that set conditions for robotic platforms to function autonomously. After surveying the key aspects in Human Robot Interaction (HRI), Unmanned Autonomous System (UAS), visualization, Virtual Environment (VE) and prediction, the paper then proceeds to describe the gaps in the application areas that will require extension and integration to enable the prototyping of the IRS. A brief examination of other work in HRI-related fields concludes with a recapitulation of the IRS challenge that will set conditions for future success.

Findings

Using insights from a balanced cross-section of government, academic and commercial sources that contribute to HRI, a multimodal IRS for military communication is introduced. A multimodal IRS (MIRS) in military communication has yet to be deployed.

Research limitations/implications

A multimodal robotic interface for the MIRS is an interdisciplinary endeavour. It is not realistic for any one person to comprehend all the expert and related knowledge and skills needed to design and develop such a multimodal interactive robotic interface. In this brief preliminary survey, the author has discussed extant AI, robotics, NLP, CV, VDM and VE applications that are directly related to multimodal interaction. Each mode of this multimodal communication is an active research area. Multimodal human/military robot communication is the ultimate goal of this research.

Practical implications

A multimodal autonomous robot in military communication using speech, images, gestures, VST and VE has yet to be deployed. Autonomous multimodal communication is expected to open wider possibilities for all armed forces. Given the density of the land domain, the army is in a position to exploit the opportunities for human–machine teaming (HMT) exposure. Naval and air forces will adopt platform-specific suites for specially selected operators to integrate with and leverage this emerging technology. Possessing a flexible means of communication that readily adapts to virtual training will greatly enhance planning and mission rehearsal.

Social implications

A multimodal communication system based on interaction, perception, cognition and visualization is still missing. Options to communicate, express and convey information in an HMT setting, with multiple options, suggestions and recommendations, will certainly enhance military communication, strength, engagement, security, cognition and perception, as well as the ability to act confidently for a successful mission.

Originality/value

The objective is to develop a multimodal autonomous interactive robot for military communications. This survey reports the state of the art: what exists and what is missing, what can be done and the possibilities of extension that support the military in maintaining effective communication using multimodalities. There are separate ongoing efforts in areas such as machine-enabled speech, image recognition, tracking, visualization for situational awareness and virtual environments. At this time, there is no integrated approach to multimodal human-robot interaction that offers flexible and agile communication. The report briefly introduces a research proposal for a multimodal interactive robot in military communication.

Article
Publication date: 2 May 2023

Hang Guo, Xin Chen, Min Yu, Marcin Uradziński and Liang Cheng

In this study, an indoor sensor information fusion positioning system of the quadrotor unmanned aerial vehicle (UAV) was investigated to solve the problem of unstable indoor…

Abstract

Purpose

In this study, an indoor sensor information fusion positioning system of the quadrotor unmanned aerial vehicle (UAV) was investigated to solve the problem of unstable indoor flight positioning.

Design/methodology/approach

The presented system was built on Light Detection and Ranging (LiDAR), Inertial Measurement Unit (IMU) and LiDAR-Lite devices. Based on this, one can obtain the aircraft's current attitude and its position vector relative to the target, and control the attitudes and positions of the UAV to reach the specified target positions. While building a UAV positioning model relative to the target for indoor positioning scenarios with limited Global Navigation Satellite System (GNSS) coverage, the system senses the environment through the peripheral sensors of an NVIDIA Jetson TX2, obtains the current attitude and position vector of the UAV, packs the data into the required format and delivers it to the flight controller. The flight controller then computes the required posture to bring the UAV to the specified target position.

Findings

The authors used two systems in the experiment. The first is the proposed UAV system; the other is the Vicon system, used as the reference for comparison. Vicon positioning error can be considered lower than 2 mm across low- to high-speed experiments. Experimental results demonstrated that the system fully meets the real-time positioning requirements (error less than 50 mm) for indoor quadrotor UAV flight. This verifies the accuracy and robustness of the proposed method against Vicon and preliminarily achieves the aim of stable indoor flight.

Originality/value

Vicon positioning error can be considered lower than 2 mm across low- to high-speed experiments. Experimental results demonstrated that the system fully meets the real-time positioning requirements (error less than 50 mm) for indoor quadrotor UAV flight, verifying the accuracy and robustness of the proposed method against Vicon and preliminarily achieving the aim of stable indoor flight.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 6 June 2024

Zhiwei Zhang, Saasha Nair, Zhe Liu, Yanzi Miao and Xiaoping Ma

This paper aims to facilitate the research and development of resilient navigation approaches, explore the robustness of adversarial training to different interferences and…

Abstract

Purpose

This paper aims to facilitate the research and development of resilient navigation approaches, explore the robustness of adversarial training to different interferences and promote their practical applications in real complex environments.

Design/methodology/approach

In this paper, the authors first summarize the real accidents of self-driving cars and develop a set of methods to simulate challenging scenarios by introducing simulated disturbances and attacks into the input sensor data. Then a robust and transferable adversarial training approach is proposed to improve the performance and resilience of current navigation models, followed by a multi-modality fusion-based end-to-end navigation network to demonstrate real-world performance of the methods. In addition, an augmented self-driving simulator with designed evaluation metrics is built to evaluate navigation models.

Findings

Synthetic experiments in the simulator demonstrate the robustness and transferability of the proposed adversarial training strategy. The simulation pipeline can also be used to support other robust perception or navigation research. A multi-modality fusion-based navigation framework is then proposed as a lightweight model to evaluate the adversarial training method in the real world.
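Injecting simulated disturbances into sensor data, as described above for building challenging scenarios, can be sketched for a single camera frame with additive Gaussian noise and a random occlusion patch. The function and all parameters are illustrative assumptions, not the paper's actual disturbance suite:

```python
import numpy as np

def disturb_image(img, noise_std=0.1, occlusion_frac=0.2, rng=None):
    # Simulate two interferences on a camera frame: additive Gaussian
    # sensor noise, and a random occlusion patch (a block of pixels
    # zeroed out, as if the lens were partly blocked). Pixels in [0, 1].
    if rng is None:
        rng = np.random.default_rng(0)
    out = img + rng.normal(0.0, noise_std, img.shape)
    h, w = img.shape
    bh, bw = int(h * occlusion_frac), int(w * occlusion_frac)
    y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
    out[y:y + bh, x:x + bw] = 0.0
    return np.clip(out, 0.0, 1.0)

clean = np.full((64, 64), 0.5)   # a flat gray frame
attacked = disturb_image(clean)  # noisy frame with a dark patch
```

Training a navigation model on pairs of clean and disturbed frames like these is the basic mechanism by which adversarial training buys robustness to such interferences.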

Originality/value

The adversarial training approach provides a transferable and robust enhancement for navigation models in both simulation and the real world.

Details

Robotic Intelligence and Automation, vol. 44 no. 3
Type: Research Article
ISSN: 2754-6969

Open Access
Article
Publication date: 10 July 2024

Tianyun Shi, Zhoulong Wang, Jia You, Pengyue Guo, Lili Jiang, Huijin Fu and Xu Gao

The safety of high-speed rail operation environments is an important guarantee for the safe operation of high-speed rail. The operating environment of the high-speed rail is…

Abstract

Purpose

The safety of the high-speed rail operating environment is an important guarantee for the safe operation of high-speed rail. The operating environment of high-speed rail is complex, and the main factors affecting its safety include meteorological disasters, perimeter intrusion and external environmental hazards. The purpose of the paper is to elaborate on the current research status and the team's research progress on safety situation perception for high-speed rail operating environments and to propose directions for further research.

Design/methodology/approach

In terms of the mechanisms and spatio-temporal evolution laws of the main factors affecting the safety of high-speed rail operating environments, the research status is elaborated and the team's latest research progress and achievements are introduced. The same is done for meteorological, perimeter and external environmental situation perception methods for high-speed rail operation.

Findings

Based on the technical route of "situational awareness, evaluation, warning and active control," a technical system for monitoring the safety of high-speed train operating environments has been formed. Relevant theoretical and technical research and applications have been carried out around the impact of meteorological disasters, perimeter intrusion and the external environment on high-speed rail safety. This work strongly supports the improvement of China's railway environmental-safety guarantee technology.

Originality/value

With the operation of CR450 high-speed trains at 400 km per hour and the future application of autonomous driving technology for high-speed trains, new and higher requirements have been put forward for the safety of high-speed rail operating environments. The following five lines of work are urgently needed:

(1) Research the single-factor disaster mechanisms of wind, rain, snow, lightning, etc. for high-speed railways at 400 km per hour and, on that basis, study the evolution characteristics of multiple safety factors and their correlation with the high-speed driving safety environment, revealing the coupled disaster mechanism of multiple influencing factors.

(2) Research multi-source data fusion methods and associated features covering disaster monitoring data, meteorological information, route characteristics, terrain and landforms, and study the spatio-temporal evolution laws of meteorological disasters, perimeter intrusions and external environmental hazards.

(3) In terms of meteorological disaster situation awareness, research high-precision prediction methods for meteorological information time series along high-speed rail lines, aiming at small-scale, real-time, dynamic and accurate prediction of meteorological disasters along the lines.

(4) In terms of perimeter intrusion, research a multi-modal fusion perception method for typical high-speed rail operation scenarios under all-time, all-weather, full-coverage conditions, and combine artificial intelligence technology to achieve comprehensive and accurate perception of perimeter security risks along the line.

(5) In terms of the external environment, building on existing general network frameworks for change detection, carry out research on change detection and algorithms for the surroundings of high-speed rail.

Article
Publication date: 7 April 2023

Sixing Liu, Yan Chai, Rui Yuan and Hong Miao

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments…

Abstract

Purpose

Simultaneous localization and map building (SLAM), as a state estimation problem, is a prerequisite for solving the problem of autonomous vehicle motion in unknown environments. Existing algorithms are based on laser or visual odometry; however, lidar has a small sensing range and sparse data features, while cameras are vulnerable to external conditions, so localization and map building cannot be performed stably and accurately with a single sensor. This paper aims to propose a tightly coupled 3D laser map-building method that incorporates visual information, using laser point cloud information and image information to complement each other and improve the overall performance of the algorithm.

Design/methodology/approach

The visual feature points are first matched at the front end of the method, and mismatched point pairs are removed using a bidirectional random sample consensus (RANSAC) algorithm. The laser point cloud is then used to obtain depth information for the feature points, and the two types of feature points are fed into the pose estimation module for a tightly coupled local bundle adjustment, solved using a heuristic simulated annealing algorithm. Finally, the visual bag-of-words model is fused with the laser point cloud information to establish a threshold and construct a loop-closure framework that further reduces the cumulative drift error of the system over time.
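The bidirectional RANSAC idea, rejecting matches unless they are inliers when the geometric model is fitted in both directions, can be sketched with a toy model. The example below fits a 2D translation in place of the epipolar geometry a real SLAM front end would use; all data and parameters are synthetic assumptions:

```python
import numpy as np

def ransac_translation(src, dst, thresh=1.0, iters=100, rng=None):
    # Toy RANSAC fitting a 2D translation between matched point sets;
    # stands in for the geometric model (e.g. the epipolar constraint)
    # of a real front end. Returns a boolean inlier mask.
    if rng is None:
        rng = np.random.default_rng(0)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]  # hypothesis from a single sampled match
        inliers = np.linalg.norm(src + t - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return best

def bidirectional_ransac(pts_a, pts_b):
    # Keep only matches that are inliers when the model is fitted in
    # BOTH directions (A -> B and B -> A).
    return ransac_translation(pts_a, pts_b) & ransac_translation(pts_b, pts_a)

# Synthetic matches: a pure (5, 0) shift plus two gross mismatches
a = np.array([[0, 0], [1, 0], [2, 1], [3, 2], [9, 9], [4, 4]], float)
b = a + np.array([5.0, 0.0])
b[4] = [0.0, 7.0]   # mismatch
b[5] = [20.0, 1.0]  # mismatch
mask = bidirectional_ransac(a, b)  # True only for the four true matches
```

Requiring consistency in both directions is stricter than a single RANSAC pass, which is what makes it effective at purging mismatched pairs before pose estimation.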

Findings

Experiments on publicly available data sets show that the trajectory estimated by the proposed method matches the ground-truth trajectory well. For various scenes, maps can be constructed using the complementary laser and vision sensors with high accuracy and robustness. The method was also verified in a real environment on an autonomous walking acquisition platform; the system running the method operated well over long periods and adapted to multiple scene types.

Originality/value

A tightly coupled multi-sensor data fusion method is proposed to fuse laser and vision information for an optimal pose solution. A bidirectional RANSAC algorithm is used to remove mismatched visual point pairs. Further, oriented FAST and rotated BRIEF (ORB) feature points are used to build a bag-of-words model and construct a real-time loop-closure framework to reduce error accumulation. According to the experimental validation results, the accuracy and robustness of single-sensor SLAM algorithms can be improved.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 6
Type: Research Article
ISSN: 0143-991X
