Search results

1 – 10 of 78
Article
Publication date: 6 March 2024

Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley


Abstract

Purpose

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.

Design/methodology/approach

The research begins by providing context for the physical training environment in today’s technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion of advancements in sensor fusion technology and their application in VR systems for physical training. A systematic literature review then follows, examining VR’s application in various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.
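
The review treats sensor fusion at a survey level and does not prescribe a particular algorithm. As a purely illustrative sketch of the kind of fusion used for VR head tracking, the snippet below blends gyroscope integration with an accelerometer-derived angle via a complementary filter; the function names and sample readings are hypothetical, not taken from the paper.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Fuse a gyroscope rate and an accelerometer-derived angle into one pitch estimate.

    pitch       -- previous fused pitch estimate (radians)
    gyro_rate   -- angular velocity about the pitch axis (rad/s)
    accel_pitch -- pitch angle inferred from gravity via the accelerometer (radians)
    alpha       -- trust placed in gyro integration vs. the accelerometer correction
    """
    # Integrate the gyro for short-term accuracy, correct long-term drift with the accelerometer.
    return alpha * (pitch + gyro_rate * dt) + (1.0 - alpha) * accel_pitch

def accel_to_pitch(ax, ay, az):
    """Estimate pitch from a near-static accelerometer reading (gravity direction)."""
    return math.atan2(-ax, math.sqrt(ay * ay + az * az))

# Example: one update step at 100 Hz with hypothetical sensor readings.
pitch = 0.0
pitch = complementary_filter(pitch, gyro_rate=0.05,
                             accel_pitch=accel_to_pitch(0.1, 0.0, 9.7), dt=0.01)
print(round(pitch, 4))
```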

Findings

Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.

Originality/value

The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Open Access
Article
Publication date: 4 April 2024

Yanmin Zhou, Zheng Yan, Ye Yang, Zhipeng Wang, Ping Lu, Philip F. Yuan and Bin He


Abstract

Purpose

Vision, audition, olfaction, touch and taste are the five principal senses humans use to interact with the real world. As robots face increasingly complex environments, a sensing system built from various types of sensors is essential for intelligent robots. To mimic human-like abilities, sensors with perception capabilities similar to those of humans are indispensable. However, most research has concentrated only on reviewing the literature on single-modal sensors and their robotic applications.

Design/methodology/approach

This study presents a systematic review of the five bioinspired senses, including a brief introduction to multimodal sensing applications, and predicts current trends and future directions of the field that may offer continuing insight.

Findings

This review shows that bioinspired sensors can enable robots to better understand the environment, and multiple sensor combinations can support the robot’s ability to behave intelligently.

Originality/value

The review starts with a brief survey of the biological sensing mechanisms of the five senses, followed by their bioinspired electronic counterparts. Their applications in robots are then reviewed as another emphasis, covering the main application areas of localization and navigation, object identification, dexterous manipulation, compliant interaction and so on. Finally, the trends, difficulties and challenges of this research are discussed to help guide future research on intelligent robot sensors.

Details

Robotic Intelligence and Automation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 15 September 2023

Kaushal Jani



Abstract

Purpose

This article takes into account object identification, enhanced visual feature optimization, cost-effectiveness and speed selection in response to terrain conditions. Neither supervised machine learning nor manual engineering is used in this work; instead, the orbital transfer vehicle (OTV) educates itself without human instruction or labeling. Beyond its link to stopping distance and lateral mobility, choosing the right speed is crucial. One of the biggest problems in autonomous operation is accurate perception. Perception technology typically focuses on obstacle avoidance, yet at high speeds the vehicle's shock is governed by the terrain's roughness, and the precision needed to recognize difficult terrain is far higher than the accuracy needed to avoid obstacles.
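
The abstract links speed selection to terrain roughness but does not state how the two are coupled. The sketch below is one hypothetical way to express that idea: use the variance of recent vertical acceleration as a roughness proxy and throttle the commanded speed accordingly. It is not the paper's method, and all names, parameters and readings are illustrative.

```python
import statistics

def roughness_from_accel(vertical_accel_window):
    """Use the variance of recent vertical acceleration as a crude roughness proxy."""
    return statistics.pvariance(vertical_accel_window)

def select_speed(roughness, v_max=2.0, v_min=0.2, k=0.5):
    """Map estimated roughness to a commanded speed: rougher terrain -> slower travel."""
    speed = v_max / (1.0 + k * roughness)
    return max(v_min, min(v_max, speed))

# Example: smooth vs. rough terrain readings (m/s^2, hypothetical values).
smooth = [0.02, -0.01, 0.03, 0.00, -0.02]
rough = [1.5, -2.1, 1.8, -1.2, 2.4]
print(select_speed(roughness_from_accel(smooth)))  # stays near v_max
print(select_speed(roughness_from_accel(rough)))   # throttled down
```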

Design/methodology/approach

Robots that can drive unattended in an unfamiliar environment are needed for an orbital transfer vehicle (OTV) intended for the clearance of space debris. In recent years, OTV research has attracted increasing attention and revealed several insights for robot systems in various applications. Improvements to advanced assistance systems, such as lane departure warning and intelligent speed adaptation, are eagerly sought by industry, particularly space enterprises. From a computer science perspective, the OTV serves as a research platform for advances in machine learning, computer vision, sensor data fusion, path planning, decision-making and intelligent autonomous behavior. Within the framework of an autonomous OTV, this study presents several perception technologies for autonomous driving.

Findings

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Originality/value

One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.

Details

International Journal of Intelligent Unmanned Systems, vol. 12 no. 2
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 2 January 2024

Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He


Abstract

Purpose

In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.

Design/methodology/approach

This paper focuses on the state of research in LiDAR-based SLAM for robotic mapping and presents a literature survey from the perspective of various LiDAR types and configurations.
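
The survey covers many LiDAR SLAM systems rather than a single algorithm. For readers unfamiliar with the area, the sketch below shows one point-to-point ICP (iterative closest point) iteration, the scan-matching step at the heart of most LiDAR SLAM front ends; it is a didactic illustration using NumPy, not code from any surveyed system.

```python
import numpy as np

def icp_step(source, target):
    """One point-to-point ICP iteration: match nearest neighbours, then solve
    the best-fit rigid transform with the SVD (Kabsch) method.

    source, target -- (N, 3) and (M, 3) arrays of LiDAR points.
    Returns a 3x3 rotation R and translation t aligning source to target.
    """
    # Nearest-neighbour association (brute force for clarity, not speed).
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[np.argmin(d, axis=1)]

    # Closed-form rigid alignment of the matched point pairs.
    src_c, tgt_c = source.mean(axis=0), matched.mean(axis=0)
    H = (source - src_c).T @ (matched - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_c - R @ src_c
    return R, t
```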

Findings

This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.

Originality/value

To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 6 March 2024

Ruoxing Wang, Shoukun Wang, Junfeng Xue, Zhihua Chen and Jinge Si


Abstract

Purpose

This paper aims to investigate an autonomous obstacle-surmounting method based on a hybrid gait to address the problem of a six-wheel-legged robot autonomously crossing low-height obstacles. The autonomy of obstacle surmounting is reflected in obstacle recognition based on multi-frame point cloud fusion.

Design/methodology/approach

In this paper, first, because the lidar on the robot cannot directly scan the point cloud of low-height obstacles, the lidar is driven to rotate by a 2D turntable to obtain the point cloud of low-height obstacles under the robot. The tightly coupled Lidar Inertial Odometry via Smoothing and Mapping (LIO-SAM) algorithm, a fast ground segmentation algorithm and a Euclidean clustering algorithm are used to recognize the point cloud of low-height obstacles and extract low-height obstacle information. Then, combined with the structural characteristics of the robot, obstacle-surmounting action planning is carried out for two types of obstacle scenes. A segmented approach is used for action planning: gait units describe each segment of the action, and a gait matrix describes the overall action. The paper also analyzes the stability and surmounting capability of the robot’s key poses and determines the robot’s surmounting capability and the value scheme of the surmounting control variables.
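
As a rough illustration of the clustering stage named above, the sketch below groups non-ground points by simple region growing with a Euclidean distance threshold. It is a minimal stand-in rather than the authors' implementation, and the radius and minimum cluster size are hypothetical parameters.

```python
import numpy as np
from collections import deque

def euclidean_cluster(points, radius=0.1, min_size=10):
    """Group non-ground points into clusters by region growing: any point closer
    than `radius` to a cluster member joins that cluster.

    points -- (N, 3) array of points left over after ground segmentation.
    Returns a list of index lists, one per cluster with at least `min_size` points.
    """
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, members = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            candidates = list(unvisited)
            if not candidates:
                break
            # Brute-force neighbourhood search, kept simple for illustration.
            dists = np.linalg.norm(points[candidates] - points[i], axis=1)
            for idx, dist in zip(candidates, dists):
                if dist < radius:
                    unvisited.remove(idx)
                    queue.append(idx)
                    members.append(idx)
        if len(members) >= min_size:
            clusters.append(members)
    return clusters
```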

Findings

The experimental verification is carried out on the robot laboratory platform (BIT-6NAZA). The obstacle recognition method can accurately detect low-height obstacles. The robot can maintain a smooth posture to cross low-height obstacles, which verifies the feasibility of the adaptive obstacle-surmounting method.

Originality/value

The study can provide the theoretical and engineering foundation for the environmental perception of unmanned platforms. It provides environmental information to support follow-up work, for example, on planning how to avoid or surmount obstacles.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Keywords

Article
Publication date: 22 January 2024

Jun Liu, Junyuan Dong, Mingming Hu and Xu Lu


Abstract

Purpose

Existing Simultaneous Localization and Mapping (SLAM) algorithms are relatively well developed. However, in complex dynamic environments, the movement of dynamic points on dynamic objects in the image can affect the system’s observations during mapping, introducing biases and errors into position estimation and the creation of map points. The aim of this paper is to achieve higher accuracy than traditional SLAM algorithms through a semantic approach.

Design/methodology/approach

In this paper, semantic segmentation of dynamic objects is performed with a U-Net semantic segmentation network. Motion consistency detection then determines whether the segmented objects are actually moving in the current scene, and a motion compensation method eliminates the dynamic points and compensates the current local image, making the system robust.
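
The abstract describes masking out dynamic objects before pose estimation without giving implementation details. The sketch below shows one minimal way such mask-based rejection of feature points could look; the dilation margin and the interfaces are assumptions for illustration, not the paper's code.

```python
import numpy as np

def filter_dynamic_keypoints(keypoints, dynamic_mask, dilate=5):
    """Discard feature points that fall on pixels a segmentation network labelled
    as potentially dynamic, so they are not used for pose estimation.

    keypoints    -- (N, 2) array of (u, v) pixel coordinates.
    dynamic_mask -- (H, W) boolean array, True where a dynamic object was segmented.
    dilate       -- margin (in pixels) around dynamic regions to also reject.
    """
    h, w = dynamic_mask.shape
    keep = []
    for u, v in np.round(keypoints).astype(int):
        u0, u1 = max(0, u - dilate), min(w, u + dilate + 1)
        v0, v1 = max(0, v - dilate), min(h, v + dilate + 1)
        if not dynamic_mask[v0:v1, u0:u1].any():   # no dynamic pixel nearby
            keep.append((u, v))
    return np.array(keep)
```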

Findings

Experiments comparing the effects of detecting dynamic points and removing outliers are conducted on the Technische Universität München (TUM) dynamic data set, and the results show that the absolute trajectory accuracy of the proposed method is significantly improved compared with ORB-SLAM3 and DS-SLAM.
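
For context, absolute trajectory accuracy is typically reported as the root-mean-square absolute trajectory error (ATE) between the estimated and ground-truth trajectories. A minimal sketch, assuming the two trajectories are already time-synchronized and aligned (e.g. by a Umeyama alignment, omitted here):

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """Root-mean-square absolute trajectory error between aligned (N, 3) trajectories."""
    err = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))
```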

Originality/value

In the semantic segmentation stage, the segmentation mask is combined with dynamic point detection, elimination and compensation, which reduces the influence of dynamic objects and thus effectively improves localization accuracy in dynamic environments.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 6 February 2024

Han Wang, Quan Zhang, Zhenquan Fan, Gongcheng Wang, Pengchao Ding and Weidong Wang


Abstract

Purpose

To solve the obstacle detection problem in robot autonomous obstacle negotiation, this paper aims to propose an obstacle detection system based on elevation maps for three types of obstacles: positive obstacles, negative obstacles and trench obstacles.

Design/methodology/approach

The system framework includes mapping, ground segmentation, obstacle clustering and obstacle recognition. Positive obstacle detection is realized by calculating each obstacle’s minimum bounding rectangle, which involves convex hull calculation, minimum-area rectangle calculation and bounding box generation. The detection of negative obstacles and trench obstacles is based on the absence of information in the map and includes an obstacle discovery method and a type confirmation method.
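
As an illustration of the positive-obstacle pipeline described above (convex hull, then minimum-area rectangle, then bounding box), the sketch below applies OpenCV to the horizontal projection of one obstacle cluster; the data and function name are hypothetical, and this is not the authors' implementation.

```python
import numpy as np
import cv2

def positive_obstacle_box(cell_xy):
    """Fit a minimum-area bounding rectangle around the grid cells of one
    positive-obstacle cluster projected onto the horizontal plane.

    cell_xy -- (N, 2) float32 array of (x, y) coordinates of occupied cells.
    Returns the 4 rectangle corners as a (4, 2) array.
    """
    pts = cell_xy.astype(np.float32)
    hull = cv2.convexHull(pts)                  # convex hull of the cluster
    rect = cv2.minAreaRect(hull)                # ((cx, cy), (w, h), angle)
    return cv2.boxPoints(rect)                  # corner coordinates of the box

# Hypothetical cluster of occupied elevation-map cells.
cluster = np.array([[0.0, 0.0], [0.4, 0.1], [0.5, 0.6], [0.1, 0.5]], dtype=np.float32)
print(positive_obstacle_box(cluster))
```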

Findings

The obstacle detection system has been thoroughly tested in various environments. In the outdoor experiment, with an average detection time of 22.2 ms, the system detected obstacles with a 95% success rate, indicating the effectiveness of the detection algorithm. Moreover, the system’s obstacle detection error falls between 4% and 6.6%, meeting the requirements for obstacle negotiation in the next stage.

Originality/value

This paper studies how to solve the obstacle detection problem during robot autonomous obstacle negotiation.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 17 October 2022

Jiayue Zhao, Yunzhong Cao and Yuanzhi Xiang


Abstract

Purpose

The safety management of construction machines is of primary importance. Traditional construction machine safety monitoring and evaluation methods cannot adapt to complex construction environments, and monitoring methods based on sensor equipment are costly. This paper therefore introduces computer vision and deep learning technologies and proposes the YOLOv5-FastPose (YFP) model, which realizes pose estimation of construction machines by improving the AlphaPose human pose model.

Design/methodology/approach

The model introduces the object detection module YOLOv5m to improve recognition accuracy when detecting construction machines. Meanwhile, to better capture pose characteristics, the FastPose network, which optimizes feature extraction, is introduced into the Single-Machine Pose Estimation (SMPE) module of AlphaPose. This study used the Alberta Construction Image Dataset (ACID) and the Construction Equipment Poses Dataset (CEPD) to build the object detection and pose estimation data sets for construction machines, using data augmentation and the Labelme image annotation software, for training and testing the YFP model.
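
The abstract describes a top-down pipeline: detect each machine with YOLOv5m, then estimate keypoints within each detection. The sketch below expresses that flow in outline only; the `detector` and `pose_net` callables are hypothetical interfaces standing in for the actual YOLOv5m and FastPose models, which are not reproduced here.

```python
def estimate_machine_poses(image, detector, pose_net, conf_thresh=0.5):
    """Top-down pose estimation of the kind the YFP model describes: detect each
    construction machine first, then estimate keypoints inside each crop.

    image    -- HxWx3 NumPy image.
    detector -- callable returning (x1, y1, x2, y2, score) boxes (hypothetical interface).
    pose_net -- callable mapping a crop to a float (K, 3) array of (x, y, confidence)
                keypoints (hypothetical interface).
    """
    poses = []
    for x1, y1, x2, y2, score in detector(image):
        if score < conf_thresh:
            continue
        crop = image[int(y1):int(y2), int(x1):int(x2)]
        keypoints = pose_net(crop)
        # Map crop-local keypoint coordinates back to the full image frame.
        keypoints[:, 0] += x1
        keypoints[:, 1] += y1
        poses.append(keypoints)
    return poses
```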

Findings

The experimental results show that the improved YFP model achieves an average normalization error (NE) of 12.94 × 10^-3, an average Percentage of Correct Keypoints (PCK) of 98.48% and an average Area Under the PCK Curve (AUC) of 37.50 × 10^-3. Compared with existing methods, this model achieves higher accuracy in the pose estimation of construction machines.
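
For context, the PCK metric counts a predicted keypoint as correct when it lies within a chosen distance threshold of the ground truth. A minimal sketch of the computation (the threshold units and any normalization follow the evaluator's convention and are not specified by the abstract):

```python
import numpy as np

def pck(pred, gt, threshold):
    """Percentage of Correct Keypoints: a prediction counts as correct when it
    lies within `threshold` of the ground-truth keypoint.

    pred, gt -- (N, K, 2) arrays of keypoint coordinates for N samples, K keypoints.
    Returns the percentage of correct keypoints.
    """
    dist = np.linalg.norm(pred - gt, axis=2)
    return float((dist < threshold).mean() * 100.0)
```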

Originality/value

This study extends and optimizes the human pose estimation model AlphaPose to make it suitable for construction machines, improving the performance of pose estimation for construction machines.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 3
Type: Research Article
ISSN: 0969-9988

Keywords

Open Access
Article
Publication date: 26 March 2024

Daniel Nygaard Ege, Pasi Aalto and Martin Steinert


Abstract

Purpose

This study was conducted to address the methodological shortcomings and the high cost of understanding how new, poorly understood architectural spaces, such as makerspaces, are used. The proposed quantified method of enhancing current post-occupancy evaluation (POE) practices aims to provide architects, engineers and building professionals with accessible and intuitive data that can be used to conduct comparative studies of spatial changes, understand changes over time (such as those resulting from COVID-19) and verify design intentions after construction through a quantified post-occupancy evaluation.

Design/methodology/approach

In this study, we demonstrate the use of ultra-wideband (UWB) technology to gather, analyze and visualize quantified data showing interactions between people, spaces and objects. The experiment was conducted in a makerspace over a four-day hackathon event with a team of four actively tracked participants.
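
The abstract does not detail how interactions are derived from the UWB position logs. One simple, hypothetical derivation is the time two tags spend within a chosen proximity radius of each other, sketched below; the radius and sampling interval are illustrative, not values from the study.

```python
import numpy as np

def interaction_time(track_a, track_b, radius=1.5, dt=1.0):
    """Total time (seconds) two UWB tags spent within `radius` metres of each other.

    track_a, track_b -- (T, 3) arrays of synchronized tag positions sampled every `dt` seconds.
    """
    close = np.linalg.norm(track_a - track_b, axis=1) < radius
    return float(close.sum() * dt)
```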

Findings

The study shows that by moving beyond simply counting people in a space, a more nuanced pattern of interactions can be discovered, documented and analyzed. The ability to automatically visualize findings intuitively in 3D aids architects and visual thinkers to easily grasp the essence of interactions with minimal effort.

Originality/value

By providing a method for better understanding the spatial and temporal interactions between people, objects and spaces, our approach provides valuable feedback in POE. Specifically, our approach aids practitioners in comparing spaces, verifying design intent and speeding up knowledge building when developing new architectural spaces, such as makerspaces.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 13
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 12 December 2023

Robert Bogue


Abstract

Purpose

The purpose of this paper is to provide a detailed insight into the global military robot industry with an emphasis on products and their applications.

Design/methodology/approach

Following an introduction that includes a brief historical account, the paper provides an industry overview, including various market dimensions and a discussion of the geopolitical and technological factors driving market development. The following three sections provide details of land, airborne and marine robots, their capabilities and their deployments in recent conflicts. Finally, brief conclusions are drawn.

Findings

Military robots which operate on land, in the air and at sea constitute a multi-billion dollar industry which is growing rapidly. It is being driven by geopolitical tensions, notably the military-technology arms race between China and the USA and the conflict in Ukraine, together with technological progress, particularly in AI. Many robots possess multi-functional capabilities, and the leading application is presently intelligence, surveillance and reconnaissance. An increasing number of heavily armed robots are being developed, and AI has the potential to endow them with the capacity to deliver lethal force without human intervention. Although heavily criticised in some quarters, this capability has probably already been deployed on the battlefield. With ever-growing military budgets, escalating political tensions and technological innovations, robots will play an increasingly significant role in future conflicts.

Originality/value

This paper provides a detailed account of military robots and their role in modern warfare.

Details

Industrial Robot: the international journal of robotics research and application, vol. 51 no. 2
Type: Research Article
ISSN: 0143-991X

Keywords
