Search results
1 – 10 of 19
Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley
Abstract
Purpose
This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.
Design/methodology/approach
The research initiates by providing context to the physical training environment in today’s technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion on the advancements in sensor fusion technology and its application in VR systems for physical training. A systematic review of literature then follows, examining VR’s application in various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.
Findings
Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.
Originality/value
The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.
Abstract
Purpose
This article takes into account object identification, enhanced visual feature optimization, cost-effectiveness and speed selection in response to terrain conditions. Neither supervised machine learning nor manual engineering is used in this work; instead, the OTV educates itself without human instruction or labeling. Beyond its link to stopping distance and lateral mobility, choosing the right speed is crucial. One of the biggest problems in autonomous operations is accurate perception. Perception technology typically focuses on obstacle avoidance, yet at high speed the shock experienced by the vehicle is governed by the roughness of the terrain, and the precision needed to recognize difficult terrain is far higher than that needed to avoid obstacles.
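The abstract ties safe speed to terrain-induced shock rather than to obstacle avoidance alone. A minimal self-supervised sketch of that idea (all function names and thresholds are hypothetical, not from the paper) could map the variance of recent vertical-acceleration samples to a speed command:

```python
import numpy as np

def select_speed(accel_z, v_max=2.0, v_min=0.2, rough_thresh=0.5):
    """Pick a speed from recent vertical-acceleration samples.

    Rougher terrain (higher standard deviation of vibration) yields a
    lower commanded speed. Thresholds are illustrative only.
    """
    roughness = float(np.std(accel_z))
    if roughness >= rough_thresh:
        return v_min
    # Linear blend between v_max (smooth ground) and v_min (rough ground).
    frac = roughness / rough_thresh
    return v_max - frac * (v_max - v_min)

smooth = select_speed(np.zeros(100))  # no vibration: full speed
rough = select_speed(np.random.default_rng(0).normal(0.0, 1.0, 100))
print(smooth, rough)
```

Because the roughness signal comes from the vehicle's own ride, no human labeling is needed, matching the self-teaching framing of the abstract.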
Design/methodology/approach
Robots that can drive unattended in an unfamiliar environment are needed for the Orbital Transfer Vehicle (OTV) used for the clearance of space debris. In recent years, OTV research has attracted increasing attention and revealed several insights for robot systems in various applications. Improvements to advanced assistance systems such as lane departure warning and intelligent speed adaptation are eagerly sought by industry, particularly space enterprises. From a computer science perspective, the OTV serves as a research basis for advances in machine learning, computer vision, sensor data fusion, path planning, decision making and intelligent autonomous behavior. Within the framework of an autonomous OTV, this study offers several perception technologies for autonomous driving.
Findings
One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.
Originality/value
One of the most important steps in the functioning of autonomous OTVs and aid systems is the recognition of barriers, such as other satellites. Using sensors to perceive its surroundings, an autonomous car decides how to operate on its own. Driver-assistance systems like adaptive cruise control and stop-and-go must be able to distinguish between stationary and moving objects surrounding the OTV.
Yanmin Zhou, Zheng Yan, Ye Yang, Zhipeng Wang, Ping Lu, Philip F. Yuan and Bin He
Abstract
Purpose
Vision, audition, olfaction, touch and taste are five important senses that humans use to interact with the real world. As robots face increasingly complex environments, a sensing system with various types of sensors is essential for intelligent robots. To mimic human-like abilities, sensors similar to human perception capabilities are indispensable. However, most research has concentrated only on single-modal sensors and their robotic applications.
Design/methodology/approach
This study presents a systematic review of five bioinspired senses, especially considering a brief introduction of multimodal sensing applications and predicting current trends and future directions of this field, which may have continuous enlightenments.
Findings
This review shows that bioinspired sensors can enable robots to better understand the environment, and multiple sensor combinations can support the robot’s ability to behave intelligently.
Originality/value
The review starts with a brief survey of the biological sensing mechanisms of the five senses, followed by their bioinspired electronic counterparts. Their applications in robots are then reviewed as another emphasis, covering the main application scopes of localization and navigation, object identification, dexterous manipulation, compliant interaction and so on. Finally, the trends, difficulties and challenges of this research are discussed to help guide future research on intelligent robot sensors.
Jun Liu, Junyuan Dong, Mingming Hu and Xu Lu
Abstract
Purpose
Existing Simultaneous Localization and Mapping (SLAM) algorithms are relatively well developed. In complex dynamic environments, however, the movement of dynamic points on dynamic objects in the image can affect the system's observations, introducing biases and errors into position estimation and the creation of map points. The aim of this paper is to achieve higher accuracy than traditional SLAM algorithms through a semantic approach.
Design/methodology/approach
In this paper, semantic segmentation of dynamic objects is performed with a U-Net semantic segmentation network. A motion-consistency check then determines whether each segmented object is actually moving in the current scene, and a motion-compensation method eliminates the dynamic points and compensates the current local image, making the system robust.
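The motion-consistency step can be pictured as comparing where camera ego-motion alone would place each feature against where it is actually observed. A minimal sketch, with the ego-motion prediction abstracted into a callable and the pixel threshold chosen arbitrarily (neither is from the paper):

```python
import numpy as np

def flag_dynamic_points(pts_prev, pts_curr, predict, thresh=2.0):
    """Flag feature points whose motion disagrees with camera motion.

    pts_prev, pts_curr : (N, 2) pixel coordinates in consecutive frames.
    predict            : callable mapping previous points to where the
                         camera's ego-motion alone would place them.
    Points deviating from the prediction by more than `thresh` pixels
    are treated as dynamic and would be excluded from tracking.
    """
    expected = predict(pts_prev)
    err = np.linalg.norm(pts_curr - expected, axis=1)
    return err > thresh

# Camera translates every static point by 1 px; one point also moves
# on its own by a further 10 px and should be flagged.
prev = np.array([[10.0, 10.0], [50.0, 50.0], [80.0, 20.0]])
curr = prev + 1.0
curr[1] += 10.0
dyn = flag_dynamic_points(prev, curr, lambda p: p + 1.0)
print(dyn)  # only the second point is dynamic
```

In a full system the flags would be intersected with the U-Net segmentation mask so that only points on segmented objects that actually move are removed.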
Findings
Experiments comparing the effect of detecting dynamic points and removing outliers are conducted on the Technische Universität München (TUM) dynamic data set, and the results show that the absolute trajectory accuracy of the proposed method is significantly improved compared with ORB-SLAM3 and DS-SLAM.
Originality/value
In this paper, in the semantic segmentation network part, the segmentation mask is combined with the method of dynamic point detection, elimination and compensation, which reduces the influence of dynamic objects, thus effectively improving the accuracy of localization in dynamic environments.
Miaoxian Guo, Shouheng Wei, Chentong Han, Wanliang Xia, Chao Luo and Zhijian Lin
Abstract
Purpose
Surface roughness has a serious impact on the fatigue strength, wear resistance and life of mechanical products. Realizing the evolution of surface quality through theoretical modeling takes a lot of effort. To predict the surface roughness of milling processing, this paper aims to construct a neural network based on deep learning and data augmentation.
Design/methodology/approach
This study proposes a method consisting of three steps. Firstly, the machine tool multisource data acquisition platform is established, combining sensor monitoring with machine tool communication to collect processing signals. Secondly, feature parameters are extracted to reduce interference and improve the model's generalization ability. Thirdly, for different expectations, the parameters of the deep belief network (DBN) model are optimized by the Tent-SSA algorithm to achieve more accurate roughness classification and regression prediction.
Findings
The adaptive synthetic sampling (ADASYN) algorithm can improve the classification prediction accuracy of DBN from 80.67% to 94.23%. After the DBN parameters were optimized by Tent-SSA, the roughness prediction accuracy was significantly improved. For the classification model, the prediction accuracy is improved by 5.77% based on ADASYN optimization. For regression models, different objective functions can be set according to production requirements, such as root-mean-square error (RMSE) or MaxAE, and the error is reduced by more than 40% compared to the original model.
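The "Tent-" prefix in Tent-SSA usually refers to seeding the optimizer's population with a tent chaotic map, which spreads initial candidates over the search range more evenly than uniform sampling. A minimal sketch of that initialization step (the paper's exact Tent-SSA formulation may differ; names and the seed value are illustrative):

```python
import numpy as np

def tent_map_init(n_agents, dim, lb, ub, x0=0.37):
    """Initialize an optimizer population with a tent chaotic map.

    The tent map x_{k+1} = 2*x_k (if x_k < 0.5) else 2*(1 - x_k)
    generates a well-spread sequence in (0, 1), which is then scaled
    into the search bounds [lb, ub] for each agent and dimension.
    """
    x = x0
    pop = np.empty((n_agents, dim))
    for i in range(n_agents):
        for j in range(dim):
            x = 2 * x if x < 0.5 else 2 * (1 - x)
            pop[i, j] = lb + x * (ub - lb)
    return pop

pop = tent_map_init(n_agents=5, dim=3, lb=-1.0, ub=1.0)
print(pop.shape)  # (5, 3)
```

The resulting matrix would serve as the starting population whose best member the sparrow search iterations then refine into DBN hyperparameters.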
Originality/value
A roughness prediction model based on multiple monitoring signals is proposed, which reduces the dependence on the acquisition of environmental variables and enhances the model's applicability. Furthermore, with the ADASYN algorithm, the Tent-SSA intelligent optimization algorithm is introduced to optimize the hyperparameters of the DBN model and improve the optimization performance.
Xiangdi Yue, Yihuan Zhang, Jiawei Chen, Junxin Chen, Xuanyi Zhou and Miaolei He
Abstract
Purpose
In recent decades, the field of robotic mapping has witnessed widespread research and development in light detection and ranging (LiDAR)-based simultaneous localization and mapping (SLAM) techniques. This paper aims to provide a significant reference for researchers and engineers in robotic mapping.
Design/methodology/approach
This paper reviews the research state of LiDAR-based SLAM for robotic mapping and surveys the literature from the perspective of various LiDAR types and configurations.
Findings
This paper conducted a comprehensive literature review of the LiDAR-based SLAM system based on three distinct LiDAR forms and configurations. The authors concluded that multi-robot collaborative mapping and multi-source fusion SLAM systems based on 3D LiDAR with deep learning will be new trends in the future.
Originality/value
To the best of the authors’ knowledge, this is the first thorough survey of robotic mapping from the perspective of various LiDAR types and configurations. It can serve as a theoretical and practical guide for the advancement of academic and industrial robot mapping.
Pengyue Guo, Tianyun Shi, Zhen Ma and Jing Wang
Abstract
Purpose
The paper aims to solve the problem of personnel intrusion identification within the limits of high-speed railways. It adopts the fusion method of millimeter wave radar and camera to improve the accuracy of object recognition in dark and harsh weather conditions.
Design/methodology/approach
This paper adopts a radar and camera linkage fusion strategy to achieve focus amplification of long-distance targets, and it solves the problem of low illumination with laser fill light at the focus point. To improve recognition, the YOLOv8 algorithm is adopted for multi-scale target recognition. In addition, for the image distortion caused by bad weather, a linkage and tracking fusion strategy is proposed to output correct alarm results.
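One core step in any radar-camera linkage is associating a radar target, once projected into the image plane, with a camera detection. A minimal association sketch (the projection itself is omitted, and the box format simply mirrors the common (x1, y1, x2, y2) convention of detectors such as YOLOv8; everything here is illustrative rather than the paper's algorithm):

```python
import numpy as np

def match_radar_to_boxes(radar_px, boxes):
    """Associate a projected radar detection with a camera bounding box.

    radar_px : (u, v) pixel location of the radar target after
               projection into the camera frame.
    boxes    : list of (x1, y1, x2, y2) camera detections.
    Returns the index of the box containing the radar point, or the
    box whose center is nearest if none contains it.
    """
    u, v = radar_px
    centers = []
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        if x1 <= u <= x2 and y1 <= v <= y2:
            return i
        centers.append(((x1 + x2) / 2, (y1 + y2) / 2))
    dists = [np.hypot(u - cu, v - cv) for cu, cv in centers]
    return int(np.argmin(dists))

boxes = [(0, 0, 50, 50), (100, 100, 200, 200)]
print(match_radar_to_boxes((150, 150), boxes))  # 1
```

A matched pair can then drive the focus amplification: the camera zooms to the region the radar flagged, which is how the linkage extends recognition range at night.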
Findings
Simulated intrusion tests show that the proposed method can effectively detect human intrusion within 0–200 m, day and night, in sunny weather and achieves more than 80% recognition accuracy in extremely severe weather conditions.
Originality/value
(1) The authors propose a personnel intrusion monitoring scheme based on the fusion of millimeter wave radar and camera, achieving all-weather intrusion monitoring; (2) The authors propose a new multi-level fusion algorithm based on linkage and tracking to achieve intrusion target monitoring under adverse weather conditions; (3) The authors have conducted a large number of innovative simulation experiments to verify the effectiveness of the method proposed in this article.
Rafiu King Raji, Yini Wei, Guiqiang Diao and Zilun Tang
Abstract
Purpose
Devices for step estimation are body-worn devices used to compute the steps taken and/or distance covered by the user. Even though textiles and clothing are the first articles that come to mind as things meant to be worn, their prominence among devices and systems for cadence measurement is overshadowed by electronic products such as accelerometers, wristbands and smartphones. Athletes and sports enthusiasts using knee sleeves should be able to track their performance and monitor workout progress without carrying other devices with no direct sporting utility, such as wristbands and wearable accelerometers. The purpose of this study is thus to contribute to the broad area of wearable devices for cadence applications by developing a cheap but effective and efficient stride measurement system based on a knee sleeve.
Design/methodology/approach
A textile strain sensor is designed by weft knitting silver-plated nylon yarn together with nylon DTY and covered elastic yarn in a 1 × 1 rib structure. The area occupied by the silver-plated yarn within the structure serves as the strain sensor: when subjected to strain, the electrical resistance of the sensor increases, and it is restored when the strain is removed. The strip with the sensor is knitted separately and subsequently sewn to the knee sleeve. The knee sleeve is then connected to a custom-made signal acquisition and processing system. A volunteer was recruited for a wearer trial.
Findings
Experimental results establish that the number of strides taken by the wearer can easily be correlated with the wearer's knee flexion and extension cycles. The number of peaks computed by the signal acquisition and processing system is counted to represent strides per minute; the sensor is therefore able to effectively count the number of strides the user takes per minute. The coefficient of variation was 0.03% for over-ground tests and 0.14% for stair climbing, an indication of very high sensor repeatability.
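The peak-counting step described above reduces to finding local maxima in the resistance trace that rise clearly above baseline. A simplified stand-in for the acquisition system's logic (the prominence threshold and the synthetic trace are illustrative, not from the study):

```python
import numpy as np

def count_strides(resistance, min_prominence=0.5):
    """Count strides as peaks in the knitted sensor's resistance trace.

    A sample counts as a peak when it exceeds both neighbors and rises
    at least `min_prominence` above the signal mean, which filters out
    small wobble between flexion cycles.
    """
    r = np.asarray(resistance, dtype=float)
    base = r.mean() + min_prominence
    peaks = (r[1:-1] > r[:-2]) & (r[1:-1] > r[2:]) & (r[1:-1] > base)
    return int(peaks.sum())

# Synthetic trace: each knee flexion raises resistance, extension
# restores it; four gait cycles over the sampling window.
t = np.linspace(0, 2 * np.pi * 4, 400)
trace = 10 + 2 * np.sin(t)  # baseline 10 ohm, 2 ohm swing
print(count_strides(trace))  # 4
```

Dividing the peak count by the window length in minutes gives the stride-per-minute figure the system reports.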
Research limitations/implications
The study was conducted using a limited number of volunteers for the wearer trials.
Practical implications
By embedding textile piezoresistive sensors in specific garments and/or accessories, physical activity such as gait and its related data can be effectively measured.
Originality/value
To the best of our knowledge, this is the first application of piezoresistive sensing in the knee sleeve for stride estimation. Also, this study establishes that it is possible to attach (sew) already-knit textile strain sensors to apparel to effectuate smart functionality.
Daniel Nygaard Ege, Pasi Aalto and Martin Steinert
Abstract
Purpose
This study was conducted to address the methodical shortcomings and high associated cost of understanding the use of new, poorly understood architectural spaces, such as makerspaces. The proposed quantified method of enhancing current post-occupancy evaluation (POE) practices aims to provide architects, engineers and building professionals with accessible and intuitive data that can be used to conduct comparative studies of spatial changes, understand changes over time (such as those resulting from COVID-19) and verify design intentions after construction through a quantified post-occupancy evaluation.
Design/methodology/approach
In this study, we demonstrate the use of ultra-wideband (UWB) technology to gather, analyze and visualize quantified data showing interactions between people, spaces and objects. The experiment was conducted in a makerspace over a four-day hackathon event with a team of four actively tracked participants.
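Moving from raw UWB position samples to interaction data can be as simple as measuring how long each tracked tag stays near an object or zone of interest. A small proximity metric of the kind a quantified POE might report (the radius, sampling period and workbench location are illustrative, not from the study):

```python
import numpy as np

def dwell_time(tag_xyz, anchor_xyz, radius=1.5, dt=1.0):
    """Seconds a tracked tag spends within `radius` meters of a point.

    tag_xyz    : (T, 3) UWB position samples for one participant.
    anchor_xyz : (3,) location of an object or zone of interest.
    dt         : sampling period in seconds.
    """
    d = np.linalg.norm(np.asarray(tag_xyz) - np.asarray(anchor_xyz), axis=1)
    return float((d <= radius).sum() * dt)

# A participant lingers at a workbench at the origin for three samples,
# then walks away.
track = np.array([[0.2, 0.0, 0.0], [0.5, 0.1, 0.0], [1.0, 0.0, 0.0],
                  [3.0, 0.0, 0.0], [6.0, 0.0, 0.0]])
print(dwell_time(track, np.zeros(3)))  # 3.0
```

Aggregating such dwell times per person, object and day is one way the "interactions between people, spaces and objects" in the study could be quantified and later visualized in 3D.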
Findings
The study shows that by moving beyond simply counting people in a space, a more nuanced pattern of interactions can be discovered, documented and analyzed. The ability to automatically visualize findings intuitively in 3D aids architects and visual thinkers to easily grasp the essence of interactions with minimal effort.
Originality/value
By providing a method for better understanding the spatial and temporal interactions between people, objects and spaces, our approach provides valuable feedback in POE. Specifically, our approach aids practitioners in comparing spaces, verifying design intent and speeding up knowledge building when developing new architectural spaces, such as makerspaces.
Ruoxing Wang, Shoukun Wang, Junfeng Xue, Zhihua Chen and Jinge Si
Abstract
Purpose
This paper aims to investigate an autonomous obstacle-surmounting method based on a hybrid gait for the problem of crossing low-height obstacles autonomously by a six wheel-legged robot. The autonomy of obstacle-surmounting is reflected in obstacle recognition based on multi-frame point cloud fusion.
Design/methodology/approach
In this paper, first, to address the problem that the lidar on the robot cannot scan the point cloud of low-height obstacles, the lidar is driven to rotate by a 2D turntable to obtain the point cloud of low-height obstacles beneath the robot. The Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping algorithm, a fast ground segmentation algorithm and the Euclidean clustering algorithm are used to recognize the point cloud of low-height obstacles and obtain low-height obstacle information. Then, combined with the structural characteristics of the robot, obstacle-surmounting action planning is carried out for two types of obstacle scenes. A segmented approach is used for action planning: gait units are designed to describe each segment of the action, and a gait matrix describes the overall action. The paper also analyzes the stability and surmounting capability of the robot's key poses and determines the robot's surmounting capability and the value scheme of the surmounting control variables.
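The gait-matrix idea can be sketched concretely: rows index the wheel-legs, columns index the segments of the surmounting action, and each entry names the gait unit active for that leg in that segment. The encoding and values below are purely illustrative, not the paper's actual matrix:

```python
import numpy as np

# Gait matrix for a six wheel-legged robot crossing a low obstacle:
# entry 0 = support phase, 1 = lift, 2 = swing forward. The front,
# middle and rear leg pairs cross the obstacle one pair at a time.
gait = np.array([
    [1, 2, 0, 0, 0, 0],   # front-left: lift, swing, then support
    [1, 2, 0, 0, 0, 0],   # front-right
    [0, 0, 1, 2, 0, 0],   # middle-left
    [0, 0, 1, 2, 0, 0],   # middle-right
    [0, 0, 0, 0, 1, 2],   # rear-left
    [0, 0, 0, 0, 1, 2],   # rear-right
])

# Stability sanity check: at least four legs stay in support during
# every segment, so the support polygon never collapses.
support_per_segment = (gait == 0).sum(axis=0)
print(support_per_segment)  # [4 4 4 4 4 4]
```

Checks of this kind mirror the paper's stability analysis of key poses: any candidate gait matrix violating the support constraint would be rejected before execution.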
Findings
The experimental verification is carried out on the robot laboratory platform (BIT-6NAZA). The obstacle recognition method can accurately detect low-height obstacles. The robot can maintain a smooth posture to cross low-height obstacles, which verifies the feasibility of the adaptive obstacle-surmounting method.
Originality/value
The study can provide the theoretical and engineering foundation for the environmental perception of unmanned platforms. It provides environmental information to support follow-up work, for example, on motion planning and obstacle avoidance.