Search results
1 – 10 of 481
Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley
Abstract
Purpose
This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.
Design/methodology/approach
The research begins by providing context on the physical training environment in today's technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion of advancements in sensor fusion technology and its application in VR systems for physical training. A systematic literature review then follows, examining VR's application in various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.
Findings
Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.
Originality/value
The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.
Yinghan Wang, Diansheng Chen and Zhe Liu
Abstract
Purpose
Multi-sensor fusion in robotic dexterous hands is a hot research field. However, there is little research on multi-sensor fusion rules. This study aims to introduce a multi-sensor fusion algorithm using a motor force sensor, film pressure sensor, temperature sensor and angle sensor, which can form a consistent interpretation of grasp stability by sensor fusion without multi-dimensional force/torque sensors.
Design/methodology/approach
This algorithm is based on the three-finger force balance theorem, which provides a judgment method when the force direction is unknown. The Monte Carlo method then calculates the grasping ability and judges grasping stability within a given confidence interval using probability and statistics. Starting from three fingers, the approach is extended to four- and five-fingered dexterous hands. An experimental platform was built using dexterous hands, and grasping experiments were conducted to validate the proposed algorithm: three and five fingers grasp different objects, the introduced method judges grasping stability, and the accuracy of the judgment is computed against the actual grasping outcome.
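As a rough illustration of the Monte Carlo step described above, the following sketch estimates the probability that a three-finger grasp stays balanced under force-sensing noise. The force model, noise level and tolerance are illustrative assumptions, not the paper's actual algorithm:

```python
import random
import math

def grasp_stable_probability(forces, noise_std=0.1, n_samples=10000, tol=0.5):
    """Estimate the probability that a three-finger grasp is balanced.

    forces: list of (fx, fy) contact-force vectors from the finger sensors.
    Each Monte Carlo trial perturbs the measured forces with Gaussian noise
    and checks whether the net force magnitude stays within tolerance `tol`.
    """
    stable = 0
    for _ in range(n_samples):
        net_x = sum(fx + random.gauss(0, noise_std) for fx, _ in forces)
        net_y = sum(fy + random.gauss(0, noise_std) for _, fy in forces)
        if math.hypot(net_x, net_y) < tol:
            stable += 1
    return stable / n_samples

# Three roughly balanced contact forces: a high stability probability is expected.
p = grasp_stable_probability([(1.0, 0.0), (-0.5, 0.87), (-0.5, -0.87)])
```

The estimated probability can then be compared against a confidence threshold to declare the grasp stable or unstable.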
Findings
The multi-sensor fusion algorithm is universal and can perform multi-sensor fusion for multi-fingered rigid, flexible and rigid-soft coupled dexterous hands. The three-finger balance theorem and Monte Carlo method can effectively replace discrimination methods that rely on multi-dimensional force/torque sensors.
Originality/value
A new multi-sensor fusion algorithm is proposed and verified. According to the experiments, the accuracy of grasping judgment is more than 85%, which proves that the method is feasible.
Michal Grzes, Maciej Slowik and Zdzisław Gosiewski
Abstract
Purpose
With the rapid development of unmanned vehicles, new opportunities for their use are emerging; among the most dynamic are package shipment, rescue and military applications, autonomous flights and unattended transportation. However, most unmanned aerial vehicle (UAV) solutions have limitations related to their power supplies and field of operation. Some of these restrictions can be overcome through cooperation between UAVs and unmanned ground vehicles (UGVs). The purpose of this paper is to explore sensor fusion for autonomous landing of a UAV on a UGV by comparing the performance of precision landing algorithms that use different sensor fusions to obtain precise and reliable position and velocity information.
Design/methodology/approach
The difficulties in this scenario include, among others, different coordinate systems and the necessity for sensor data from both air and ground. The most obvious solution seems to be the use of widely available Global Navigation Satellite System (GNSS) receivers. Unfortunately, position measurements obtained from cheap receivers carry errors too large for precision landing. Other approaches are based on sensor fusion of an inertial navigation system and image processing; however, most of these systems are very vulnerable to lighting conditions.
Findings
In this paper, methods based on an exchange of telemetry data and on sensor fusion of GNSS, infrared marker detection and other sources are used, and the different methods are compared.
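A common way to combine a coarse GNSS fix with a more precise marker-based estimate of the same quantity is inverse-variance weighting; the following sketch (the variances and values are illustrative, not the paper's data) shows the idea:

```python
def fuse_measurements(measurements):
    """Inverse-variance weighted fusion of independent position estimates.

    measurements: list of (value, variance) pairs, e.g. a GNSS fix and an
    infrared-marker-based estimate of the same coordinate. The fused
    variance is never larger than the best single sensor's variance.
    """
    inv_vars = [1.0 / var for _, var in measurements]
    fused_var = 1.0 / sum(inv_vars)
    fused_val = fused_var * sum(v / var for v, var in measurements)
    return fused_val, fused_var

# Coarse GNSS (variance 4 m^2) combined with a precise marker estimate (0.01 m^2):
pos, var = fuse_measurements([(10.0, 4.0), (10.4, 0.01)])
```

The fused estimate is pulled strongly toward the low-variance marker measurement, which matches the intuition that vision dominates the final approach while GNSS bridges the coarse alignment phase.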
Originality/value
The subject of sensor fusion and high-precision measurement in autonomous vehicle cooperation is important because of the increasing popularity of these vehicles. The proposed solution performs autonomous landing of a UAV on a UGV efficiently.
Sharnil Pandya, Anirban Sur and Ketan Kotecha
Abstract
Purpose
The purpose of the presented IoT-based sensor-fusion assistive technology for COVID-19 disinfection, termed the “Smart epidemic tunnel”, is to protect an individual using an automatic sanitizer spray system equipped with a sanitizer sensing unit based on human motion detection.
Design/methodology/approach
The presented research work discusses a smart epidemic tunnel that can assist an individual with immediate disinfection against COVID-19 infection. The authors present a sensor-fusion-based automatic sanitizer tunnel that detects a human using an ultrasonic sensor at a height of 1.5 feet and disinfects him/her with a sanitizer spray. The presented smart tunnel operates from a solar cell during the daytime and switches to a solar power-bank mode at night using a light-dependent resistor (LDR) sensing unit.
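The day/night power switching described above amounts to a threshold test on the LDR reading; a minimal sketch follows, assuming a 10-bit ADC and an illustrative threshold (neither is specified in the abstract):

```python
def select_power_source(ldr_reading, threshold=400):
    """Pick the tunnel's power source from a light-dependent resistor reading.

    Assumes a 10-bit ADC (0-1023) where bright daylight gives high readings.
    Values above `threshold` select the solar cell; darker readings switch
    to the solar power bank. The scale and threshold are illustrative.
    """
    return "solar_cell" if ldr_reading > threshold else "power_bank"

day = select_power_source(800)    # bright daylight
night = select_power_source(120)  # after dark
```

In a deployed unit the threshold would be calibrated on site, possibly with hysteresis to avoid rapid switching at dusk.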
Findings
The investigation results validate the performance of the presented smart epidemic tunnel mechanism. The presented smart tunnel can disinfect an outsider entering a particular building or premises, reducing COVID-19 infection possibilities. Furthermore, it has been observed that the presented sensor-fusion-based mechanism can disinfect a person in a span of just 10 s. The presented smart epidemic tunnel is embedded with an intelligent sanitizer sensing unit that stores essential information in a cloud platform such as Google Firebase. Thus, the proposed system benefits society by saving time and helps lower the spread of coronavirus. It also provides daily, weekly and monthly reports of individual counts, along with in-out timestamps and power-usage reports.
Practical implications
The presented system was designed and developed after the lock-down period to disinfect individuals against possible COVID-19 infection.
Social implications
The presented smart epidemic tunnel reduces the possibility of an outside individual or COVID-19 suspect spreading COVID-19 infection within a particular building or premises by disinfecting them on entry.
Originality/value
The presented system is original work by the authors; it has been installed at the Symbiosis Institute of Technology premises and has undergone rigorous experimentation and testing by the authors and end-users.
Wen Qi, Xiaorui Liu, Longbin Zhang, Lunan Wu, Wenchuan Zang and Hang Su
Abstract
Purpose
The purpose of this paper is to center on touchless interaction between humans and robots in the real world. Accurate hand pose identification and stable operation in a non-stationary environment are the main challenges, especially under multiple-sensor conditions. To guarantee the human-machine interaction system's performance with a high recognition rate and low computational time, an adaptive sensor fusion labeling framework should be considered for surgical robot teleoperation.
Design/methodology/approach
In this paper, a hand pose estimation model is proposed, consisting of automatic labeling and classification based on a deep convolutional neural network (DCNN) structure. Subsequently, an adaptive sensor fusion methodology is proposed for hand pose estimation with two Leap Motion sensors. The sensor fusion system processes depth data and electromyography signals captured from the Leap Motion sensors and a Myo armband, respectively. The developed adaptive methodology can perform stable and continuous hand position estimation even when a single sensor is unable to detect the hand.
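The fallback behaviour described above, where tracking continues when one sensor loses the hand, can be sketched as a confidence-weighted average with dropout handling. The data layout and confidences are illustrative assumptions, not the authors' implementation:

```python
def fuse_hand_position(sensor_a, sensor_b):
    """Fuse hand positions from two trackers, falling back when one drops out.

    Each argument is a (position, confidence) pair, with position a 3-tuple
    or None when that sensor loses the hand. Confidences weight the average;
    a lone valid sensor is used directly so tracking stays continuous.
    """
    valid = [(p, c) for p, c in (sensor_a, sensor_b) if p is not None and c > 0]
    if not valid:
        return None                       # both sensors lost the hand
    total = sum(c for _, c in valid)
    return tuple(sum(p[i] * c for p, c in valid) / total for i in range(3))

# Both sensors tracking: confidence-weighted average of the two positions.
both = fuse_hand_position(((0.0, 0.0, 0.0), 0.9), ((1.0, 1.0, 1.0), 0.1))
# One sensor occluded: the remaining sensor's estimate is used as-is.
solo = fuse_hand_position((None, 0.0), ((0.2, 0.3, 0.4), 0.8))
```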
Findings
The proposed adaptive sensor fusion method is verified through various experiments in six degrees of freedom in space. The results show that the clustering model achieves the highest clustering accuracy (96.31%) among the compared methods, so its clusters can be regarded as real gestures. Moreover, the DCNN classifier achieves the best performance among the compared methods (88.47% accuracy with the lowest computational time).
Originality/value
This study can provide theoretical and engineering guidance for hand pose recognition in surgery robot teleoperation and design a new deep learning model for accuracy enhancement.
Noor Cholis Basjaruddin, Faris Rifqi Fakhrudin, Yana Sudarsa and Fatimah Noor
Abstract
Purpose
In the context of overcoming malnutrition in elementary school children and increasing public awareness of this issue, the Indonesian Government has created the “Card for Healthy School Children” (KMS-AS) program in the form of a paper health card. Currently, however, KMS-AS records are still written on paper, which is less effective for the health-monitoring process. An integrated measuring device and an online data-recording system are needed to promote children's health and to facilitate access to and transfer of data from one place to another. This study aims to develop an NFC- and IoT-based KMS-AS using a sensor fusion method.
Design/methodology/approach
An integrated measuring device for weight, height, body temperature and SpO2 level was connected with mobile and Web applications using IoT technology, facilitating data recording and monitoring of children's nutritional status. The sensor fusion method was used to classify nutritional and health status based on the measurement results. Near-field communication (NFC) technology was used to facilitate user identification during measurements.
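A rule-based fusion of the four measurements into a status classification might look like the sketch below. The thresholds are illustrative, not the KMS-AS program's official cut-offs: the BMI bands loosely approximate child categories, 37.5 °C flags fever and SpO2 below 95% flags low oxygen saturation:

```python
def classify_status(weight_kg, height_m, temp_c, spo2_pct):
    """Rule-based classification combining the four fused measurements.

    Returns a (nutritional status, health status) pair. All thresholds
    are illustrative placeholders, not clinically validated cut-offs.
    """
    bmi = weight_kg / (height_m ** 2)
    if bmi < 14:
        nutrition = "undernourished"
    elif bmi < 18:
        nutrition = "normal"
    else:
        nutrition = "overweight"
    health = "healthy" if temp_c < 37.5 and spo2_pct >= 95 else "needs attention"
    return nutrition, health

result = classify_status(weight_kg=22.0, height_m=1.20, temp_c=36.8, spo2_pct=98)
```

In the described system the classification result, together with the raw measurements and the NFC-identified child, would be uploaded for monitoring via the Web and mobile applications.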
Findings
The results show that KMS-AS can facilitate classification of children's nutritional and health status. Measurement and classification data can be monitored via Web and mobile applications. The accuracy of height, weight, body temperature and SpO2 measurements was 98.21%, 98.59%, 98.93% and 98.93%, respectively.
Originality/value
In this research, the authors successfully produced a system using a sensor fusion method for measuring body weight, height, temperature and SpO2 level, which is integrated and can be connected to mobile applications and the Web using IoT and NFC.
Noor Cholis Basjaruddin, Edi Rakhman and Fazrin Adinugraha
Abstract
Purpose
Safety is one of the most crucial factors in transportation: many factors, including human error, can result in accidents ranging from minor to fatal. This is why lane keeping assist (LKA) is an important system to develop. LKA prevents the vehicle from drifting into another lane when the driver neglects to maintain the vehicle's direction.
Design/methodology/approach
In this study, the LKA works using camera and ultrasonic sensors. The data coming from these two types of sensors are fused to determine how the car should react in certain situations so that it stays in its lane. The sensor fusion method was used to further ensure the car's safety.
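A minimal sketch of such a fused decision is below: the camera supplies a lane-offset estimate, the ultrasonic sensor a side clearance, and the correction is suppressed when the side gap is too small. The gains, thresholds and suppression rule are illustrative assumptions, not the study's controller:

```python
def lka_steering(lane_offset_m, side_distance_m, min_side_gap=0.5, gain=0.8):
    """Steering correction from fused camera and ultrasonic data.

    lane_offset_m: lateral offset from lane centre estimated by the camera
    (positive = drifted right). side_distance_m: ultrasonic range to the
    nearest side obstacle. If the side gap is below `min_side_gap`, the
    correction is suppressed so the car does not swerve into the obstacle.
    """
    correction = -gain * lane_offset_m        # steer back toward the centre
    if side_distance_m < min_side_gap:
        return 0.0                            # side obstacle too close: hold course
    return correction

steer_clear = lka_steering(lane_offset_m=0.3, side_distance_m=2.0)   # side free
steer_blocked = lka_steering(lane_offset_m=0.3, side_distance_m=0.3) # side blocked
```

A real LKA controller would also modulate speed from the ultrasonic range, as the study describes, rather than only gating the steering output.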
Findings
The results of the hardware simulation show that the LKA successfully keeps the car from leaving its lane, with a safety index of 95% in high-speed mode and 100% in medium- and low-speed modes; meanwhile, using the ultrasonic sensor, the car's speed is adjusted according to the distance to obstacles at the car's side.
Originality/value
This research has succeeded in simulating a sensor fusion–based LKA system.
Ravinder Singh and Kuldeep Singh Nagla
Abstract
Purpose
Efficient perception of a complex environment is the foremost requirement in mobile robotics. At present, glass walls and automated transparent doors have become a highlight of interior decoration in modern buildings, which leads to incorrect perception of the environment by various range sensors. The perception generated by multi-data sensor fusion (MDSF) of sonar and laser is fairly consistent at detecting glass but is still affected by issues such as sensor inaccuracy, sensor reliability, scan mismatching due to glass, the sensor model, probabilistic approaches for sensor fusion and sensor registration. The paper aims to discuss these issues.
Design/methodology/approach
This paper presents a modified framework – the Advanced Laser and Sonar Framework (ALSF) – that fuses the sensory information of a laser scanner and sonar to reduce the uncertainty caused by glass in an environment by selecting the optimal range information corresponding to a selected threshold value. In the proposed approach, the conventional sonar sensor model is also modified to reduce wrong perceptions in sonar caused by diverse range measurements. The laser scan matching algorithm is likewise modified by removing small clusters of laser points (with respect to range information) to obtain efficient perception.
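The threshold-based selection of the optimal range reading can be sketched per bearing as follows; the threshold value and the decision rule are illustrative assumptions rather than the ALSF's exact formulation:

```python
def fuse_range(laser_range, sonar_range, threshold=0.5):
    """Select which range reading to trust for one bearing.

    Glass is often invisible to a laser scanner (it reports whatever lies
    behind the pane) but reflects sonar pulses. When the laser reads much
    farther than the sonar (difference above `threshold`, in metres), the
    sonar reading is taken as the true obstacle range; otherwise the more
    precise laser reading is kept. The threshold is illustrative.
    """
    if laser_range - sonar_range > threshold:
        return sonar_range     # likely glass: trust the sonar
    return laser_range         # consistent readings: trust the laser

glass_case = fuse_range(laser_range=4.8, sonar_range=1.2)   # glass pane ahead
wall_case = fuse_range(laser_range=2.0, sonar_range=1.9)    # ordinary wall
```

Running such a rule over every bearing yields the fused range scan that feeds the occupancy grid mapping described in the findings.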
Findings
The probability of the occupied cells under the modified sonar sensor model becomes consistent across diverse sonar range measurements. The scan matching technique is modified to reduce the uncertainty caused by glass and the high computational load, enabling efficient and fast pose estimation of the laser sensor/mobile robot for robust mapping. These modifications are combined in the proposed ALSF technique to reduce glass-induced uncertainty, inconsistent probabilities and heavy computation during the generation of occupancy grid maps with MDSF. Various real-world experiments were performed with the proposed approach on a mobile robot fitted with laser and sonar, and the results are qualitatively and quantitatively compared with conventional approaches.
Originality/value
The proposed ALSF approach generates efficient perception of complex environments containing glass and can be implemented in various robotics applications.
Umair Ali, Wasif Muhammad, Muhammad Jehanzed Irshad and Sajjad Manzoor
Abstract
Purpose
Self-localization of an underwater robot using a global positioning sensor or other radio positioning systems is not possible; onboard sensor-based self-location estimation provides a possible alternative. However, the dynamic and unstructured nature of the sea environment and highly noise-affected sensory information make underwater robot self-localization a challenging research topic. State-of-the-art multi-sensor fusion algorithms are deficient in dealing with multi-sensor data: the Kalman filter cannot deal with non-Gaussian noise, while non-parametric filters such as Monte Carlo localization have a high computational cost. An optimal fusion policy with low computational cost is therefore an important research question for underwater robot localization.
Design/methodology/approach
In this paper, the authors propose a novel predictive coding/biased competition-divisive input modulation (PC/BC-DIM) neural network-based multi-sensor fusion approach, which can fuse and approximate noisy sensory information in an optimal way.
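The PC/BC-DIM inference loop (after Spratling's published formulation) alternates a divisive prediction-error step with a multiplicative unit update. The sketch below is a rough illustration under assumed weights, constants and normalization, not the authors' implementation:

```python
import numpy as np

def pcbc_dim_fuse(x, W, n_iter=50, eps1=1e-6, eps2=1e-4):
    """Iterative PC/BC-DIM inference sketch.

    x: concatenated (noisy) sensor observations. W: weight matrix whose
    rows are the input patterns predicted by each hidden unit. The loop
    alternates divisive error computation and multiplicative updates; the
    most active hidden unit indexes the best-matching fused interpretation.
    """
    y = np.full(W.shape[0], 0.01)
    V = W / (W.sum(axis=1, keepdims=True) + eps2)   # row-normalised feedback
    for _ in range(n_iter):
        r = W.T @ y                                 # top-down reconstruction
        e = x / (eps2 + r)                          # divisive prediction error
        y = (eps1 + y) * (V @ e)                    # multiplicative update
    return y

# Two hidden units predicting two distinct sensor patterns; the input
# matches the first pattern, so unit 0 should dominate after inference.
W = np.array([[1.0, 0.0, 1.0, 0.0],
              [0.0, 1.0, 0.0, 1.0]])
y = pcbc_dim_fuse(np.array([0.9, 0.05, 1.1, 0.0]), W)
```

The divisive error term is what lets the network down-weight inputs that are already well explained, which is the property exploited here for fusing noisy underwater sensor data.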
Findings
Results showing low mean localization error (1.2704 m) and computation cost (2.2 ms) demonstrate that the proposed method performs better than existing techniques in such dynamic and unstructured environments.
Originality/value
To the best of the authors' knowledge, this work provides a novel multi-sensor fusion approach that overcomes the problem of non-Gaussian noise while achieving higher self-localization accuracy at reduced computational cost.
Emine Ayaz, Ahmet Öztürk, Serhat Şeker and Belle R. Upadhyaya
Abstract
Purpose
The purpose of this paper is to extract features from vibration signals measured from induction motors subjected to accelerated aging of bearings by fluting tests.
Design/methodology/approach
Aging tests were performed according to IEEE test procedures. Data acquisition involved measuring vibration signals using accelerometers installed on the bearings and on the motor casing. In this application, only the two accelerometers placed near the process end of the motor bearing are used for data analysis and feature extraction. After data collection, information from the two sensors was combined using a simple sensor fusion method under linearity conditions, and spectral and time-scale analyses were performed. The fused vibration signal is decomposed into several scales using the continuous wavelet transform (CWT), and its first scale is used to indicate bearing degradation.
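The first-scale CWT step can be sketched as convolving the fused signal with a narrow zero-mean wavelet, which emphasises the high-frequency content where early degradation symptoms appear. The wavelet choice (Ricker), scale and synthetic signal below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def ricker(points, a):
    """Ricker (Mexican-hat) wavelet with width parameter `a`."""
    t = np.arange(points) - (points - 1) / 2.0
    amp = 2.0 / (np.sqrt(3.0 * a) * np.pi ** 0.25)
    return amp * (1 - (t / a) ** 2) * np.exp(-(t ** 2) / (2 * a ** 2))

def cwt_first_scale(signal, a=1.0, width=10):
    """First-scale CWT coefficients of the fused vibration signal:
    convolution with a narrow wavelet acts as a tuned high-frequency
    detector, so bursts of ringing stand out against slow components."""
    w = ricker(int(width * a), a)
    return np.convolve(signal, w, mode="same")

# Synthetic fused signal: slow drift plus a burst of high-frequency ringing.
t = np.linspace(0, 1, 1000)
sig = np.sin(2 * np.pi * 2 * t)
sig[500:520] += 0.5 * np.sin(2 * np.pi * 300 * t[500:520])
coef = cwt_first_scale(sig, a=1.0, width=10)
```

The coefficients are large only around the burst, mirroring how the first CWT scale localises incipient bearing faults in the fused measurement.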
Findings
Bearing damage characteristics were found in the 2–4 kHz band, and some specific frequencies were identified as harmonics of the bearing characteristic frequencies.
Research limitations/implications
The bearing damage characteristics used in this study arise from the experimental setup. In terms of methodology, the use of the CWT reveals the fault characteristics from the initial, healthy case.
Practical implications
The experimental study and data acquisition are based on accelerated aging of the motor bearings; the real aging process is thus represented by the accelerated one. However, this reflects the same properties as the aging that occurs in industrial environments. The methodology is also applicable in hardware.
Originality/value
There are two important aspects of this research: the experimental study, and the application of the CWT to detect, from the healthy state of the motor bearings, potential defects that will appear as failures in the future.