Search results

1 – 10 of over 2000
Article
Publication date: 6 March 2024

Xiaohui Li, Dongfang Fan, Yi Deng, Yu Lei and Owen Omalley

Abstract

Purpose

This study aims to offer a comprehensive exploration of the potential and challenges associated with sensor fusion-based virtual reality (VR) applications in the context of enhanced physical training. The main objective is to identify key advancements in sensor fusion technology, evaluate its application in VR systems and understand its impact on physical training.

Design/methodology/approach

The research begins by providing context on the physical training environment in today’s technology-driven world, followed by an in-depth overview of VR. This overview includes a concise discussion of advancements in sensor fusion technology and their application in VR systems for physical training. A systematic literature review then follows, examining VR’s application across various facets of physical training: from exercise, skill development and technique enhancement to injury prevention, rehabilitation and psychological preparation.

Findings

Sensor fusion-based VR presents tangible advantages in the sphere of physical training, offering immersive experiences that could redefine traditional training methodologies. While the advantages are evident in domains such as exercise optimization, skill acquisition and mental preparation, challenges persist. The current research suggests there is a need for further studies to address these limitations to fully harness VR’s potential in physical training.

Originality/value

The integration of sensor fusion technology with VR in the domain of physical training remains a rapidly evolving field. Highlighting the advancements and challenges, this review makes a significant contribution by addressing gaps in knowledge and offering directions for future research.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 28 June 2013

Rong Wang, Jianye Liu, Zhi Xiong and Qinghua Zeng

Abstract

Purpose

The Embedded GPS/INS System (EGI) is increasingly used as the central navigation equipment of aircraft. For cases requiring high attitude accuracy, a star sensor can be integrated with the EGI to improve attitude performance. Since the filtering-correction loop is already built into a finished EGI product, centralized or federated Kalman filters are not applicable for integrating the EGI with a star sensor, and designing a multi-sensor information fusion algorithm suitable for this situation is a challenge. The purpose of this paper is to present a double-layer fusion scheme and algorithms that meet the practical need of constructing an integrated multi-sensor navigation system in which a star sensor assists a finished EGI unit.

Design/methodology/approach

Alternate fusion algorithms for asynchronous measurements and sequential fusion algorithms for synchronous measurements are presented. By combining the alternate and sequential filtering algorithms, a double-layer fusion algorithm for multiple sensors is proposed and validated by a semi-physical test.
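
As a rough sketch of the sequential-fusion idea for synchronous measurements, the snippet below applies the standard Kalman correction once per sensor; with independent measurement noises this is algebraically equivalent to one batch update with the stacked measurement vector. All names are illustrative and the linear models are assumed, not taken from the paper.

```python
import numpy as np

def sequential_update(x, P, measurements):
    """Fuse synchronous measurements one sensor at a time.

    x, P         : state estimate and covariance after the predict step
    measurements : list of (z, H, R) tuples, one per sensor
    """
    for z, H, R in measurements:
        S = H @ P @ H.T + R                # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain for this sensor
        x = x + K @ (z - H @ x)            # correct with this measurement
        P = (np.eye(len(x)) - K @ H) @ P   # shrink the covariance
    return x, P

# toy example: two position sensors observing the first of two states
H = np.array([[1.0, 0.0]])
x, P = sequential_update(np.zeros(2), np.eye(2),
                         [(np.array([0.9]), H, np.array([[0.25]])),
                          (np.array([1.1]), H, np.array([[0.25]]))])
```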

Findings

The double-layer fusion algorithms provide a filtering strategy that lets multiple non-identical parallel sensors assist the INS while the independent estimation-correction loop in the EGI is maintained. This offers significant benefits when upgrading an original navigation system with new sensors.

Practical implications

The approach described in this paper can be used to design similar multi-sensor information fusion navigation systems composed of an EGI and various kinds of sensors, so as to improve navigation performance.

Originality/value

Compared with the conventional approach, in situations where centralized and federated Kalman filters are not applicable, the double-layer fusion scheme and algorithms provide an external filtering strategy for the measurements of a finished EGI unit and star sensors.

Details

Aircraft Engineering and Aerospace Technology, vol. 85 no. 4
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 4 June 2024

Dan Zhang, Junji Yuan, Haibin Meng, Wei Wang, Rui He and Sen Li

Abstract

Purpose

In the context of fire incidents within buildings, efficient scene perception by firefighting robots is particularly crucial. Although individual sensors can provide specific types of data, achieving deep data correlation among multiple sensors poses challenges. To address this issue, this study aims to explore a fusion approach integrating thermal imaging cameras and LiDAR sensors to enhance the perception capabilities of firefighting robots in fire environments.

Design/methodology/approach

Prior to sensor fusion, accurate calibration of the sensors is essential. This paper proposes an extrinsic calibration method based on rigid body transformation. The collected data are optimized using the Ceres optimization library to obtain precise calibration parameters. Building upon this calibration, a sensor fusion method based on coordinate projection transformation is proposed, enabling real-time mapping between images and point clouds. In addition, the effectiveness of data collection with the proposed fusion device is validated in experimental smoke-filled fire environments.
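
A minimal sketch of the coordinate projection transformation described above, assuming a pinhole model for the thermal camera with intrinsic matrix K and rigid-body extrinsics (R, t) from the LiDAR frame to the camera frame; the function names and the simple error metric are illustrative, not the authors’ code.

```python
import numpy as np

def project_lidar_to_image(points_lidar, R, t, K):
    """Map LiDAR points to thermal-image pixels via the extrinsics.

    points_lidar : (N, 3) points in the LiDAR frame
    R, t         : rigid-body transform, LiDAR frame -> camera frame
    K            : 3x3 pinhole intrinsic matrix of the thermal camera
    """
    pts_cam = points_lidar @ R.T + t     # rigid-body transformation
    in_front = pts_cam[:, 2] > 0         # keep points ahead of the camera
    uv_h = pts_cam[in_front] @ K.T       # perspective projection
    return uv_h[:, :2] / uv_h[:, 2:3], in_front

def mean_reprojection_error(uv_projected, uv_observed):
    """Average pixel distance between projected and observed calibration
    targets; the paper reports 1.02 pixels for this kind of metric."""
    return np.mean(np.linalg.norm(uv_projected - uv_observed, axis=1))
```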

Findings

The average reprojection error obtained by the extrinsic calibration method based on rigid body transformation is 1.02 pixels, indicating good accuracy. The fused data combines the advantages of thermal imaging cameras and LiDAR, overcoming the limitations of individual sensors.

Originality/value

This paper introduces an extrinsic calibration method based on rigid body transformation, along with a sensor fusion approach based on coordinate projection transformation. The effectiveness of this fusion strategy is validated in simulated fire environments.

Details

Sensor Review, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 8 September 2022

Yinghan Wang, Diansheng Chen and Zhe Liu

Abstract

Purpose

Multi-sensor fusion in robotic dexterous hands is a hot research field, but there is little research on multi-sensor fusion rules. This study aims to introduce a multi-sensor fusion algorithm using a motor force sensor, a film pressure sensor, a temperature sensor and an angle sensor, which forms a consistent interpretation of grasp stability without requiring multi-dimensional force/torque sensors.

Design/methodology/approach

The algorithm is based on the three-finger force balance theorem, which provides a method for judging an unknown force direction. The Monte Carlo method then calculates the grasping ability and judges grasping stability within a given confidence interval using probability and statistics. The three-finger case is further extended to four- and five-fingered dexterous hands. An experimental platform was built with dexterous hands, and grasping experiments were conducted to confirm the proposed algorithm: three and five fingers grasp different objects, the introduced method judges grasping stability, and the accuracy of the judgment is computed against the actual grasping outcome.
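
The abstract does not spell out the Monte Carlo procedure, but a generic version of the idea is sketched below: sample sensor-noise perturbations of the measured contact forces and estimate the probability that force balance holds, judging the grasp stable above a confidence threshold. Everything here, including the balance criterion, is an illustrative assumption rather than the paper’s three-finger theorem.

```python
import numpy as np

def grasp_stable_probability(forces, noise_std, tol, n_samples=10_000, seed=0):
    """Estimate P(grasp balanced) under sensor noise by Monte Carlo.

    forces    : (n_fingers, 3) nominal contact forces from the sensors
    noise_std : per-axis standard deviation of the sensor noise
    tol       : net-force magnitude still considered balanced
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, noise_std, size=(n_samples,) + forces.shape)
    net = (forces + noise).sum(axis=1)           # net force per sample
    return float((np.linalg.norm(net, axis=1) < tol).mean())

# judge stability at a 95% confidence level
forces = np.array([[1.0, 0.0, 0.3], [-0.5, 0.9, 0.3], [-0.5, -0.9, 0.3]])
is_stable = grasp_stable_probability(forces, noise_std=0.05, tol=0.2) > 0.95
```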

Findings

The multi-sensor fusion algorithm is universal and applies to multi-finger rigid, flexible and rigid-soft coupled dexterous hands. The three-finger balance theorem and the Monte Carlo method can effectively replace discrimination methods that rely on multi-dimensional force/torque sensors.

Originality/value

A new multi-sensor fusion algorithm is proposed and verified. According to the experiments, the accuracy of grasping judgment is more than 85%, which proves that the method is feasible.

Details

Industrial Robot: the international journal of robotics research and application, vol. 50 no. 2
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 12 August 2022

Bolin Gao, Kaiyuan Zheng, Fan Zhang, Ruiqi Su, Junying Zhang and Yimin Wu

Abstract

Purpose

Intelligent and connected vehicle technology is in the ascendant, and high-level autonomous driving places stringent requirements on the accuracy and reliability of environmental perception. Existing research on multitarget tracking based on multisensor fusion mostly adopts the vehicle perspective but, limited by the inherent defects of the vehicle sensor platform, finds it difficult to describe the surrounding environment comprehensively and accurately.

Design/methodology/approach

In this paper, a multitarget tracking method based on roadside multisensor fusion is proposed, including a multisensor fusion method based on measurement-noise-adaptive Kalman filtering, a global nearest neighbor data association method based on an adaptive tracking gate and a track life-cycle management method based on M/N logic rules.
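
As one concrete reading of the adaptive tracking gate, the sketch below gates candidate detections by Mahalanobis distance against each track’s innovation covariance, so tracks with larger uncertainty accept detections farther away, and then solves the global nearest neighbor assignment. The threshold and names are illustrative; the paper’s exact adaptation rule is not given in the abstract.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gnn_associate(track_preds, track_covs, detections, gate=9.21):
    """Global nearest neighbor association with an adaptive gate.

    track_preds : predicted 2-D measurements, one per track
    track_covs  : innovation covariances S, one per track
    gate        : chi-square 99% threshold for 2-D measurements
    """
    cost = np.full((len(track_preds), len(detections)), 1e6)
    for i, (z_pred, S) in enumerate(zip(track_preds, track_covs)):
        S_inv = np.linalg.inv(S)
        for j, z in enumerate(detections):
            v = z - z_pred
            d2 = v @ S_inv @ v               # squared Mahalanobis distance
            if d2 < gate:                    # inside this track's gate
                cost[i, j] = d2
    rows, cols = linear_sum_assignment(cost)  # globally optimal pairing
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```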

Findings

Compared with fixed-size tracking gates, the adaptive tracking gates proposed in this paper can comprehensively improve the data association performance in the multitarget tracking process. Compared with single sensor measurement, the proposed method improves the position estimation accuracy by 13.5% and the velocity estimation accuracy by 22.2%. Compared with the control method, the proposed method improves the position estimation accuracy by 23.8% and the velocity estimation accuracy by 8.9%.

Originality/value

A multisensor fusion method with adaptive Kalman filtering of measurement noise is proposed to realize the adaptive adjustment of measurement noise. A global nearest neighbor data association method based on adaptive tracking gate is proposed to realize the adaptive adjustment of the tracking gate.

Details

Smart and Resilient Transportation, vol. 4 no. 2
Type: Research Article
ISSN: 2632-0487

Article
Publication date: 4 June 2021

Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu

Abstract

Purpose

For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy is closely related to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve the accuracy of environmental perception, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which can reduce the false-alarm rate resulting from sensor misdetection.

Design/methodology/approach

Firstly, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve obstacle detection performance in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.
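
The abstract’s key effect, suppressing radar false alarms and lidar dust returns, can be illustrated by a simple cross-corroboration rule: keep an obstacle only when both sensors agree on it. This is a hypothetical simplification for intuition, not the paper’s depth fusion model.

```python
import numpy as np

def cross_validate_obstacles(lidar_clusters, radar_targets, max_dist=1.5):
    """Keep lidar obstacle clusters corroborated by a nearby radar target.

    lidar_clusters : (N, 2) lidar cluster centroids in the x-y plane
    radar_targets  : (M, 2) radar target positions
    A lidar cluster with no radar support is likely dust or mist; a
    radar target with no lidar support is likely a false alarm.
    """
    confirmed = []
    for c in lidar_clusters:
        d = np.linalg.norm(radar_targets - c, axis=1)
        if d.size and d.min() < max_dist:    # radar corroborates lidar
            confirmed.append(c)
    return np.array(confirmed)
```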

Findings

The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with the use of a single sensor, can filter out radar false alarms and the dust or mist point clouds received by lidar, so the accuracy of object detection is improved under challenging conditions.

Originality/value

The multi-layer self-calibration method helps improve calibration accuracy and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by lidar, improving ICVs’ performance in challenging conditions.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 25 November 2013

Edwin El-Mahassni

Abstract

Purpose

The purpose of this paper is to extend previous work on fusing sensors with a Bayesian method so as to incorporate the sensors’ reliability with regard to their operating environment. The results are then used with the expected-decision formula, conditional entropy and mutual information to suboptimally select which types of sensors should be fused under operational constraints.

Design/methodology/approach

The approach is an extension of previous work incorporating an environment parameter. The expected decision formula then forms the basis for sensor selection.
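
A toy version of the environment-conditioned Bayesian fusion might look like the following: each sensor’s probability of reporting correctly is looked up for the terrain it operates in, and the binary reports are fused into a posterior. The formulation is illustrative, assumes conditionally independent sensors, and is not the author’s exact formula.

```python
def fuse_reports(prior, reports, p_correct):
    """Bayesian fusion of binary sensor reports with terrain-dependent
    reliability (illustrative; assumes conditionally independent sensors).

    prior     : P(target present)
    reports   : 0/1 declarations, one per sensor
    p_correct : P(report is correct | terrain), one per sensor
    """
    p1, p0 = prior, 1.0 - prior
    for r, pc in zip(reports, p_correct):
        p1 *= pc if r == 1 else (1.0 - pc)     # likelihood if present
        p0 *= (1.0 - pc) if r == 1 else pc     # likelihood if absent
    return p1 / (p1 + p0)                      # posterior P(present)

# the second sensor is less reliable in difficult terrain
posterior = fuse_reports(0.5, [1, 1], p_correct=[0.9, 0.6])
```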

Findings

The author found that sensor performance is correlated with the operating environment, since the likelihood of error is higher in difficult terrain than it would otherwise be. The author also shows that the choice of sensors for fusion will vary if it is known specifically which terrain the sensors will operate in.

Research limitations/implications

The author notes that for this technique to be effective, a proper understanding of the sensors’ limitations, the possible terrain types and the targets has to be assumed.

Practical implications

The practical implication of this work is the ability to assess the performance of fused sensors according to the environment or terrain they might operate in, thus providing a greater level of sensitivity than would otherwise be the case.

Originality/value

The author has extended previous ideas on sensor fusion from imprecise and uncertain sources using a Bayesian technique, as well as developed techniques regarding which sensors should be chosen for fusion given payload or other constraints.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 6 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 1 March 1989

Per Holmbom, Ole Pedersen, Bengt Sandell and Alexander Lauber

Abstract

By tradition, sensors are used to measure one desired parameter; all other parameters influencing the sensor are considered interfering inputs, to be eliminated if possible. Hence most existing sensors are specifically intended for measuring one parameter, e.g. temperature, and the ideal temperature sensor should be as immune to all other parameters as possible. True, we sometimes use primitive sensor fusion, e.g. when calculating heat flow by combining separate measurements of temperature difference and of fluid flow.
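
The heat-flow example is a one-line fusion of two measurements; assuming water as the fluid, it reads as follows (the constants and names are illustrative, not from the article).

```python
def heat_flow_watts(vol_flow_m3s, delta_t_kelvin, rho=998.0, c_p=4186.0):
    """Heat flow fused from a fluid-flow measurement and a temperature-
    difference measurement: Q = rho * V_dot * c_p * dT (water assumed)."""
    return rho * vol_flow_m3s * c_p * delta_t_kelvin

# 0.5 l/s of water cooled by 10 K carries roughly 20.9 kW
q = heat_flow_watts(0.5e-3, 10.0)
```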

Details

Sensor Review, vol. 9 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 5 October 2021

Umair Ali, Wasif Muhammad, Muhammad Jehanzed Irshad and Sajjad Manzoor

Abstract

Purpose

Self-localization of an underwater robot using a global positioning sensor or other radio positioning systems is not possible, so onboard sensor-based self-localization estimation offers an alternative solution. However, the dynamic and unstructured nature of the sea environment and the highly noise-affected sensory information make underwater robot self-localization a challenging research topic. State-of-the-art multi-sensor fusion algorithms are deficient in dealing with multi-sensor data: the Kalman filter cannot deal with non-Gaussian noise, while a nonparametric filter such as Monte Carlo localization has high computational cost. An optimal fusion policy with low computational cost is an important research question for underwater robot localization.

Design/methodology/approach

In this paper, the authors propose a novel predictive coding/biased competition-divisive input modulation (PC/BC-DIM) neural network-based multi-sensor fusion approach, which can fuse and approximate noisy sensory information in an optimal way.
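
For readers unfamiliar with PC/BC-DIM, the sketch below follows one published form of the divisive-input-modulation update (after Spratling), in which prediction neurons iteratively reconstruct the concatenated sensor input and divisive errors drive multiplicative corrections. The authors’ exact variant, weight normalization and constants may differ; treat this as an assumption-laden illustration.

```python
import numpy as np

def pc_bc_dim_fuse(x, W, n_iter=50, eps1=1e-6, eps2=1e-3):
    """Iterative PC/BC-DIM inference over concatenated sensor inputs.

    x : noisy sensory input vector, shape (m,)
    W : weights of the n prediction neurons, shape (n, m)
    Returns prediction activations y; V.T @ y is the fused estimate.
    """
    V = W / (eps2 + W.max(axis=1, keepdims=True))  # normalized feedback
    y = np.full(W.shape[0], 1e-3)
    for _ in range(n_iter):
        r = V.T @ y                 # top-down reconstruction of the input
        e = x / (eps2 + r)          # divisive prediction error
        y = (eps1 + y) * (W @ e)    # multiplicative update of predictions
    return y
```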

Findings

The low mean localization error (1.2704 m) and computation cost (2.2 ms) show that the proposed method performs better than existing techniques in such dynamic and unstructured environments.

Originality/value

To the best of the authors’ knowledge, this work provides a novel multisensory fusion approach to overcome the existing problems of non-Gaussian noise removal, higher self-localization estimation accuracy and reduced computational cost.

Details

Sensor Review, vol. 41 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 3 May 2011

Shuping Wan

Abstract

Purpose

Multi-sensor data fusion (MSDF) is defined as the process of integrating information from multiple sources to produce the most specific and comprehensive unified data about an entity, activity or event. Multi-sensor object recognition is one of the important technologies of MSDF and has been widely applied in navigation, aviation, artificial intelligence, pattern recognition, fuzzy control, robotics and other fields. Aimed at the type recognition problem in which the characteristic values of object types and the observations of sensors take the form of triangular fuzzy numbers, the purpose of this paper is to propose a new fusion method from the viewpoint of decision-making theory.

Design/methodology/approach

This work first divides the overall processing of the sensor signal into two phases. Then, for the type recognition problem, the paper defines the similarity degree between two triangular fuzzy numbers. By solving a maximization optimization model, the vector of characteristic weights is objectively derived, and a new fusion method is proposed according to the overall similarity degree.
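
One common distance-based similarity degree for triangular fuzzy numbers is shown below, together with a weighted overall score for type recognition. The paper defines its own similarity measure and derives the weights from a maximization model, so both the formula and the names here are illustrative assumptions.

```python
import numpy as np

def tfn_similarity(a, b):
    """Similarity between triangular fuzzy numbers a=(a1,a2,a3) and
    b=(b1,b2,b3), assuming values normalized to [0, 1]."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 - float(np.abs(a - b).mean())

def recognize_type(observation, prototypes, weights):
    """Pick the object type whose prototype TFNs have the highest
    weighted overall similarity to the observed TFNs."""
    scores = [sum(w * tfn_similarity(o, p)
                  for w, o, p in zip(weights, observation, proto))
              for proto in prototypes]
    return int(np.argmax(scores))
```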

Findings

The experiments show that solving the maximization optimization model significantly improves the objectivity and accuracy of object recognition.

Originality/value

The paper studies the type recognition problem in which the characteristic values of object types and the observations of sensors take the form of triangular fuzzy numbers. By solving the maximization optimization model, the vector of characteristic weights is derived, and a new fusion method is proposed that improves the objectivity and accuracy of object recognition.

Details

Kybernetes, vol. 40 no. 3/4
Type: Research Article
ISSN: 0368-492X
