Search results

1 – 10 of over 1000
Article
Publication date: 28 June 2013

Rong Wang, Jianye Liu, Zhi Xiong and Qinghua Zeng

Abstract

Purpose

The Embedded GPS/INS System (EGI) is increasingly used as the central navigation equipment of aircraft. In applications that demand high attitude accuracy, a star sensor can be integrated with the EGI to improve attitude performance. Because the filtering‐correction loop is already built into a finished EGI product, centralized and federated Kalman filters are not applicable for integrating the EGI with a star sensor, and designing a multi‐sensor information fusion algorithm suited to this situation is a challenge. The purpose of this paper is to present a double‐layer fusion scheme and algorithms that meet the practical need of constructing an integrated multi‐sensor navigation system in which a star sensor assists a finished EGI unit.

Design/methodology/approach

Alternate fusion algorithms for asynchronous measurements and sequential fusion algorithms for synchronous measurements are presented. By combining the alternate and sequential filtering algorithms, a double‐layer fusion algorithm for multiple sensors is proposed and validated by a semi‐physical test.
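
The synchronous half of such a scheme can be illustrated with a scalar example: processing each sensor's measurement through its own Kalman update in sequence gives the same result as a batch update, which is the property that lets an external filter fold in extra measurements one at a time. The numbers below are illustrative, not from the paper.

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update (observation model H = 1)."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p

def sequential_fusion(x, p, measurements):
    """Fold in each synchronous (measurement, variance) pair in turn."""
    for z, r in measurements:
        x, p = kalman_update(x, p, z, r)
    return x, p

# Prior estimate and two synchronous sensor readings (hypothetical values).
fused_x, fused_p = sequential_fusion(0.0, 4.0, [(1.0, 1.0), (0.5, 2.0)])

# The batch (information-form) update gives the identical answer.
batch_p = 1.0 / (1 / 4.0 + 1 / 1.0 + 1 / 2.0)
batch_x = batch_p * (0.0 / 4.0 + 1.0 / 1.0 + 0.5 / 2.0)
print(fused_x, fused_p)  # matches (batch_x, batch_p)
```

Sequential processing keeps each sensor's correction separate, which is why the internal estimation-correction loop of a finished unit can be left untouched.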

Findings

The double‐layer fusion algorithms represent a filtering strategy in which multiple non‐identical parallel sensors assist the INS while the independent estimation‐correction loop in the EGI is maintained. The approach offers significant benefits when upgrading an existing navigation system by integrating new sensors.

Practical implications

The approach described in this paper can be used to design similar multi‐sensor information fusion navigation systems composed of an EGI and various kinds of sensors, so as to improve navigation performance.

Originality/value

Compared with conventional approaches, in situations where centralized and federated Kalman filters are not applicable, the double‐layer fusion scheme and algorithms provide an external filtering strategy for the measurements of a finished EGI unit and star sensors.

Details

Aircraft Engineering and Aerospace Technology, vol. 85 no. 4
Type: Research Article
ISSN: 0002-2667

Article
Publication date: 4 June 2021

Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu

Abstract

Purpose

For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, perception accuracy is closely tied to the performance of the sensors configured on the vehicle. To further enhance sensor performance and thereby improve perception accuracy, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which can reduce the false-detection rate resulting from sensor misdetection.

Design/methodology/approach

First, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.

Findings

The experimental tests in challenging conditions demonstrate that, compared with a single sensor, the depth fusion model can filter out radar false alarms and the dust or mist point clouds received by the lidar, thereby improving object detection accuracy under challenging conditions.
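
The misdetection-filtering idea can be sketched as a simple cross-validation between the two sensors: a lidar cluster with no nearby radar return is treated as dust or mist and dropped. The gating rule and coordinates below are assumptions for illustration, not the paper's depth fusion model.

```python
def cross_validate(lidar_objs, radar_objs, gate=1.0):
    """Keep only lidar detections confirmed by a radar return within
    `gate` metres; unconfirmed clusters (likely dust or mist) are dropped."""
    return [(lx, ly) for lx, ly in lidar_objs
            if any(abs(lx - rx) <= gate and abs(ly - ry) <= gate
                   for rx, ry in radar_objs)]

lidar = [(10.0, 0.5), (6.0, -2.0)]   # second cluster is airborne dust
radar = [(10.3, 0.4)]                # radar penetrates the dust cloud
print(cross_validate(lidar, radar))  # → [(10.0, 0.5)]
```

A symmetric check (dropping radar returns with no lidar support) would handle radar false alarms in the same way.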

Originality/value

The multi-layer self-calibration method improves calibration accuracy and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by the lidar, which could improve ICVs’ performance in challenging conditions.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 25 November 2013

Edwin El-Mahassni

Abstract

Purpose

The purpose of this paper is to extend previous work on fusing sensors with a Bayesian method to incorporate the sensors’ reliability with regard to their operating environment. The results are then used with the expected decision formula, conditional entropy and mutual information to suboptimally select which types of sensors should be fused when there are operational constraints.

Design/methodology/approach

The approach is an extension of previous work incorporating an environment parameter. The expected decision formula then forms the basis for sensor selection.
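
A minimal sketch of environment-conditioned Bayesian fusion, assuming conditionally independent sensors and an invented reliability table; the paper's expected decision formula and entropy measures are not reproduced here.

```python
def fuse_posterior(prior, reports, reliability, terrain):
    """P(target | reports) when each sensor's probability of a correct
    declaration depends on the terrain it operates in."""
    p_t = prior          # running joint probability with target present
    p_n = 1.0 - prior    # running joint probability with target absent
    for sensor, said_target in reports:
        r = reliability[sensor][terrain]            # P(correct | terrain)
        p_t *= r if said_target else (1.0 - r)
        p_n *= (1.0 - r) if said_target else r
    return p_t / (p_t + p_n)

reliability = {                      # assumed numbers for illustration
    "radar": {"open": 0.95, "forest": 0.70},
    "ir":    {"open": 0.90, "forest": 0.85},
}
reports = [("radar", True), ("ir", True)]
print(fuse_posterior(0.5, reports, reliability, "open"))
print(fuse_posterior(0.5, reports, reliability, "forest"))
```

The same two declarations yield a weaker posterior in forest terrain, which is the effect the environment parameter is meant to capture.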

Findings

The author found that sensor performance is correlated with the operating environment, since the likelihood of error is higher in difficult terrain than would otherwise be the case. The author also shows that the choice of sensors for fusion will vary when it is known specifically in which terrain the sensors will operate.

Research limitations/implications

The author notes that for this technique to be effective, a proper understanding of the limitations of the sensors, the possible terrain types and the targets has to be assumed.

Practical implications

The practical implication of this work is the ability to assess the performance of fused sensors according to the environment or terrain they might be operating under, thus providing a greater level of sensitivity than would otherwise be the case.

Originality/value

The author has extended previous ideas on sensor fusion from imprecise and uncertain sources using a Bayesian technique, as well as developed techniques regarding which sensors should be chosen for fusion given payload or other constraints.

Details

International Journal of Intelligent Computing and Cybernetics, vol. 6 no. 4
Type: Research Article
ISSN: 1756-378X

Article
Publication date: 5 October 2021

Umair Ali, Wasif Muhammad, Muhammad Jehanzed Irshad and Sajjad Manzoor

Abstract

Purpose

Self-localization of an underwater robot using a global positioning sensor or other radio positioning systems is not possible, so onboard sensor-based self-location estimation provides an alternative solution. However, the dynamic and unstructured nature of the sea environment and the highly noise-affected sensory information make underwater robot self-localization a challenging research topic. State-of-the-art multi-sensor fusion algorithms are deficient in dealing with multi-sensor data: the Kalman filter cannot handle non-Gaussian noise, while non-parametric filters such as Monte Carlo localization have a high computational cost. An optimal fusion policy with low computational cost is an important research question for underwater robot localization.

Design/methodology/approach

In this paper, the authors propose a novel predictive coding-biased competition/divisive input modulation (PC/BC-DIM) neural network-based multi-sensor fusion approach, which can fuse and approximate noisy sensory information in an optimal way.
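
A minimal numeric sketch of the PC/BC-DIM iteration as it is commonly stated (error neurons divisively modulated by the predictions, prediction neurons multiplicatively updated); the weights, inputs and constants are illustrative assumptions, not the authors' trained fusion network.

```python
import numpy as np

def pcbc_dim(x, W, steps=200, eps1=1e-2, eps2=1e-2):
    """Iterate e = x / (eps2 + W.T @ y); y = (eps1 + y) * (W @ e)."""
    y = np.zeros(W.shape[0])
    for _ in range(steps):
        e = x / (eps2 + W.T @ y)     # divisively modulated prediction error
        y = (eps1 + y) * (W @ e)     # prediction-neuron responses
    return y

# Two prediction neurons whose weight vectors span the two sensor inputs.
W = np.array([[1.0, 0.0],
              [0.5, 0.5]])
x = np.array([1.0, 0.4])             # noisy multi-sensor input (illustrative)
y = pcbc_dim(x, W)
print(y, W.T @ y)                    # reconstruction W.T @ y approaches x
```

At convergence the network's reconstruction explains the input as a nonnegative combination of the weight vectors, which is the sense in which the fused estimate approximates the noisy sensory data.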

Findings

A low mean localization error (1.2704 m) and low computation cost (2.2 ms) show that the proposed method outperforms existing techniques in such dynamic and unstructured environments.

Originality/value

To the best of the authors’ knowledge, this work provides a novel multi-sensor fusion approach that overcomes the existing problems by handling non-Gaussian noise, achieving higher self-localization accuracy and reducing computational cost.

Details

Sensor Review, vol. 41 no. 5
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 1 March 1989

Per Holmbom, Ole Pedersen, Bengt Sandell and Alexander Lauber

Abstract

By tradition, sensors are used to measure one desired parameter; all other parameters influencing the sensor are considered interfering inputs, to be eliminated if possible. Hence most existing sensors are specifically intended for measuring one parameter, e.g. temperature, and the ideal temperature sensor should be as immune to all other parameters as possible. True, we sometimes use primitive sensor fusion, e.g. when calculating heat flow by combining separate measurements of temperature difference and of fluid flow.
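
The heat-flow example amounts to one line of arithmetic, Q = ṁ·c_p·ΔT: two sensor readings (mass flow and temperature difference) are combined into a quantity neither sensor measures directly. The values below are illustrative, with the default specific heat taken for water.

```python
def heat_flow_kw(mass_flow_kg_s, delta_t_k, cp_kj_per_kg_k=4.186):
    """Heat flow Q = m_dot * c_p * dT, in kW (default c_p is for water)."""
    return mass_flow_kg_s * cp_kj_per_kg_k * delta_t_k

# Fuse a flow-meter reading with a temperature-difference reading.
q = heat_flow_kw(mass_flow_kg_s=0.25, delta_t_k=12.0)
print(round(q, 3))  # → 12.558 (kW)
```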

Details

Sensor Review, vol. 9 no. 3
Type: Research Article
ISSN: 0260-2288

Article
Publication date: 3 May 2011

Shuping Wan

Abstract

Purpose

Multi‐sensor data fusion (MSDF) is defined as the process of integrating information from multiple sources to produce the most specific and comprehensive unified data about an entity, activity or event. Multi‐sensor object recognition is one of the important technologies of MSDF. It has been widely applied in the fields of navigation, aviation, artificial intelligence, pattern recognition, fuzzy control, robotics, and so on. Hence, for the type recognition problem in which the characteristic values of object types and the observations of sensors take the form of triangular fuzzy numbers, the purpose of this paper is to propose a new fusion method from the viewpoint of decision‐making theory.

Design/methodology/approach

This work first divides the comprehensive processing of the sensor signal into two phases. Then, for the type recognition problem, the paper defines the similarity degree between two triangular fuzzy numbers. By solving a maximization optimization model, the vector of characteristic weights is objectively derived, and a new fusion method is proposed based on the overall similarity degree.
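
As a sketch of the idea, here is one common distance-based similarity for triangular fuzzy numbers, plugged into a weighted overall-similarity classifier. The paper's own similarity definition and its optimization-derived weights may differ, and all numbers here are invented.

```python
def tfn_similarity(a, b):
    """Similarity of triangular fuzzy numbers a = (a1, a2, a3) and
    b = (b1, b2, b3), components assumed normalized to [0, 1]."""
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / 3.0

def classify(observation, prototypes, weights):
    """Pick the object type whose weighted overall similarity is largest."""
    def overall(proto):
        return sum(w * tfn_similarity(o, p)
                   for w, o, p in zip(weights, observation, proto))
    return max(prototypes, key=lambda t: overall(prototypes[t]))

# Two object types with two fuzzy characteristics each (illustrative).
prototypes = {
    "fighter":  [(0.7, 0.8, 0.9), (0.2, 0.3, 0.4)],
    "airliner": [(0.3, 0.4, 0.5), (0.6, 0.7, 0.8)],
}
obs = [(0.65, 0.75, 0.85), (0.25, 0.35, 0.45)]
print(classify(obs, prototypes, weights=[0.5, 0.5]))  # → fighter
```

In the paper's scheme, the weights would come from the maximization model rather than being fixed by hand.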

Findings

The results of the experiments show that solving the maximization optimization model significantly improves the objectivity and accuracy of object recognition.

Originality/value

The paper studies the type recognition problem in which the characteristic values of object types and the observations of sensors take the form of triangular fuzzy numbers. By solving a maximization optimization model, the vector of characteristic weights is derived, and a new fusion method is proposed. This method improves the objectivity and accuracy of object recognition.

Details

Kybernetes, vol. 40 no. 3/4
Type: Research Article
ISSN: 0368-492X

Article
Publication date: 7 January 2019

Ravinder Singh and Kuldeep Singh Nagla

Abstract

Purpose

An efficient perception of the complex environment is the foremost requirement in mobile robotics. At present, glass walls and automated transparent doors have become highlight features of interior decoration in modern buildings, and they cause various range sensors to perceive the environment incorrectly. The perception generated by multi-data sensor fusion (MDSF) of sonar and laser is fairly consistent in detecting glass but is still affected by issues such as sensor inaccuracies, sensor reliability, scan mismatching due to glass, the sensor model, probabilistic approaches for sensor fusion and sensor registration. The paper aims to discuss these issues.

Design/methodology/approach

This paper presents a modified framework – Advanced Laser and Sonar Framework (ALSF) – that fuses the sensory information of a laser scanner and sonar to reduce the uncertainty caused by glass in an environment, by selecting the optimal range information corresponding to a selected threshold value. In the proposed approach, the conventional sonar sensor model is also modified to reduce the incorrect sonar perception that results from diverse range measurements. The laser scan-matching algorithm is likewise modified by removing small clusters of laser points (with respect to range information) to obtain an efficient perception.
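
The threshold-selection step can be illustrated with a toy rule: when the laser and sonar disagree by more than a threshold, the shorter sonar return is trusted (glass reflects sonar but lets the laser pass); otherwise the more precise laser range is kept. The rule and ranges are assumptions for illustration, not the exact ALSF selection.

```python
def fuse_ranges(laser, sonar, threshold=0.5):
    """Per-bearing range selection between co-registered laser and sonar."""
    fused = []
    for l_r, s_r in zip(laser, sonar):
        if abs(l_r - s_r) > threshold and s_r < l_r:
            fused.append(s_r)   # glass suspected: sonar sees it, laser does not
        else:
            fused.append(l_r)   # sensors agree: keep the precise laser range
    return fused

laser = [4.8, 5.0, 2.1]   # metres; middle beam passes through a glass pane
sonar = [4.7, 1.4, 2.2]
print(fuse_ranges(laser, sonar))  # → [4.8, 1.4, 2.1]
```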

Findings

With the modified sonar sensor model, the occupancy probabilities of cells become consistent across diverse sonar range measurements. The scan-matching technique is also modified to reduce the uncertainty caused by glass and the high computational load, enabling efficient and fast pose estimation of the laser sensor/mobile robot and robust mapping. These modifications are combined in the proposed ALSF technique to reduce the uncertainty caused by glass, the inconsistent probabilities and the heavy computation involved in generating occupancy grid maps with MDSF. Various real-world experiments are performed with the proposed approach on a mobile robot fitted with laser and sonar, and the results are qualitatively and quantitatively compared with conventional approaches.

Originality/value

The proposed ALSF approach generates an efficient perception of complex environments containing glass and can be implemented in various robotics applications.

Details

International Journal of Intelligent Unmanned Systems, vol. 7 no. 1
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 16 April 2018

Hanieh Deilamsalehy and Timothy C. Havens

Abstract

Purpose

Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method to fuse the information from several available sensors in order to improve the estimated pose from any individual sensor and calculate a more accurate pose for the moving platform.

Design/methodology/approach

Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its location is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as IMUs and cameras, have been widely used for 3D localization. Other sensors, like 2D Light Detection And Ranging (LiDAR), can give a very precise estimate within a 2D plane, but they are not normally employed for 3D estimation because they cannot observe the full 6DOF. However, in some applications the robot moves almost on a plane during the interval between two sensor readings, e.g. a ground vehicle moving on a flat surface or a drone flying at a nearly constant altitude to collect visual data. In this paper, a novel method using a fuzzy inference system is proposed that employs a 2D LiDAR in a 3D localization algorithm to improve pose estimation accuracy.
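
The weighting idea can be sketched as a small fuzzy rule: grade how planar the recent motion was, and inflate the 2D LiDAR's measurement-noise variance in the EKF as planarity drops. The membership shapes and constants are illustrative assumptions, not the paper's fuzzy inference system.

```python
def planarity(delta_alt_m, tilt_deg):
    """Fuzzy AND (min) of two linear memberships in [0, 1]."""
    mu_alt = max(0.0, 1.0 - delta_alt_m / 0.5)   # no trust past 0.5 m climb
    mu_tilt = max(0.0, 1.0 - tilt_deg / 10.0)    # no trust past 10 deg tilt
    return min(mu_alt, mu_tilt)

def lidar_noise(delta_alt_m, tilt_deg, r_base=0.01):
    """Inflate the EKF measurement variance as planarity drops; the small
    floor keeps the division defined when the motion is clearly 3D."""
    return r_base / max(planarity(delta_alt_m, tilt_deg), 1e-3)

print(lidar_noise(0.02, 1.0))   # near-planar motion: LiDAR stays trusted
print(lidar_noise(0.45, 9.0))   # clearly 3D motion: LiDAR down-weighted
```

A large variance makes the EKF's Kalman gain for the LiDAR measurement shrink toward zero, so the sensor's weight in the fused pose falls smoothly rather than being switched on and off.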

Findings

The method determines the trajectory of the robot and the sensor reliability between two readings and, based on this information, defines the weight of the 2D sensor in the final fused pose by adjusting the extended Kalman filter parameters. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Originality/value

To the best of the authors’ knowledge, this is the first time that a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any previous knowledge. Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Details

International Journal of Intelligent Unmanned Systems, vol. 6 no. 2
Type: Research Article
ISSN: 2049-6427

Article
Publication date: 6 March 2017

Pei-Ju Lee, Peng-Sheng You, Yu-Chih Huang and Yi-Chih Hsieh

Abstract

Purpose

Historical data usually consist of overlapping reports, and these reports may contain inconsistent data, which can cause a query to return incorrect results. Moreover, users who issue the query may not learn of this inconsistency even after a data-cleaning process (e.g. schema matching or data screening). The inconsistency can exist in different types of data, such as temporal or spatial data. Therefore, this paper aims to introduce an information fusion method that can detect data inconsistency in the early stages of data fusion.

Design/methodology/approach

This paper introduces an information fusion method for multi-robot operations, in which fusion is conducted continuously. When the environment is explored by multiple robots, the robot logs can provide more information about the number and coordinates of targets or victims. The information fusion method proposed in this paper generates an underdetermined linear system from the overlapping spatial reports and estimates the case values, and the least squares method is then applied to this underdetermined system. Using these two methods, conflicts between reports can be detected and the values of the intervals at specific times or locations can be estimated.
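
The report-fusion idea can be sketched as a linear system A x = b, where x holds the unknown per-interval counts and each row of A sums the intervals one report covers. With fewer reports than intervals the system is underdetermined; a least-squares solution still estimates the interval values, and a large residual would flag conflicting reports. The reports and intervals below are invented for illustration.

```python
import numpy as np

# Three time intervals; robot logs report target counts over overlapping spans.
A = np.array([[1.0, 1.0, 0.0],   # report 1 covers intervals 1-2: saw 5 targets
              [0.0, 1.0, 1.0]])  # report 2 covers intervals 2-3: saw 4 targets
b = np.array([5.0, 4.0])

# Minimum-norm least-squares solution of the underdetermined system.
x, res, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(x)                               # estimated per-interval values
print(np.abs(A @ x - b).max())         # consistent reports → near-zero residual
```

If the two reports contradicted each other (e.g. disjoint spans summing to impossible totals), the residual A x − b would be non-zero, which is the conflict-detection signal.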

Findings

The proposed information fusion method was tested on inconsistency detection and target projection for spatial fusion in sensor networks, using sensor data from simulations in which robots perform search tasks. The system can be extended to data warehouses with heterogeneous data sources to achieve completeness, robustness and conciseness.

Originality/value

Little research has been devoted to linear systems for information fusion in mobile robot tasks. The proposed information fusion method minimizes the time and comparison cost of data fusion and also minimizes the probability of errors from incorrect results.

Details

Engineering Computations, vol. 34 no. 1
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 16 January 2019

Kefan Xie, Zimei Liu, Liuliu Fu and Benbu Liang

Abstract

Purpose

The purpose of this paper is to propose a theoretical framework for applying Internet of Things (IoT) technologies to an intelligent evacuation protocol for libraries in emergency situations.

Design/methodology/approach

The authors conducted field investigations of eight libraries in Wuhan, China, analyzing the characteristics of crowd gathering in libraries and the problems of the libraries’ existing evacuation plans. On this basis, an IoT-based intelligent evacuation protocol for libraries was proposed. Its basic structure consists of five components: the information base, the protocol base, the IoT sensors, the information fusion system and the intelligent evacuation protocol generation system. In the information fusion system, Dempster–Shafer (D-S) evidence theory is employed as the information fusion algorithm to fuse the multi-sensor information at multiple time points, so as to reduce the uncertainty of disaster prediction. The authors also conducted a case study of Library L in Wuhan, China: a specific evacuation route was generated for a fire, and the crowd evacuation was simulated with the software Pathfinder.
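
Dempster's rule of combination itself is compact. The sketch below fuses two illustrative sensor mass functions over the frame {fire, no_fire}; the mass values are invented, not taken from the paper's case study.

```python
FRAME = frozenset({"fire", "no_fire"})   # full frame models "uncertain"

def combine(m1, m2):
    """Dempster's rule: conjunctive combination, renormalized by conflict."""
    fused, conflict = {}, 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                fused[inter] = fused.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2      # mass assigned to the empty set
    return {s: v / (1.0 - conflict) for s, v in fused.items()}

smoke = {frozenset({"fire"}): 0.6, FRAME: 0.4}   # smoke sensor's evidence
heat  = {frozenset({"fire"}): 0.7, FRAME: 0.3}   # heat sensor's evidence
fused = combine(smoke, heat)
print(fused[frozenset({"fire"})])   # belief in "fire" rises after fusion
```

Fusing readings from multiple time points, as the protocol does, amounts to applying this combination repeatedly, shrinking the mass left on the uncertain frame.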

Findings

The proposed IoT-based evacuation protocol has four distinguishing features: scenario correspondence, precise evacuation, dynamic correction and intelligent decision-making. The case study shows that the proposed protocol is feasible in practice, indicating that IoT technologies have great potential to be successfully applied to safety management in libraries.

Research limitations/implications

The software and hardware requirements as well as the Internet network requirements of IoT technologies need to be further discussed.

Practical implications

The proposed IoT-based intelligent evacuation protocol can be widely used in libraries, which offers inspiration for the use of IoT technologies in modern buildings.

Originality/value

The application of IoT technologies in libraries is a brand-new topic that has drawn much attention in academia recently. Crowd safety management in libraries is of great significance, yet there is little professional literature on it. This paper proposes an IoT-based intelligent evacuation protocol aimed at improving safety management in libraries in emergency situations.
