Search results

1 – 10 of 207
Article
Publication date: 7 June 2019

Xinyu Zhang, Mo Zhou, Peng Qiu, Yi Huang and Jun Li

Abstract

Purpose

This paper presents and investigates a novel sensor-fusion-based system for obstacle detection and identification, which uses millimeter-wave radar to detect the position and velocity of obstacles. An image processing module then applies a bounding box regression algorithm from deep learning to precisely locate and identify the obstacles.

Design/methodology/approach

Unlike traditional algorithms that use radar and vision to detect obstacles separately, the proposed method uses radar to determine the approximate location of obstacles and then uses bounding box regression to achieve accurate positioning and recognition. First, information about the obstacles is acquired by the millimeter-wave radar, and effective targets are extracted by filtering the data. Then, coordinate system conversion and camera parameter calibration are used to project each effective target onto the image plane and generate a region of interest (ROI). Finally, based on image processing and machine learning techniques, the vehicle targets in the ROI are detected and tracked.
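The radar-to-image projection step can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the intrinsic matrix K, the radar-to-camera rotation R and translation t, and the assumed target half-extents are hypothetical placeholders that would in practice come from camera parameter calibration and the mounting geometry.

```python
import numpy as np

# Hypothetical calibration values for illustration only.
K = np.array([[800.0,   0.0, 320.0],   # fx,  0, cx
              [  0.0, 800.0, 240.0],   #  0, fy, cy
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # radar-to-camera rotation
t = np.array([0.0, 1.0, 0.0])          # radar-to-camera translation (m)

def radar_to_roi(radar_xyz, half_w=1.0, half_h=0.8):
    """Project a radar target (x, y, z in the radar frame, metres) into the
    image and return an ROI (u_min, v_min, u_max, v_max). half_w / half_h
    are assumed physical half-extents of a vehicle-sized target."""
    p_cam = R @ np.asarray(radar_xyz) + t        # radar frame -> camera frame
    corners = []
    for dx in (-half_w, half_w):
        for dy in (-half_h, half_h):
            uvw = K @ (p_cam + np.array([dx, dy, 0.0]))
            corners.append(uvw[:2] / uvw[2])     # perspective division
    corners = np.array(corners)
    u_min, v_min = corners.min(axis=0)
    u_max, v_max = corners.max(axis=0)
    return u_min, v_min, u_max, v_max

roi = radar_to_roi([2.0, 0.0, 20.0])   # target 20 m ahead, 2 m to the side
```

Only this ROI, rather than the whole image, is then passed to the detector, which is what yields the reduced data processing claimed in the abstract.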

Findings

The millimeter-wave radar is used to determine the presence of an obstacle, combined with a deep learning algorithm on the image to determine the obstacle's shape and class. The experimental results indicate that the detection rate of this method is up to 91.6 per cent, which improves perception of the environment in front of the vehicle.

Originality/value

The originality lies in the combination of millimeter-wave sensors and deep learning. The bounding box regression algorithm from RCNN analyzes the ROI detected by radar to realize real-time obstacle detection and recognition. Because this method does not require processing the entire image, it greatly reduces the amount of data processing and improves the efficiency of the algorithm.

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 3
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 16 January 2024

Pengyue Guo, Tianyun Shi, Zhen Ma and Jing Wang

Abstract

Purpose

The paper aims to solve the problem of personnel intrusion identification within the limits of high-speed railways. It adopts the fusion method of millimeter wave radar and camera to improve the accuracy of object recognition in dark and harsh weather conditions.

Design/methodology/approach

This paper adopts a fusion strategy of radar-camera linkage to achieve focus amplification of long-distance targets and addresses low illumination with laser fill lighting at the focus point. To improve recognition, the YOLOv8 algorithm is adopted for multi-scale target recognition. In addition, for image distortion caused by bad weather, this paper proposes a linkage-and-tracking fusion strategy to output correct alarm results.

Findings

Simulated intrusion tests show that the proposed method can effectively detect human intrusion within 0–200 m, day and night, in sunny weather and achieves more than 80% recognition accuracy under extremely severe weather conditions.

Originality/value

(1) The authors propose a personnel intrusion monitoring scheme based on the fusion of millimeter wave radar and camera, achieving all-weather intrusion monitoring; (2) The authors propose a new multi-level fusion algorithm based on linkage and tracking to achieve intrusion target monitoring under adverse weather conditions; (3) The authors have conducted a large number of innovative simulation experiments to verify the effectiveness of the method proposed in this article.

Details

Railway Sciences, vol. 3 no. 1
Type: Research Article
ISSN: 2755-0907

Article
Publication date: 5 August 2019

Huaping Liu and Yuan Yuan

Details

Industrial Robot: the international journal of robotics research and application, vol. 46 no. 3
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 16 September 2021

Yipeng Zhu, Tao Wang and Shiqiang Zhu

Abstract

Purpose

This paper aims to develop a robust person tracking method for human-following robots. The tracking system adopts the multimodal fusion results of millimeter-wave (MMW) radars and monocular cameras for perception. A prototype human-following robot is developed and evaluated using the proposed tracking system.

Design/methodology/approach

Limited by angular resolution, point clouds from MMW radars are too sparse to form features for human detection. Monocular cameras can provide semantic information for objects in view but cannot provide spatial locations. Considering the complementarity of the two sensors, a sensor fusion algorithm based on multimodal data combination is proposed to identify and localize the target person under challenging conditions. In addition, a closed-loop controller is designed for the robot to follow the target person at the expected distance.
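The closed-loop following behaviour can be illustrated with a minimal proportional control law on the fused track's range and bearing. The gains, target distance and saturation limits below are assumptions for the sketch, not values from the paper.

```python
def follow_control(distance, bearing, target_distance=1.5,
                   k_v=0.8, k_w=1.2, v_max=1.0, w_max=1.0):
    """Velocity commands (linear v, angular w) that keep the tracked person
    at target_distance (m). distance/bearing come from the fused radar-camera
    track; all gains and limits here are illustrative assumptions."""
    v = max(-v_max, min(v_max, k_v * (distance - target_distance)))  # close the range error
    w = max(-w_max, min(w_max, k_w * bearing))                       # turn toward the person
    return v, w

v, w = follow_control(2.5, 0.0)   # person 1 m too far, dead ahead
```

In practice the ~0.1 m tracking error reported in the Findings bounds how tightly such a controller can regulate the following distance.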

Findings

A series of experiments under different circumstances were carried out to validate the fusion-based tracking method. Experimental results show that the average tracking errors are around 0.1 m. It is also found that the robot can handle different situations, overcome short-term interference, and continually track and follow the target person.

Originality/value

This paper proposes a robust tracking system based on the fusion of MMW radars and cameras. Interference such as occlusion and overlapping is well handled with the help of velocity information from the radars. Compared with other state-of-the-art approaches, the sensor fusion method is cost-effective and requires no additional tags on people. Its stable performance shows good application prospects for human-following robots.

Details

Industrial Robot: the international journal of robotics research and application, vol. 49 no. 1
Type: Research Article
ISSN: 0143-991X

Open Access
Article
Publication date: 12 August 2022

Bolin Gao, Kaiyuan Zheng, Fan Zhang, Ruiqi Su, Junying Zhang and Yimin Wu

Abstract

Purpose

Intelligent and connected vehicle technology is in the ascendant. High-level autonomous driving places more stringent requirements on the accuracy and reliability of environmental perception. Existing research on multitarget tracking based on multisensor fusion mostly focuses on the vehicle perspective but, limited by the inherent defects of the vehicle sensor platform, finds it difficult to comprehensively and accurately describe the surrounding environment.

Design/methodology/approach

In this paper, a multitarget tracking method based on roadside multisensor fusion is proposed, including a multisensor fusion method based on measurement-noise-adaptive Kalman filtering, a global nearest neighbor data association method based on an adaptive tracking gate, and a track life-cycle management method based on M/N logic rules.
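In spirit, gated nearest-neighbour association can be sketched with chi-square gating on the Mahalanobis distance: the acceptance region in measurement space grows and shrinks with the innovation covariance S, which is what makes the gate adaptive rather than fixed-size. This is a generic sketch, not the paper's algorithm: a greedy assignment stands in for the full global optimization, and the threshold gamma is a standard textbook choice, not the paper's value.

```python
import numpy as np

def gnn_associate(track_preds, track_covs, measurements, gamma=9.21):
    """Greedy nearest-neighbour association with a chi-square gate
    (gamma ~ 99% point of chi-squared with 2 dof for 2-D measurements)."""
    cand = []
    for i, (z_hat, S) in enumerate(zip(track_preds, track_covs)):
        S_inv = np.linalg.inv(S)
        for j, z in enumerate(measurements):
            nu = np.asarray(z) - np.asarray(z_hat)   # innovation
            d2 = float(nu @ S_inv @ nu)              # squared Mahalanobis distance
            if d2 < gamma:                           # inside the tracking gate
                cand.append((d2, i, j))
    pairs, used_t, used_m = [], set(), set()
    for d2, i, j in sorted(cand):                    # closest pairs first
        if i not in used_t and j not in used_m:
            pairs.append((i, j))
            used_t.add(i)
            used_m.add(j)
    return sorted(pairs)
```

Because d2 is normalized by S, a track whose covariance has grown (e.g. after missed detections) automatically accepts measurements from a wider physical region.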

Findings

Compared with fixed-size tracking gates, the adaptive tracking gates proposed in this paper can comprehensively improve the data association performance in the multitarget tracking process. Compared with single sensor measurement, the proposed method improves the position estimation accuracy by 13.5% and the velocity estimation accuracy by 22.2%. Compared with the control method, the proposed method improves the position estimation accuracy by 23.8% and the velocity estimation accuracy by 8.9%.

Originality/value

A multisensor fusion method with adaptive Kalman filtering of measurement noise is proposed to realize the adaptive adjustment of measurement noise. A global nearest neighbor data association method based on adaptive tracking gate is proposed to realize the adaptive adjustment of the tracking gate.

Details

Smart and Resilient Transportation, vol. 4 no. 2
Type: Research Article
ISSN: 2632-0487

Article
Publication date: 1 June 2002

George K. Chacko

Abstract

Develops an original 12‐step management of technology protocol and applies it to 51 applications which range from Du Pont’s failure in Nylon to the Single Online Trade Exchange for Auto Parts procurement by GM, Ford, Daimler‐Chrysler and Renault‐Nissan. Provides many case studies with regards to the adoption of technology and describes seven chief technology officer characteristics. Discusses common errors when companies invest in technology and considers the probabilities of success. Provides 175 questions and answers to reinforce the concepts introduced. States that this substantial journal is aimed primarily at the present and potential chief technology officer to assist their survival and success in national and international markets.

Details

Asia Pacific Journal of Marketing and Logistics, vol. 14 no. 2/3
Type: Research Article
ISSN: 1355-5855

Article
Publication date: 4 June 2021

Guotao Xie, Jing Zhang, Junfeng Tang, Hongfei Zhao, Ning Sun and Manjiang Hu

Abstract

Purpose

For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. However, the accuracy of perception is closely related to the performance of the sensors configured on the vehicle. To further enhance sensor performance and improve the accuracy of environmental perception, this paper introduces an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which reduces the false-alarm rate resulting from sensor misdetection.

Design/methodology/approach

First, a multi-layer self-calibration method is proposed based on spatial and temporal relationships. Next, a depth fusion model is proposed to improve the performance of obstacle detection in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.

Findings

The experimental tests in challenging conditions demonstrate that the depth fusion model, compared with the use of a single sensor, can filter out radar false alarms and the dust or mist point clouds received by lidar. The accuracy of object detection is thus also improved under challenging conditions.

Originality/value

The multi-layer self-calibration method helps improve calibration accuracy and reduces the workload of manual calibration. The depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the dust or mist point clouds received by lidar, which improves ICVs' performance in challenging conditions.

Details

Industrial Robot: the international journal of robotics research and application, vol. 48 no. 6
Type: Research Article
ISSN: 0143-991X

Article
Publication date: 28 May 2021

Guangbing Zhou, Jing Luo, Shugong Xu, Shunqing Zhang, Shige Meng and Kui Xiang

Abstract

Purpose

Indoor localization is a key tool for robot navigation in indoor environments. Traditionally, robot navigation depends on a single sensor to perform autonomous localization. To enhance the navigation performance of mobile robots, this paper proposes a multiple data fusion (MDF) method for indoor environments.

Design/methodology/approach

Here, data from multiple sensors, i.e. an inertial measurement unit, an odometer and a laser radar, are used. An extended Kalman filter (EKF) is then used to incorporate these data, and the mobile robot can perform autonomous localization according to the proposed EKF-based MDF method in complex indoor environments.
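As a minimal sketch of this style of fusion, consider the linear special case with identity motion and measurement models (the paper's EKF linearizes nonlinear models, so this only captures the structure): odometry increments drive the prediction and a laser-derived position fix drives the update.

```python
import numpy as np

def kf_predict(x, P, u, Q):
    """Predict with an odometry increment u = (dx, dy); state transition
    is identity here, so the pose just accumulates the increment."""
    x = x + u
    P = P + Q            # uncertainty grows with each odometry step
    return x, P

def kf_update(x, P, z, R):
    """Fuse a position fix z (e.g. from laser-scan matching); the
    measurement model is identity (z observes the pose directly)."""
    S = P + R                        # innovation covariance
    K = P @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - x)              # correct the pose toward the fix
    P = (np.eye(len(x)) - K) @ P     # shrink the uncertainty
    return x, P
```

The filter weights each source by its noise: a precise laser fix (small R) pulls the estimate strongly, while drifting odometry (growing P) is trusted less over time.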

Findings

The proposed method has been experimentally verified in different indoor environments, i.e. an office, a passageway and an exhibition hall. Experimental results show that the EKF-based MDF method achieves the best localization performance and robustness in the process of navigation.

Originality/value

Indoor localization precision mostly depends on the data collected from multiple sensors. The proposed method can incorporate these collected data reasonably and guide the mobile robot to perform autonomous navigation (AN) in indoor environments. Therefore, the results of this paper can be used for AN in complex and unknown indoor environments.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 5 May 2021

Haina Song, Shengpei Zhou, Zhenting Chang, Yuejiang Su, Xiaosong Liu and Jingfeng Yang

Abstract

Purpose

Autonomous driving depends on the collection, processing and analysis of environmental information and vehicle information. Environmental perception and processing are important prerequisites for the safe self-driving of vehicles; they involve road boundary detection, vehicle detection and pedestrian detection using sensors such as laser rangefinders, video cameras and vehicle-borne radar.

Design/methodology/approach

Subject to various environmental factors, the data clock information is often out of sync because of different data acquisition frequencies, which makes data fusion difficult. In this study, according to practical requirements, a multi-sensor environmental perception collaborative method was first proposed; then, based on the principles of target priority, large-scale priority, moving-target priority and difference priority, a multi-sensor data fusion optimization algorithm based on a convolutional neural network was proposed.

Findings

The average unload scheduling delay of the algorithm was measured for test data before and after optimization under different network transmission rates. With improved network transmission rate and processing capacity, the unload scheduling delay decreased after optimization, and the test results are closest to the optimal solution, indicating the excellent performance of the optimization algorithm and its adaptability to different environments.

Originality/value

The results showed that the proposed method significantly improved the redundancy and fault tolerance of the system, thus ensuring fast and correct decision-making during driving.

Details

Assembly Automation, vol. 41 no. 3
Type: Research Article
ISSN: 0144-5154

Article
Publication date: 27 March 2020

George-Konstantinos Gaitanakis, George Limnaios and Konstantinos Zikidis

Abstract

Purpose

Modern fighter aircraft using active electronically scanned array (AESA) fire control radars are able to detect and track targets at long ranges, in the order of 50 nautical miles or more. Low observable or stealth technology has challenged radar capabilities, reducing detection/tracking ranges roughly to one-third (or even less, for fighter aircraft radar). Hence, infrared search and track (IRST) systems have been reconsidered as an alternative to radar. This study explores and compares the capabilities and limitations of these two technologies, AESA radars and IRST systems, as well as their synergy through sensor fusion.

Design/methodology/approach

The AESA radar range is calculated with the help of the radar equation under certain assumptions, taking into account heat dissipation requirements, using the F-16 fighter as a case study. Concerning the IRST sensor, a new model is proposed for the estimation of the detection range, based on the emitted infrared radiation caused by aerodynamic heating.

Findings

The maximum detection range provided by an AESA radar can be restricted by the increased waste heat produced and the associated constraints on the cooling capacity of the carrying aircraft. On the other hand, IRST systems exhibit certain advantages over radars against low observable threats. IRST can be combined with a datalink with the help of data fusion, offering weapons-quality tracks.

Originality/value

An original approach is provided for the IRST detection range estimation. The AESA/IRST comparison offers valuable insight, while it allows for more efficient planning, at the military acquisition phase, as well as at the tactical level.

Details

Aircraft Engineering and Aerospace Technology, vol. 92 no. 9
Type: Research Article
ISSN: 1748-8842
