Search results
1 – 10 of over 47,000
Xin Liu, Junhui Wu, Yiyun Man, Xibao Xu and Jifeng Guo
Abstract
Purpose
With the continuous development of aerospace technology, space exploration missions have increased year by year, placing higher requirements on the upper-level rocket. The purpose of this paper is to improve the upper-level rocket's ability to identify and detect potential targets.
Design/methodology/approach
Aiming at the upper-level rocket's recognition of space satellites and their core components, this paper proposes a deep learning-based spatial multi-target recognition method that can simultaneously recognize space satellites and core components. First, the implementation framework of spatial multi-target recognition is given. Second, by comparing and analyzing convolutional neural networks, a convolutional neural network model based on YOLOv3 is designed. Finally, seven satellite scale models are constructed with Systems Tool Kit (STK) and Solidworks, and multiple targets, such as nozzles, star sensors and solar panels, are selected as the recognition objects.
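As a rough illustration of one building block shared by YOLO-family detectors such as the model described here, the sketch below implements the intersection-over-union (IoU) test and greedy non-maximum suppression in plain Python. The box format, scores and threshold are illustrative assumptions, not details taken from the paper.

```python
# Minimal IoU and NMS sketch for a YOLO-style detector (illustrative only).

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(detections, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring box of each overlapping cluster.
    detections: list of (score, box) tuples."""
    kept = []
    for score, box in sorted(detections, reverse=True):
        if all(iou(box, k) < iou_thresh for _, k in kept):
            kept.append((score, box))
    return kept
```

A detector head would emit many overlapping candidate boxes per satellite component; NMS reduces each cluster to a single detection before accuracy is scored.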
Findings
By labeling, training and testing the image data set, the proposed method achieves 90.17% accuracy for spatial multi-target recognition, an improvement in both accuracy and speed over the YOLOv1 model, effectively verifying the correctness of the proposed method.
Research limitations/implications
This paper recognizes space multi-targets only under ideal simulation conditions and does not fully consider recognition under more complex space lighting environments or motion laws such as nutation, precession and roll. In future work, training and detection can be performed on images that simulate a more realistic space lighting environment, or on multi-target images taken by an upper-level rocket, to further verify the feasibility of multi-target recognition algorithms in complex space environments.
Practical implications
The research in this paper validates that a deep learning-based algorithm for recognizing multiple targets in the space environment is feasible in terms of both accuracy and speed.
Originality/value
The paper sets up an image data set containing six satellite models built in STK and one digital satellite model built in Solidworks that simulates spatial illumination changes and spin, and uses the characteristics of spatial targets (such as rectangles, circles and lines) to provide prior values to the network's convolutional layers.
Pengyue Guo, Tianyun Shi, Zhen Ma and Jing Wang
Abstract
Purpose
The paper aims to solve the problem of personnel intrusion identification within the limits of high-speed railways. It adopts the fusion method of millimeter wave radar and camera to improve the accuracy of object recognition in dark and harsh weather conditions.
Design/methodology/approach
This paper adopts a linked radar-camera fusion strategy to achieve focus amplification of long-distance targets and solves the problem of low illumination with laser fill lighting at the focus point. To improve the recognition effect, the YOLOv8 algorithm is adopted for multi-scale target recognition. In addition, for the image distortion caused by bad weather, this paper proposes a linkage and tracking fusion strategy to output correct alarm results.
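The decision logic of such a linkage-and-tracking fusion can be sketched at a very high level: the radar contributes a persistent track, the camera contributes a class confidence, and the alarm falls back on the radar when the image is weather-degraded. The function name, field meanings and thresholds below are all hypothetical assumptions, not the paper's algorithm.

```python
# Hypothetical track-level fusion sketch for a radar-camera intrusion alarm.

def fuse(radar_track_frames, camera_confidence,
         min_track_frames=5, cam_thresh=0.6):
    """Return True if an intrusion alarm should be raised.
    radar_track_frames: consecutive frames the radar track has persisted.
    camera_confidence: detector confidence for a person class in [0, 1]."""
    radar_confirmed = radar_track_frames >= min_track_frames
    camera_confirmed = camera_confidence >= cam_thresh
    if camera_confirmed:
        return True          # clear image: the camera alone suffices
    return radar_confirmed   # degraded image: rely on radar track persistence
```

The design point is that neither sensor needs to be trusted unconditionally: a short-lived radar blip with no visual confirmation raises no alarm, while a persistent track survives fog or rain that blinds the camera.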
Findings
Simulated intrusion tests show that the proposed method can effectively detect human intrusion within 0–200 m, day and night, in sunny weather, and achieves more than 80% recognition accuracy under extremely severe weather conditions.
Originality/value
(1) The authors propose a personnel intrusion monitoring scheme based on the fusion of millimeter wave radar and camera, achieving all-weather intrusion monitoring; (2) The authors propose a new multi-level fusion algorithm based on linkage and tracking to achieve intrusion target monitoring under adverse weather conditions; (3) The authors have conducted a large number of innovative simulation experiments to verify the effectiveness of the method proposed in this article.
Abstract
Purpose
The humanoid robot has shape and action characteristics similar to those of humans and can complete some basic tasks in place of humans without changing the human environment, which makes the humanoid robot the best structural form for robots that provide services to human beings.
Design/methodology/approach
Mobile operation of the humanoid robot is generated by the walking movement of its feet, while its hands and arms together complete grasping and other operations.
Findings
On the basis of the humanoid robot, the integrated software and hardware system built around the KM34Z256 is described first, and a series of kinematic analyses of its mobile operation is carried out.
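A kinematic analysis of the arm's grasping operations builds on calculations like the forward kinematics of a serial chain. The minimal sketch below computes the end-effector position of a two-link planar arm; the link lengths are illustrative assumptions, not the robot's actual dimensions.

```python
# Forward kinematics of a two-link planar arm (illustrative dimensions).
import math

def fk_2link(theta1, theta2, l1=0.3, l2=0.25):
    """End-effector (x, y) of a planar 2-link arm; angles in radians,
    measured from the x-axis (theta1) and from link 1 (theta2)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```

With both joints at zero the arm lies along the x-axis, so the hand sits at (l1 + l2, 0); inverse kinematics for grasping solves the reverse problem.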
Originality/value
The research based on this project shows that the target recognition and positioning method is not only accurate and highly efficient but can also realize the mobile operation of the humanoid robot.
Yueting Yang, Shaolin Hu, Ye Ke and Runguan Zhou
Abstract
Purpose
Fire smoke detection in petrochemical plants can prevent fires and ensure production safety and life safety. The purpose of this paper is to solve the problem of missed detections and false detections in flame smoke detection against complex factory backgrounds.
Design/methodology/approach
This paper presents a flame smoke detection algorithm based on YOLOv5. The complete intersection-over-union (CIoU) target regression loss function is used to reduce missed detections and false detections and improve the model's detection performance. An improved activation function avoids gradient disappearance while maintaining the high real-time performance of the algorithm. Data enhancement technology is used to strengthen the network's feature extraction and improve the model's accuracy for small target detection.
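For reference, the standard published CIoU formulation augments IoU with a center-distance penalty and an aspect-ratio consistency term; the sketch below implements that textbook form in plain Python, without claiming to match the paper's exact variant. Boxes are (x1, y1, x2, y2) with positive width and height.

```python
# Standard CIoU loss: 1 - IoU + center-distance term + aspect-ratio term.
import math

def ciou_loss(a, b):
    """CIoU loss between two boxes; 0 for identical boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    iou = inter / (area(a) + area(b) - inter)
    # squared distance between the two box centers
    rho2 = ((a[0] + a[2]) - (b[0] + b[2])) ** 2 / 4 \
         + ((a[1] + a[3]) - (b[1] + b[3])) ** 2 / 4
    # squared diagonal of the smallest enclosing box
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    c2 = cw ** 2 + ch ** 2
    # aspect-ratio consistency term v and its trade-off weight alpha
    v = (4 / math.pi ** 2) * (math.atan((a[2] - a[0]) / (a[3] - a[1]))
                              - math.atan((b[2] - b[0]) / (b[3] - b[1]))) ** 2
    alpha = v / (1 - iou + v) if v else 0.0
    return 1 - iou + rho2 / c2 + alpha * v
```

Unlike plain IoU loss, CIoU still yields a useful gradient when boxes do not overlap, because the center-distance term keeps shrinking as the prediction moves toward the ground truth.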
Findings
Based on the actual characteristics of flame and smoke, the loss function and activation function of the YOLOv5 model are improved, and a flame smoke detection algorithm with good generalization performance is established. The improved model is compared with SSD and YOLOv4-tiny; its accuracy reaches 99.5%, achieving more accurate detection of flame smoke, and the improved network model is superior to the existing methods in both running time and accuracy.
Originality/value
Aiming at the particularities of flame smoke detection, an improved detection network model based on YOLOv5 is established. The model is optimized by improving the loss function, combined with an activation function of stronger nonlinear ability to avoid over-fitting of the network. This method helps alleviate missed detections and false detections in flame smoke detection and can be further extended to pedestrian detection and vehicle running recognition.
BinBin Zhang, Fumin Zhang and Xinghua Qu
Abstract
Purpose
Laser-based measurement techniques offer various advantages over conventional techniques: they are non-destructive, non-contact, fast and capable of long measuring distances. In cooperative laser ranging systems, it is crucial to extract the center coordinates of retroreflectors to accomplish automatic measurement. To solve this problem, this paper proposes a novel method.
Design/methodology/approach
We propose a method using Mask R-CNN (Region-based Convolutional Neural Network), with ResNet101 (Residual Network 101) and an FPN (Feature Pyramid Network) as the backbone, to localize retroreflectors and realize automatic recognition against different backgrounds. Compared with two other deep learning algorithms, experiments show that the recognition rate of Mask R-CNN is better, especially for small-scale targets. On this basis, an ellipse detection algorithm is introduced to obtain the ellipses of retroreflectors from the recognized target areas, and the center coordinates of the retroreflectors in the camera coordinate system are obtained by a mathematical method.
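One plausible form of the final geometric step: once a retroreflector's outline has been fitted as a general conic A x² + B x y + C y² + D x + E y + F = 0, the ellipse center follows in closed form from the conic coefficients. This is a sketch of that standard computation, not necessarily the paper's exact "mathematics method".

```python
# Center of a general conic A x^2 + B x y + C y^2 + D x + E y + F = 0,
# obtained by setting both partial derivatives of the conic to zero.

def conic_center(A, B, C, D, E):
    """Center (x0, y0); requires 4*A*C - B*B != 0, which holds for ellipses."""
    den = 4 * A * C - B * B
    x0 = (B * E - 2 * C * D) / den
    y0 = (B * D - 2 * A * E) / den
    return x0, y0
```

For example, the circle (x-1)² + (y-2)² = 4 expands to x² + y² - 2x - 4y + 1 = 0, and the formula recovers its center (1, 2).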
Findings
To verify the accuracy of this method, an experiment was carried out: the distance between two retroreflectors, known to be 1,000.109 mm apart, was measured with a root-mean-square error of 2.596 mm, meeting the requirements for coarse location of retroreflectors.
Research limitations/implications
The research limitations/implications are as follows: (i) as the data set has only 200 pictures, although data augmentation methods such as rotation, mirroring and cropping were used, there is still room for improvement in the generalization ability of the detection; (ii) the ellipse detection algorithm needs to work in relatively dark conditions, as the retroreflector is made of stainless steel, which easily reflects light.
Originality/value
The value of the article lies in obtaining the center coordinates of multiple retroreflectors automatically, even against a cluttered background; recognizing retroreflectors of different sizes, especially small targets; meeting the recognition requirement for multiple targets in a large field of view; and obtaining the 3D centers of targets by monocular model-based vision.
Weishi Chen, Yifeng Huang, Xianfeng Lu and Jie Zhang
Abstract
Purpose
This paper aims to review the critical technology development of avian radar system at airports.
Design/methodology/approach
After discussing the origin of avian radar technology, the target characteristics of flying birds are analyzed, including echo amplitude, flight speed, flight height, trajectory and micro-Doppler signature. Four typical airport avian radar systems, Merlin, Accipiter, Robin and CAST, are introduced. The performance of modules such as the antenna, target detection and tracking, target recognition and classification, and analysis of bird information together determines the detection ability of an avian radar. The performance and key technologies of the ubiquitous avian radar are summarized and compared with the other systems, and its applications, deployment modes, advantages and disadvantages are introduced and analyzed.
Findings
The ubiquitous avian radar achieves long-time integration of target echoes, which greatly improves the detection and classification of bird or drone targets, even under strong background clutter at airports. In addition, from the big data on bird activity accumulated by the avian radar, the rules of bird activity around the airport can be mined to guide bird-avoidance work.
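The gain from long-time integration can be illustrated with a toy calculation: summing N phase-aligned complex echo samples grows the signal amplitude by a factor of N (signal power by N²), while uncorrelated noise power grows only by N, for a net SNR gain of N. The pulse count and amplitudes below are arbitrary assumptions.

```python
# Toy illustration of coherent (long-time) integration of radar echoes.

def coherent_integration(echoes):
    """Sum complex echo samples that share a common phase."""
    return sum(echoes)

# Sixteen identical unit-amplitude echoes integrate to amplitude 16,
# while sixteen samples of alternating phase (a stand-in for
# uncorrelated clutter) largely cancel under the same sum.
pulses = [complex(1.0, 0.0)] * 16
clutter = [complex((-1) ** k, 0.0) for k in range(16)]
```

In a real system the target's motion must be compensated so its echoes stay phase-aligned over the integration window, which is where the long-time integration becomes technically demanding.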
Originality/value
This paper presents a novel avian radar system based on ubiquitous digital radar technology. The authors’ experience has confirmed that this system can be effective for airport bird strike prevention and management. In the future, the avian radar system will see continued improvement in both software and hardware, as the system is designed to be easily extensible.
Charlie D. Frowd, David White, Richard I. Kemp, Rob Jenkins, Kamran Nawaz and Kate Herold
Abstract
Purpose
Research suggests that memory for unfamiliar faces is pictorial in nature, with recognition negatively affected by changes to image-specific information such as head pose, lighting and facial expression. Further, within-person variation causes some images to resemble a subject more than others. Here, the purpose of this paper is to explore the impact of target-image choice on face construction using a modern evolving type of composite system, EvoFIT.
Design/methodology/approach
Participants saw an unfamiliar target identity and then created a single composite of it the following day with EvoFIT, by repeatedly selecting from arrays of faces and “breeding” the selections to “evolve” a face. Targets were images that had been previously categorised as low, medium or high likeness, or a face prototype comprising averaged photographs of the same individual.
Findings
Identification of composites of low likeness targets was inferior but increased as a significant linear trend from low to medium to high likeness. Also, identification scores decreased when targets changed by pose and expression, but not by lighting. Similarly, composite identification from prototypes was more accurate than those from low likeness targets, providing some support that image averages generally produce more robust memory traces.
Practical implications
The results emphasise the potential importance of matching a target's pose and expression at face construction, and of obtaining image-specific information for the construction of facial-composite images, a result that should be useful to developers and researchers of composite software.
Originality/value
This current project is the first of its kind to formally explore the potential impact of pictorial properties of a target face on identifiability of faces created from memory. The design followed forensic practices as far as is practicable, to allow good generalisation of results.
Xiufeng Zhang, Jitao Dai, Xia Li, Huizi Li, Huiqun Fu, Guoxin Pan, Ning Zhang, Rong Yang and Jianguang Xu
Abstract
Purpose
This paper aims to develop a signal acquisition system for surface electromyography (sEMG) and to use the characteristics of the sEMG signal to infer action patterns.
Design/methodology/approach
This paper proposes a fusion method combining the coefficients of an AR model with wavelet coefficients, which improves the recognition rate of the target action. To overcome the slow convergence and local optima of the standard BP network, the study presents a BP algorithm combined with the LM and PSO algorithms, which improves the convergence speed and the recognition rate of the target action.
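One feature channel in such a fusion, the AR-model coefficients, can be estimated from an sEMG window via the Yule-Walker equations. The sketch below shows the order-1 case for brevity; the paper's actual model order and estimator are not stated, so treat this purely as an illustration of the idea.

```python
# Yule-Walker AR(1) coefficient estimate from a signal window (illustrative).

def autocorr(x, lag):
    """Biased sample autocorrelation of x at the given lag."""
    n = len(x)
    return sum(x[i] * x[i + lag] for i in range(n - lag)) / n

def ar1_coefficient(x):
    """Solve the order-1 Yule-Walker equation r(1) = a1 * r(0) for a1."""
    return autocorr(x, 1) / autocorr(x, 0)
```

For a window dominated by an AR(1) process x[n] ≈ a1·x[n-1], the estimate recovers a1; higher orders stack the same autocorrelations into the Toeplitz Yule-Walker system and are typically solved by Levinson-Durbin recursion.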
Findings
Experiments verify the effectiveness of the system from two aspects: the target motion recognition rate and the corresponding reaction speed of the robotic system.
Originality/value
The study developed a signal acquisition system for sEMG and used the characteristics of the sEMG signal to infer action patterns. Myoelectric integral values are used to determine the starting point and end point of the target movement, which is more effective than using a single sample-point amplitude method.
Mamoru Minami, Julien Agbanhan and Toshiyuki Asakura
Abstract
This paper presents the real‐time visual servoing of a manipulator and its tracking strategy of a fish, by employing a genetic algorithm (GA) and the unprocessed gray‐scale image termed here as “raw‐image”. The raw‐image is employed to shorten the control period, since it has more tolerance of contrast variations occurring within an object, and between one input image and the next one. GA is employed in a method called 1‐step‐GA evolution. In this way, for every generational step of the GA process, the found results, which express the deviation of the target in the camera frame, are output for control purposes. These results are then used to determine the control inputs of the PD‐type controller. Our proposed GA‐based visual servoing has been implemented in a real system, and the results have shown its effectiveness by successfully tracking a moving target fish.
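The "1-step-GA" idea, running a single GA generation per control period and immediately emitting the current best individual as the target-position estimate, can be sketched as below. The representation (2D positions), the elitist selection scheme and the mutation range are toy assumptions, not the authors' implementation.

```python
# One GA generation per control period: evaluate, keep the elite, mutate.
import random

def ga_step(population, fitness, rng, mutation=0.5):
    """Run one generation; return (next_population, best_individual).
    population: list of (x, y) candidate target positions."""
    scored = sorted(population, key=fitness, reverse=True)
    best = scored[0]                      # elite survives unchanged
    half = scored[: len(scored) // 2]     # fitter half breeds
    children = [(x + rng.uniform(-mutation, mutation),
                 y + rng.uniform(-mutation, mutation)) for x, y in half]
    nxt = [best] + children + half[: len(population) - 1 - len(children)]
    return nxt, best
```

In a servoing loop, `best` would be read out every generation as the current deviation of the target in the camera frame and fed to the PD-type controller, so the estimate improves continuously while control never waits for full GA convergence.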
Fan Wen, Zhenshen Qu and Changhong Wang
Abstract
Purpose
The purpose of this paper is to describe how, in order to fulfil specific missions in special environments without human participation, a multi-robot object tracking and docking system is designed based on networked control frames.
Design/methodology/approach
In the process of target recognition and tracking, the tracking robot obtains the target robot's position and pose information by means of multiple sensors and tracks the target robot using a data fusion algorithm that accounts for the network delay. In the docking phase, the exterior parameters of the CCD camera installed on the tracking robot can be calculated in phase by recognizing a coded target placed on the target robot. Finally, the relative position and pose parameters between the tracking robot and the target robot can be derived using the coordinate rotation parameters.
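The final coordinate-rotation step amounts to mapping a point expressed in the camera frame into the tracking robot's frame with a rotation and a translation. The planar sketch below assumes a 2D pose (yaw plus offset) for clarity; the angle and mounting-offset values are illustrative, not system parameters from the paper.

```python
# Map a camera-frame point into the robot frame: rotate by the camera yaw,
# then translate by the camera's mounting offset in the robot frame.
import math

def camera_to_robot(point, yaw, camera_offset):
    """Transform a camera-frame (x, y) point to robot-frame coordinates."""
    x, y = point
    c, s = math.cos(yaw), math.sin(yaw)
    rx = c * x - s * y + camera_offset[0]
    ry = s * x + c * y + camera_offset[1]
    return rx, ry
```

The full 3D case replaces the 2×2 rotation with a 3×3 rotation matrix recovered from the camera's exterior parameters, but the composition (rotate, then translate) is the same.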
Findings
The experimental results indicate that the relative position measurement error is less than 1.5 percent and the relative pose measurement error less than 1° within 1.5–10 m. The results show that the system can accomplish object tracking and docking missions accurately and in a timely manner.
Research limitations/implications
This paper is devoted to multi‐robot object tracking and docking systems.
Practical implications
The main applications are in the exploration in the seabed, consignment in the workshop, formation of spacecrafts, docking of spacecrafts, and so on.
Originality/value
The system can accomplish object tracking and docking missions accurately, and it is reliable, real-time and robust. This will help developers and researchers enhance their own systems.