Search results

1 – 10 of 50
Article
Publication date: 15 November 2022

Jun Wu, Cheng Huang, Zili Li, Runsheng Li, Guilan Wang and Haiou Zhang


Abstract

Purpose

Wire and arc additive manufacturing (WAAM) is a widely used advanced manufacturing technology. If surface defects that occur during the welding process cannot be detected and repaired in time, they develop into internal defects. To address this problem, this study aims to develop an in situ monitoring system for the welding process with a high-dynamic-range (HDR) melt pool camera.

Design/methodology/approach

An improved you only look once version 3 (YOLOv3) model was proposed for online surface defect detection and classification. In this paper, improvements were mainly made to the bounding box clustering algorithm, the bounding box loss function, the classification loss function and the network structure.
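Bounding box clustering of this kind is typically done by running k-means over ground-truth box widths and heights with an IoU-based distance instead of Euclidean distance. A minimal sketch of that standard technique (illustrative helper names and a deterministic initialization; not the authors' exact algorithm):

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) pairs compared at a shared origin --
    the standard trick for YOLO anchor clustering."""
    inter_w = np.minimum(boxes[:, None, 0], anchors[None, :, 0])
    inter_h = np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    inter = inter_w * inter_h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] \
        + (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100):
    """Cluster ground-truth (w, h) boxes into k anchors using
    distance = 1 - IoU; initialized from area-sorted quantiles
    so the result is deterministic."""
    order = np.argsort(boxes[:, 0] * boxes[:, 1])
    anchors = boxes[order[np.linspace(0, len(boxes) - 1, k).astype(int)]].astype(float)
    for _ in range(iters):
        # assigning to the max-IoU anchor minimizes the 1 - IoU distance
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors
```

The resulting anchors are then written into the model configuration as priors for the detection heads.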

Findings

The results showed that the improved model outperforms the Faster R-CNN (faster regions with convolutional neural network features), single shot multibox detector, RetinaNet and YOLOv3 models, with an mAP of 98.0% and a recognition rate of 59 frames per second. This indicates that the improved YOLOv3 model satisfies the requirements of real-time monitoring in both efficiency and accuracy.

Originality/value

Experimental results show that the improved YOLOv3 model solves the poor-performance problem of traditional defect detection models and other deep learning models, and that the proposed model meets the requirements of WAAM quality monitoring.

Article
Publication date: 9 July 2020

Xin Liu, Junhui Wu, Yiyun Man, Xibao Xu and Jifeng Guo


Abstract

Purpose

With the continuous development of aerospace technology, space exploration missions have been increasing year by year, and higher requirements have been placed on the upper-level rocket. The purpose of this paper is to improve the ability to identify and detect potential targets for the upper-level rocket.

Design/methodology/approach

Aiming at upper-level recognition of space satellites and their core components, this paper proposes a deep learning-based spatial multi-target recognition method that can simultaneously recognize space satellites and core components. First, the implementation framework of spatial multi-target recognition is given. Second, by comparing and analyzing convolutional neural networks, a convolutional neural network model based on YOLOv3 is designed. Finally, seven satellite scale models are constructed based on Systems Tool Kit (STK) and SolidWorks, and multiple targets, such as the nozzle, star sensor and solar panel, are selected as the recognition objects.

Findings

By labeling, training and testing the image data set, the accuracy of the proposed method for spatial multi-target recognition reaches 90.17%, an improvement over the recognition accuracy and rate of the YOLOv1-based model, thereby effectively verifying the correctness of the proposed method.

Research limitations/implications

This paper only recognizes space multi-targets under ideal simulation conditions; it has not fully considered recognition under more complex space lighting environments or under motions such as nutation, precession and roll. In later work, training and detection can be performed on images simulating a more realistic space lighting environment, or on multi-target images taken by an upper-level rocket, to further verify the feasibility of the multi-target recognition algorithm in complex space environments.

Practical implications

The research in this paper validates that the deep learning-based algorithm to recognize multiple targets in the space environment is feasible in terms of accuracy and rate.

Originality/value

The paper sets up an image data set containing six satellite models built in STK and one digital satellite model, built in SolidWorks, that simulates spatial illumination changes and spin, and uses the characteristics of spatial targets (such as rectangles, circles and lines) to provide prior values to the network convolutional layer.

Details

Aircraft Engineering and Aerospace Technology, vol. 92 no. 8
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 20 April 2023

Vishva Payghode, Ayush Goyal, Anupama Bhan, Sailesh Suryanarayan Iyer and Ashwani Kumar Dubey


Abstract

Purpose

This paper aims to implement and extend the You Only Look Once (YOLO) algorithm for the detection of objects and activities. The advantage of YOLO is that it only runs a neural network once to detect the objects in an image, which is why it is powerful and fast. Cameras are found at many different crossroads and locations, and processing the feed through an object detection algorithm makes it possible to determine and track what is captured. Video surveillance has many applications, such as car tracking and the tracking of people for crime prevention. This paper provides an exhaustive comparison between existing methods and the proposed method, which is found to have the highest object detection accuracy.

Design/methodology/approach

The goal of this research is to develop a deep learning framework that automates the analysis of video footage through object detection in images. The framework processes video feeds or image frames from CCTV, a webcam or DroidCam, which allows a mobile phone's camera to be used as a webcam for a laptop. The object detection algorithm, with its model trained on a large data set of images, loads each input image, processes it and determines the categories of the matching objects it finds. As a proof of concept, this research demonstrates the algorithm on images of several different objects. For video surveillance of traffic cameras, this has many applications, such as car tracking and person tracking for crime prevention. The implemented algorithm, with the proposed methodology, is compared against several prior methods in the literature and is found to have the highest accuracy for both object detection and activity recognition.
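The frame-processing loop described above can be sketched with a pluggable `detect` callable standing in for the trained YOLO model (illustrative names, not the authors' code):

```python
def monitor_stream(frames, detect, conf_thresh=0.5):
    """Run an object detector over a stream of frames and record,
    per frame index, the labels of detections above a confidence
    threshold -- the skeleton of a surveillance pipeline.
    `detect(frame)` is assumed to return [(label, confidence), ...]."""
    events = []
    for i, frame in enumerate(frames):
        hits = [label for label, conf in detect(frame) if conf >= conf_thresh]
        if hits:
            events.append((i, hits))
    return events
```

In a real deployment the `frames` iterable would come from a capture device and each event would trigger an alert or be written to a log.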

Findings

The results indicate that the proposed deep learning–based model can be implemented in real time for object detection and activity recognition. The added features of car crash detection, fall detection and social distancing detection can be used to implement a real-time video surveillance system that helps save lives and protect people. Such a system could be installed at street and traffic cameras and in CCTV systems. When the system detects a car crash or a fatal human or pedestrian fall with injury, it can be programmed to send automatic messages to the nearest police, emergency and fire stations. When it detects a social distancing violation, it can be programmed to inform the local authorities or to sound an alarm with a warning message alerting the public to keep their distance and avoid spreading aerosol particles that may transmit viruses, including the COVID-19 virus.

Originality/value

This paper proposes an improved and augmented version of the YOLOv3 model that has been extended to perform activity recognition, such as car crash detection, human fall detection and social distancing detection. The proposed model is based on a deep learning convolutional neural network used to detect objects in images and is trained on the widely used and publicly available Common Objects in Context data set. Being an extension of YOLO, it can be implemented for real-time object and activity recognition. The proposed model achieved higher accuracies for both large-scale and all-scale object detection, exceeded all the other compared methods in extending object detection to activity recognition and gave the highest accuracy for car crash detection, fall detection and social distancing detection.

Details

International Journal of Web Information Systems, vol. 19 no. 3/4
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 8 September 2023

Tolga Özer and Ömer Türkmen


Abstract

Purpose

This paper aims to design an AI-based drone that can facilitate the complicated and time-intensive control process for detecting healthy and defective solar panels. Today, the use of solar panels is becoming widespread, and control problems are increasing. Physical control of the solar panels is critical for obtaining electrical power, yet controlling solar panel power plants and rooftop panel applications installed over large areas can be difficult and time-consuming. Therefore, this paper designs a system aimed at panel detection.

Design/methodology/approach

This paper designed a low-cost AI-based unmanned aerial vehicle to reduce the difficulty of the control process. Convolutional neural network-based AI models were developed to classify solar panels as damaged, dusty or normal. Two approaches to the solar panel detection model were adopted: Approach 1 and Approach 2.

Findings

The training was conducted with the YOLOv5, YOLOv6 and YOLOv8 models in Approach 1, where the best F1 score was 81%, at 150 epochs with YOLOv5m. In Approach 2, the proposed method, the best F1 score and mAP values of 87% and 89% were obtained with the YOLOv5s model at 100 epochs. The best models from Approaches 1 and 2 were used with the developed AI-based drone in a real-time test application.

Originality/value

The AI-based low-cost solar panel detection drone was developed with an original data set of 1,100 images. A detailed comparative analysis of the YOLOv5, YOLOv6 and YOLOv8 models regarding performance metrics was carried out. Gaussian and salt-pepper noise-addition and wavelet-transform noise-removal preprocessing techniques were applied to the created data set under the proposed method, which demonstrated remarkable performance in panel detection applications.
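The salt-and-pepper noise addition mentioned above is a standard augmentation; a minimal sketch of the generic technique (assumed parameter names, not the authors' exact pipeline):

```python
import numpy as np

def add_salt_pepper(img, amount=0.05, seed=0):
    """Flip a fraction `amount` of pixels in an 8-bit grayscale image
    to pure black (pepper) or pure white (salt)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < amount / 2] = 0          # pepper
    out[mask > 1 - amount / 2] = 255    # salt
    return out
```

Training on such corrupted copies (and on denoised ones, e.g. via wavelet thresholding) makes the detector less sensitive to sensor noise in drone footage.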

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 6 June 2023

Nurcan Sarikaya Basturk


Abstract

Purpose

The purpose of this paper is to present a deep ensemble neural network model for the detection of forest fires in aerial vehicle videos.

Design/methodology/approach

The presented deep ensemble model includes four convolutional neural networks (CNNs): a faster region-based CNN (Faster R-CNN), a simple one-stage object detector (RetinaNet) and two different versions of the you only look once (YOLO) model. The presented method generates its output by fusing the outputs of these different deep learning (DL) models.
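A common way to fuse the outputs of several detectors is cross-model non-maximum suppression: pool all boxes, keep the highest-scoring ones and drop overlapping duplicates. The abstract does not specify the paper's fusion rule, so the following is only an illustrative sketch of one such scheme:

```python
def box_iou(a, b):
    """IoU of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def fuse_detections(model_outputs, iou_thresh=0.5):
    """Pool detections from all models, sort by score and greedily
    suppress boxes that overlap an already-kept box (cross-model NMS)."""
    pooled = sorted((d for out in model_outputs for d in out),
                    key=lambda d: d["score"], reverse=True)
    kept = []
    for det in pooled:
        if all(box_iou(det["box"], k["box"]) < iou_thresh for k in kept):
            kept.append(det)
    return kept
```

A fire region found by several models thus survives once, with the best confidence, while spurious single-model detections can be filtered by a vote count if desired.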

Findings

The presented fusing approach significantly improves the detection accuracy of fire incidents in the input data.

Research limitations/implications

The computational complexity of the proposed method which is based on combining four different DL models is relatively higher than that of using each of these models individually. On the other hand, however, the performance of the proposed approach is considerably higher than that of any of the four DL models.

Practical implications

The simulation results show that using an ensemble model is quite useful for the precise detection of forest fires in real time through aerial vehicle videos or images.

Social implications

With this method, forest fires can be detected more efficiently and precisely. Because forests are crucial breathing resources of the earth and a shelter for many living creatures, the social impact of the method can be considered very high.

Originality/value

This study fuses the outputs of different DL models into an ensemble model. Hence, the ensemble model provides more potent and beneficial results than any of the single models.

Details

Aircraft Engineering and Aerospace Technology, vol. 95 no. 8
Type: Research Article
ISSN: 1748-8842


Article
Publication date: 19 March 2024

Cemalettin Akdoğan, Tolga Özer and Yüksel Oğuz


Abstract

Purpose

Nowadays, food problems are likely to arise because of the increasing global population and decreasing arable land. Therefore, it is necessary to increase the yield of agricultural products, and pesticides can be used to improve the yield of agricultural land. This study aims to make the spraying of cherry trees more effective and efficient with the designed artificial intelligence (AI)-based agricultural unmanned aerial vehicle (UAV).

Design/methodology/approach

Two approaches have been adopted for the AI-based detection of cherry trees. In Approach 1, YOLOv5, YOLOv7 and YOLOv8 models are trained for 70, 100 and 150 epochs. In Approach 2, a new method is proposed to improve the performance metrics obtained in Approach 1: Gaussian, wavelet transform (WT) and histogram equalization (HE) preprocessing techniques were applied to the generated data set. The best-performing models from Approach 1 and Approach 2 were used in a real-time test application with the developed agricultural UAV.
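Histogram equalization, one of the preprocessing techniques named here, spreads pixel intensities over the full dynamic range. A minimal textbook sketch for a non-constant 8-bit grayscale image (not the authors' code):

```python
import numpy as np

def equalize_histogram(img):
    """Histogram equalization: map each intensity through the
    normalized cumulative histogram of the image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255),
                  0, 255).astype(np.uint8)
    return lut[img]
```

Applied before training, this kind of contrast normalization makes tree canopies stand out more consistently across lighting conditions.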

Findings

In Approach 1, the best F1 score was 98% in 100 epochs with the YOLOv5s model. In Approach 2, the best F1 score and mAP values were obtained as 98.6% and 98.9% in 150 epochs, with the YOLOv5m model with an improvement of 0.6% in the F1 score. In real-time tests, the AI-based spraying drone system detected and sprayed cherry trees with an accuracy of 66% in Approach 1 and 77% in Approach 2. It was revealed that the use of pesticides could be reduced by 53% and the energy consumption of the spraying system by 47%.

Originality/value

An original data set was created by designing an agricultural drone to detect and spray cherry trees using AI. YOLOv5, YOLOv7 and YOLOv8 models were used to detect and classify cherry trees. The results of the performance metrics of the models are compared. In Approach 2, a method including HE, Gaussian and WT is proposed, and the performance metrics are improved. The effect of the proposed method in a real-time experimental application is thoroughly analyzed.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969


Article
Publication date: 15 July 2021

Nehemia Sugianto, Dian Tjondronegoro, Rosemary Stockdale and Elizabeth Irenne Yuwono


Abstract

Purpose

The paper proposes a privacy-preserving artificial intelligence-enabled video surveillance technology to monitor social distancing in public spaces.

Design/methodology/approach

The paper proposes a new Responsible Artificial Intelligence Implementation Framework to guide the proposed solution's design and development. It defines responsible artificial intelligence criteria that the solution needs to meet and provides checklists to enforce those criteria throughout the process. To preserve data privacy, the proposed system incorporates a federated learning approach, allowing computation to be performed on edge devices to limit the movement of sensitive and identifiable data and to eliminate dependency on cloud computing at a central server.
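Federated learning of this kind typically aggregates client models with federated averaging (FedAvg): each edge device trains locally, and only parameter updates, weighted by local data size, are merged. A minimal sketch of the generic aggregation step (illustrative, not necessarily the paper's exact protocol):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: a size-weighted average of per-client model
    parameters. Raw footage stays on each edge device; only these
    parameter lists are ever shared."""
    total = float(sum(client_sizes))
    avg = [np.zeros_like(w, dtype=float) for w in client_weights[0]]
    for weights, n in zip(client_weights, client_sizes):
        for acc, w in zip(avg, weights):
            acc += (n / total) * w
    return avg
```

The averaged parameters are then broadcast back to the devices for the next local training round.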

Findings

The proposed system is evaluated through a case study of monitoring social distancing at an airport. The results show how the system fully addresses the case study's requirements in terms of its reliability, its usefulness when deployed to the airport's cameras and its compliance with responsible artificial intelligence.

Originality/value

The paper makes three contributions. First, it proposes a real-time social distancing breach detection system on edge that extends from a combination of cutting-edge people detection and tracking algorithms to achieve robust performance. Second, it proposes a design approach to develop responsible artificial intelligence in video surveillance contexts. Third, it presents results and discussion from a comprehensive evaluation in the context of a case study at an airport to demonstrate the proposed system's robust performance and practical usefulness.

Details

Information Technology & People, vol. 37 no. 2
Type: Research Article
ISSN: 0959-3845


Article
Publication date: 11 April 2022

Xinfa Shi, Ce Cui, Shizhong He, Xiaopeng Xie, Yuhang Sun and Chudong Qin


Abstract

Purpose

The purpose of this paper is to identify smaller wear particles, improve the calculation speed, identify more types of abrasive particles and promote industrial applications.

Design/methodology/approach

This paper studies a new intelligent recognition method for equipment wear debris based on the YOLO V5S model released in June 2020. Nearly 800 ferrography pictures covering 23 types of wear debris, with about 5,000 debris particles in total, were used to train and test the model. The new lightweight approach to wear debris recognition can be implemented rapidly and automatically and also provides for the recognition of wear debris in the field of online wear monitoring.

Findings

An intelligent recognition method for wear debris in ferrography images based on the YOLO V5S model was designed. After training, the GIoU loss of the model converged steadily at about 0.02. The overall precision rate and recall rate reached 0.4 and 0.5, respectively. The overall mAP value across the types of wear debris was 40.5, close to the official recognition level of YOLO V5S in the MS COCO competition, and the practicality of the model was confirmed. The intelligent recognition method based on the YOLO V5S model can effectively reduce sensitivity to wear debris size and has a good recognition effect on wear debris of different sizes and at different scales. Compared with earlier YOLO versions, Mask R-CNN and other algorithms, the method has shown its own advantages in terms of wear debris recognition effect, operation speed and the size of weight files. It also provides a new means of implementing accurate recognition of wear debris images collected by online and independent ferrography analysis devices.
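The GIoU value tracked during training extends plain IoU with a penalty based on the smallest enclosing box, so even disjoint boxes yield a useful training signal. A minimal sketch of the metric itself (the GIoU loss is then 1 − GIoU):

```python
def giou(a, b):
    """Generalized IoU for axis-aligned boxes [x1, y1, x2, y2]:
    IoU minus the fraction of the smallest enclosing box not
    covered by the union. Ranges from -1 to 1."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    cx1, cy1 = min(a[0], b[0]), min(a[1], b[1])   # enclosing box
    cx2, cy2 = max(a[2], b[2]), max(a[3], b[3])
    c = (cx2 - cx1) * (cy2 - cy1)
    return inter / union - (c - union) / c
```

A GIoU loss converging near 0.02 therefore means predicted boxes almost coincide with the labeled debris.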

Originality/value

To the best of the authors’ knowledge, the intelligent identification of wear debris based on the YOLO V5S network is proposed for the first time, and a large number of wear debris images are verified and applied.

Details

Industrial Lubrication and Tribology, vol. 74 no. 5
Type: Research Article
ISSN: 0036-8792


Article
Publication date: 17 October 2022

Jiayue Zhao, Yunzhong Cao and Yuanzhi Xiang


Abstract

Purpose

The safety management of construction machines is of primary importance. Traditional construction machine safety monitoring and evaluation methods cannot adapt to the complex construction environment, and monitoring methods based on sensor equipment cost too much. This paper therefore introduces computer vision and deep learning technologies and proposes the YOLOv5-FastPose (YFP) model to realize pose estimation of construction machines by improving the AlphaPose human pose model.

Design/methodology/approach

This model introduces the object detection module YOLOv5m to improve recognition accuracy in detecting construction machines. Meanwhile, to better capture pose characteristics, the FastPose network, with its optimized feature extraction, was introduced into the Single-Machine Pose Estimation Module (SMPE) of AlphaPose. This study used the Alberta Construction Image Dataset (ACID) and the Construction Equipment Poses Dataset (CEPD) to establish the data set for object detection and pose estimation of construction machines, using data augmentation and the Labelme image annotation software, for training and testing the YFP model.
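The YFP pipeline is top-down: a detector localizes each machine, pose estimation runs on the crop and keypoints are mapped back to image coordinates. A minimal sketch with pluggable `detect` and `estimate_pose` callables standing in for YOLOv5m and FastPose (illustrative names only):

```python
def estimate_poses(image, detect, estimate_pose):
    """Top-down pose estimation: detect each machine, run the pose
    model on its crop, then translate keypoints back to full-image
    coordinates. `detect` returns (x1, y1, x2, y2) boxes;
    `estimate_pose` returns crop-relative (x, y) keypoints."""
    poses = []
    for (x1, y1, x2, y2) in detect(image):
        crop = [row[x1:x2] for row in image[y1:y2]]  # crop the detection
        keypoints = estimate_pose(crop)
        poses.append([(x + x1, y + y1) for (x, y) in keypoints])
    return poses
```

Improving either stage independently (a better detector, a better single-machine pose network) improves the end-to-end result, which is the design rationale behind YFP.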

Findings

The experimental results show that the improved model YFP achieves an average normalization error (NE) of 12.94 × 10⁻³, an average Percentage of Correct Keypoints (PCK) of 98.48% and an average Area Under the PCK Curve (AUC) of 37.50 × 10⁻³. Compared with existing methods, this model has higher accuracy in the pose estimation of construction machines.

Originality/value

This study extends and optimizes the human pose estimation model AlphaPose to make it suitable for construction machines, improving the performance of pose estimation for construction machines.

Details

Engineering, Construction and Architectural Management, vol. 31 no. 3
Type: Research Article
ISSN: 0969-9988


Article
Publication date: 21 June 2023

Shikha Singh, Mohina Gandhi, Arpan Kumar Kar and Vinay Anand Tikkiwal


Abstract

Purpose

This study evaluates the effect of the image content posted by business-to-business (B2B) organizations on accelerating social media engagement. It highlights the importance of strategically designing image content for business marketing strategies.

Design/methodology/approach

This study designed a computationally intensive research model based upon the stimulus-organism-response (SOR) theory using 39,139 Facebook posts of 125 organizations selected from Fortune 500 firms. Attributes from images and text were estimated using deep learning models. Subsequently, inferential analysis was established with ordinary least squares regression. Further, machine learning algorithms such as support vector regression, k-nearest neighbors, decision tree and random forest are used to analyze the significance and robustness of the proposed model for predicting engagement metrics.
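The inferential step, ordinary least squares regression of engagement on the extracted attributes, reduces to a least-squares solve over a design matrix with an intercept. A minimal sketch with a hypothetical feature matrix (illustrative only; not the study's actual variables):

```python
import numpy as np

def ols_fit(X, y):
    """Ordinary least squares with an intercept column, solved via
    least squares on the design matrix."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, coefficients...]

def ols_predict(beta, X):
    """Predicted engagement for new feature rows."""
    return np.column_stack([np.ones(len(X)), X]) @ beta
```

The fitted coefficients support the inferential claims, while the nonparametric models (SVR, k-NN, trees) serve as robustness checks on predictive power.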

Findings

The results indicate that the social media (SM) image content of B2B firms significantly impacts their social media engagement. The visual and linguistic attributes are extracted from the image using deep learning. The distinctive effect of each feature on social media engagement (SME) is empirically verified in this study.

Originality/value

This research presents practical insights formulated by embedding marketing, advertising, image processing and statistical knowledge of SM analytics. The findings of this study provide evidence for the stimulating effect of image content concerning SME. Based on the theoretical implications of this study, marketing and media content practitioners can enhance the efficacy of SM posts in engaging users.

Details

Industrial Management & Data Systems, vol. 123 no. 7
Type: Research Article
ISSN: 0263-5577

