Search results

1 – 10 of 134
Article
Publication date: 28 June 2022

Akhil Kumar

Abstract

Purpose

This work aims to present a deep learning model for face mask detection in surveillance environments such as automatic teller machines (ATMs), banks, etc. to identify persons wearing face masks. In surveillance environments, complete visibility of the face area is a requirement, and criminals and law offenders commit crimes while hiding their faces behind face masks. The face mask detector model proposed in this work can be integrated with surveillance cameras in autonomous surveillance environments as a tool to identify and catch law offenders and criminals.

Design/methodology/approach

The proposed face mask detector is developed by integrating a ResNet34 (residual network) feature extractor on top of three You Only Look Once (YOLO) detection layers, together with a spatial pyramid pooling (SPP) layer to extract a rich and dense feature map. Furthermore, at training time, data augmentation operations such as Mosaic and MixUp are applied to the feature extraction network so that it is trained on images of varying complexity. The proposed detector is trained and tested on a custom face mask detection dataset consisting of 52,635 images. For validation, comparisons are provided against YOLO v1, v2, tiny YOLO v1, v2, v3 and v4 and other benchmark work in the literature, using performance metrics such as precision, recall, F1 score and mean average precision (mAP) for the overall dataset and average precision (AP) for each class of the dataset.
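
As a rough illustration of the SPP idea mentioned above, a minimal PyTorch sketch of a spatial pyramid pooling block is shown below. This is an illustration only, not the authors' implementation; the 5/9/13 pooling kernel sizes follow the common YOLO-SPP convention and are an assumption here.

```python
import torch
import torch.nn as nn

class SPP(nn.Module):
    """Spatial pyramid pooling block: concatenates the input feature map
    with max-pooled versions of itself at several scales, producing a
    richer, denser feature map, as in YOLO-style detectors. Kernel sizes
    are the conventional 5/9/13 choice, assumed here."""
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        # Output channels = in_channels * (1 + len(kernel_sizes))
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

# Example: an SPP block applied to a dummy ResNet34-style feature map
features = torch.randn(1, 512, 13, 13)
spp_out = SPP()(features)   # shape: (1, 2048, 13, 13)
```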

Findings

The proposed face mask detector achieved 4.75–9.75 per cent higher detection accuracy in terms of mAP, 5–31 per cent higher AP for detection of faces with masks and, specifically, 2–30 per cent higher AP for detection of face masks on the face region compared with the tested baseline variants of YOLO. Furthermore, the use of the ResNet34 feature extractor and the SPP layer in the proposed detection model reduced both the training time and the detection time. The proposed face mask detection model can perform detection over an image in 0.45 s, which is 0.15–0.2 s less than the other tested YOLO variants, making the proposed model the faster detector.

Research limitations/implications

The proposed face mask detector model can be utilized as a tool to detect persons with face masks who are a potential threat to the automatic surveillance environments such as ATMs, banks, airport security checks, etc. The other research implication of the proposed work is that it can be trained and tested for other object detection problems such as cancer detection in images, fish species detection, vehicle detection, etc.

Practical implications

The proposed face mask detector can be integrated with automatic surveillance systems and used as a tool to detect persons with face masks who are potential threats to ATMs, banks, etc. In the present times of COVID-19, it can also detect whether people in public areas are following the COVID-appropriate behavior of wearing face masks.

Originality/value

The novelty of this work lies in the use of the ResNet34 feature extractor with YOLO detection layers, which makes the proposed model a compact and powerful convolutional neural-network-based face mask detector. Furthermore, the SPP layer has been applied to the ResNet34 feature extractor to enable it to extract a rich and dense feature map. The other novelty of the present work is the implementation of Mosaic and MixUp data augmentation in the training network, which provided the feature extractor with three times as many images of varying complexities and orientations and further aided in achieving higher detection accuracy. The proposed model is novel in extracting rich features, performing augmentation at training time and achieving high detection accuracy while maintaining detection speed.

Details

Data Technologies and Applications, vol. 57 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 11 April 2022

Xinfa Shi, Ce Cui, Shizhong He, Xiaopeng Xie, Yuhang Sun and Chudong Qin

Abstract

Purpose

The purpose of this paper is to identify smaller wear particles and improve the calculation speed, identify more abrasive particles and promote industrial applications.

Design/methodology/approach

This paper studies a new intelligent recognition method for equipment wear debris based on the YOLO V5S model released in June 2020. Nearly 800 ferrography images containing about 5,000 wear debris particles of 23 types were used to train and test the model. The new lightweight approach recognizes wear debris rapidly and automatically and also supports wear debris recognition in the field of online wear monitoring.
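
For context, inference with a custom-trained YOLOv5s checkpoint is commonly run through the public ultralytics/yolov5 torch.hub interface, roughly as in the sketch below. The checkpoint and image file names are hypothetical placeholders, not artifacts of this paper.

```python
import torch

# Load custom-trained YOLOv5s weights via the public torch.hub entry point.
# 'wear_debris_best.pt' is a hypothetical checkpoint name.
model = torch.hub.load('ultralytics/yolov5', 'custom', path='wear_debris_best.pt')

# Run inference on a ferrography image (hypothetical path).
results = model('ferrography_sample.jpg')
results.print()                        # summary of detections per class
detections = results.pandas().xyxy[0]  # bounding boxes as a DataFrame
print(detections[['name', 'confidence']])
```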

Findings

An intelligent recognition method for wear debris in ferrography images based on the YOLO V5S model was designed. After training, the GIoU loss of the model converged steadily at about 0.02. The overall precision and recall reached 0.4 and 0.5, respectively. The overall mAP across the types of wear debris was 40.5, close to the officially reported level of YOLO V5S on the MS COCO benchmark, confirming the practicality of the model. The intelligent recognition method based on the YOLO V5S model effectively reduces sensitivity to wear debris size and recognizes wear debris well across different sizes and scales. Compared with earlier YOLO versions, Mask R-CNN and other algorithms, the YOLO V5S-based method shows advantages in recognition quality, operation speed and the size of its weight files. It also provides a new means of accurately recognizing wear debris images collected by online and stand-alone ferrography analysis devices.

Originality/value

To the best of the authors’ knowledge, the intelligent identification of wear debris based on the YOLO V5S network is proposed for the first time, and a large number of wear debris images are verified and applied.

Details

Industrial Lubrication and Tribology, vol. 74 no. 5
Type: Research Article
ISSN: 0036-8792

Article
Publication date: 1 June 2023

Johnny Kwok Wai Wong, Fateme Bameri, Alireza Ahmadian Fard Fini and Mojtaba Maghrebi

Abstract

Purpose

Accurate and rapid tracking and counting of building materials are crucial in managing on-site construction processes and evaluating their progress. Such processes are typically conducted by visual inspection, making them time-consuming and error prone. This paper aims to propose a video-based deep-learning approach to the automated detection and counting of building materials.

Design/methodology/approach

A framework for accurately counting building materials at indoor construction sites with low light levels was developed using state-of-the-art deep learning methods. An existing object-detection model, the You Only Look Once version 4 (YOLO v4) algorithm, was adapted to achieve rapid convergence and accurate detection of materials and site operatives. Then, DenseNet was deployed to recognise these objects. Finally, a material-counting module based on morphology operations and the Hough transform was applied to automatically count stacks of building materials.
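
As an informal illustration of the morphology-plus-Hough counting step, the sketch below estimates the number of tiles in a stack by detecting the near-horizontal seams between them. The seam heuristic and all thresholds are illustrative assumptions, not the authors' tuned values.

```python
import cv2
import numpy as np

def count_stacked_tiles(crop_bgr):
    """Rough sketch: count a stack of tiles by finding near-horizontal
    seam lines with morphology + probabilistic Hough transform."""
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    # Morphological closing joins broken edge fragments along each seam.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 1))
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
    lines = cv2.HoughLinesP(closed, 1, np.pi / 180, threshold=80,
                            minLineLength=crop_bgr.shape[1] // 2,
                            maxLineGap=10)
    if lines is None:
        return 0
    # Keep near-horizontal lines, then cluster by vertical position so
    # each seam is counted once; N seams implies roughly N + 1 tiles.
    ys = sorted(min(y1, y2) for x1, y1, x2, y2 in lines[:, 0]
                if abs(y2 - y1) < 5)
    seams = [y for i, y in enumerate(ys) if i == 0 or y - ys[i - 1] > 8]
    return len(seams) + 1 if seams else 0
```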

Findings

The proposed approach was tested by counting site operatives and stacks of elevated floor tiles in video footage from a real indoor construction site. The proposed YOLO v4 object-detection system provided higher average accuracy within a shorter time than the traditional YOLO v4 approach.

Originality/value

The proposed framework makes it feasible to separately monitor stockpiled, installed and waste materials in low-light construction environments. The improved YOLO v4 detection method is superior to the current YOLO v4 approach and advances the existing object detection algorithm. This framework can potentially reduce the time required to track construction progress and count materials, thereby increasing the efficiency of work-in-progress evaluation. It also exhibits great potential for developing a more reliable system for monitoring construction materials and activities.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 31 October 2023

Yangze Liang and Zhao Xu

Abstract

Purpose

Monitoring of the quality of precast concrete (PC) components is crucial for the success of prefabricated construction projects. Currently, quality monitoring of PC components during the construction phase is predominantly done manually, resulting in low efficiency and hindering the progress of intelligent construction. This paper presents an intelligent inspection method for assessing the appearance quality of PC components, utilizing an enhanced You Only Look Once (YOLO) model and multi-source data. The aim of this research is to achieve automated management of the appearance quality of precast components in the prefabricated construction process through digital means.

Design/methodology/approach

The paper begins by establishing an improved YOLO model and an image dataset for evaluating appearance quality. Through object detection in the images, a preliminary and efficient assessment of the precast components' appearance quality is achieved. Moreover, the detection results are mapped onto the point cloud for high-precision quality inspection. In the case of precast components with quality defects, precise quality inspection is conducted by combining the three-dimensional model data obtained from forward design conversion with the captured point cloud data through registration. Additionally, the paper proposes a framework for an automated inspection platform dedicated to assessing appearance quality in prefabricated buildings, encompassing the platform's hardware network.
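
For intuition, a minimal registration-and-deviation check in the spirit of the point cloud step above could look like the following sketch using the Open3D library. The file names, the point-to-point ICP variant and the 5 mm correspondence threshold are assumptions, not details from the paper.

```python
import open3d as o3d

# Align the as-designed component point cloud (sampled from the forward
# design / 3D model) with the captured scan, then measure deviations.
design = o3d.io.read_point_cloud("design_component.ply")
scan = o3d.io.read_point_cloud("scanned_component.ply")

result = o3d.pipelines.registration.registration_icp(
    scan, design, max_correspondence_distance=0.005,
    estimation_method=o3d.pipelines.registration
        .TransformationEstimationPointToPoint())
scan.transform(result.transformation)

# Per-point distance to the design surface approximates surface quality.
distances = scan.compute_point_cloud_distance(design)
print("max deviation (m):", max(distances))
```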

Findings

The improved YOLO model achieved a best mean average precision of 85.02% on the VOC2007 dataset, surpassing the performance of most similar models. After targeted training, the model exhibits excellent recognition capabilities for the four common appearance quality defects. When mapped onto the point cloud, the accuracy of quality inspection based on point cloud data and forward design is within 0.1 mm. The appearance quality inspection platform enables feedback and optimization of quality issues.

Originality/value

The proposed method in this study enables high-precision, visualized and automated detection of the appearance quality of PC components. It effectively meets the demand for quality inspection of precast components on construction sites of prefabricated buildings, providing technological support for the development of intelligent construction. The design of the appearance quality inspection platform's logic and framework facilitates the integration of the method, laying the foundation for efficient quality management in the future.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 20 April 2023

Vishva Payghode, Ayush Goyal, Anupama Bhan, Sailesh Suryanarayan Iyer and Ashwani Kumar Dubey

Abstract

Purpose

This paper aims to implement and extend the You Only Look Once (YOLO) algorithm for detection of objects and activities. The advantage of YOLO is that it runs a neural network only once to detect the objects in an image, which is why it is powerful and fast. Cameras are found at many different crossroads and locations, but processing the video feed through an object detection algorithm makes it possible to determine and track what is captured. Video surveillance has many applications, such as car tracking and tracking of people for crime prevention. This paper provides an exhaustive comparison between existing methods and the proposed method, which is found to have the highest object detection accuracy.

Design/methodology/approach

The goal of this research is to develop a deep learning framework that automates the analysis of video footage through object detection in images. The framework processes video feeds or image frames from CCTV, a webcam or DroidCam, which allows the camera in a mobile phone to be used as a webcam for a laptop. The object detection algorithm, with its model trained on a large data set of images, loads each input image, processes it and determines the categories of the matching objects it finds. As a proof of concept, this research demonstrates the algorithm on images of several different objects. The research implements and extends the YOLO algorithm, which runs a neural network only once to detect the objects in an image and is therefore powerful and fast; applied to the feeds from cameras at crossroads and other locations, it enables determining and tracking what is captured, with surveillance applications such as car tracking and person tracking for crime prevention. Finally, the implemented algorithm with the proposed methodology is compared against several prior existing methods in the literature and was found to have the highest accuracy for object detection and activity recognition.
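
As a rough sketch of this kind of pipeline (not the authors' code), a YOLOv3 detector can be run on a webcam or DroidCam feed with OpenCV's DNN module. The .cfg/.weights/.names files referenced are the standard public Darknet YOLOv3 release, and the thresholds are illustrative.

```python
import cv2

# Load the public Darknet YOLOv3 config/weights (placeholder paths).
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)
classes = open("coco.names").read().splitlines()

cap = cv2.VideoCapture(0)   # 0 = default webcam (or a DroidCam feed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5,
                                            nmsThreshold=0.4)
    for cid, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, f"{classes[int(cid)]} {float(score):.2f}",
                    (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5,
                    (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == 27:   # Esc to quit
        break
cap.release()
```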

Findings

The results indicate that the proposed deep learning-based model can be implemented in real time for object detection and activity recognition. The added features of car crash detection, fall detection and social distancing detection can be used to implement a real-time video surveillance system that helps save lives and protect people. Such a system could be installed at street and traffic cameras and in CCTV systems. When it detects a car crash or a serious human or pedestrian fall with injury, it can be programmed to send automatic messages to the nearest local police, emergency and fire stations. When it detects a social distancing violation, it can be programmed to inform the local authorities or sound an alarm with a warning message alerting the public to maintain their distance and avoid spreading aerosol particles that may transmit viruses, including the COVID-19 virus.

Originality/value

This paper proposes an improved and augmented version of the YOLOv3 model, extended to perform activity recognition such as car crash detection, human fall detection and social distancing detection. The proposed model is based on a deep learning convolutional neural network for detecting objects in images and is trained on the widely used, publicly available Common Objects in Context (COCO) data set. Being an extension of YOLO, the proposed model can be implemented for real-time object and activity recognition. It achieved higher accuracies for both large-scale and all-scale object detection, exceeded all the other compared methods in extending object detection to activity recognition and obtained the highest accuracy for car crash detection, fall detection and social distancing detection.

Details

International Journal of Web Information Systems, vol. 19 no. 3/4
Type: Research Article
ISSN: 1744-0084

Abstract

Subject Area

Business Ethics, Corporate Social Responsibility.

Study Level

This case is suitable for use at advanced undergraduate and MBA/MSc levels.

Case Overview

This case demonstrates the dilemma of a team of students who initiated a CSR project under the supervision of their Business Ethics, Responsibility, and Sustainability (BERS) course lecturer, Dr Qanitah, at Azman Hashim International Business School, UTM. The team faced challenges in securing sufficient sponsorship from the outside parties involved. To create awareness of CSR issues among the general public, the team came up with a project plan and named it You Only Live Once (YOLO). Two weeks before the YOLO project, one of the main sponsors withdrew its agreement to sponsor the event. The lack of sufficient funding could contribute to the failure of the YOLO project, and Dr Qanitah and the team were in a dilemma over how to sort out this issue.

Expected Learning Outcomes

By utilizing this case, the students will be able to:

  • understand the need for undertaking CSR initiatives;

  • be exposed to the obstacles faced by organizers with regard to the sudden withdrawal of sponsorships; and

  • understand the importance of building awareness of CSR among the general public.

Details

Green Behavior and Corporate Social Responsibility in Asia
Type: Book
ISBN: 978-1-78756-684-2

Article
Publication date: 13 July 2021

Ruoxin Xiong and Pingbo Tang

Abstract

Purpose

Automated dust monitoring in workplaces helps provide timely alerts to over-exposed workers and effective mitigation measures for proactive dust control. However, the cluttered nature of construction sites makes it practically challenging to obtain enough high-quality images in the real world. This study aims to establish a framework that overcomes the lack of sufficient imagery data (the "data-hungry" problem) for training computer vision algorithms to monitor construction dust.

Design/methodology/approach

This study develops a synthetic image generation method that incorporates virtual environments of construction dust for producing training samples. Three state-of-the-art object detection algorithms, including Faster R-CNN, You Only Look Once (YOLO) and the single shot detector (SSD), are trained using solely synthetic images. Finally, this research provides a comparative analysis of the object detection algorithms for real-world dust monitoring in terms of accuracy and computational efficiency.
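
To illustrate the general idea of generating synthetic training samples, the hypothetical sketch below composites a pre-rendered dust image (with an alpha channel) onto a real site photo and writes a YOLO-format label. The paper itself uses virtual environments; the file names and single-class label here are invented for illustration.

```python
from PIL import Image
import random

background = Image.open("site_background.jpg").convert("RGB")
dust = Image.open("rendered_dust_rgba.png").convert("RGBA")

w, h = background.size
dw, dh = dust.size
x = random.randint(0, w - dw)
y = random.randint(0, h - dh)
background.paste(dust, (x, y), mask=dust)   # alpha-blend the plume
background.save("synthetic_0001.jpg")

# YOLO label: class x_center y_center width height (all normalized)
label = (f"0 {(x + dw / 2) / w:.6f} {(y + dh / 2) / h:.6f} "
         f"{dw / w:.6f} {dh / h:.6f}")
with open("synthetic_0001.txt", "w") as f:
    f.write(label)
```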

Findings

This study creates a construction dust emission (CDE) dataset consisting of 3,860 synthetic dust images as the training dataset and 1,015 real-world images as the testing dataset. Among the three object detection models, YOLO v3 achieves the best performance, with a 0.93 F1 score at 31.44 fps. The experimental results indicate that dust detection algorithms trained with only synthetic images can achieve acceptable performance on real-world images.

Originality/value

This study provides insights into two questions: (1) how synthetic images could help train dust detection models to overcome data-hungry problems and (2) how well state-of-the-art deep learning algorithms can detect nonrigid construction dust.

Details

Smart and Sustainable Built Environment, vol. 10 no. 3
Type: Research Article
ISSN: 2046-6099

Content available
Article
Publication date: 6 November 2009

Details

Pigment & Resin Technology, vol. 38 no. 6
Type: Research Article
ISSN: 0369-9420

Article
Publication date: 8 September 2023

Tolga Özer and Ömer Türkmen

Abstract

Purpose

This paper aims to design an AI-based drone that can facilitate the complicated and time-intensive control process for detecting healthy and defective solar panels. Today, the use of solar panels is becoming widespread, and control problems are increasing. Physical control of the solar panels is critical in obtaining electrical power. Controlling solar panel power plants and rooftop panel applications installed over large areas can be difficult and time-consuming. Therefore, this paper designs a system aimed at panel detection.

Design/methodology/approach

This paper designed a low-cost AI-based unmanned aerial vehicle to reduce the difficulty of the control process. Convolutional neural network-based AI models were developed to classify solar panels as damaged, dusty or normal. Two approaches to the solar panel detection model were adopted: Approach 1 and Approach 2.

Findings

In Approach 1, the training was conducted with the YOLOv5, YOLOv6 and YOLOv8 models; the best F1 score was 81%, obtained at 150 epochs with YOLOv5m. In Approach 2, the proposed method, best F1 score and mAP values of 87% and 89%, respectively, were obtained with the YOLOv5s model at 100 epochs. The best models from Approaches 1 and 2 were used with the developed AI-based drone in the real-time test application.

Originality/value

The AI-based low-cost solar panel detection drone was developed with an original data set of 1,100 images. A detailed comparative analysis of the YOLOv5, YOLOv6 and YOLOv8 models with regard to performance metrics was carried out. Gaussian and salt-and-pepper noise addition and wavelet-transform noise removal preprocessing techniques were applied to the created data set under the proposed method. The proposed method demonstrated remarkable performance in panel detection applications.
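
For illustration, the two noise-addition preprocessing steps named above could be implemented roughly as follows. The noise levels and file names are assumed rather than taken from the paper, and the wavelet-transform denoising step is omitted here.

```python
import numpy as np
import cv2

def add_gaussian_noise(img, sigma=15):
    # Additive zero-mean Gaussian noise, clipped back to valid pixel range.
    noise = np.random.normal(0, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def add_salt_pepper_noise(img, fraction=0.02):
    # Randomly force a small fraction of pixels to black or white.
    out = img.copy()
    mask = np.random.random(img.shape[:2])
    out[mask < fraction / 2] = 0           # pepper
    out[mask > 1 - fraction / 2] = 255     # salt
    return out

panel = cv2.imread("solar_panel.jpg")      # placeholder image path
noisy = add_salt_pepper_noise(add_gaussian_noise(panel))
cv2.imwrite("solar_panel_noisy.jpg", noisy)
```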

Details

Robotic Intelligence and Automation, vol. 43 no. 6
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 4 August 2023

Can Uzun and Raşit Eren Cangür

Abstract

Purpose

This study presents an ontological approach to assess the architectural outputs of generative adversarial networks. This paper aims to assess the performance of the generative adversarial network in representing building knowledge.

Design/methodology/approach

The proposed ontological assessment consists of five steps: creating an architectural data set; developing an ontology for the architectural data set; training the You Only Look Once (YOLO) object detector with labels from the proposed ontology; training the StyleGAN algorithm with the images in the data set; and finally, detecting the ontological labels and calculating the ontological relations of StyleGAN-generated pixel-based architectural images. The authors propose and calculate ontological identity and ontological inclusion metrics to assess the StyleGAN-generated ontological labels. This study uses 300 bay window images as the architectural data set for the ontological assessment experiments.
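
Purely as a hypothetical illustration of the final step, an inclusion-style check over detected labels might be computed as a set overlap against ontology-defined part-of relations. The paper's actual definitions of ontological identity and inclusion may differ, and the bay-window ontology fragment below is invented.

```python
# part-of relations: whole -> sub-element labels the ontology expects
ONTOLOGY_PARTS = {
    "bay_window": {"window_frame", "glass_pane", "sill"},
}

def ontological_inclusion(detected_labels, whole="bay_window"):
    """Fraction of the ontology's expected sub-elements of `whole`
    that the object detector found in one generated image."""
    required = ONTOLOGY_PARTS[whole]
    return len(required & set(detected_labels)) / len(required)

# e.g. labels detected on one StyleGAN-generated bay window image:
print(ontological_inclusion({"window_frame", "sill"}))  # 0.666...
```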

Findings

The ontological assessment provides semantic-based queries on StyleGAN-generated architectural images by checking the validity of the building knowledge representation. Moreover, this ontological validity reveals the building element label-specific failure and success rates simultaneously.

Originality/value

This study contributes to the assessment process of the generative adversarial networks through ontological validity checks rather than only conducting pixel-based similarity checks; semantic-based queries can introduce the GAN-generated, pixel-based building elements into the architecture, engineering and construction industry.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175
