Search results
1 – 10 of over 42,000
Johnny Kwok Wai Wong, Fateme Bameri, Alireza Ahmadian Fard Fini and Mojtaba Maghrebi
Abstract
Purpose
Accurate and rapid tracking and counting of building materials are crucial in managing on-site construction processes and evaluating their progress. Such processes are typically conducted by visual inspection, making them time-consuming and error-prone. This paper aims to propose a video-based deep-learning approach to the automated detection and counting of building materials.
Design/methodology/approach
A framework for accurately counting building materials at indoor construction sites with low light levels was developed using state-of-the-art deep learning methods. An existing object-detection model, the You Only Look Once version 4 (YOLO v4) algorithm, was adapted to achieve rapid convergence and accurate detection of materials and site operatives. Then, DenseNet was deployed to recognise these objects. Finally, a material-counting module based on morphology operations and the Hough transform was applied to automatically count stacks of building materials.
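The counting module described above combines morphology operations with a Hough transform. A minimal sketch of the final counting step follows, assuming the Hough transform has already returned the y-coordinates of horizontal tile-edge lines; the function name and `min_gap` threshold are illustrative, not the authors' implementation:

```python
import numpy as np

def count_stack_layers(line_ys, min_gap=8):
    """Count tile layers in a stack given y-coordinates of horizontal
    edge lines (e.g. from a Hough transform), merging lines that
    belong to the same physical tile edge."""
    ys = np.sort(np.asarray(line_ys))
    if ys.size == 0:
        return 0
    # A new edge starts wherever the gap to the previous line exceeds
    # min_gap pixels; closer lines are duplicate responses to one edge.
    breaks = np.diff(ys) > min_gap
    edges = 1 + int(breaks.sum())   # number of distinct edge lines
    return max(edges - 1, 0)        # n + 1 edges bound n tiles

# Seven raw Hough lines collapse to four distinct edges -> 3 tiles.
print(count_stack_layers([100, 102, 130, 131, 160, 190, 191]))  # -> 3
```

Merging lines closer than `min_gap` pixels compensates for the multiple responses a single tile edge typically produces in a Hough transform.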
Findings
The proposed approach was tested by counting site operatives and stacks of elevated floor tiles in video footage from a real indoor construction site. The proposed YOLO v4 object-detection system provided higher average accuracy within a shorter time than the traditional YOLO v4 approach.
Originality/value
The proposed framework makes it feasible to separately monitor stockpiled, installed and waste materials in low-light construction environments. The improved YOLO v4 detection method is superior to the current YOLO v4 approach and advances the existing object detection algorithm. This framework can potentially reduce the time required to track construction progress and count materials, thereby increasing the efficiency of work-in-progress evaluation. It also exhibits great potential for developing a more reliable system for monitoring construction materials and activities.
Alireza Ahmadian Fard Fini, Mojtaba Maghrebi, Perry John Forsythe and Travis Steven Waller
Abstract
Purpose
Measuring onsite productivity has long been a subject of debate in the construction industry, mainly due to concerns about accuracy, repeatability and unbiasedness. Such characteristics are central to demonstrating the construction speed that can be achieved by adopting new prefabricated systems. Existing productivity measurement methods, however, cannot cost-effectively provide solid and replicable evidence of prefabrication benefits. This research proposes a low-cost automated method for measuring the onsite installation productivity of prefabricated systems.
Design/methodology/approach
Firstly, the captured ultra-wide footage is undistorted by extracting the curvature contours and applying a purpose-developed meta-heuristic algorithm to straighten them. A preprocessing algorithm is then developed to automatically detect and remove the noise caused by vibrations and movements. Because this study aims to measure productivity accurately, the noise-free images are double-checked within a specific time window to ensure that even a tiny error that escaped detection in the previous steps is not amplified through the process. In the next step, the side view provided by the camera is converted to a top view using a spatial transformation method. Finally, the processed images are compared with the site drawings to detect the construction process over time and report the measured productivity.
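The side-to-top-view conversion is a standard planar perspective (homography) transform. The sketch below shows the core computation under the assumption that four point correspondences between the camera view and the floor plan are known; the point values are made up for illustration, and this is a generic textbook method, not the paper's exact algorithm:

```python
import numpy as np

def homography(src, dst):
    """Estimate the 3x3 homography mapping 4 src points to 4 dst
    points via the direct linear transform, as used to convert a
    camera's side view into a top (plan) view."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A (last row of V^T).
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, p):
    """Apply homography H to a 2D point (homogeneous division)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Map a trapezoid seen in the side view onto a square top view.
src = [(100, 200), (300, 200), (350, 400), (50, 400)]
dst = [(0, 0), (200, 0), (200, 200), (0, 200)]
H = homography(src, dst)
print(warp_point(H, (100, 200)))  # maps to roughly (0, 0), the top-view corner
```

With exactly four correspondences the mapping is exact (up to floating-point error); real pipelines would estimate it robustly from many noisy correspondences.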
Findings
The developed algorithms perform nearly real-time productivity computations through exact matching of the actual installation process with the digital design layout. The accuracy and non-interpretive use of the proposed method are demonstrated in the construction of a multistorey cross-laminated timber building.
Originality/value
This study uses footage from an already-installed surveillance camera whose features are unknown; image processing algorithms are then deployed to retrieve accurate installation quantities and cycle times. The algorithms are largely generalisable and versatile enough to be adapted to measure the installation productivity of other prefabricated building systems.
Abstract
Purpose
Automated dust monitoring in workplaces helps provide timely alerts to over-exposed workers and effective mitigation measures for proactive dust control. However, the cluttered nature of construction sites poses a practical challenge to obtaining enough high-quality images in the real world. The study aims to establish a framework that overcomes the challenge of lacking sufficient imagery data (the "data-hungry problem") for training computer vision algorithms to monitor construction dust.
Design/methodology/approach
This study develops a synthetic image generation method that incorporates virtual environments of construction dust for producing training samples. Three state-of-the-art object detection algorithms, including Faster-RCNN, you only look once (YOLO) and the single shot detector (SSD), are trained using solely synthetic images. Finally, this research provides a comparative analysis of the object detection algorithms for real-world dust monitoring regarding accuracy and computational efficiency.
Findings
This study creates a construction dust emission (CDE) dataset consisting of 3,860 synthetic dust images as the training dataset and 1,015 real-world images as the testing dataset. The YOLO-v3 model achieves the best performance with a 0.93 F1 score and 31.44 fps among all three object detection models. The experimental results indicate that training dust detection algorithms with only synthetic images can achieve acceptable performance on real-world images.
Originality/value
This study provides insights into two questions: (1) how synthetic images could help train dust detection models to overcome data-hungry problems and (2) how well state-of-the-art deep learning algorithms can detect nonrigid construction dust.
Hadi Mahamivanan, Navid Ghassemi, Mohammad Tayarani Darbandy, Afshin Shoeibi, Sadiq Hussain, Farnad Nasirzadeh, Roohallah Alizadehsani, Darius Nahavandi, Abbas Khosravi and Saeid Nahavandi
Abstract
Purpose
This paper aims to propose a new deep learning technique to detect the type of material to improve automated construction quality monitoring.
Design/methodology/approach
A new data augmentation approach is proposed that improves the model's robustness against different illumination conditions and overfitting. This study uses data augmentation at test time and adds outlier samples to the training set to prevent overfitting during network training. For data augmentation at test time, five segments are extracted from each sample image and fed to the network; the average of the network's outputs over these segments is used as the final prediction. The proposed approach is then evaluated on multiple deep networks used as material classifiers. The fully connected layers are removed from the end of the networks, and only the convolutional layers are retained.
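The test-time augmentation step, running the classifier on several crops of an image and averaging the class scores, can be sketched as follows. The names `predict_tta` and `toy_model` and the crop windows are hypothetical stand-ins, not the authors' code:

```python
import numpy as np

def predict_tta(model, image, crops):
    """Test-time augmentation: run the classifier on several crops
    of the image and average the per-class scores.
    `model` maps an image crop to a probability vector; `crops`
    yields (y0, y1, x0, x1) windows."""
    scores = [model(image[y0:y1, x0:x1]) for (y0, y1, x0, x1) in crops]
    return np.mean(scores, axis=0)   # averaged final prediction

# Toy stand-in classifier: scores "dark" vs "bright" by pixel fraction.
def toy_model(patch):
    bright = (patch > 0.5).mean()
    return np.array([1 - bright, bright])

img = np.zeros((8, 8))
img[:, 4:] = 1.0   # right half bright
crops = [(0, 8, 0, 4), (0, 8, 4, 8), (0, 8, 0, 8)]
print(predict_tta(toy_model, img, crops))  # -> [0.5 0.5]
```

Averaging over crops smooths out predictions that depend on which part of the image the network happens to see, which is the robustness benefit the abstract describes.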
Findings
The proposed method is evaluated on recognizing 11 types of building materials, using 1,231 images taken from several construction sites. Each image has a resolution of 4,000 × 3,000 pixels. The images are captured under different illumination conditions and camera positions; the varied illumination conditions lead to trained networks that are more robust against diverse environmental conditions. Using the VGG16 model, an accuracy of 97.35% is achieved, outperforming existing approaches.
Practical implications
It is believed that the proposed method presents a new and robust tool for detecting and classifying different material types. Automated detection of materials will help monitor quality and verify whether the right type of material has been used in the project according to the contract specifications. In addition, the proposed model can be used as a guideline for performing quality control (QC) in construction projects based on the project quality plan. It can also serve as an input for automated progress monitoring, because material type detection provides a critical input for object detection.
Originality/value
Several studies have been conducted on quality management, but some issues remain to be addressed. Most previous studies examined a very limited number of material types. In addition, although some studies have reported high accuracy in detecting material types (Bunrit et al., 2020), their accuracy drops dramatically when they are used to detect materials with similar texture and color. In this research, the authors propose a new method to address these shortcomings.
Chang Liu, Samad M.E. Sepasgozar, Sara Shirowzhan and Gelareh Mohammadi
Abstract
Purpose
The practice of artificial intelligence (AI) is increasingly being promoted by technology developers. However, its adoption rate is still reported as low in the construction industry due to a lack of expertise and the limited number of reliable applications of AI technology. Hence, this paper aims to present the detailed outcome of experiments evaluating the applicability and performance of AI object detection algorithms for construction modular object detection.
Design/methodology/approach
This paper provides a thorough evaluation of two deep learning algorithms for object detection: the faster region-based convolutional neural network (faster RCNN) and the single shot multi-box detector (SSD). Two types of metrics are also presented: first, the average recall and mean average precision by image pixels; second, the recall and precision by counting. To conduct the experiments with the selected algorithms, four infrastructure and building construction sites are chosen to collect the required data, yielding a total of 990 images of three common modular objects: modular panels, safety barricades and site fences.
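The second metric type, recall and precision "by counting", reduces to simple ratio computations over object counts; the function and argument names below are illustrative rather than the paper's notation:

```python
def counting_metrics(true_count, detected_count, correct_count):
    """Precision/recall 'by counting': correct_count is the number of
    detections that match a real object on site."""
    precision = correct_count / detected_count   # fraction of detections that are right
    recall = correct_count / true_count          # fraction of real objects found
    return precision, recall

# Hypothetical example: 20 modular panels on site, 18 detections, 16 correct.
p, r = counting_metrics(true_count=20, detected_count=18, correct_count=16)
print(round(p, 3), round(r, 3))  # -> 0.889 0.8
```

Counting-based metrics complement pixel-based ones: a detector can localise panels imprecisely at the pixel level yet still count them correctly, which matters for the progress-monitoring use case.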
Findings
The results of the comprehensive evaluation show that the performance of faster RCNN and SSD depends on the context in which detection occurs. Indeed, surrounding objects and the backgrounds of the objects affect the level of accuracy obtained from the AI analysis and may particularly affect precision and recall. The analysis of the loss curves shows that the losses for the selected objects depend on both their geometry and the image background. The results show that faster RCNN offers higher accuracy than SSD for detecting the selected objects.
Research limitations/implications
The results show that modular object detection is crucial in construction for obtaining the information required for project quality and safety objectives. The detection process can significantly improve the monitoring of object installation progress in an accurate, machine-based manner that avoids human errors. The results of this paper are limited to three construction sites, but future investigations can cover more tasks or objects from different construction sites in a fully automated manner.
Originality/value
This paper’s originality lies in offering new AI applications in modular construction, using a large first-hand data set collected from three construction sites. Furthermore, the paper presents the scientific evaluation results of implementing recent object detection algorithms across a set of extended metrics using the original training and validation data sets to improve the generalisability of the experimentation. This paper also provides the practitioners and scholars with a workflow on AI applications in the modular context and the first-hand referencing data.
Johnny Kwok Wai Wong, Mojtaba Maghrebi, Alireza Ahmadian Fard Fini, Mohammad Amin Alizadeh Golestani, Mahdi Ahmadnia and Michael Er
Abstract
Purpose
Images taken from construction site interiors often suffer from low illumination and poor natural colors, which restricts their application for high-level site management purposes. State-of-the-art low-light image enhancement methods provide promising results; however, they generally require a longer execution time to complete the enhancement. This study aims to develop a refined image enhancement approach that improves execution efficiency and performance accuracy.
Design/methodology/approach
To develop the refined illumination enhancement algorithm, named enhanced illumination quality (EIQ), a quadratic expression was first added to the initial illumination map. Subsequently, an adjusted weight matrix was added to improve the smoothness of the illumination map. A coordinate descent optimization algorithm was then applied to minimize the processing time. Gamma correction was also applied to further enhance the illumination map. Finally, a frame comparison and averaging method was used to identify interior site progress.
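The gamma correction step applied to the illumination map can be sketched as follows; the normalised input range and the `gamma` values are assumptions for illustration, not the parameters used in EIQ:

```python
import numpy as np

def gamma_correct(illum, gamma=2.2):
    """Gamma correction on a normalised illumination map in [0, 1]:
    raising values to 1/gamma brightens dark regions proportionally
    more than bright ones (gamma=2.2 is a commonly assumed default)."""
    return np.clip(illum, 1e-6, 1.0) ** (1.0 / gamma)

# With gamma=2, dark values are lifted strongly while 1.0 is unchanged:
t = np.array([0.04, 0.25, 1.0])
print(gamma_correct(t, gamma=2.0))  # -> [0.2 0.5 1. ]
```

This is why gamma correction suits low-light interior footage: it expands the dynamic range of the dark regions where site objects would otherwise be indistinguishable.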
Findings
The proposed refined approach took around 4.36–4.52 s to achieve the expected results, outperforming the current low-light image enhancement method. EIQ demonstrated a lower lightness-order error and provided higher object resolution in enhanced images. EIQ also achieved a higher structural similarity index and peak signal-to-noise ratio, indicating better image reconstruction performance.
Originality/value
The proposed approach provides an alternative to shorten the execution time, improve equalization of the illumination map and provide a better image reconstruction. The approach could be applied to low-light video enhancement tasks and other dark or poor jobsite images for object detection processes.
Anne Rindell and Oriol Iglesias
Abstract
Purpose
The purpose of this paper is to further understanding of the roles that time and context play in consumers’ evolving brand image construction processes over time.
Design/methodology/approach
This exploratory, qualitative research is based on the analysis and interpretation of 164 online consumer narratives pertaining to the consumers’ most memorable coffee moments.
Findings
Consumers build images of a brand through both fleeting moments over time linked to special occasions and everyday moments in their lives over time. Understanding image construction processes thus must go beyond just physical (location) and psychological (social) circumstances. Activity processes (“When I am doing […]”) also are central to this understanding.
Research limitations/implications
Time and context emerge as key determinants of consumers’ brand image processes and should hence be explicitly recognised in branding research. This study focuses only on brand admirers; because the study context refers to a business-to-consumer product, the focus is the product brand.
Practical implications
Considering the key role of memorable past moments (time and context) in consumers’ brand image construction processes, branding strategies should reflect systematic efforts to identify these moments. Such an approach can provide opportunities for companies to deepen their consumer understanding and achieve a favourable presence in consumer contexts during which brand images get constructed.
Originality/value
This study identifies key dimensions of time and context and thus furthers understanding of these dimensions in relation to brand images.
Jiayue Zhao, Yunzhong Cao and Yuanzhi Xiang
Abstract
Purpose
The safety management of construction machines is of primary importance. Traditional construction machine safety monitoring and evaluation methods cannot adapt to the complex construction environment, and monitoring methods based on sensor equipment are too costly. This paper therefore introduces computer vision and deep learning technologies and proposes the YOLOv5-FastPose (YFP) model, which realizes pose estimation of construction machines by improving the AlphaPose human pose model.
Design/methodology/approach
This model introduces the object detection module YOLOv5m to improve the recognition accuracy when detecting construction machines. Meanwhile, to better capture pose characteristics, the FastPose network with optimized feature extraction is introduced into the Single-Machine Pose Estimation (SMPE) module of AlphaPose. This study uses the Alberta Construction Image Dataset (ACID) and the Construction Equipment Poses Dataset (CEPD), together with data augmentation and the Labelme image annotation software, to establish the datasets for object detection and pose estimation of construction machines used to train and test the YFP model.
Findings
The experimental results show that the improved YFP model achieves an average normalized error (NE) of 12.94 × 10⁻³, an average Percentage of Correct Keypoints (PCK) of 98.48% and an average Area Under the PCK Curve (AUC) of 37.50 × 10⁻³. Compared with existing methods, this model achieves higher accuracy in the pose estimation of construction machines.
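The PCK metric reported above can be sketched as follows, under the assumption that a predicted keypoint counts as correct when its error falls below a fraction `alpha` of a reference length; the threshold and reference choice here are illustrative, not the paper's exact protocol:

```python
import numpy as np

def pck(pred, gt, ref_len, alpha=0.2):
    """Percentage of Correct Keypoints: a prediction is correct when
    its Euclidean distance to the ground-truth keypoint is below
    alpha * ref_len (ref_len: a reference scale such as the machine's
    bounding-box diagonal; alpha=0.2 is an assumed threshold)."""
    d = np.linalg.norm(pred - gt, axis=-1)
    return (d < alpha * ref_len).mean()

pred = np.array([[10.0, 10.0], [50.0, 52.0], [90.0, 120.0]])
gt   = np.array([[10.0, 12.0], [50.0, 50.0], [90.0, 90.0]])
print(pck(pred, gt, ref_len=100.0))  # 2 of 3 keypoints within 20 px
```

Sweeping `alpha` from 0 upward and integrating the resulting PCK values yields the Area Under the PCK Curve (AUC) also reported in the findings.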
Originality/value
This study extends and optimizes the human pose estimation model AlphaPose to make it suitable for construction machines, improving its pose estimation performance on such machines.
Zubair Ahmed Memon, Muhd Zaimi Abd. and Mushairry Mustaffar
Abstract
Purpose
The main purpose of this study is to summarize the experience at the Construction Technology and Management Center (CTMC) in developing a Digitalizing Construction Monitoring (DCM) system by integrating 3D AutoCAD drawings and digital images. The objective of this paper is to propose a framework model for the DCM system and to discuss in detail the steps involved in deriving and calculating 3D coordinate values from 2D digital images.
Design/methodology/approach
As digital images are one of the major sources of information from site, measuring project progress from images is quite challenging. This study used photogrammetry techniques, which can be concisely defined as the science of calculating 3D object coordinates from images, to extract information from digital images with the PhotoModeler Pro software. Issues pertaining to the quality of the 3D model derived from 2D digital images are also discussed.
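The core photogrammetric computation, recovering 3D coordinates from 2D image points, can be sketched via textbook linear triangulation. This is a generic illustration under assumed camera matrices and made-up coordinates, not PhotoModeler's internals:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 in two views with known 3x4 projection
    matrices P1, P2."""
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenise

# Two assumed cameras: identity pose, and a 1-unit shift along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # recovers approximately [0.5, 0.2, 4.0]
```

In a real photogrammetric workflow the projection matrices themselves are unknown and must first be estimated from control points, which is where tools like PhotoModeler do most of their work.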
Findings
A framework model for DCM was proposed and its different phases were discussed. A pilot case study on the Larkin Mosque Car Parking Project was conducted to check the validity of using photogrammetry techniques to extract 3D coordinate values with the PhotoModeler software. Preliminary results show that significant control has been achieved in extracting 3D coordinate values from 2D digital images, which can be integrated into the digitalized system to automate the construction project monitoring process.
Originality/value
The techniques discussed in this paper are used for monitoring project progress systematically. The results of this study will be incorporated to develop a fully automated project progress monitoring system that can be updated automatically as the project progresses.
Mikias Gugssa, Long Li, Lina Pu, Ali Gurbuz, Yu Luo and Jun Wang
Abstract
Purpose
Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers' safety. However, it is still challenging to implement automated safety monitoring methods in near real time, or in a time-efficient manner, in real construction practice. Therefore, this study developed a novel solution that enhances time efficiency to achieve near-real-time safety glove detection while preserving data privacy.
Design/methodology/approach
The developed method comprises two primary components: (1) transfer learning methods to detect safety gloves and (2) edge computing to improve time efficiency and data privacy. To compare the developed edge computing-based method with the currently widely used cloud computing-based methods, a comprehensive comparative analysis was conducted from both the implementation and theory perspectives, providing insights into the developed approach’s performance.
Findings
Three DL models achieved mean average precision (mAP) scores ranging from 74.92% to 84.31% for safety glove detection. Two other methods combining object detection and classification achieved mAP scores of 89.91% for hand detection and 100% for glove classification. From both the implementation and theory perspectives, the edge computing-based method detected gloves faster than the cloud computing-based method; in the implementation experiments, its detection latency was 36%–68% shorter. The findings highlight edge computing's potential for near-real-time detection with improved data privacy.
Originality/value
This study implemented and evaluated DL-based safety monitoring methods on different computing infrastructures to investigate their time efficiency. This study contributes to existing knowledge by demonstrating how edge computing can be used with DL models (without sacrificing their performance) to improve PPE-glove monitoring in a time-efficient manner as well as maintain data privacy.