Search results

1 – 10 of 86
Article
Publication date: 5 October 2022

H.P.M.N.L.B. Moragane, B.A.K.S. Perera, Asha Dulanjalie Palihakkara and Biyanka Ekanayake

Abstract

Purpose

Construction progress monitoring (CPM), which focuses on identifying discrepancies between the as-built product and the as-planned design, is considered a difficult and tedious task in construction projects. Computer vision (CV) technology is applied to automate the CPM process. However, the synergy between CV and CPM in the literature and in industry practice is lacking. This study aims to fill that research gap.

Design/methodology/approach

A qualitative Delphi approach with two interview rounds was used in this study. The collected data were analysed using manual content analysis.

Findings

This study identified seven stages of CPM: data acquisition, information retrieval, verification, progress estimation, comparison, visualisation of the results and schedule updating. Factors such as higher data accuracy, a less laborious process, efficiency and near real-time access are among the significant enablers of applying CV to CPM. The major challenges identified were occlusions and lighting issues in site images and a lack of support from management. These challenges can be overcome by implementing suitable strategies, such as familiarising the workforce with CV technology and applying CV research to the construction industry so that it grows with the technology in line with other industries.

Originality/value

This study addresses the gap pertaining to the synergy between the CV in CPM literature and the industry practice. This research contributes by enabling the construction personnel to identify the shortcomings and the opportunities to apply automated technologies concerning each stage in the progress monitoring process.

Details

Construction Innovation, vol. 24 no. 2
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 16 April 2024

Shuyuan Xu, Jun Wang, Xiangyu Wang, Wenchi Shou and Tuan Ngo

Abstract

Purpose

This paper covers the development of a novel defect model for concrete highway bridges. The proposed defect model is intended to facilitate the identification of a bridge's condition information (i.e. defects), improve the efficiency and accuracy of bridge inspections by supporting practitioners and even machines with digitalised expert knowledge, and ultimately automate the process.

Design/methodology/approach

The research design consists of three major phases: (1) categorising common defects with regard to physical entities (i.e. bridge elements), (2) establishing internal relationships among those defects and (3) relating defects to their properties and potential causes. A mixed-method research approach, which includes a comprehensive literature review, focus groups and case studies, was employed to develop and validate the proposed defect model.
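
As an illustration of how such a defect model's three relationships (defect to bridge element, defect to related defects, defect to properties and causes) might be encoded digitally, a minimal sketch is given below; the field names and example entries are assumptions for illustration, not the paper's schema.

```python
# Minimal sketch of a digitalised defect record; entries are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Defect:
    name: str
    element: str                                  # physical entity the defect belongs to
    properties: list[str] = field(default_factory=list)
    potential_causes: list[str] = field(default_factory=list)
    related_defects: list[str] = field(default_factory=list)

crack = Defect(
    name="transverse crack",
    element="deck slab",
    properties=["width", "length", "location"],
    potential_causes=["shrinkage", "overloading"],
    related_defects=["water seepage", "rebar corrosion"],
)
```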

Findings

The data collected through the literature and focus groups were analysed, and knowledge was extracted to form the novel defect model. The defect model was then validated and further calibrated through case studies: inspection reports of nearly 300 bridges in China were collected and analysed. The study uncovered the relationships between defects and a variety of inspection-related elements and represented them in the form of an accessible, digitalised and user-friendly knowledge model.

Originality/value

The contribution of this paper is the development of a defect model that can assist inexperienced practitioners, and even machines in the near future, in conducting inspection tasks. First, the proposed defect model can standardise the data collection process of bridge inspection, including the identification of defects and the documentation of their vital properties, paving the way for automation in subsequent stages (e.g. condition evaluation). Second, by capturing the rich experience and expert knowledge that have long been held within the industrial sector, inspection efficiency and accuracy can be considerably improved.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 13 February 2024

Wenzhen Yang, Shuo Shan, Mengting Jin, Yu Liu, Yang Zhang and Dongya Li

Abstract

Purpose

This paper aims to rapidly realize an in-situ quality inspection system for new injection molding (IM) tasks via a transfer learning (TL) approach and automation technology.

Design/methodology/approach

The proposed in-situ quality inspection system consists of an injection machine, USB camera, programmable logic controller and personal computer, interconnected via OPC or USB communication interfaces. This configuration enables seamless automation of the IM process, real-time quality inspection and automated decision-making. In addition, a MobileNet-based deep learning (DL) model is proposed for quality inspection of injection parts, fine-tuned using the TL approach.
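
As a rough illustration of the transfer-learning setup described above, the following Keras sketch freezes an ImageNet-pretrained MobileNet backbone and adds a small classification head. The class count, image size, directory layout and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Transfer-learning sketch: frozen MobileNet backbone plus a small classifier head.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 2          # e.g. acceptable vs. defective injection parts (assumed)
IMG_SIZE = (224, 224)

base = tf.keras.applications.MobileNet(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False   # freeze the backbone for the initial fine-tuning stage

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# With only ~50 images per category, a small labelled set is loaded from disk
# (hypothetical directory layout) and the head is trained briefly.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "parts/train", image_size=IMG_SIZE, batch_size=16)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "parts/val", image_size=IMG_SIZE, batch_size=16)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```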

Findings

Using the TL approach, the MobileNet-based DL model demonstrates exceptional performance, achieving a validation accuracy of 99.1% using only 50 images per category. Its detection speed and accuracy surpass those of DenseNet121-based, VGG16-based, ResNet50-based and Xception-based convolutional neural networks. Further evaluation using a random data set of 120 images, as assessed through the confusion matrix, attests to an accuracy rate of 96.67%.

Originality/value

The proposed MobileNet-based DL model achieves higher accuracy with less resource consumption using the TL approach. It is integrated with automation technologies to build the in-situ quality inspection system for injection parts, which improves cost-efficiency by facilitating the acquisition and labeling of task-specific images and enabling automatic defect detection and decision-making online. This holds profound significance for the IM industry and its pursuit of enhanced quality inspection measures.

Details

Robotic Intelligence and Automation, vol. 44 no. 1
Type: Research Article
ISSN: 2754-6969

Article
Publication date: 1 December 2023

Hao Wang, Hamzeh Al Shraida and Yu Jin

Abstract

Purpose

Limited geometric accuracy is one of the major challenges that hinder the wider application of additive manufacturing (AM). This paper aims to predict in-plane shape deviation for online inspection and compensation to prevent error accumulation and improve shape fidelity in AM.

Design/methodology/approach

A sequence-to-sequence model with an attention mechanism (Seq2Seq+Attention) is proposed and implemented to predict deviations in subsequent layers or occluded toolpaths after multiresolution alignment. A shape compensation plan can then be performed for layers with large predicted deviations.
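
The following PyTorch sketch shows one way a Seq2Seq model with attention could map a measured deviation sequence to a predicted one; the layer sizes, GRU cells and dot-product attention are assumptions for illustration, not the authors' exact architecture.

```python
# Seq2Seq-with-attention sketch for deviation sequence prediction (illustrative).
import torch
import torch.nn as nn

class Seq2SeqAttention(nn.Module):
    def __init__(self, in_dim=1, hid=64, out_dim=1):
        super().__init__()
        self.encoder = nn.GRU(in_dim, hid, batch_first=True)
        self.decoder = nn.GRU(out_dim, hid, batch_first=True)
        self.proj = nn.Linear(2 * hid, out_dim)

    def forward(self, src, tgt_len):
        # src: (batch, src_len, in_dim) measured deviations of earlier layers
        enc_out, h = self.encoder(src)                         # (B, S, H)
        dec_in = torch.zeros(src.size(0), 1, 1, device=src.device)
        outputs = []
        for _ in range(tgt_len):
            dec_out, h = self.decoder(dec_in, h)               # (B, 1, H)
            # dot-product attention over encoder states
            scores = torch.bmm(dec_out, enc_out.transpose(1, 2))
            weights = torch.softmax(scores, dim=-1)
            context = torch.bmm(weights, enc_out)              # (B, 1, H)
            step = self.proj(torch.cat([dec_out, context], dim=-1))
            outputs.append(step)
            dec_in = step                                      # feed prediction back
        return torch.cat(outputs, dim=1)                       # (B, tgt_len, out_dim)

model = Seq2SeqAttention()
pred = model(torch.randn(8, 50, 1), tgt_len=50)  # predict the next deviation profile
```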

Findings

The proposed Seq2Seq+Attention model is able to provide consistent prediction accuracy. The compensation plan proposed based on the predicted deviation can significantly improve the printing fidelity for those layers detected with large deviations.

Practical implications

Based on the experiments conducted on the knee joint samples, the proposed method outperforms the other three machine learning methods for both subsequent layer and occluded toolpath deviation prediction.

Originality/value

This work fills a research gap for predicting in-plane deviation not only for subsequent layers but also for occluded paths due to the missing scanning measurements. It is also combined with the multiresolution alignment and change point detection to determine the necessity of a compensation plan with updated G-code.

Details

Rapid Prototyping Journal, vol. 30 no. 2
Type: Research Article
ISSN: 1355-2546

Article
Publication date: 23 January 2024

Guoyang Wan, Yaocong Hu, Bingyou Liu, Shoujun Bai, Kaisheng Xing and Xiuwen Tao

Abstract

Purpose

Presently, 6 Degree of Freedom (6DOF) visual pose measurement methods enjoy popularity in the industrial sector. However, challenges persist in accurately measuring the visual pose of blank and rough metal casts. Therefore, this paper introduces a 6DOF pose measurement method utilizing stereo vision, aimed at the 6DOF pose measurement of blank and rough metal casts.

Design/methodology/approach

This paper studies the 6DOF pose measurement of metal casts from three aspects: sample enhancement of industrial objects, optimization of the detector and the attention mechanism. Virtual reality technology is used for sample enhancement of metal casts, which solves the problem of collecting large-scale samples in industrial applications. The method also includes a novel deep learning detector that uses multiple key points on the object surface as regression targets to detect industrial objects with rotation characteristics. By introducing a mixed-paths attention module, the detection accuracy of the detector and the convergence speed of training are improved.

Findings

The experimental results show that the proposed method achieves better detection performance for metal casts with smaller size scaling and rotation characteristics.

Originality/value

A method for 6DOF pose measurement of industrial objects is proposed, which realizes the pose measurement and grasping of metal blanks and rough machined casts by industrial robots.

Details

Sensor Review, vol. 44 no. 1
Type: Research Article
ISSN: 0260-2288

Open Access
Article
Publication date: 26 April 2024

Adela Sobotkova, Ross Deans Kristensen-McLachlan, Orla Mallon and Shawn Adrian Ross

Abstract

Purpose

This paper provides practical advice for archaeologists and heritage specialists wishing to use ML approaches to identify archaeological features in high-resolution satellite imagery (or other remotely sensed data sources). We seek to balance the disproportionately optimistic literature related to the application of ML to archaeological prospection through a discussion of limitations, challenges and other difficulties. We further seek to raise awareness among researchers of the time, effort, expertise and resources necessary to implement ML successfully, so that they can make an informed choice between ML and manual inspection approaches.

Design/methodology/approach

Automated object detection has been the holy grail of archaeological remote sensing for the last two decades. Machine learning (ML) models have proven able to detect uniform features across a consistent background, but more variegated imagery remains a challenge. We set out to detect burial mounds in satellite imagery from a diverse landscape in Central Bulgaria using a pre-trained Convolutional Neural Network (CNN) plus additional but low-touch training to improve performance. Training was accomplished using MOUND/NOT MOUND cutouts, and the model assessed arbitrary tiles of the same size from the image. Results were assessed using field data.
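
A minimal sketch of the tile-based scoring workflow described above might look as follows; the tile size, model file name and probability threshold are assumptions for illustration, not the study's actual configuration.

```python
# Tile-based scoring sketch: a fine-tuned CNN scores fixed-size cutouts of a
# satellite scene as MOUND / NOT MOUND. Parameters are illustrative.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("mound_classifier.h5")  # hypothetical fine-tuned CNN
TILE = 150   # pixels; must match the training cutout size

def score_tiles(scene: np.ndarray, threshold: float = 0.6):
    """Slide a non-overlapping TILE x TILE window and flag likely mounds."""
    hits = []
    for y in range(0, scene.shape[0] - TILE + 1, TILE):
        for x in range(0, scene.shape[1] - TILE + 1, TILE):
            tile = scene[y:y + TILE, x:x + TILE, :]
            prob = float(model.predict(tile[np.newaxis] / 255.0, verbose=0)[0, 0])
            if prob >= threshold:
                hits.append((y, x, prob))
    return hits  # candidate tiles to validate against field survey data
```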

Findings

Validation of results against field data showed that self-reported success rates were misleadingly high, and that the model was misidentifying most features. Setting an identification threshold at 60% probability, and noting that we used an approach where the CNN assessed tiles of a fixed size, tile-based false negative rates were 95–96%, false positive rates were 87–95% of tagged tiles, while true positives were only 5–13%. Counterintuitively, the model provided with training data selected for highly visible mounds (rather than all mounds) performed worse. Development of the model, meanwhile, required approximately 135 person-hours of work.

Research limitations/implications

Our attempt to deploy a pre-trained CNN demonstrates the limitations of this approach when it is used to detect varied features of different sizes within a heterogeneous landscape that contains confounding natural and modern features, such as roads, forests and field boundaries. The model has detected incidental features rather than the mounds themselves, making external validation with field data an essential part of CNN workflows. Correcting the model would require refining the training data as well as adopting different approaches to model choice and execution, raising the computational requirements beyond the level of most cultural heritage practitioners.

Practical implications

Improving the pre-trained model’s performance would require considerable time and resources, on top of the time already invested. The degree of manual intervention required – particularly around the subsetting and annotation of training data – is so significant that it raises the question of whether it would be more efficient to identify all of the mounds manually, either through brute-force inspection by experts or by crowdsourcing the analysis to trained – or even untrained – volunteers. Researchers and heritage specialists seeking efficient methods for extracting features from remotely sensed data should weigh the costs and benefits of ML versus manual approaches carefully.

Social implications

Our literature review indicates that use of artificial intelligence (AI) and ML approaches to archaeological prospection have grown exponentially in the past decade, approaching adoption levels associated with “crossing the chasm” from innovators and early adopters to the majority of researchers. The literature itself, however, is overwhelmingly positive, reflecting some combination of publication bias and a rhetoric of unconditional success. This paper presents the failure of a good-faith attempt to utilise these approaches as a counterbalance and cautionary tale to potential adopters of the technology. Early-majority adopters may find ML difficult to implement effectively in real-life scenarios.

Originality/value

Unlike many high-profile reports from well-funded projects, our paper represents a serious but modestly resourced attempt to apply an ML approach to archaeological remote sensing, using techniques like transfer learning that are promoted as solutions to time and cost problems associated with, e.g. annotating and manipulating training data. While the majority of articles uncritically promote ML, or only discuss how challenges were overcome, our paper investigates how – despite reasonable self-reported scores – the model failed to locate the target features when compared to field data. We also present time, expertise and resourcing requirements, a rarity in ML-for-archaeology publications.

Details

Journal of Documentation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 9 April 2024

Shola Usharani, R. Gayathri, Uday Surya Deveswar Reddy Kovvuri, Maddukuri Nivas, Abdul Quadir Md, Kong Fah Tee and Arun Kumar Sivaraman

Abstract

Purpose

Automated detection of cracked surfaces on buildings and industrially manufactured products is emerging. Detecting cracked surfaces is a challenging task for inspectors, and image-based automatic inspection of cracks can be far more effective than human eye inspection. With the advancement of deep learning techniques, such methods can be used to automate inspection work across various industries.

Design/methodology/approach

In this study, an upgraded convolutional neural network-based crack detection method is proposed. The dataset consists of 3,886 images, including cracked and non-cracked images, which were split into training and validation sets. To inspect cracks more accurately, data augmentation was performed on the dataset, and regularization techniques were utilized to reduce overfitting. The VGG19, Xception, Inception V3 and ResNet50 V2 CNN architectures were used to train models on the data.
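
For readers unfamiliar with this kind of setup, the sketch below shows a typical augmentation-plus-transfer-learning arrangement using the four backbones named above; the augmentation parameters, dropout rate and classification head are assumptions for illustration, not the study's exact configuration.

```python
# Sketch: on-the-fly augmentation, a frozen pretrained backbone, and a binary
# crack / no-crack head; each candidate backbone is built the same way.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)
augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),
    layers.RandomRotation(0.1),
    layers.RandomZoom(0.1),
])

def build(backbone_fn):
    base = backbone_fn(include_top=False, weights="imagenet",
                       input_shape=IMG_SIZE + (3,))
    base.trainable = False
    return models.Sequential([
        augment,
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),                    # regularisation against overfitting
        layers.Dense(1, activation="sigmoid"),  # crack vs. no crack
    ])

# Train each candidate on the same split and compare validation/test accuracy.
backbones = {
    "VGG19": tf.keras.applications.VGG19,
    "Xception": tf.keras.applications.Xception,
    "InceptionV3": tf.keras.applications.InceptionV3,
    "ResNet50V2": tf.keras.applications.ResNet50V2,
}
```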

Findings

A comparison between the trained models was performed, and the results show that Xception performs better than the other algorithms, with 99.54% test accuracy. The Xception algorithm detects cracked regions and non-cracked regions very efficiently.

Originality/value

The proposed method can provide a sound basis for the automatic inspection of cracks in buildings with different design patterns, such as decorated historical monuments.

Details

International Journal of Structural Integrity, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 14 November 2022

Abdul Hannan Qureshi, Wesam Salah Alaloul, Wong Kai Wing, Syed Saad, Khalid Mhmoud Alzubi and Muhammad Ali Musarat

Abstract

Purpose

Rebar is the prime component of reinforced concrete structures, and rebar monitoring is a time-consuming and technical job. With the emergence of the fourth industrial revolution, construction industry practices have evolved toward digitalization. Still, hesitation toward the adoption of advanced technologies remains among stakeholders, and one of the significant reasons is the unavailability of knowledge frameworks and implementation guidelines. This study aims to investigate technical factors impacting automated monitoring of rebar so that construction industry stakeholders can understand, gain confidence in and effectively implement it.

Design/methodology/approach

A structured study pipeline was adopted, which includes a systematic literature collection, semi-structured interviews, a pilot survey, a questionnaire survey and statistical analyses that merge two techniques, i.e. structural equation modeling and the relative importance index.
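
For reference, the relative importance index of a survey factor is commonly computed as RII = ΣW / (A × N), where W are the respondents' ratings, A is the highest rating and N the number of respondents. The snippet below is a minimal sketch under that assumption, with hypothetical ratings.

```python
# Relative importance index (RII) sketch, assuming a 5-point Likert survey.
def relative_importance_index(responses, max_rating=5):
    """responses: list of Likert ratings (1..max_rating) for one factor."""
    return sum(responses) / (max_rating * len(responses))

# Example: 10 respondents rating a factor such as "image quality" (hypothetical data)
print(relative_importance_index([5, 4, 4, 5, 3, 4, 5, 4, 4, 5]))  # 0.86
```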

Findings

The achieved model highlights “digital images” and “scanning” as two main categories being adopted for automated rebar monitoring. Moreover, “external influence”, “data-capturing”, “image quality”, and “environment” have been identified as the main factors under “digital images”. On the other hand, “object distance”, “rebar shape”, “occlusion” and “rebar spacing” have been highlighted as the main contributing factors under “scanning”.

Originality/value

The study provides a base guideline for the construction industry stakeholders to gain confidence in automated monitoring of rebar via vision-based technologies and effective implementation of the progress-monitoring processes. This study, via structured data collection, performed qualitative and quantitative analyses to investigate technical factors for effective rebar monitoring via vision-based technologies in the form of a mathematical model.

Details

Construction Innovation, vol. 24 no. 3
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 30 April 2024

Jacqueline Humphries, Pepijn Van de Ven, Nehal Amer, Nitin Nandeshwar and Alan Ryan

Abstract

Purpose

Maintaining human safety is a major concern in factories where humans co-exist with robots and other physical tools. Typically, the area around the robots is monitored using lasers. However, lasers cannot distinguish between human and non-human objects in the robot's path, and stopping or slowing down the robot when non-human objects approach is unproductive. This research contribution addresses that inefficiency by showing how computer-vision techniques can be used instead of lasers, improving the up-time of the robot.

Design/methodology/approach

A computer-vision safety system is presented that uses image segmentation, 3D point clouds, face recognition, hand gesture recognition, speed and trajectory tracking and a digital twin. Using speed and separation monitoring, the robot's speed is controlled based on the nearest human location, accurate to the person's body shape. The computer-vision safety system is compared to a traditional laser measure, and the system is evaluated in a controlled test and in the field.
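
A minimal sketch of speed-and-separation control of the kind described above is shown below; the distance thresholds and linear ramp are assumptions for illustration, not the system's actual parameters.

```python
# Speed-and-separation sketch: scale the robot speed by the nearest human distance.
def speed_override(nearest_human_m: float,
                   stop_dist: float = 0.5,
                   full_speed_dist: float = 2.0) -> float:
    """Return a speed scaling factor in [0, 1] from the nearest human distance (m)."""
    if nearest_human_m <= stop_dist:
        return 0.0                      # protective stop
    if nearest_human_m >= full_speed_dist:
        return 1.0                      # no human nearby: full speed
    # linear ramp between the two thresholds
    return (nearest_human_m - stop_dist) / (full_speed_dist - stop_dist)

# e.g. a person detected 1.25 m away -> 50% speed
print(speed_override(1.25))  # 0.5
```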

Findings

Computer vision and lasers are shown to be equivalent by a measure of relationship and a measure of agreement. R² is 0.999983, and the two methods systematically produce similar results, with a bias close to zero at 0.060 mm. Using Bland–Altman analysis, 95% of the differences lie within the limits of maximum acceptable differences.
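
The agreement statistics reported above can be reproduced on paired measurements roughly as follows; the sample arrays are placeholders, not data from the paper.

```python
# Agreement analysis sketch: R² between paired laser and vision measurements,
# mean bias, and Bland–Altman 95% limits of agreement.
import numpy as np

laser = np.array([100.2, 250.1, 499.8, 750.3, 1000.0])   # mm (hypothetical)
vision = np.array([100.3, 250.0, 499.9, 750.2, 1000.1])  # mm (hypothetical)

r = np.corrcoef(laser, vision)[0, 1]
r_squared = r ** 2

diff = vision - laser
bias = diff.mean()                        # systematic offset between methods
loa = 1.96 * diff.std(ddof=1)             # Bland–Altman limits of agreement
print(f"R^2={r_squared:.6f}, bias={bias:.3f} mm, "
      f"LoA=[{bias - loa:.3f}, {bias + loa:.3f}] mm")
```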

Originality/value

In this paper, an original model for future computer-vision safety systems is described which is equivalent to existing laser systems, identifies and adapts to particular humans, and reduces the need to slow and stop systems, thereby improving efficiency. The implication is that computer vision can be used as a substitute for lasers and permits adaptive robotic control in human–robot collaboration systems.

Details

Technological Sustainability, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-1312

Article
Publication date: 7 November 2023

Metin Sabuncu and Hakan Özdemir

Abstract

Purpose

This study aims to identify leather type and authenticity through optical coherence tomography.

Design/methodology/approach

Optical coherence tomography images taken from genuine and faux leather samples were used to create an image dataset, and automated machine learning algorithms were also used to distinguish leather types.
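
As a rough sketch of such a pipeline, flattened OCT scans could be fed to several candidate classifiers and the best-scoring one retained; the file names and classifier list are assumptions for illustration, not the study's automated machine learning tool.

```python
# Automated model-selection sketch over flattened OCT scans (illustrative).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

X = np.load("oct_images.npy")          # (n_samples, H, W) OCT scans (hypothetical file)
y = np.load("labels.npy")              # leather type / genuine vs. faux labels
X = X.reshape(len(X), -1)              # flatten each scan into a feature vector

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
candidates = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "svm": SVC(),
    "knn": KNeighborsClassifier(),
}
best_name, best_acc = None, 0.0
for name, clf in candidates.items():
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    if acc > best_acc:
        best_name, best_acc = name, acc
print(f"selected model: {best_name} (accuracy {best_acc:.3f})")
```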

Findings

The optical coherence tomography scan produces a different image depending on the leather type. This information was used to determine the leather type correctly with optical coherence tomography and automatic machine learning algorithms. The system also recognised whether the leather was genuine or synthetic. Hence, optical coherence tomography and automatic machine learning can be used to distinguish leather type and determine whether it is genuine.

Originality/value

For the first time to the best of the authors' knowledge, spectral-domain optical coherence tomography and automated machine learning algorithms were applied to identify leather authenticity in a noncontact and non-invasive manner. Since this model runs online, it can readily be employed in automated quality monitoring systems in the leather industry. With recent technological progress, optical coherence tomography combined with automated machine learning algorithms will be used more frequently in automatic authentication and identification systems.

Details

International Journal of Clothing Science and Technology, vol. 36 no. 1
Type: Research Article
ISSN: 0955-6222
