Search results

1 – 10 of 82
Article
Publication date: 9 April 2024

Shola Usharani, R. Gayathri, Uday Surya Deveswar Reddy Kovvuri, Maddukuri Nivas, Abdul Quadir Md, Kong Fah Tee and Arun Kumar Sivaraman

Automated detection of cracked surfaces on buildings and industrially manufactured products is an emerging field. Detecting cracked surfaces is a challenging task for…

Abstract

Purpose

Automated detection of cracked surfaces on buildings and industrially manufactured products is an emerging field. Detecting cracked surfaces is a challenging task for inspectors, and image-based automatic crack inspection can be very effective compared with human-eye inspection. With advances in deep learning techniques, such inspection work can be automated across various industrial sectors.

Design/methodology/approach

In this study, an upgraded convolutional neural network-based crack detection method is proposed. The dataset consists of 3,886 images, including cracked and non-cracked images, which were split into training and validation data. To inspect cracks more accurately, data augmentation was performed on the dataset, and regularization techniques were applied to reduce overfitting. The VGG19, Xception, Inception V3 and ResNet50 V2 CNN architectures were trained on these data.
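A minimal sketch of this transfer-learning setup, assuming Keras/TensorFlow, ImageNet weights and 224 x 224 RGB inputs (none of which are specified in the abstract):

```python
# Sketch only: binary crack classifier with a frozen Xception backbone,
# augmentation and dropout regularization, as the abstract describes.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_crack_classifier(input_shape=(224, 224, 3)):  # size is an assumption
    base = tf.keras.applications.Xception(
        weights="imagenet", include_top=False, input_shape=input_shape)
    base.trainable = False  # freeze pretrained features for transfer learning

    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.RandomFlip("horizontal"),   # data augmentation
        layers.RandomRotation(0.1),
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.5),               # regularization against overfitting
        layers.Dense(1, activation="sigmoid"),  # cracked vs. non-cracked
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

Swapping the backbone for VGG19, InceptionV3 or ResNet50V2 (all available under tf.keras.applications) reproduces the paper's comparison at a sketch level.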

Findings

A comparison between the trained models was performed; from the obtained results, Xception performs better than the other algorithms with 99.54% test accuracy. The results show that the Xception model is very efficient at distinguishing cracked regions from non-cracked regions.

Originality/value

The proposed method can contribute to the automatic inspection of cracks in buildings with different design patterns, such as decorated historical monuments.

Details

International Journal of Structural Integrity, vol. 15 no. 3
Type: Research Article
ISSN: 1757-9864

Keywords

Article
Publication date: 25 April 2024

Abdul-Manan Sadick, Argaw Gurmu and Chathuri Gunarathna

Developing a reliable cost estimate at the early stage of construction projects is challenging due to inadequate project information. Most of the information during this stage is…

Abstract

Purpose

Developing a reliable cost estimate at the early stage of construction projects is challenging due to inadequate project information. Most of the information during this stage is qualitative, posing additional challenges to achieving accurate cost estimates. Additionally, there is a lack of tools that use qualitative project information and forecast the budgets required for project completion. This research, therefore, aims to develop a model for setting project budgets (excluding land) during the pre-conceptual stage of residential buildings, where project information is mainly qualitative.

Design/methodology/approach

Due to the qualitative nature of project information at the pre-conception stage, a natural language processing model, DistilBERT (Distilled Bidirectional Encoder Representations from Transformers), was trained to predict the cost range of residential buildings at the pre-conception stage. The training and evaluation data included 63,899 building permit activity records (2021–2022) from the Victorian State Building Authority, Australia. The input data comprised the project description of each record, which included project location and basic material types (floor, frame, roofing, and external wall).
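A minimal sketch of this classification setup, assuming the Hugging Face transformers library; the label names and the example project description are illustrative rather than taken from the paper, and the classification head here is untrained (the authors fine-tune on the permit records):

```python
# Sketch only: DistilBERT mapping a free-text project description
# to one of three cost classes.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)  # three cost ranges

labels = ["$100k-$300k", "$300k-$500k", "$500k-$1.2M"]
description = ("New dwelling, timber frame, concrete floor, tile roofing, "
               "brick veneer external walls, Geelong VIC")  # hypothetical record

inputs = tokenizer(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])
```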

Findings

This research designed a novel tool for predicting the project budget based on preliminary project information. The model achieved 79% accuracy in classifying residential buildings into three cost classes ($100,000–$300,000, $300,000–$500,000 and $500,000–$1,200,000), with F1-scores of 0.85, 0.73 and 0.74, respectively. Additionally, the results show that the model learnt the contextual relationship between qualitative data, like project location, and cost.

Research limitations/implications

The current model was developed using data from Victoria state in Australia; hence, it would not return relevant outcomes for other contexts. However, future studies can adopt the methods to develop similar models for their context.

Originality/value

This research is the first to leverage a deep learning model, DistilBERT, for cost estimation at the pre-conception stage using basic project information like location and material types. Therefore, the model would contribute to overcoming data limitations for cost estimation at the pre-conception stage. Residential building stakeholders, like clients, designers, and estimators, can use the model to forecast the project budget at the pre-conception stage to facilitate decision-making.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 30 April 2024

Baoxu Tu, Yuanfei Zhang, Kang Min, Fenglei Ni and Minghe Jin

This paper aims to estimate contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image. The authors used three feature extraction…

Abstract

Purpose

This paper aims to estimate contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image. The authors used three feature extraction methods: handcrafted features, convolutional features and autoencoder features. Subsequently, these features were mapped to contact locations through a contact location regression network. Finally, the network performance was evaluated using spherical fittings of three different radii to further determine the optimal feature extraction method.

Design/methodology/approach

This paper aims to estimate contact location from sparse and high-dimensional soft tactile array sensor data using the tactile image.
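A minimal sketch of one of the three pipelines (convolutional features feeding a contact-location regression network), assuming PyTorch; the tactile-array size, layer widths and 3D output are illustrative assumptions:

```python
# Sketch only: tactile image -> conv features -> batch norm -> location.
import torch
import torch.nn as nn

class ContactLocator(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        # Batch normalization after feature extraction, which the abstract
        # reports significantly improves generalization
        self.norm = nn.BatchNorm1d(32 * 4 * 4)
        self.regressor = nn.Sequential(
            nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, 3),  # (x, y, z) contact location
        )

    def forward(self, tactile_image):
        f = self.features(tactile_image).flatten(1)
        return self.regressor(self.norm(f))

locations = ContactLocator()(torch.randn(8, 1, 16, 16))  # 8 tactile frames
```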

Findings

This research indicates that data collected by probes can be used for contact localization. Introducing a batch normalization layer after the feature extraction stage significantly enhances the model’s generalization performance. Through qualitative and quantitative analyses, the authors conclude that convolutional methods can more accurately estimate contact locations.

Originality/value

The paper provides both qualitative and quantitative analyses of the performance of three contact localization methods across different datasets. To address the challenge of obtaining accurate contact locations in quantitative analysis, an indirect measurement metric is proposed.

Details

Industrial Robot: the international journal of robotics research and application, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0143-991X

Keywords

Article
Publication date: 14 December 2023

Huaxiang Song, Chai Wei and Zhou Yong

The paper aims to tackle the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of…

Abstract

Purpose

The paper aims to tackle the classification of Remote Sensing Images (RSIs), which presents a significant challenge for computer algorithms due to the inherent characteristics of clustered ground objects and noisy backgrounds. Recent research typically leverages larger-volume models to achieve advanced performance. However, the operating environments of remote sensing commonly cannot provide unconstrained computational and storage resources, so lightweight algorithms with exceptional generalization capabilities are required.

Design/methodology/approach

This study introduces an efficient knowledge distillation (KD) method to build a lightweight yet precise convolutional neural network (CNN) classifier. This method also aims to substantially decrease the training time expenses commonly linked with traditional KD techniques. This approach entails extensive alterations to both the model training framework and the distillation process, each tailored to the unique characteristics of RSIs. In particular, this study establishes a robust ensemble teacher by independently training two CNN models using a customized, efficient training algorithm. Following this, this study modifies a KD loss function to mitigate the suppression of non-target category predictions, which are essential for capturing the inter- and intra-similarity of RSIs.
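A minimal sketch of a logit-based distillation loss with temperature scaling, assuming PyTorch; the abstract does not detail the specific modification for preserving non-target predictions, so this shows only the standard form it starts from:

```python
# Sketch only: soft teacher distribution + hard ground-truth labels.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    # Soft term: match the teacher's full distribution; the non-target
    # probabilities carry the inter-class similarity the abstract emphasizes
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, targets)  # ground-truth term
    return alpha * soft + (1 - alpha) * hard

# Ensemble teacher: average the logits of two independently trained CNNs
t1, t2 = torch.randn(8, 45), torch.randn(8, 45)   # 45 classes, as in NWPU45
loss = kd_loss(torch.randn(8, 45), (t1 + t2) / 2, torch.randint(0, 45, (8,)))
```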

Findings

This study validated the student model, termed KD-enhanced network (KDE-Net), obtained through the KD process on three benchmark RSI data sets. The KDE-Net surpasses 42 other state-of-the-art methods in the literature published from 2020 to 2023. Compared to the top-ranked method’s performance on the challenging NWPU45 data set, KDE-Net demonstrated a noticeable 0.4% increase in overall accuracy with a significant 88% reduction in parameters. Meanwhile, this study’s reformed KD framework significantly enhances the knowledge transfer speed by at least three times.

Originality/value

This study illustrates that the logit-based KD technique can effectively develop lightweight CNN classifiers for RSI classification without substantial sacrifices in computation and storage costs. Compared to neural architecture search or other methods aiming to provide lightweight solutions, this study’s KDE-Net, based on the inherent characteristics of RSIs, is currently more efficient in constructing accurate yet lightweight classifiers for RSI classification.

Details

International Journal of Web Information Systems, vol. 20 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 16 April 2024

Kunpeng Shi, Guodong Jin, Weichao Yan and Huilin Xing

Accurately evaluating fluid flow behaviors and determining permeability for deforming porous media is time-consuming and remains challenging. This paper aims to propose a novel…

Abstract

Purpose

Accurately evaluating fluid flow behaviors and determining permeability for deforming porous media is time-consuming and remains challenging. This paper aims to propose a novel machine-learning method for the rapid estimation of permeability of porous media at different deformation stages constrained by hydro-mechanical coupling analysis.

Design/methodology/approach

A convolutional neural network (CNN) is proposed in this paper, guided by the results of a finite element coupling analysis of the equilibrium equation for mechanical deformation and the Boltzmann equation for fluid dynamics during the hydro-mechanical coupling process [denoted the finite element lattice Boltzmann model (FELBM) in this paper]. The FELBM enables lattice Boltzmann analysis of coupled fluid flow on an unstructured mesh, which varies with the nodal displacements resulting from mechanical deformation. It provides reliable label data for permeability estimation at different stages using the CNN.
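A minimal sketch of the CNN regression stage (deformed pore-geometry image to a scalar permeability), assuming PyTorch; in the paper the labels come from FELBM simulations, and the input resolution here is an assumption:

```python
# Sketch only: CNN mapping a binary pore-space image to permeability.
import torch
import torch.nn as nn

class PermeabilityCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 64), nn.ReLU(),
            nn.Linear(64, 1),  # scalar permeability estimate
        )

    def forward(self, pore_image):  # image of the deformed pore space
        return self.net(pore_image)

# Each training pair: (pore image at a deformation stage, FELBM permeability)
k = PermeabilityCNN()(torch.randn(4, 1, 128, 128))
```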

Findings

The proposed CNN can rapidly and accurately estimate the permeability of deformable porous media, significantly reducing processing time. The application studies demonstrate high accuracy in predicting the permeability of deformable porous media for both the test and validation sets: the correlation coefficient (R2) is 0.93 for the validation set, and the R2 values for test sets A and B are 0.93 and 0.94, respectively.

Originality/value

This study proposes an innovative approach with the CNN to rapidly estimate permeability in porous media under dynamic deformations, guided by FELBM coupling analysis. The fast and accurate performance of CNN underscores its promising potential for future applications.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0961-5539

Keywords

Open Access
Article
Publication date: 10 May 2024

Michelle Grace Tetteh-Caesar, Sumit Gupta, Konstantinos Salonitis and Sandeep Jagtap

The purpose of this systematic review is to critically analyze pharmaceutical industry case studies on the implementation of Lean 4.0 methodologies to synthesize key lessons…

Abstract

Purpose

The purpose of this systematic review is to critically analyze pharmaceutical industry case studies on the implementation of Lean 4.0 methodologies to synthesize key lessons, benefits and best practices. The goal is to inform decisions and guide investments in related technologies for enhancing quality, compliance, efficiency and responsiveness across production and supply chain processes.

Design/methodology/approach

The article utilized a systematic literature review (SLR) methodology following five phases: formulating research questions, locating relevant articles, selecting and evaluating articles, analyzing and synthesizing findings and reporting results. The SLR aimed to critically analyze pharmaceutical industry case studies on Lean 4.0 implementation to synthesize key lessons, benefits and best practices.

Findings

Key findings reveal recurrent efficiency gains, obstacles around legacy system integration and data governance as well as necessary operator training investments alongside technological upgrades. On average, quality assurance reliability improved by over 50%, while inventory waste declined by 57% based on quantified metrics across documented initiatives synthesizing robotics, sensors and analytics.

Research limitations/implications

As a comprehensive literature review, findings depend on available documented implementations within the search period rather than direct case evaluations. Reporting bias may also skew toward more successful accounts.

Practical implications

Synthesized implementation patterns, performance outcomes and concealed pitfalls provide pharmaceutical leaders with an evidence-based reference guide aiding adoption strategy development, resource planning and workforce transitioning crucial for Lean 4.0 assimilation.

Originality/value

This systematic assessment of pharmaceutical Lean 4.0 adoption offers an unprecedented perspective on the real-world issues, dependencies and modifications necessary for successful integration, one that conceptual projections or isolated case studies alone have not provided until now.

Details

Technological Sustainability, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2754-1312

Keywords

Article
Publication date: 12 April 2024

Ahmad Honarjoo and Ehsan Darvishan

This study aims to obtain methods to identify and locate damage, a topic that has long been discussed in structural engineering. The cost of…

Abstract

Purpose

This study aims to obtain methods to identify and locate damage, a topic that has long been discussed in structural engineering. The cost of repairing and rehabilitating massive bridges and buildings is very high, highlighting the need to monitor structures continuously. One way to track a structure's health is to check for cracks in the concrete. However, current concrete crack detection methods involve complex and computationally heavy calculations.

Design/methodology/approach

This paper presents a new lightweight deep learning architecture for crack classification in concrete structures. The proposed architecture identifies and classifies cracks in less time and with higher accuracy than other traditional, validated crack detection architectures. A standard dataset was used for two-class and multi-class crack detection.
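A minimal sketch of a lightweight classifier in this spirit, assuming Keras and depthwise-separable convolutions as the lightweight building block (the abstract does not specify the layers); Adam is the optimizer the authors report performing best:

```python
# Sketch only: compact separable-convolution CNN for crack classification.
import tensorflow as tf
from tensorflow.keras import layers, models

def lightweight_crack_net(num_classes=2, input_shape=(120, 120, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),       # size is an assumption
        layers.SeparableConv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.SeparableConv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.GlobalAveragePooling2D(),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

two_class = lightweight_crack_net(2)    # cracked vs. non-cracked
multi_class = lightweight_crack_net(4)  # class count is an assumption
```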

Findings

Results show that two-class images were recognized with 99.53% accuracy by the proposed method, and multi-class images were classified with 91% accuracy. The proposed architecture also has a lower execution time than other validated deep learning architectures on the same hardware platform. The Adam optimizer performed better than other optimizers in this research.

Originality/value

This paper presents a framework based on a lightweight convolutional neural network for nondestructive monitoring of structural health to optimize the calculation costs and reduce execution time in processing.

Details

International Journal of Structural Integrity, vol. 15 no. 3
Type: Research Article
ISSN: 1757-9864

Keywords

Article
Publication date: 2 May 2024

Mikias Gugssa, Long Li, Lina Pu, Ali Gurbuz, Yu Luo and Jun Wang

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers’ safety. However…

Abstract

Purpose

Computer vision and deep learning (DL) methods have been investigated for personal protective equipment (PPE) monitoring and detection for construction workers’ safety. However, it is still challenging to implement automated safety monitoring methods in near real time or in a time-efficient manner in real construction practices. Therefore, this study developed a novel solution to enhance the time efficiency to achieve near-real-time safety glove detection and meanwhile preserve data privacy.

Design/methodology/approach

The developed method comprises two primary components: (1) transfer learning methods to detect safety gloves and (2) edge computing to improve time efficiency and data privacy. To compare the developed edge computing-based method with the currently widely used cloud computing-based methods, a comprehensive comparative analysis was conducted from both the implementation and theory perspectives, providing insights into the developed approach’s performance.
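A minimal sketch of the edge-side pattern, assuming PyTorch/torchvision with a stock detector standing in for the fine-tuned glove models; the frame and timings are illustrative:

```python
# Sketch only: on-device inference keeps the frame local (data privacy)
# and avoids the upload/download latency a cloud endpoint would add.
import time
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()  # stand-in detector; the paper fine-tunes models for gloves

frame = torch.rand(3, 480, 640)  # one camera frame, never leaves the device

start = time.perf_counter()
with torch.no_grad():
    detections = model([frame])[0]
latency = time.perf_counter() - start
print(f"edge inference: {latency:.3f}s, {len(detections['boxes'])} boxes")
# A cloud deployment adds serialization and network round-trip time on top
# of the same model latency, which is where the reported 36%-68% gap arises.
```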

Findings

Three DL models achieved mean average precision (mAP) scores ranging from 74.92% to 84.31% for safety glove detection. The other two methods, which combine object detection and classification, achieved an mAP of 89.91% for hand detection and 100% for glove classification. From both the implementation and theory perspectives, the edge computing-based method detected gloves faster than the cloud computing-based method: in the implementation tests, its detection latency was 36%–68% shorter. The findings highlight edge computing's potential for near-real-time detection with improved data privacy.

Originality/value

This study implemented and evaluated DL-based safety monitoring methods on different computing infrastructures to investigate their time efficiency. This study contributes to existing knowledge by demonstrating how edge computing can be used with DL models (without sacrificing their performance) to improve PPE-glove monitoring in a time-efficient manner as well as maintain data privacy.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 27 February 2024

Feng Qian, Yongsheng Tu, Chenyu Hou and Bin Cao

Automatic modulation recognition (AMR) is a challenging problem in intelligent communication systems and has wide application prospects. At present, although many AMR methods…

Abstract

Purpose

Automatic modulation recognition (AMR) is a challenging problem in intelligent communication systems and has wide application prospects. Although many deep learning-based AMR methods have been proposed, they cannot be directly applied to actual wireless communication scenarios, because recognizing real modulated signals usually involves two dilemmas: long sequences and noise. This paper aims to effectively process in-phase quadrature (IQ) sequences of very long signals interfered with by noise.

Design/methodology/approach

This paper proposes a general model for a modulation classifier based on a two-layer nested structure of long short-term memory (LSTM) networks, called the two-layer nested structure LSTM (TLN-LSTM). It exploits the time sensitivity of LSTM and the nested network structure's ability to extract more features, and it can effectively process ultra-long signal IQ sequences collected from real wireless communication scenarios that are interfered with by noise.
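A minimal sketch of the nested idea, assuming PyTorch: an inner LSTM summarizes fixed-length segments of the IQ sequence and an outer LSTM aggregates the segment summaries. Segment length, hidden size and the five-class head are assumptions, not the paper's hyperparameters:

```python
# Sketch only: two-layer nested LSTM over ultra-long IQ sequences.
import torch
import torch.nn as nn

class TLNLSTM(nn.Module):
    def __init__(self, seg_len=128, hidden=64, num_classes=5):
        super().__init__()
        self.seg_len = seg_len
        self.inner = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.outer = nn.LSTM(input_size=hidden, hidden_size=hidden,
                             batch_first=True)
        self.head = nn.Linear(hidden, num_classes)  # AM/FM/GMSK/QPSK/DQPSK

    def forward(self, iq):                     # iq: (batch, seq_len, 2)
        b, t, _ = iq.shape
        segs = iq.reshape(b * (t // self.seg_len), self.seg_len, 2)
        _, (h_in, _) = self.inner(segs)        # summarize each segment
        seg_feats = h_in[-1].reshape(b, t // self.seg_len, -1)
        _, (h_out, _) = self.outer(seg_feats)  # aggregate across segments
        return self.head(h_out[-1])

logits = TLNLSTM()(torch.randn(4, 4096, 2))    # ultra-long IQ sequences
```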

Findings

Experimental results show that the proposed model has higher recognition accuracy for five types of modulation signals collected from real wireless communication scenarios: amplitude modulation, frequency modulation, Gaussian minimum shift keying, quadrature phase shift keying and differential quadrature phase shift keying. The overall classification accuracy of the proposed model on these signals reaches 73.11%, compared with 40.84% for the baseline model. Moreover, the model also achieves high classification performance on analog signals with the same modulation methods in the public data set HKDD_AMC36.

Originality/value

Although many deep learning-based AMR methods have been proposed, these works evaluate signal recognition performance on public AMR data sets rather than on real modulated signals collected in actual wireless communication scenarios, so their methods cannot be directly applied to such scenarios. This paper therefore proposes a new AMR method dedicated to the effective processing of collected ultra-long signal IQ sequences that are interfered with by noise.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Open Access
Article
Publication date: 12 January 2024

Patrik Jonsson, Johan Öhlin, Hafez Shurrab, Johan Bystedt, Azam Sheikh Muhammad and Vilhelm Verendel

This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.

Abstract

Purpose

This study aims to explore and empirically test variables influencing material delivery schedule inaccuracies.

Design/methodology/approach

A mixed-method case approach is applied. Explanatory variables are identified from the literature and explored in a qualitative analysis at an automotive original equipment manufacturer. Using logistic regression and random forest classification models, quantitative data (historical schedule transactions and internal data) enables the testing of the predictive difference of variables under various planning horizons and inaccuracy levels.
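A minimal sketch of the two quantitative models named above, assuming scikit-learn; the features are illustrative stand-ins for the schedule-transaction variables, and the synthetic labels exist only to make the example run:

```python
# Sketch only: logistic regression vs. random forest for classifying
# whether a delivery schedule line turns out inaccurate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical features: product complexity, order life-cycle stage,
# planning horizon (the paper's variables come from OEM schedule data)
X = rng.normal(size=(1000, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for clf in (LogisticRegression(), RandomForestClassifier(random_state=0)):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, f"accuracy={acc:.2f}")
```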

Findings

The effects on delivery schedule inaccuracies are contingent on a decoupling point, and a variable may have a combined amplifying (complexity generating) and stabilizing (complexity absorbing) moderating effect. Product complexity variables are significant regardless of the time horizon, and the item’s order life cycle is a significant variable with predictive differences that vary. Decoupling management is identified as a mechanism for generating complexity absorption capabilities contributing to delivery schedule accuracy.

Practical implications

The findings provide guidelines for exploring and finding patterns in specific variables to reduce material delivery schedule inaccuracies and to provide input into predictive forecasting models.

Originality/value

The findings contribute to explaining material delivery schedule variations, identifying potential root causes and moderators, empirically testing and validating effects, and conceptualizing features that cause and moderate inaccuracies, in relation to the decoupling management and complexity theory literature.

Details

International Journal of Operations & Production Management, vol. 44 no. 13
Type: Research Article
ISSN: 0144-3577

Keywords
