Search results

1 – 10 of 454
Article
Publication date: 26 July 2024

Guilherme Fonseca Gonçalves, Rui Pedro Cardoso Coelho and Igor André Rodrigues Lopes

Abstract

Purpose

The purpose of this research is to establish a robust numerical framework for the calibration of macroscopic constitutive parameters, based on the analysis of polycrystalline RVEs with computational homogenisation.

Design/methodology/approach

This framework is composed of four building blocks: (1) the multi-scale model, consisting of polycrystalline RVEs, where the grains are modelled with anisotropic crystal plasticity, and computational homogenisation to link the scales, (2) a set of loading cases to generate the reference responses, (3) the von Mises elasto-plastic model to be calibrated, and (4) the optimisation algorithms to solve the inverse identification problem. Several optimisation algorithms are assessed through a reference identification problem. Thereafter, different calibration strategies are tested. The accuracy of the calibrated models is evaluated by comparing their results against an FE2 model and experimental data.
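
For readers who want to picture the inverse identification step, the following minimal sketch (not the authors' implementation) fits a 1D linear-hardening elasto-plastic response to a reference curve with a derivative-free global optimiser standing in for LIPO; the parameter names, values and synthetic reference data are all assumptions.

```python
# Minimal sketch (assumptions throughout): calibrate yield stress and
# hardening modulus of a 1D linear-hardening elasto-plastic law against a
# reference curve, with a derivative-free global optimiser standing in for LIPO.
import numpy as np
from scipy.optimize import differential_evolution

E = 200e3  # elastic modulus [MPa], assumed known here

def macro_response(strain, sigma_y, hardening):
    """Uniaxial stress for a linear-hardening elasto-plastic model."""
    elastic = E * strain
    plastic = sigma_y + hardening * (strain - sigma_y / E)
    return np.where(elastic <= sigma_y, elastic, plastic)

# In the paper the reference responses come from homogenised polycrystalline
# RVE simulations; here a synthetic curve is used purely as a placeholder.
strain = np.linspace(0.0, 0.02, 50)
reference = macro_response(strain, sigma_y=250.0, hardening=1500.0)

def objective(params):
    sigma_y, hardening = params
    residual = macro_response(strain, sigma_y, hardening) - reference
    return float(np.sum(residual ** 2))  # least-squares mismatch

result = differential_evolution(objective,
                                bounds=[(50.0, 600.0), (100.0, 5000.0)],
                                seed=0)
print("calibrated [sigma_y, H]:", result.x)
```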

Findings

In the initial tests, the LIPO optimiser performs best. Good accuracy is obtained with the calibrated constitutive models. The computing time required by the FE2 simulations is five orders of magnitude larger than that of the standard macroscopic simulations, demonstrating that this framework is suitable for obtaining efficient micro-mechanics-informed constitutive models.

Originality/value

This contribution proposes a numerical framework, based on FE2 and macro-scale single element simulations, where the calibration of constitutive laws is informed by multi-scale analysis. The most efficient combination of optimisation algorithm and definition of the objective function is studied, and the robustness of the proposed approach is demonstrated by validation with both numerical and experimental data.

Details

Engineering Computations, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-4401

Article
Publication date: 11 July 2024

Chunxiu Qin, Yulong Wang, XuBu Ma, Yaxi Liu and Jin Zhang

Abstract

Purpose

To address the shortcomings of existing academic user information needs identification methods, such as low efficiency and high subjectivity, this study aims to propose an automated method of identifying online academic user information needs.

Design/methodology/approach

This study’s method consists of two main parts: the first is the automatic classification of academic user information needs based on the bidirectional encoder representations from transformers (BERT) model. The second is the key content extraction of academic user information needs based on the improved MDERank key phrase extraction (KPE) algorithm. Finally, the applicability and effectiveness of the method are verified by an example of identifying the information needs of academic users in the field of materials science.
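
As a purely illustrative sketch of the classification half of this pipeline, the snippet below wires a BERT encoder to a sequence-classification head with Hugging Face Transformers; the checkpoint, label set and example query are placeholders, not the authors' trained model or corpus.

```python
# Illustrative only: a BERT encoder with a sequence-classification head for
# assigning an information-need category to an academic user post. The
# checkpoint, labels and example query are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["methods", "experimental phenomena", "experimental materials"]  # assumed subset

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)  # in practice this head would be fine-tuned on a labelled corpus of user posts

def classify(text: str) -> str:
    inputs = tokenizer(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify("Which sintering temperature is suitable for this alloy?"))
```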

Findings

Experimental results show that the BERT-based information needs classification model achieved the highest weighted average F1 score of 91.61%, and the improved MDERank KPE algorithm achieved the highest F1 score of 61%. The empirical analysis reveals that information needs in the categories “methods,” “experimental phenomena” and “experimental materials” are relatively high in the materials science field.

Originality/value

This study provides a solution for automated identification of academic user information needs. It helps online academic resource platforms to better understand their users’ information needs, which in turn facilitates the platform’s academic resource organization and services.

Details

The Electronic Library, vol. 42 no. 5
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 27 February 2024

Feng Qian, Yongsheng Tu, Chenyu Hou and Bin Cao

Abstract

Purpose

Automatic modulation recognition (AMR) is a challenging problem in intelligent communication systems and has wide application prospects. Although many deep learning-based AMR methods have been proposed, they cannot be applied directly to actual wireless communication scenarios, because recognizing real modulated signals usually involves two difficulties, namely very long sequences and noise. This paper aims to effectively process in-phase quadrature (IQ) sequences of very long signals interfered by noise.

Design/methodology/approach

This paper proposes a general modulation classification model based on a two-layer nested structure of long short-term memory (LSTM) networks, called TLN-LSTM. The model exploits the time sensitivity of LSTM and the ability of the nested network structure to extract more features, and can effectively process ultra-long IQ signal sequences collected from real wireless communication scenarios that are interfered by noise.
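
The abstract does not give the network details, but one plausible way to nest two LSTM layers for very long IQ sequences is sketched below: an inner LSTM summarises fixed-length chunks and an outer LSTM models the sequence of chunk summaries. This is an assumption-laden illustration, not the authors' TLN-LSTM.

```python
# Illustrative sketch (not the authors' TLN-LSTM): one way to nest two LSTMs
# so that an inner LSTM summarises short chunks of a very long IQ sequence
# and an outer LSTM models the sequence of chunk summaries.
import torch
import torch.nn as nn

class NestedLSTMClassifier(nn.Module):
    def __init__(self, chunk_len=128, hidden=64, num_classes=5):
        super().__init__()
        self.chunk_len = chunk_len
        self.inner = nn.LSTM(input_size=2, hidden_size=hidden, batch_first=True)
        self.outer = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, iq):                      # iq: (batch, length, 2) I/Q pairs
        b, length, _ = iq.shape
        n_chunks = length // self.chunk_len
        chunks = iq[:, : n_chunks * self.chunk_len].reshape(
            b * n_chunks, self.chunk_len, 2
        )
        _, (h_inner, _) = self.inner(chunks)    # summarise each chunk
        chunk_feats = h_inner[-1].reshape(b, n_chunks, -1)
        _, (h_outer, _) = self.outer(chunk_feats)
        return self.head(h_outer[-1])           # logits over modulation classes

logits = NestedLSTMClassifier()(torch.randn(4, 4096, 2))
print(logits.shape)  # torch.Size([4, 5])
```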

Findings

Experimental results show that the proposed model has higher recognition accuracy for five types of modulation signals collected from real wireless communication scenarios: amplitude modulation, frequency modulation, Gaussian minimum shift keying, quadrature phase shift keying and differential quadrature phase shift keying. The overall classification accuracy of the proposed model for these signals reaches 73.11%, compared with 40.84% for the baseline model. Moreover, the model also achieves high classification performance for analog signals with the same modulation methods in the public data set HKDD_AMC36.

Originality/value

Although many deep learning-based AMR methods have been proposed, these works evaluate recognition performance on the various modulated signals of public AMR data sets rather than on real modulated signals collected in actual wireless communication scenarios, so their methods cannot be applied directly to such scenarios. This paper therefore proposes a new AMR method dedicated to the effective processing of collected ultra-long IQ signal sequences that are interfered by noise.

Details

International Journal of Web Information Systems, vol. 20 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 29 March 2024

Sihao Li, Jiali Wang and Zhao Xu

Abstract

Purpose

The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.

Design/methodology/approach

This study first analyzes typical building standards in the fields of architecture and fire protection, and an ontology of their elements is developed. Based on this, a building standard corpus is built, and deep learning models are trained to automatically label the building standard texts. Neo4j is used for knowledge graph construction and storage, and a Dynamo-based data extraction method is designed to obtain the checking data files. A matching algorithm is then devised to express the logical rules of the knowledge graph triples, resulting in automated compliance checking of BIM models.
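
A hedged sketch of the checking step is shown below: rule triples stored in Neo4j are matched against values extracted from the BIM model (for example via a Dynamo export). The connection details, node labels, properties and the sample rule are invented for illustration and are not the paper's schema.

```python
# Hypothetical sketch of the checking step: rule triples live in Neo4j and
# extracted BIM element values are compared against them with a Cypher match.
# Node labels, properties and the sample rule are invented.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

CHECK_QUERY = """
MATCH (r:Rule {subject: $element_type, attribute: $attribute})
RETURN r.operator AS op, r.threshold AS threshold
"""

def check_element(element_type, attribute, value):
    with driver.session() as session:
        record = session.run(
            CHECK_QUERY, element_type=element_type, attribute=attribute
        ).single()
        if record is None:
            return "no applicable rule"
        op, threshold = record["op"], record["threshold"]
        passed = value >= threshold if op == ">=" else value <= threshold
        return "compliant" if passed else "non-compliant"

# Example: a fire door width extracted from the BIM model (value is made up).
print(check_element("FireDoor", "width_mm", 950.0))
```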

Findings

Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.

Originality/value

This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and completes the automated process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 25 December 2023

Umair Khan, William Pao, Karl Ezra Salgado Pilario, Nabihah Sallih and Muhammad Rehan Khan

Abstract

Purpose

Identifying the flow regime is a prerequisite for accurately modeling two-phase flow. This paper aims to introduce a comprehensive data-driven workflow for flow regime identification.

Design/methodology/approach

A numerical two-phase flow model was validated against experimental data and was used to generate dynamic pressure signals for three different flow regimes. First, four distinct methods were used for feature extraction: discrete wavelet transform (DWT), empirical mode decomposition, power spectral density and the time series analysis method. Kernel Fisher discriminant analysis (KFDA) was used to simultaneously perform dimensionality reduction and machine learning (ML) classification for each set of features. Finally, the Shapley additive explanations (SHAP) method was applied to make the workflow explainable.
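
The feature-extraction idea can be sketched briefly: per-level minimum and maximum statistics of the DWT coefficients of a pressure signal are fed to a classifier. In the sketch below a generic kernel SVM stands in for KFDA, and the signals are synthetic placeholders; none of this reproduces the paper's data or model.

```python
# Illustrative sketch of the DWT feature-extraction step: per-level minimum
# and maximum statistics of the wavelet coefficients of a dynamic pressure
# signal, fed to a generic classifier as a stand-in for the paper's KFDA.
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)  # [cA4, cD4, ..., cD1]
    feats = []
    for c in coeffs:
        feats += [c.min(), c.max()]   # the kind of per-level statistics SHAP ranked
    return np.array(feats)

# Synthetic placeholder signals; real ones would be dynamic pressure traces
# for the three flow regimes.
rng = np.random.default_rng(0)
X = np.array([dwt_features(rng.standard_normal(1024) * (k + 1))
              for k in range(3) for _ in range(20)])
y = np.repeat([0, 1, 2], 20)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```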

Findings

The results highlighted that the DWT + KFDA method exhibited the highest testing and training accuracy, at 95.2% and 88.8%, respectively. The results also include a virtual flow regime map to facilitate the visualization of features in two dimensions. Finally, SHAP analysis showed that the minimum and maximum values extracted at the fourth and second signal decomposition levels of DWT are the best flow-distinguishing features.

Practical implications

This workflow can be applied to opaque pipes fitted with pressure sensors to achieve flow assurance and automatic monitoring of two-phase flow occurring in many process industries.

Originality/value

This paper presents a novel flow regime identification method by fusing dynamic pressure measurements with ML techniques. The authors’ novel DWT + KFDA method demonstrates superior performance for flow regime identification with explainability.

Details

International Journal of Numerical Methods for Heat & Fluid Flow, vol. 34 no. 8
Type: Research Article
ISSN: 0961-5539

Article
Publication date: 31 May 2024

Fanfan Meng and Xinying Cao

Abstract

Purpose

This study establishes an ontology-based framework for rework risk identification (RRI) by integrating heterogeneous data from the information flow of the prefabricated construction (PC) process. The main objective is to enhance the automation level of rework management and reduce the degree of reliance on human factors and manual operations.

Design/methodology/approach

The proposed framework comprises four levels aimed at managing dispersed rework risk knowledge and integrating heterogeneous data. The functionalities were realised through an integrated ontology that aligned the rework risk ontology with the PC ontology. The ontologies were developed and edited with Protégé. Ultimately, the potential benefit of the framework was validated through a case study and an expert questionnaire survey.
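
To illustrate how such an integrated ontology might be queried once exported from Protégé, the sketch below runs a SPARQL query with rdflib over a tiny in-memory graph; the namespace, classes and properties are invented and do not reflect the authors' ontology.

```python
# Hypothetical sketch: a tiny in-memory stand-in for an integrated rework-risk /
# prefabricated-construction ontology (e.g. one exported from Protégé), queried
# with SPARQL via rdflib. The namespace, classes and properties are invented.
from rdflib import Graph, Namespace, Literal

RR = Namespace("http://example.org/rework-risk#")
g = Graph()
g.add((RR.WallPanel_01, RR.hasRiskFactor, RR.DimensionDeviation))
g.add((RR.DimensionDeviation, RR.mitigatedBy, Literal("re-measure before hoisting")))

QUERY = """
PREFIX rr: <http://example.org/rework-risk#>
SELECT ?component ?factor ?measure
WHERE {
    ?component rr:hasRiskFactor ?factor .
    OPTIONAL { ?factor rr:mitigatedBy ?measure . }
}
"""

for row in g.query(QUERY):
    print(row.component, row.factor, row.measure)
```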

Findings

The framework is proven to effectively manage rework risk knowledge and can identify risk objects, clarify risk factors, determine risk events and retrieve risk measures, thereby enabling the pre-identification of prefabricated rework risk (PRR) and improving the automation level. This study lays the foundation for applying other computational methods to rework management research and practice in the future.

Originality/value

This research provides insights into the application of ontology to solve rework risk issues in the PC process and introduces a novel risk management method for future prefabricated project research and practice. The findings have significant theoretical value in terms of enriching the methods of risk assessment and control and the information management system of prefabricated projects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 14 August 2024

Lizhi Zhou, Chuan Wang, Pei Niu, Hanming Zhang, Ning Zhang, Quanyi Xie, Jianhong Wang, Xiao Zhang and Jian Liu

Abstract

Purpose

Laser point clouds are a 3D reconstruction method with wide range, high accuracy and strong adaptability. The purpose of this study is therefore to develop a construction point cloud extraction method that can obtain complete information about rebar construction, facilitating construction quality inspection and tunnel data archiving and reducing the cost and complexity of construction management.

Design/methodology/approach

Firstly, this paper analyzes the point cloud data of the tunnel during the construction phase, extracts the main features of the rebar data and proposes an M-E-L recognition method. Secondly, based on the actual conditions of the tunnel and the specifications of Chinese tunnel engineering, a rebar model experiment is designed to obtain experimental data. Finally, the feasibility and accuracy of the M-E-L recognition method are analyzed and tested based on the experimental data from the model.

Findings

Based on tunnel morphology characteristics, data preprocessing, Euclidean clustering and PCA shape extraction methods, an M-E-L identification algorithm is proposed for identifying secondary lining rebars during the highway tunnel construction stage. The algorithm achieves 100% extraction of the first-layer rebars, allowing for three-dimensional visualization of the on-site rebar situation; through further data processing, rebar dimensions and spacings can be obtained. For the second-layer rebars, 55% extraction is achieved, providing information on the rebar skeleton and partial rebar details at the construction site. These extracted data can be further processed to verify compliance with construction requirements.
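
A rough sketch of the clustering-plus-shape-analysis idea is given below: DBSCAN stands in for Euclidean clustering, and PCA flags elongated, bar-like clusters that could correspond to rebar. The thresholds and synthetic point cloud are assumptions, not the authors' M-E-L algorithm.

```python
# Rough sketch (not the authors' M-E-L algorithm): DBSCAN stands in for
# Euclidean clustering, and PCA flags elongated, bar-like clusters that could
# correspond to rebar. Thresholds and the synthetic point cloud are invented.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA

def find_bar_like_clusters(points, eps=0.05, min_samples=20, elongation=20.0):
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    bars = []
    for label in set(labels) - {-1}:                    # -1 marks noise points
        cluster = points[labels == label]
        var = PCA(n_components=3).fit(cluster).explained_variance_
        if var[0] / max(var[1], 1e-12) > elongation:    # dominant axis => bar-like
            bars.append(cluster)
    return bars

# Synthetic placeholder: one straight "rebar" plus scattered noise points.
rng = np.random.default_rng(1)
bar = np.c_[np.linspace(0.0, 1.0, 500), np.zeros(500), np.zeros(500)]
bar += rng.normal(0.0, 0.003, bar.shape)
noise = rng.uniform(-1.0, 1.0, (200, 3))
print(len(find_bar_like_clusters(np.vstack([bar, noise]))))  # expected: 1
```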

Originality/value

This paper introduces a laser point cloud method for double-layer rebar identification in tunnels. Current methods rely heavily on manual detection and lack objectivity. Objective approaches for automatic rebar identification include image-based and LiDAR-based methods: image-based methods are constrained by tunnel lighting conditions, while LiDAR-based methods focus on straight rebar skeletons. Our research proposes a 3D point cloud recognition algorithm for tunnel lining rebar that can extract double-layer rebars and obtain construction rebar dimensions, enhancing management efficiency.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 9 April 2024

Shola Usharani, R. Gayathri, Uday Surya Deveswar Reddy Kovvuri, Maddukuri Nivas, Abdul Quadir Md, Kong Fah Tee and Arun Kumar Sivaraman

Abstract

Purpose

Automated detection of cracked surfaces on buildings or on industrially manufactured products is an emerging need, and detecting cracked surfaces is a challenging task for inspectors. Image-based automatic inspection of cracks can be very effective compared to human eye inspection. With advances in deep learning techniques, such methods can be used to automate inspection work in particular sectors of various industries.

Design/methodology/approach

In this study, an upgraded convolutional neural network-based crack detection method is proposed. The dataset consists of 3,886 images, including cracked and non-cracked images, which were split into training and validation data. To inspect the cracks more accurately, data augmentation was performed on the dataset, and regularization techniques were utilized to reduce overfitting. In this work, the VGG19, Xception, Inception V3 and ResNet50 V2 CNN architectures are used to train on the data.
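
As an illustrative transfer-learning sketch in the spirit of this setup, the snippet below builds an Xception-based binary crack classifier with simple augmentation and dropout in Keras; the input size, augmentation choices and hyperparameters are assumptions rather than the authors' exact configuration.

```python
# Illustrative transfer-learning sketch: a pretrained Xception backbone with
# light augmentation and dropout for binary crack / non-crack classification.
# Input size, augmentation and hyperparameters are assumptions.
import tensorflow as tf

backbone = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3), pooling="avg"
)
backbone.trainable = False  # first train only the new classification head

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(299, 299, 3)),
    tf.keras.layers.RandomFlip("horizontal"),           # data augmentation
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # Xception expects [-1, 1]
    backbone,
    tf.keras.layers.Dropout(0.3),                       # regularisation against overfitting
    tf.keras.layers.Dense(1, activation="sigmoid"),     # crack vs. non-crack
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # image datasets not shown here
```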

Findings

A comparison between the trained models shows that Xception performs better than the other algorithms, with 99.54% test accuracy. The results show that the Xception algorithm is very efficient at detecting cracked regions and firm non-cracked regions.

Originality/value

The proposed method can be extended to the automatic inspection of cracks in buildings with different design patterns, such as decorated historical monuments.

Details

International Journal of Structural Integrity, vol. 15 no. 3
Type: Research Article
ISSN: 1757-9864

Article
Publication date: 1 November 2023

Juan Yang, Zhenkun Li and Xu Du

Abstract

Purpose

Although numerous signal modalities are available for emotion recognition, audio and visual modalities are the most common and predominant forms for human beings to express their emotional states in daily communication. Therefore, achieving automatic and accurate audiovisual emotion recognition is particularly important for developing an engaging and empathetic human–computer interaction environment. However, two major challenges exist in the field of audiovisual emotion recognition: (1) how to effectively capture representations of each single modality and eliminate redundant features and (2) how to efficiently integrate information from these two modalities to generate discriminative representations.

Design/methodology/approach

A novel key-frame extraction-based attention fusion network (KE-AFN) is proposed for audiovisual emotion recognition. KE-AFN attempts to integrate key-frame extraction with multimodal interaction and fusion to enhance audiovisual representations and reduce redundant computation, filling the research gaps of existing approaches. Specifically, the local maximum–based content analysis is designed to extract key-frames from videos for the purpose of eliminating data redundancy. Two modules, including “Multi-head Attention-based Intra-modality Interaction Module” and “Multi-head Attention-based Cross-modality Interaction Module”, are proposed to mine and capture intra- and cross-modality interactions for further reducing data redundancy and producing more powerful multimodal representations.
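
A generic cross-modal attention sketch (not KE-AFN itself) is given below: audio features attend over key-frame visual features and vice versa, and the fused representation feeds an emotion classifier. All dimensions and the pooling choice are invented for illustration.

```python
# Generic cross-modal attention sketch (not the authors' KE-AFN): audio
# features attend over key-frame visual features and vice versa, and the
# fused representation feeds an emotion classifier. Dimensions are invented.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    def __init__(self, dim=256, heads=4, num_emotions=8):
        super().__init__()
        self.audio_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, num_emotions)

    def forward(self, audio, visual):          # (batch, time, dim) each
        a, _ = self.audio_to_visual(audio, visual, visual)   # audio queries visual
        v, _ = self.visual_to_audio(visual, audio, audio)    # visual queries audio
        fused = torch.cat([a.mean(dim=1), v.mean(dim=1)], dim=-1)
        return self.classifier(fused)

logits = CrossModalFusion()(torch.randn(2, 50, 256), torch.randn(2, 16, 256))
print(logits.shape)  # torch.Size([2, 8])
```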

Findings

Extensive experiments on two benchmark datasets (i.e. RAVDESS and CMU-MOSEI) demonstrate the effectiveness and rationality of KE-AFN. Specifically, (1) KE-AFN is superior to state-of-the-art baselines for audiovisual emotion recognition. (2) Exploring the supplementary and complementary information of different modalities can provide more emotional clues for better emotion recognition. (3) The proposed key-frame extraction strategy can enhance performance by more than 2.79 per cent in accuracy. (4) Both exploring intra- and cross-modality interactions and employing attention-based audiovisual fusion lead to better prediction performance.

Originality/value

The proposed KE-AFN can support the development of engaging and empathetic human–computer interaction environment.

Article
Publication date: 10 September 2024

G.R. Nisha and V. Ravi

Abstract

Purpose

Quality 4.0 is essential to the Industry 4.0 framework, notably in the electronics sector. It evaluates product quality in real-time using automatic process controls, quality tools and procedures. The implementation of Quality 4.0 criteria in the electronics industry is the subject of this study’s investigation and analysis. In this study, nine Customer Requirements (CRs) and 18 Design Requirements (DRs) have been defined to adopt Quality 4.0, aiming to increase yield while reducing defects. This study has developed a Quality 4.0 framework for effective implementation, incorporating the People, Process and Technology categories.

Design/methodology/approach

Many CRs and DRs of Quality 4.0 exhibit interdependencies. The Analytic Network Process (ANP) considers interdependencies among the criteria at various levels. Quality Function Deployment (QFD) can capture the customer’s voice, which is particularly important in Quality 4.0. Therefore, in this research, we use an integrated ANP-QFD methodology for prioritizing DRs based on the customers' needs and preferences, ultimately leading to better product and service development.
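
A minimal numeric sketch of the QFD step is shown below: design-requirement priorities are obtained by weighting a CR-to-DR relationship matrix with CR weights (which, in the paper's approach, would come from the ANP analysis). The numbers are made up for illustration only.

```python
# Minimal numeric sketch of the QFD step: design-requirement priorities are
# obtained by weighting a CR-to-DR relationship matrix with CR weights (the
# weights would come from the ANP analysis; these numbers are made up).
import numpy as np

cr_weights = np.array([0.30, 0.25, 0.25, 0.20])   # e.g. automatic systems,
                                                   # connectivity, compliance, leadership
relationship = np.array([                          # rows: CRs, columns: DRs (1/3/9 scale)
    [9, 3, 1],
    [3, 9, 3],
    [1, 3, 9],
    [3, 1, 3],
])

dr_scores = cr_weights @ relationship
dr_priorities = dr_scores / dr_scores.sum()
print(dr_priorities)   # relative priority of each design requirement
```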

Findings

According to the research findings, the most critical customer criteria for Quality 4.0 in the electronics sector are automatic systems, connectivity, compliance and leadership. The Intelligent Internet of Things (IIoT) has emerged as the most significant design requirement enabling effective control in production. It is observed that robotic process automation and a workforce aligned with Quality 4.0 also play crucial roles.

Originality/value

Existing literature does not include studies on identifying CRs and DRs for implementing Quality 4.0 in the electronics industry. To address this gap, we propose a framework to integrate real-time quality measures into the Industry 4.0 context, thereby facilitating the implementation of Quality 4.0 in the electronics industry. This study can provide valuable insights for industry practitioners to implement Quality 4.0 effectively in their organizations.

Details

International Journal of Quality & Reliability Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0265-671X
