Search results
1 – 10 of over 4000
Sihao Li, Jiali Wang and Zhao Xu
Abstract
Purpose
The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.
Design/methodology/approach
This study first analyzes typical building standards in the fields of architecture and fire protection, and then an ontology of their elements is developed. Based on this, a building standard corpus is built, and deep learning models are trained to automatically label building standard texts. Neo4j is utilized for knowledge graph construction and storage, and a data extraction method based on Dynamo is designed to obtain checking data files. After that, a matching algorithm is devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking of BIM models.
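The triple-matching step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the rule, element types and properties (e.g. `FireDoor`, `width_m`) are hypothetical examples:

```python
# Minimal sketch of matching rule triples from a knowledge graph against
# data extracted from a BIM model. All names here are hypothetical.

def check_compliance(rule_triples, bim_elements):
    """Compare each (element_type, property, required_value) rule
    against the extracted BIM data and collect violations."""
    violations = []
    for element in bim_elements:
        for etype, prop, required in rule_triples:
            if element["type"] != etype or prop not in element:
                continue
            if element[prop] < required:
                violations.append((element["id"], prop, element[prop], required))
    return violations

# Example: a fire-protection rule requiring door width of at least 0.9 m
rules = [("FireDoor", "width_m", 0.9)]
elements = [
    {"id": "D1", "type": "FireDoor", "width_m": 1.0},
    {"id": "D2", "type": "FireDoor", "width_m": 0.8},
]
print(check_compliance(rules, elements))  # [('D2', 'width_m', 0.8, 0.9)]
```

In practice the rule triples would come from the Neo4j knowledge graph and the element data from the Dynamo extraction step; the flat lists here stand in for both.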
Findings
Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.
Originality/value
This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and completes the automated process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.
The-Quan Nguyen, Eric C.W. Lou and Bao Ngoc Nguyen
Abstract
Purpose
This paper aims to provide an integrated BIM-based approach to quantity take-off (QTO) for progress payments in the context of high-rise buildings in Vietnam. It tries to answer the following questions: (1) When should the QTO processes start in order to facilitate contract progress payments? (2) What information is required to measure the quantity of works for estimating contract progress payments? (3) What are the challenges in managing (i.e. creating, storing, updating and exploiting) the information required for this BIM use? and (4) How should the information be processed to deliver BIM-based QTO that facilitates contract progress payments?
Design/methodology/approach
The paper applied a deductive approach and expert consensus through a Delphi procedure to adapt to current innovation around BIM-based QTO. Starting with a literature review, it then discusses current practices in BIM-based QTO in general and in high-rise building projects in particular. Challenges were compiled from previous studies as references for BIM-based QTO to facilitate contract progress payment in high-rise building projects in Vietnam. A framework was developed considering a standard information management process throughout the construction lifecycle in which the BIM use of this study is delivered. The framework was validated with the Delphi technique.
Findings
Four major challenges for BIM-based QTO were discovered: new types of information required for the BIM model, changes and updates as projects progress, low interoperability between the BIM model and estimation software, and the potential for low productivity and accuracy in data entry. The information required for QTO to facilitate progress payments in high-rise building projects includes Object Geometric/Appearance Information, Structural Components' Definition and Contextual Information. Trade-offs between “Speed – Level of Detail – Applicable Breadth” and “Quality – Productivity” are proposed for deciding how much information to input at a time when creating/updating BIM objects. An interoperability check is needed when creating, authoring/updating and processing the BIM model's objects.
Research limitations/implications
This paper is not flawless. The first limitation is that the theoretical framework was established based only on desk research and a small number of expert judgments. Further primary data collection would be needed to determine exactly how the framework underlies widespread practice. Secondly, this study discussed quantity take-off specifically for contract progress payment, but not for other purposes or broader BIM uses. Further research in this field would be of great help in developing a standard protocol for an automatic quantity surveying system in Vietnam.
Originality/value
A new theoretical framework for BIM-based QTO, validated with the Delphi technique, to facilitate progress payments for high-rise building projects, considering all information management stages and the phases of information development in the project lifecycle. The framework identified four types of information required for this QTO and detailed considerations for strategies (Library Objects Development, BIM Objects Information Declaration, BIM-based QTO) to better manage the information for this BIM use. Two trade-offs, “Speed – LOD – Applicable Breadth” and “Quality – Productivity”, have been proposed for facilitating the strategies and for enhancing the overall efficiency and effectiveness of the QTO process.
Xiaobo Tang, Heshen Zhou and Shixuan Li
Abstract
Purpose
Predicting highly cited papers can enable an evaluation of the potential of papers and the early detection and determination of academic achievement value. However, most highly cited paper prediction studies rely on early citation information, so predicting highly cited papers at the time of publication is challenging. Therefore, the authors propose a method for predicting early highly cited papers based on the papers' own features.
Design/methodology/approach
This research analyzed academic papers published in the Journal of the Association for Computing Machinery (ACM) from 2000 to 2013. Five types of features were extracted: paper features, journal features, author features, reference features and semantic features. Subsequently, the authors applied a deep neural network (DNN), support vector machine (SVM), decision tree (DT) and logistic regression (LGR), and they predicted highly cited papers 1–3 years after publication.
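The classification setup described above can be illustrated with a minimal logistic-regression sketch. The two features and synthetic labels below are invented for illustration; they are not the paper's dataset, feature set or exact models:

```python
import math

# Minimal sketch (not the authors' models) of training a logistic-regression
# classifier to flag "highly cited" papers from publication-time features.

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Stochastic gradient descent on the log-loss."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            g = p - yi                        # gradient of the log-loss
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z > 0 else 0

# Hypothetical features: (share of high-quality journal references,
#                         scaled author reputation score)
X = [(0.9, 0.6), (0.8, 0.7), (0.2, 0.1), (0.1, 0.2)]
y = [1, 1, 0, 0]  # 1 = became highly cited
w, b = train_logreg(X, y)
print(predict(w, b, (0.85, 0.65)))  # 1: predicted highly cited
```

In the study itself the features would be the five extracted feature types and the models would include a DNN, SVM and decision tree alongside logistic regression; this sketch shows only the shared train-then-predict pattern.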
Findings
Experimental results showed that early highly cited academic papers are predictable when they are first published. The authors’ prediction models showed considerable performance. This study further confirmed that the features of references and authors play an important role in predicting early highly cited papers. In addition, the proportion of high-quality journal references has a more significant impact on prediction.
Originality/value
Based on the available information at the time of publication, this study proposed an effective early highly cited paper prediction model. This study facilitates the early discovery and realization of the value of scientific and technological achievements.
Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao
Abstract
Purpose
Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and of improving construction safety management. Given the difficulty of reusing unstructured data in the construction industry, the knowledge it contains is difficult to use directly for safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and a safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities through a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.
Design/methodology/approach
The ontology model for knowledge representation of construction safety accidents is constructed by integrating entity relations and event evolution logic. Then, the database of safety incidents in the architecture, engineering and construction (AEC) industry is established based on the collected construction safety incident reports and related dispute cases. The construction method of the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC domain construction safety accident knowledge graph (AEC-KG), which provides a visual query knowledge service and verifies the operability of knowledge management.
Findings
The experimental results show that the combined BERT-BiLSTM-CRF algorithm has a precision of 84.52%, a recall of 92.35%, and an F1 value of 88.26% in named entity recognition from the AEC domain database. The construction safety knowledge representation model and safety incident knowledge graph realize knowledge visualization.
Originality/value
The proposed framework provides a new knowledge management approach to improve the safety management of practitioners and also enriches the application scenarios of knowledge graphs. On the one hand, it innovatively proposes a data application method and knowledge management method for safety accident reports that integrates entity relationships and event evolution logic. On the other hand, the legal adjudication dimension is innovatively added to the knowledge graph in the construction safety field as the basis for post-incident disposal measures, which provides a reference for safety managers' decision-making in all aspects.
Na Xu, Yanxiang Liang, Chaoran Guo, Bo Meng, Xueqing Zhou, Yuting Hu and Bo Zhang
Abstract
Purpose
Safety management plays an important part in coal mine construction. Due to complex data, the implementation of the construction safety knowledge scattered in standards poses a challenge. This paper aims to develop a knowledge extraction model to automatically and efficiently extract domain knowledge from unstructured texts.
Design/methodology/approach
A bidirectional encoder representations from transformers (BERT)-bidirectional long short-term memory (BiLSTM)-conditional random field (CRF) method based on a pre-trained language model was applied to carry out knowledge entity recognition in the field of coal mine construction safety in this paper. Firstly, 80 safety standards for coal mine construction were collected, sorted and annotated as a descriptive corpus. Then, the BERT pre-trained language model was used to obtain dynamic word vectors. Finally, the BiLSTM-CRF model determined the optimal tag sequence for each entity.
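The final step of such a pipeline can be illustrated by decoding a BIO tag sequence (as a BiLSTM-CRF layer would emit) into entity spans. The tokens, tag names and entity types below are hypothetical, not from the paper's corpus:

```python
# Minimal sketch of turning a BIO tag sequence into (text, type) entity spans.

def extract_entities(tokens, tags):
    """Collect (entity_text, entity_type) spans from BIO-tagged tokens."""
    entities, current, ctype = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):             # a new entity begins
            if current:
                entities.append((" ".join(current), ctype))
            current, ctype = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            current.append(token)            # entity continues
        else:                                # "O" or inconsistent tag: flush
            if current:
                entities.append((" ".join(current), ctype))
            current, ctype = [], None
    if current:                              # flush a trailing entity
        entities.append((" ".join(current), ctype))
    return entities

tokens = ["Ventilation", "shafts", "require", "gas", "monitoring", "systems"]
tags   = ["B-FACILITY", "I-FACILITY", "O", "B-EQUIPMENT", "I-EQUIPMENT", "I-EQUIPMENT"]
print(extract_entities(tokens, tags))
# [('Ventilation shafts', 'FACILITY'), ('gas monitoring systems', 'EQUIPMENT')]
```

In the full model, the tag sequence would come from CRF decoding over BERT-BiLSTM features rather than being given directly.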
Findings
Accordingly, 11,933 entities and 2,051 relationships were identified in the standard specification texts, and a language model suitable for coal mine construction safety management was proposed. The experiments showed that F1 values were above 60% for all nine entity types, such as safety management, and the model identified and extracted entities more accurately than conventional methods.
Originality/value
This work completed the domain knowledge query and built a Q&A platform via the entities and relationships identified from the standard specifications for coal mines. This paper proposed a systematic framework for texts in coal mine construction safety to improve the efficiency and accuracy of domain-specific entity extraction. In addition, the pre-trained language model was introduced into coal mine construction safety to realize dynamic entity recognition, which provides technical support and a theoretical reference for the optimization of safety management platforms.
Thien Le, Thanh Ho, Van-Ho Nguyen and Hoanh-Su Le
Abstract
Purpose
This study aims to use the voice of the customer (VoC) strategy to collect user-generated content (UGC), compare customer expectations with reality, make the necessary improvements for the business and create personalized strategies for each customer to maximize revenue, focusing on the hospitality industry in the Vietnamese market.
Design/methodology/approach
This study proposes a synthesis of techniques for a deep understanding of the VoC based on online reviews in the hospitality industry. First, 409,054 comments were collected from websites in the hospitality sector. Second, the data were organized, stored, cleaned, analyzed and evaluated. Next, the research uses business intelligence (BI) solutions integrating three models, namely net promoter score (NPS), a graph model and latent Dirichlet allocation (LDA), based on natural language processing (NLP) techniques, with experiments on Vietnamese and English data to explore the multidimensional voice of the customer. Finally, a dashboard system was implemented to visualize analysis results and recommendations on marketing strategies to improve product and service quality.
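Of the three models, the NPS component is simple enough to sketch directly; a minimal example, assuming reviews carry 0–10 ratings (the sample ratings are invented, not drawn from the study's 409,054-comment dataset):

```python
# Minimal sketch of computing a Net Promoter Score (NPS) from review ratings.

def net_promoter_score(ratings):
    """NPS = % promoters (ratings 9-10) minus % detractors (ratings 0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

ratings = [10, 9, 8, 7, 6, 10, 3, 9]
print(net_promoter_score(ratings))  # 25.0: 4 promoters, 2 detractors out of 8
```

The graph and LDA models in the study operate on the review text itself; NPS only needs the numeric ratings, which is why it reduces to a few lines.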
Findings
Experimental results allow analysts and managers to “listen to the customer's voice” accurately and effectively, and to identify relationships between entities and topics of discussion within positive and negative trends.
Originality/value
The novelty in this study is the integration of three models, including NPS, graph model and LDA. These models are combined based on the BI solution and NLP technique. The study also conducted experiments on both Vietnamese and English languages, which ensures more effective practical application.
Mostafa Dadashi Haji and Behrouz Behnam
Abstract
Purpose
It is well accepted that, to enhance safety performance in a project by preventing hazards, recognizing the safety leading indicators is of paramount importance.
Design/methodology/approach
In this research, the relationship between safety leading indicators is determined, and their impacts on the project are assessed and visualized throughout the time of the project in a proactive manner. Construction and safety experts are first interviewed to determine the most important safety leading indicators of the construction industry, and then the relationships that may exist between them are identified. Furthermore, a system dynamics model is generated using the interviews and integrated with an add-on developed on the building information modeling (BIM) platform. Finally, the impacts of the safety leading indicators on the project are calculated based on their time of occurrence, impact time and effective radius.
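The impact calculation described above (time of occurrence, impact time, effective radius) can be sketched with a simple decay model. The linear decay, parameter names and values here are hypothetical assumptions, not the authors' system dynamics model:

```python
import math

# Minimal sketch of scoring a safety leading indicator's impact at a site
# location and time from its occurrence time, duration and effective radius.

def impact_at(indicator, point, t):
    """Impact decays linearly with distance inside the effective radius
    and is zero outside the indicator's active time window."""
    if not (indicator["t_start"] <= t <= indicator["t_start"] + indicator["duration"]):
        return 0.0
    dist = math.hypot(point[0] - indicator["x"], point[1] - indicator["y"])
    if dist >= indicator["radius"]:
        return 0.0
    return indicator["severity"] * (1.0 - dist / indicator["radius"])

# Hypothetical indicator: a scaffold defect active from t=5 to t=15
scaffold_defect = {"x": 0.0, "y": 0.0, "t_start": 5, "duration": 10,
                   "radius": 20.0, "severity": 8.0}
print(impact_at(scaffold_defect, (0.0, 10.0), 8))   # 4.0: in window, half radius
print(impact_at(scaffold_defect, (0.0, 10.0), 20))  # 0.0: outside time window
```

Summing such per-indicator contributions over a grid of locations would yield a heat map of the kind the BIM add-on visualizes.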
Findings
The add-on generates a heat map that visualizes the impacts of the safety leading indicators on the project through time. Moreover, to assess the effectiveness of the developed tool, a case study is conducted on a station located on a water transfer line. To validate the results of the tool, a survey is also conducted among the project's staff and experts in the field. Previous studies have so far focused on active safety leading indicators that may result in a particular hazard, without considering the effects that safety leading indicators have on one another. This study considers their effects on each other in a real-time manner.
Originality/value
Using this tool, a project's stakeholders and staff can identify hazards proactively; hence, they can make the required decisions in advance to reduce the impact of associated events. Moreover, two other potential contributions of the presented work can be enumerated: firstly, the findings provide a knowledge framework of active safety leading indicators and their interactions for construction safety researchers who can go on to further study safety management; secondly, the proposed framework contributes to encouraging time-based, location-based preventive strategies on construction sites.
Wenbo Ma, Kai Li, Wei-Fong Pan and Xinjie Wang
Abstract
Purpose
The purpose of this paper is to construct an index for systemic risk in China.
Design/methodology/approach
This paper develops a systemic risk index for China (SRIC) using textual information from 26 leading newspapers in China. Our index measures systemic risk across 21 topics relating to China's economy and provides narratives of the sources of systemic risk.
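One simple way to sketch a newspaper-based risk measure of this general kind (not the authors' SRIC methodology) is the share of articles mentioning terms from a risk-topic lexicon; the terms and articles below are invented examples:

```python
# Minimal sketch of a text-based risk index: the share of articles in a
# period mentioning at least one term from a risk lexicon, scaled to 100.

RISK_TERMS = {"default", "bad loans", "liquidity crunch", "bank run"}

def risk_index(articles):
    """Fraction of articles flagged by the lexicon, as a 0-100 score."""
    flagged = sum(
        1 for text in articles
        if any(term in text.lower() for term in RISK_TERMS)
    )
    return 100.0 * flagged / len(articles)

articles = [
    "Regulators warn of rising bad loans in regional lenders",
    "Retail sales grew steadily in the first quarter",
    "A liquidity crunch forced the developer to halt projects",
    "New infrastructure spending announced for western provinces",
]
print(risk_index(articles))  # 50.0: 2 of 4 articles flagged
```

A topic-level version, computed per topic and aggregated over time, would give one series per topic, which is how a 21-topic index can also provide narratives about the sources of risk.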
Findings
SRIC effectively predicts changes in GDP, aggregate financing to the real economy and the purchasing managers’ index. Moreover, SRIC explains several other commonly used macroeconomic indicators. Our risk measure provides a helpful monitoring tool for policymakers to manage systemic risk.
Originality/value
The paper constructs an index of systemic risk based on information extracted from newspaper articles. This approach is new to the literature.
Diego Augusto de Jesus Pacheco and Thomas Schougaard
Abstract
Purpose
This study aims to investigate how to identify and address production levelling problems in assembly lines utilising an intensive manual workforce when higher productivity levels are urgently requested to meet market demands.
Design/methodology/approach
A mixed-methods approach was used in the research design, integrating case study analysis, interviews and qualitative/quantitative data collection and analysis. The methodology also contributes a novel combination of data analysis methods to the literature on operational performance through the use of Natural Language Understanding (NLU) methods.
Findings
First, the findings unveil the impacts on operational performance that transportation, limited documentation and waiting times have in assembly lines composed of an intensive workforce. Second, the paper unveils the role that a limited understanding of how the assembly line functions plays in productivity. Finally, the authors provide actionable insights into the levelling problems in manual assembly lines.
Practical implications
This research supports industries operating assembly lines with intensive use of a manual workforce in improving operational performance. The paper also proposes a novel conceptual model prescriptively guiding quick and long-term improvements in intensive manual-workforce assembly lines. The article assists industrial decision-makers with subsequent turnaround strategies to ensure the higher efficiency levels requested by the market.
Originality/value
The paper offers actionable findings relevant to other manual assembly lines utilising an intensive workforce looking to improve operational performance. Some of the methods and strategies examined in this study to improve productivity require minimal capital investments. Lastly, the study contributes to the empirical literature by identifying production levelling problems in a real context.
Xiaoxian Yang, Zhifeng Wang, Qi Wang, Ke Wei, Kaiqi Zhang and Jiangang Shi
Abstract
Purpose
This study aims to adopt a systematic review approach to examine the existing literature on law and large language models (LLMs). It involves analyzing and synthesizing relevant research papers, reports and scholarly articles that discuss the use of LLMs in the legal domain. The review encompasses various aspects, including an analysis of LLMs, legal natural language processing (NLP), model tuning techniques, data processing strategies and frameworks for addressing the challenges associated with legal question-and-answer (Q&A) systems. Additionally, the study explores potential applications and services that can benefit from the integration of LLMs in the field of intelligent justice.
Design/methodology/approach
This paper surveys the state-of-the-art research on law LLMs and their application in the field of intelligent justice. The study aims to identify the challenges associated with developing Q&A systems based on LLMs and explores potential directions for future research and development. The ultimate goal is to contribute to the advancement of intelligent justice by effectively leveraging LLMs.
Findings
To effectively apply a law LLM, systematic research on LLMs, legal NLP and model adjustment technology is required.
Originality/value
This study contributes to the field of intelligent justice by providing a comprehensive review of the current state of research on law LLMs.