Search results
1 – 10 of 285
Sihao Li, Jiali Wang and Zhao Xu
Abstract
Purpose
The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.
Design/methodology/approach
This study first analyzed typical building standards in the fields of architecture and fire protection, and an ontology of their elements was developed. Based on this, a building standard corpus was built, and deep learning models were trained to automatically label building standard texts. Neo4j was utilized for knowledge graph construction and storage, and a data extraction method based on Dynamo was designed to obtain checking data files. Finally, a matching algorithm was devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking of BIM models.
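The final matching step of this pipeline can be illustrated with a minimal sketch: rule triples derived from a standard are checked against property data extracted from the model. The triple format, element types, property names and threshold values below are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of rule matching between knowledge-graph triples and
# extracted BIM data. Triples are (element type, property, constraint);
# all names and values are hypothetical examples.
OPS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,
}

# Rule triples as they might be derived from a fire-protection standard.
rules = [
    ("FireDoor", "width_mm", (">=", 900)),
    ("Corridor", "clear_width_mm", (">=", 1200)),
]

# Checking data as it might be exported from the BIM model via Dynamo.
model_data = [
    {"type": "FireDoor", "id": "D-01", "width_mm": 850},
    {"type": "Corridor", "id": "C-07", "clear_width_mm": 1500},
]

def check(model_data, rules):
    """Return a list of (element id, property, passed?) results."""
    report = []
    for elem in model_data:
        for etype, prop, (op, required) in rules:
            if elem["type"] == etype and prop in elem:
                passed = OPS[op](elem[prop], required)
                report.append((elem["id"], prop, passed))
    return report

print(check(model_data, rules))
# D-01 fails the 900 mm rule; C-07 passes the 1200 mm rule.
```

In the paper's actual framework the rules would live as triples in Neo4j and the checking data would come from Dynamo exports; this sketch only shows the shape of the comparison logic.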
Findings
Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.
Originality/value
This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and automates the process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.
Yuzhuo Wang, Chengzhi Zhang, Min Song, Seongdeok Kim, Youngsoo Ko and Juhee Lee
Abstract
Purpose
In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers, making mention frequency a classical indicator of their popularity and influence. However, contemporary methods for evaluating influence tend to focus solely on individual algorithms, disregarding the collective impact resulting from the interconnectedness of these algorithms, which can provide a new way to reveal their roles and importance within algorithm clusters. This paper aims to build the co-occurrence network of algorithms in the natural language processing field based on the full-text content of academic papers and analyze the academic influence of algorithms in the group based on the features of the network.
Design/methodology/approach
We use deep learning models to extract algorithm entities from articles and construct the whole, cumulative and annual co-occurrence networks. We first analyze the characteristics of algorithm networks and then use various centrality metrics to obtain the score and ranking of group influence for each algorithm in the whole domain and each year. Finally, we analyze the influence evolution of different representative algorithms.
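The network construction and centrality scoring described above can be sketched in a few lines; the papers and algorithm names below are invented, and a real analysis would use weighted centrality metrics over a much larger corpus.

```python
# Sketch of an algorithm co-occurrence network and a simple degree
# centrality score. Papers and algorithm names are invented examples.
from collections import defaultdict
from itertools import combinations

papers = [
    {"CRF", "LSTM"},
    {"LSTM", "Transformer"},
    {"Transformer", "BERT", "LSTM"},
]

# Edge weight = number of papers in which two algorithms co-occur.
weights = defaultdict(int)
for mentioned in papers:
    for a, b in combinations(sorted(mentioned), 2):
        weights[(a, b)] += 1

# Degree centrality: share of other nodes a node is connected to.
nodes = sorted({n for pair in weights for n in pair})
degree = {n: sum(1 for p in weights if n in p) for n in nodes}
centrality = {n: degree[n] / (len(nodes) - 1) for n in nodes}

ranking = sorted(centrality, key=centrality.get, reverse=True)
print(ranking[0])  # LSTM co-occurs with every other algorithm here
```

Computing such a network per publication year, as the authors do, then lets one trace how an algorithm's rank rises and falls over time.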
Findings
The results indicate that algorithm networks also have the characteristics of complex networks, with tight connections between nodes developing over approximately four decades. Algorithms that are classic, high-performing and appear at the junctions of different eras can attain high popularity, control, a central position and balanced influence in the network. As an algorithm's influence within the group gradually diminishes, it typically loses its core position first, followed by a dwindling association with other algorithms.
Originality/value
To the best of the authors’ knowledge, this paper is the first large-scale analysis of algorithm networks. The extensive temporal coverage, spanning over four decades of academic publications, ensures the depth and integrity of the network. Our results serve as a cornerstone for constructing multifaceted networks interlinking algorithms, scholars and tasks, facilitating future exploration of their scientific roles and semantic relations.
Abstract
Purpose
The tender documents, an essential data source for internet-based logistics tendering platforms, incorporate massive fine-grained data, ranging from information on tenderee, shipping location and shipping items. Automated information extraction in this area is, however, under-researched, making the extraction process a time- and effort-consuming one. For Chinese logistics tender entities, in particular, existing named entity recognition (NER) solutions are mostly unsuitable as they involve domain-specific terminologies and possess different semantic features.
Design/methodology/approach
To tackle this problem, a novel lattice long short-term memory (LSTM) model, combining a variant contextual feature representation and a conditional random field (CRF) layer, is proposed in this paper for identifying valuable entities from logistic tender documents. Instead of traditional word embedding, the proposed model uses the pretrained Bidirectional Encoder Representations from Transformers (BERT) model as input to augment the contextual feature representation. Subsequently, with the Lattice-LSTM model, the information of characters and words is effectively utilized to avoid error segmentation.
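The lattice idea at the core of this model, using lexicon words matched over character spans so the tagger is not committed to a single, possibly wrong, segmentation, can be sketched in isolation. The lexicon and sentence below are a classic illustrative example, not the paper's data, and the sketch omits the BERT, LSTM and CRF components entirely.

```python
# Sketch of the lattice-matching step only: alongside per-character
# processing, every lexicon word matching a character span is attached
# to the lattice at its start position, so word-level information
# reaches the tagger without a hard segmentation decision.
lexicon = {"南京", "南京市", "长江", "长江大桥", "大桥", "市长"}

def build_lattice(sentence, lexicon, max_len=4):
    """Return {start index: [lexicon words beginning at that index]}."""
    lattice = {}
    for i in range(len(sentence)):
        spans = [
            sentence[i:j]
            for j in range(i + 1, min(i + max_len, len(sentence)) + 1)
            if sentence[i:j] in lexicon
        ]
        if spans:
            lattice[i] = spans
    return lattice

# "南京市长江大桥": both "南京市/长江大桥" and the wrong cut
# "南京/市长/江大桥" are representable; the model decides downstream.
print(build_lattice("南京市长江大桥", lexicon))
```

In the full model these matched words feed extra LSTM cells merged into the character sequence, which is how "the information of characters and words is effectively utilized".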
Findings
The proposed model was verified on a Chinese logistics tender named entity corpus. The results suggest that it outperforms other mainstream NER models on this corpus. The proposed model underpins the automatic extraction of logistics tender information, enabling logistics companies to perceive ever-changing market trends and make far-sighted logistics decisions.
Originality/value
(1) A practical model for logistics tender NER is proposed. By employing BERT and fine-tuning it on the downstream task with a small amount of data, the experiments show that the model performs better than existing models. This is the first study, to the best of the authors' knowledge, to extract named entities from Chinese logistics tender documents. (2) A real logistics tender corpus for practical use is constructed, and a program for online processing of real logistics tender documents is developed. The authors believe the model will help logistics companies convert unstructured documents into structured data and thus perceive ever-changing market trends to make far-sighted logistics decisions.
Ying Gao, Qiang Zhang, Xiaoran Wang, Yanmei Huang, Fanshuang Meng and Wan Tao
Abstract
Purpose
Currently, the Tang tomb mural cultural relic resources are presented in a multi-source and heterogeneous manner, with a lack of effective organization and sharing between resources. Therefore, this study aims to propose a multidimensional knowledge discovery solution for Tang tomb mural cultural relic resources.
Design/methodology/approach
Taking the Tang tomb murals collected by the Shaanxi History Museum as an example, and after clarifying the relevant concepts of Tang tomb mural resources and considering both dynamic and static dimensions, a top-down approach was adopted to first construct an ontology model of Tang tomb mural cultural relic resources. The actual case data were then imported into the Neo4j graph database according to the defined schema hierarchy to complete the static organization of knowledge, presented in multimodal form for knowledge reasoning and retrieval. In addition, geographic information system (GIS) technology is used to dynamically display the spatiotemporal distribution of Tang tomb mural resources, and the distribution trend is analyzed from a digital humanities perspective.
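The import of case data into Neo4j under a predefined ontology can be sketched as Cypher statement generation. The class names, properties and the relation below are invented examples, not the study's actual schema.

```python
# Sketch of generating Cypher MERGE statements to load mural instances
# and their relations into Neo4j. All labels, properties and the
# relation name are hypothetical illustrations.
def merge_node(label, props):
    """Build a MERGE statement for a node with the given properties."""
    body = ", ".join(f"{k}: '{v}'" for k, v in props.items())
    return f"MERGE (:{label} {{{body}}})"

def merge_rel(label_a, name_a, rel, label_b, name_b):
    """Build a statement linking two existing nodes by name."""
    return (
        f"MATCH (a:{label_a} {{name: '{name_a}'}}), "
        f"(b:{label_b} {{name: '{name_b}'}}) MERGE (a)-[:{rel}]->(b)"
    )

stmts = [
    merge_node("Mural", {"name": "Polo Match", "dynasty": "Tang"}),
    merge_node("Tomb", {"name": "Tomb of Prince Zhanghuai"}),
    merge_rel("Mural", "Polo Match", "EXCAVATED_FROM",
              "Tomb", "Tomb of Prince Zhanghuai"),
]
for s in stmts:
    print(s)
```

MERGE rather than CREATE keeps repeated imports idempotent, which matters when case data arrive from multiple heterogeneous sources.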
Findings
The multi-dimensional knowledge discovery of Tang tomb mural cultural relic resources can help establish correlations and spatiotemporal relationships between resources, providing support for semantic retrieval and navigation, knowledge discovery, and visualization.
Originality/value
Taking the murals in the collection of the Shaanxi History Museum as an example, this study reveals potential knowledge associations in a static and intelligent way, achieving knowledge discovery and management of Tang tomb murals, and dynamically presents their spatial distribution through GIS technology. This meets the knowledge presentation needs of different users and opens up new ideas for the study of Tang tomb murals.
Abstract
Purpose
Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.
Design/methodology/approach
This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
Findings
This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitated comprehensive record descriptions across different record entities. The study recommends applying archival ontologies together with general-purpose vocabularies to digital records to enhance metadata coherence and semantic search.
Originality/value
Most digital archives operated in Korea make limited use of archival metadata standards. The contribution of this study is a practical application of knowledge graph technology for linking and exploring digital records. The study details the process of collecting raw archive data, preprocessing and enriching it, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with semantic queries over it, can supplement keyword search in conventional digital archives.
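The kind of semantic query that distinguishes this approach from keyword search can be sketched over an in-memory triple store. The identifiers below loosely echo RiC-O and Schema.org naming, but the records, agents and property names are invented.

```python
# Sketch of pattern matching over (subject, predicate, object) triples,
# the basic operation behind the semantic queries the study describes.
# All data below are hypothetical.
triples = [
    ("record:001", "rico:hasCreator", "agent:kim"),
    ("record:002", "rico:hasCreator", "agent:kim"),
    ("record:001", "schema:about", "topic:financial_crisis"),
    ("agent:kim", "rdfs:label", "Kim, J."),
]

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# All records created by agent:kim, regardless of how they were titled.
records = [t[0] for t in query(triples, p="rico:hasCreator", o="agent:kim")]
print(records)
```

A production system would express the same pattern in SPARQL against a triple store; chaining such patterns is what lets related records surface even when they share no keywords.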
Anna Sokolova, Polina Lobanova and Ilya Kuzminov
Abstract
Purpose
The purpose of the paper is to present an integrated methodology for identifying trends in a particular subject area based on a combination of advanced text mining and expert methods. The authors aim to test it in an area of clinical psychology and psychotherapy in 2010–2019.
Design/methodology/approach
The authors demonstrate the way of applying text-mining and the Word2Vec model to identify hot topics (HT) and emerging trends (ET) in clinical psychology and psychotherapy. The analysis of 11.3 million scientific publications in the Microsoft Academic Graph database revealed the most rapidly growing clinical psychology and psychotherapy terms – those with the largest increase in the number of publications, reflecting real or potential trends.
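The growth-based flagging of terms described above can be sketched as follows; the terms and publication counts are invented, and the real methodology additionally uses Word2Vec to group related terms and expert review to validate the candidates.

```python
# Sketch of ranking terms by publication growth over a period, the
# core signal behind hot-topic / emerging-trend detection here.
# All counts are hypothetical.
counts = {  # term -> publications per year
    "mindfulness": {2010: 40, 2019: 400},
    "psychoanalysis": {2010: 300, 2019: 310},
}

def growth(series, start, end):
    """Relative growth in publication count between two years."""
    return (series[end] - series[start]) / series[start]

ranked = sorted(counts, key=lambda t: growth(counts[t], 2010, 2019),
                reverse=True)
print(ranked[0])  # the fastest-growing term
```

Using relative rather than absolute growth keeps small-but-exploding terms (candidate emerging trends) from being drowned out by large established ones.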
Findings
The proposed approach allows one to identify HT and ET for the six thematic clusters related to mental disorders, symptoms, pharmacology, psychotherapy, treatment techniques and important psychological skills.
Practical implications
The developed methodology allows one to see the broad picture of the most dynamic research areas in the field of clinical psychology and psychotherapy in 2010–2019. For clinicians, who are often overwhelmed by practical work, this map of the current research can help identify the areas worthy of further attention to improve the effectiveness of their clinical work. This methodology might be applied for the identification of trends in any other subject area by taking into account its specificity.
Originality/value
The paper demonstrates the value of the advanced text-mining approach for understanding trends in a subject area. To the best of the authors’ knowledge, for the first time, text-mining and the Word2Vec model have been applied to identifying trends in the field of clinical psychology and psychotherapy.
Chao Zhang, Fang Wang, Yi Huang and Le Chang
Abstract
Purpose
This paper aims to reveal the interdisciplinarity of information science (IS) from the perspective of the evolution of theory application.
Design/methodology/approach
Eight representative IS journals were selected as data sources; the theories mentioned in the full texts of the research papers were extracted; and the annual interdisciplinarity of IS was then measured by conducting theory co-occurrence network analysis, diversity measurement and evolution analysis.
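One common way to operationalize such a diversity measure is Shannon entropy over the source disciplines of the theories applied in a year's papers; whether this is the exact measure the study uses is not stated in the abstract, and the discipline counts below are invented.

```python
# Sketch of a Shannon-entropy diversity measure over the distribution
# of theory source disciplines; higher values indicate a more
# interdisciplinary year. Counts are hypothetical.
import math

def shannon_diversity(counts):
    """Shannon entropy of a discipline-count distribution."""
    total = sum(counts.values())
    return -sum((c / total) * math.log(c / total)
                for c in counts.values() if c)

# An evenly spread year vs. a year converging on few disciplines.
year_a = {"psychology": 10, "sociology": 8, "economics": 7, "cs": 5}
year_b = {"psychology": 3, "cs": 27}
print(shannon_diversity(year_a) > shannon_diversity(year_b))  # True
```

Tracking this value year by year is what would make a gradual convergence into a few neighboring disciplines, as the findings report, directly visible.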
Findings
As a young and vibrant discipline, IS has continuously absorbed and internalized external theoretical knowledge and has thus formed a high degree of interdisciplinarity. With the continued application of some kernel theories, the interdisciplinarity of IS appears to be decreasing and gradually converging on a few neighboring disciplines. Influenced by big data and artificial intelligence, the research paradigm of IS is shifting from a theory-centered one to a technology-centered one.
Research limitations/implications
This study helps to understand the evolution of the interdisciplinarity of IS over the past 21 years. The main limitation is that the data were collected from eight journals indexed by the Social Sciences Citation Index, so a small number of theories might have been omitted.
Originality/value
This study identifies the kernel theories in IS research, measures the interdisciplinarity of IS based on the evolution of the co-occurrence network of theory source disciplines and reveals the paradigm shift happening in IS.
Abstract
Purpose
This study investigated the visibility of carbon emissions allowances accounting in the financial reports of 32 clean development mechanism (CDM) projects in the UAE to uncover the obstacles to setting consistent standards for carbon emission accounting. As carbon emissions are monetized as credits, consistent accounting standards can aid decision-makers in the development of carbon emission mitigation strategies.
Design/methodology/approach
This study used a grounded theoretical framework for exploring the terms used in the policy documents of international accounting bodies regarding accounting standards and guidelines for carbon emission credits. Raw qualitative data were gathered, and an inductive approach was used by analyzing documents from various sources using the qualitative data text analysis software QDA Miner 6.
Findings
The findings showed that the financial statement reports of the corporations did not include disclosure of the carbon credit account. This omission was due to the lack of global standardization of carbon credit accounts and emission allowance recognition. This may hinder the production of a comprehensive report containing accurate and valuable financial information relevant to all stakeholders.
Originality/value
The study is among the first to use a grounded theoretical framework to investigate whether corporations are applying common standards and guidelines for carbon emissions accounting.
Constantin Bratianu, Alexeis Garcia-Perez, Francesca Dal Mas and Denise Bedford
Peyman Jafary, Davood Shojaei, Abbas Rajabifard and Tuan Ngo
Abstract
Purpose
Building information modeling (BIM) is a striking development in the architecture, engineering and construction (AEC) industry, which provides in-depth information on different stages of the building lifecycle. Real estate valuation, as a field closely interconnected with the AEC industry, can benefit from 3D technical achievements in BIM technologies. Some studies have attempted to use BIM for real estate valuation procedures. However, there is still a limited understanding of appropriate mechanisms to utilize BIM for valuation purposes and the consequent impact that BIM can have on decreasing the existing uncertainties in the valuation methods. Therefore, the paper aims to analyze the literature on BIM for real estate valuation practices.
Design/methodology/approach
This paper presents a systematic review to analyze existing utilizations of BIM for real estate valuation practices, discovers the challenges, limitations and gaps of the current applications and presents potential domains for future investigations. Research was conducted on the Web of Science, Scopus and Google Scholar databases to find relevant references that could contribute to the study. A total of 52 publications including journal papers, conference papers and proceedings, book chapters and PhD and master's theses were identified and thoroughly reviewed. There was no limitation on the starting date of research, but the end date was May 2022.
Findings
Four domains of application have been identified: (1) developing machine learning-based valuation models using the variables that could directly be captured through BIM and industry foundation classes (IFC) data instances of building objects and their attributes; (2) evaluating the capacity of 3D factors extractable from BIM and 3D GIS in increasing the accuracy of existing valuation models; (3) employing BIM for accurate estimation of components of cost approach-based valuation practices; and (4) extraction of useful visual features for real estate valuation from BIM representations instead of 2D images through deep learning and computer vision.
Originality/value
This paper contributes to research efforts on utilization of 3D modeling in real estate valuation practices. In this regard, this paper presents a broad overview of the current applications of BIM for valuation procedures and provides potential ways forward for future investigations.