Search results

1 – 10 of over 1000
Article
Publication date: 28 December 2023

Na Xu, Yanxiang Liang, Chaoran Guo, Bo Meng, Xueqing Zhou, Yuting Hu and Bo Zhang

Abstract

Purpose

Safety management plays an important part in coal mine construction. Because the underlying data are complex, implementing the construction safety knowledge scattered across standards poses a challenge. This paper aims to develop a knowledge extraction model to automatically and efficiently extract domain knowledge from unstructured texts.

Design/methodology/approach

In this paper, a bidirectional encoder representations from transformers (BERT)-bidirectional long short-term memory (BiLSTM)-conditional random field (CRF) method based on a pre-trained language model was applied to knowledge entity recognition in the field of coal mine construction safety. First, 80 safety standards for coal mine construction were collected, sorted and annotated to form a descriptive corpus. Then, the BERT pre-trained language model was used to obtain dynamic word vectors. Finally, the BiLSTM-CRF model decoded the optimal tag sequence for each entity.
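
The abstract does not include code, so the following is a minimal sketch of the BERT-BiLSTM-CRF architecture it describes, assuming PyTorch, the Hugging Face transformers library and the pytorch-crf package; the checkpoint name and layer sizes are illustrative choices, not details from the paper.

```python
# Minimal BERT-BiLSTM-CRF sketch (hyperparameters are illustrative, not from the paper).
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF  # pip install pytorch-crf

class BertBiLstmCrf(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese", lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)   # dynamic word vectors
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * lstm_hidden, num_tags)     # emission scores per token
        self.crf = CRF(num_tags, batch_first=True)         # decodes the optimal tag sequence

    def forward(self, input_ids, attention_mask, tags=None):
        emb = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        feats, _ = self.lstm(emb)
        emissions = self.fc(feats)
        mask = attention_mask.bool()
        if tags is not None:  # training: return the negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)  # inference: best tag path
```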

Findings

Accordingly, 11,933 entities and 2,051 relationships were identified in the standard specification texts, and a language model suitable for coal mine construction safety management was proposed. The experiments showed that the model's F1 values were above 60% for all nine entity types, such as safety management, and that it identified and extracted entities more accurately than conventional methods.

Originality/value

This work enabled domain knowledge queries and built a Q&A platform from the entities and relationships identified in the standard specifications applicable to coal mines. This paper proposed a systematic framework for texts in coal mine construction safety to improve the efficiency and accuracy of domain-specific entity extraction. In addition, the pre-trained language model was introduced into coal mine construction safety management to realize dynamic entity recognition, which provides technical support and a theoretical reference for the optimization of safety management platforms.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 14 November 2023

Shaodan Sun, Jun Deng and Xugong Qin

Abstract

Purpose

This paper aims to amplify the retrieval and utilization of historical newspapers through semantic organization from a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while furnishing methodological guidance for research in the humanities domain.

Design/methodology/approach

According to the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study delves into entity recognition and relationship extraction, identifying entities within the historical newspaper contents and capturing the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.
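
The platform code is not part of the abstract; the sketch below shows, under assumed route and model names, how a Flask endpoint for online entity recognition might look (a fine-tuned NER checkpoint would have to be supplied).

```python
# Hypothetical Flask endpoint for online entity recognition
# (route name and model path are assumptions, not the authors' code).
from flask import Flask, request, jsonify
from transformers import pipeline

app = Flask(__name__)
# Placeholder: substitute a checkpoint fine-tuned for NER on historical newspapers.
ner = pipeline("token-classification", model="path/to/finetuned-ner-model",
               aggregation_strategy="simple")

@app.route("/extract", methods=["POST"])
def extract():
    text = request.get_json().get("text", "")
    return jsonify({"entities": [
        {"type": e["entity_group"], "text": e["word"], "score": float(e["score"])}
        for e in ner(text)
    ]})

if __name__ == "__main__":
    app.run(debug=True)
```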

Findings

This article utilized the Shengjing Times·Changchun Compilation as the dataset for describing, extracting, associating and applying newspaper contents. Regarding knowledge element extraction, the BERT + BS model consistently outperforms Bi-LSTM, CRF++ and even BERT in terms of recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which stands out with the highest scores across all metrics, notably achieving an exceptional F1 score in knowledge element relationship recognition.

Originality/value

Historical newspapers transcend their status as mere artifacts, serving as invaluable reservoirs of societal and historical memory. Semantic organization from a fine-grained knowledge element perspective can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 3 February 2023

Huyen Nguyen, Haihua Chen, Jiangping Chen, Kate Kargozari and Junhua Ding

Abstract

Purpose

This study aims to evaluate a method of building a biomedical knowledge graph (KG).

Design/methodology/approach

This research first constructs a COVID-19 KG from the COVID-19 Open Research Dataset, covering information in six categories (i.e. disease, drug, gene, species, therapy and symptom). The construction used open-source tools to extract entities, relations and triples. Then, the COVID-19 KG is evaluated on three data-quality dimensions: correctness, relatedness and comprehensiveness, using a semiautomatic approach. Finally, this study assesses the application of the KG by building a question answering (Q&A) system. Five queries regarding COVID-19 genomes, symptoms, transmissions and therapeutics were submitted to the system, and the results were analyzed.
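
For illustration only (the abstract does not name the open-source tools used), a KG of subject-relation-object triples can be indexed in memory and queried for simple factual questions; the triples below are invented examples, not content of the paper's KG.

```python
# Toy triple store and lookup illustrating the KG + Q&A idea
# (triples are invented examples).
from collections import defaultdict

triples = [
    ("COVID-19", "has_symptom", "fever"),
    ("COVID-19", "has_symptom", "cough"),
    ("remdesivir", "treats", "COVID-19"),
]

index = defaultdict(list)
for subj, rel, obj in triples:
    index[(subj, rel)].append(obj)

def answer(subject, relation):
    """Answer 'what <relation> does <subject> have?' by direct triple lookup."""
    return index[(subject, relation)]

print(answer("COVID-19", "has_symptom"))  # ['fever', 'cough']
```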

Findings

With current extraction tools, the quality of the KG is moderate and difficult to improve unless more effort is put into the tools for entity extraction, relation extraction and other steps. This study finds that comprehensiveness and relatedness correlate positively with data size. Furthermore, the results indicate that, for most queries, Q&A systems built on larger-scale KGs perform better than those built on smaller ones, demonstrating the importance of relatedness and comprehensiveness in ensuring the usefulness of a KG.

Originality/value

The KG construction process, data-quality-based and application-based evaluations discussed in this paper provide valuable references for KG researchers and practitioners to build high-quality domain-specific knowledge discovery systems.

Details

Information Discovery and Delivery, vol. 51 no. 4
Type: Research Article
ISSN: 2398-6247

Article
Publication date: 29 March 2024

Sihao Li, Jiali Wang and Zhao Xu

Abstract

Purpose

The compliance checking of Building Information Modeling (BIM) models is crucial throughout the lifecycle of construction. The increasing amount and complexity of information carried by BIM models have made compliance checking more challenging, and manual methods are prone to errors. Therefore, this study aims to propose an integrative conceptual framework for automated compliance checking of BIM models, allowing for the identification of errors within BIM models.

Design/methodology/approach

This study first analyzed typical building standards in the fields of architecture and fire protection and then developed an ontology of these elements. Based on this, a building standard corpus was built, and deep learning models were trained to automatically label the building standard texts. Neo4j was utilized for knowledge graph construction and storage, and a data extraction method based on Dynamo was designed to obtain checking data files. After that, a matching algorithm was devised to express the logical rules of knowledge graph triples, resulting in automated compliance checking for BIM models.
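
The matching algorithm itself is not given in the abstract; a minimal sketch, assuming the official neo4j Python driver and an invented rule/property schema, could compare a rule triple stored in the graph against a value extracted from the BIM model:

```python
# Hypothetical compliance check: a rule triple in Neo4j versus a property
# extracted from the BIM model (node labels and values are invented).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def check_min_width(element_type, actual_width_mm):
    query = (
        "MATCH (e:Element {name: $etype})-[:HAS_RULE]->(r:Rule {attr: 'min_width_mm'}) "
        "RETURN r.value AS required"
    )
    with driver.session() as session:
        record = session.run(query, etype=element_type).single()
    return record is not None and actual_width_mm >= record["required"]

# e.g. a corridor width of 1,050 mm obtained from the model via Dynamo:
print(check_min_width("EvacuationCorridor", 1050.0))
```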

Findings

Case validation results showed that this theoretical framework can achieve the automatic construction of domain knowledge graphs and automatic checking of BIM model compliance. Compared with traditional methods, this method has a higher degree of automation and portability.

Originality/value

This study introduces knowledge graphs and natural language processing technology into the field of BIM model checking and automates the process of constructing domain knowledge graphs and checking BIM model data. Its functionality and usability are validated through two case studies on a self-developed BIM checking platform.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 22 February 2024

Yuzhuo Wang, Chengzhi Zhang, Min Song, Seongdeok Kim, Youngsoo Ko and Juhee Lee

Abstract

Purpose

In the era of artificial intelligence (AI), algorithms have gained unprecedented importance. Scientific studies have shown that algorithms are frequently mentioned in papers, making mention frequency a classical indicator of their popularity and influence. However, contemporary methods for evaluating influence tend to focus solely on individual algorithms, disregarding the collective impact that arises from their interconnectedness, which offers a new way to reveal their roles and importance within algorithm clusters. This paper aims to build the co-occurrence network of algorithms in the natural language processing field based on the full-text content of academic papers and to analyze the academic influence of algorithms in the group based on the features of the network.

Design/methodology/approach

We use deep learning models to extract algorithm entities from articles and construct whole, cumulative and annual co-occurrence networks. We first analyze the characteristics of the algorithm networks and then use various centrality metrics to score and rank the group influence of each algorithm, both across the whole domain and in each year. Finally, we analyze the influence evolution of different representative algorithms.
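
As an illustration of the network construction and centrality scoring (the authors' pipeline is not included in the abstract), the co-occurrence counting and ranking can be expressed with networkx; the per-paper entity lists here are invented:

```python
# Sketch of an algorithm co-occurrence network with centrality ranking
# (the per-paper entity lists are invented examples).
import itertools
import networkx as nx

papers = [
    ["BERT", "LSTM", "CRF"],   # algorithm entities extracted from one paper
    ["BERT", "Transformer"],
    ["LSTM", "HMM"],
]

G = nx.Graph()
for entities in papers:
    for a, b in itertools.combinations(set(entities), 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)  # co-occurrence count as edge weight

# Rank algorithms by degree and betweenness centrality.
print(sorted(nx.degree_centrality(G).items(), key=lambda x: -x[1]))
print(sorted(nx.betweenness_centrality(G).items(), key=lambda x: -x[1]))
```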

Findings

The results indicate that algorithm networks share the characteristics of complex networks, with tight connections between nodes developing over approximately four decades. Algorithms that are classic, high-performing and appear at the junctions of different eras can possess high popularity, control, central position and balanced influence in the network. As an algorithm's influence within the group gradually diminishes, it typically loses its core position first, followed by a weakening association with other algorithms.

Originality/value

To the best of the authors’ knowledge, this paper is the first large-scale analysis of algorithm networks. The extensive temporal coverage, spanning over four decades of academic publications, ensures the depth and integrity of the network. Our results serve as a cornerstone for constructing multifaceted networks interlinking algorithms, scholars and tasks, facilitating future exploration of their scientific roles and semantic relations.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 8 November 2022

Yohanes Sigit Purnomo W.P., Yogan Jaya Kumar and Nur Zareen Zulkarnain

Abstract

Purpose

To date, corpora for the quotation extraction and quotation attribution tasks in Indonesian remain limited in both quantity and depth. This study aims to develop an Indonesian corpus of public figure statement attributions and a baseline model for attribution extraction, thereby fostering research on information extraction for the Indonesian language.

Design/methodology/approach

The methodology is divided into corpus development and extraction model development. During corpus development, data were collected and annotated. The development of the extraction model entails feature extraction, the definition of the model architecture, parameter selection and configuration, model training and evaluation, as well as model selection.

Findings

The Indonesian corpus of public figure statement attributions achieved a 90.06% agreement level between the annotator and the experts and could serve as a gold-standard corpus. Furthermore, the baseline model predicted most labels and achieved an F-score of 82.026%.
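
The abstract does not state how the agreement level was computed; a common approach, shown here purely as an assumption rather than the authors' procedure, is per-token percentage agreement, often reported alongside Cohen's kappa:

```python
# Illustrative agreement between an annotator's and an expert's label sequences
# (labels are invented; the paper's exact procedure is not reproduced here).
from sklearn.metrics import cohen_kappa_score

annotator = ["B-PER", "I-PER", "O", "B-CUE", "O"]
expert    = ["B-PER", "I-PER", "O", "O",     "O"]

agreement = sum(a == e for a, e in zip(annotator, expert)) / len(expert)
print(f"percentage agreement: {agreement:.2%}")            # 80.00%
print("Cohen's kappa:", cohen_kappa_score(annotator, expert))
```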

Originality/value

To the best of the authors’ knowledge, the resulting corpus is the first corpus for attribution of public figures’ statements in the Indonesian language, which makes it a significant step for research on attribution extraction in the language. The resulting corpus and the baseline model can be used as a benchmark for further research. Other researchers could follow the methods presented in this paper to develop a new corpus and baseline model for other languages.

Details

Global Knowledge, Memory and Communication, vol. 73 no. 6/7
Type: Research Article
ISSN: 2514-9342

Article
Publication date: 22 July 2024

Haoqiang Sun, Haozhe Xu, Jing Wu, Shaolong Sun and Shouyang Wang

Abstract

Purpose

The purpose of this paper is to study the importance of image data in hotel selection-recommendation using different types of cognitive features and to explore whether there are reinforcing effects among these cognitive features.

Design/methodology/approach

This study represents the cognitive features of user-generated images in a knowledge graph through multidimensional (shallow, middle and deep) analysis. This approach highlights the clustering of hotel destination imagery.
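
The abstract does not specify the feature extractors used; purely as a hedged sketch, "deep" image features from a pretrained CNN could be clustered to group destination imagery (the model choice and cluster count are assumptions, not the authors' method):

```python
# Sketch: deep image features + clustering of hotel destination imagery
# (backbone and cluster count are assumptions, not from the paper).
import torch
from torchvision import models, transforms
from sklearn.cluster import KMeans
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # use penultimate features as "deep" representation
backbone.eval()

def deep_features(paths):
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in paths])
        return backbone(batch).numpy()

# labels = KMeans(n_clusters=5, n_init="auto").fit_predict(deep_features(image_paths))
```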

Findings

This study develops a novel hotel selection-recommendation model based on image sentiment and attribute representation within the construction of a knowledge graph. Furthermore, the experimental results show a reinforcing effect among different types of cognitive features in hotel selection-recommendation.

Practical implications

This study enhances hotel recommendation accuracy and user satisfaction by incorporating cognitive and emotional image attributes into knowledge graphs using advanced machine learning and computer vision techniques.

Social implications

This study advances the understanding of user-generated images’ impact on hotel selection, helping users make better decisions and enabling marketers to understand users’ preferences and trends.

Originality/value

This research is one of the first to propose a new method for exploring the cognitive dimensions of hotel image data. Furthermore, multi-dimensional cognitive features can effectively enhance the selection-recommendation process, and the authors have proposed a novel hotel selection-recommendation model.

Details

International Journal of Contemporary Hospitality Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0959-6119

Article
Publication date: 5 May 2023

Ying Yu and Jing Ma

Abstract

Purpose

Tender documents, an essential data source for internet-based logistics tendering platforms, incorporate massive amounts of fine-grained data, ranging from information on the tenderee to shipping locations and shipping items. Automated information extraction in this area is, however, under-researched, making the extraction process time- and effort-consuming. For Chinese logistics tender entities in particular, existing named entity recognition (NER) solutions are mostly unsuitable, as these entities involve domain-specific terminology and possess different semantic features.

Design/methodology/approach

To tackle this problem, a novel lattice long short-term memory (Lattice-LSTM) model, combining a variant contextual feature representation and a conditional random field (CRF) layer, is proposed in this paper for identifying valuable entities in logistics tender documents. Instead of traditional word embeddings, the proposed model uses the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model as input to augment the contextual feature representation. Subsequently, with the Lattice-LSTM model, the information of characters and words is effectively utilized to avoid segmentation errors.
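
The lattice construction is too involved for a short example; the fragment below illustrates only the BERT-as-input idea, assuming the bert-base-chinese checkpoint supplies character-level contextual vectors that would feed the paper's Lattice-LSTM-CRF layers (omitted here).

```python
# Sketch: BERT contextual character embeddings as input features
# (the Lattice-LSTM and CRF layers of the paper's model are omitted;
# the tender snippet is invented).
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
bert = BertModel.from_pretrained("bert-base-chinese")
bert.eval()

text = "天津港至上海港集装箱运输招标"  # "container shipping tender, Tianjin port to Shanghai port"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    char_vectors = bert(**inputs).last_hidden_state  # shape: (1, seq_len, 768)
# char_vectors would replace static word embeddings at the Lattice-LSTM input.
print(char_vectors.shape)
```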

Findings

The proposed model is verified on a Chinese logistics tender named-entity corpus. The results suggest that it outperforms other mainstream NER models on this corpus. The proposed model underpins the automatic extraction of logistics tender information, enabling logistics companies to perceive ever-changing market trends and make far-sighted logistics decisions.

Originality/value

(1) A practical model for logistics tender NER is proposed in the manuscript. By employing and fine-tuning BERT on the downstream task with a small amount of data, the model achieves better performance than other existing models in the experiments. This is the first study, to the best of the authors' knowledge, to extract named entities from Chinese logistics tender documents. (2) A real logistics tender corpus for practical use is constructed, and a program for online processing of real logistics tender documents is developed in this work. The authors believe the model will help logistics companies convert unstructured documents into structured data and further perceive ever-changing market trends to make far-sighted logistics decisions.

Details

Data Technologies and Applications, vol. 58 no. 1
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 8 August 2024

Chih-Ming Chen and Xian-Xu Chen

Abstract

Purpose

This study aims to develop an associative text analyzer (ATA) to support users in quickly grasping and interpreting the content of large amounts of text through text association recommendations, facilitating the identification of contextual relationships between people, events, organizations and locations for digital humanities. Additionally, by providing text summaries, the tool allows users to move between distant and close readings, thereby enabling more efficient exploration of related texts.

Design/methodology/approach

To verify the effectiveness of this tool in supporting the exploration of historical texts, this study uses a counterbalanced design to compare the use of the digital humanities platform for Mr. Lo Chia-Lun's Writings (DHP-LCLW) with and without the ATA to assist in exploring different aspects of the texts. The study investigated whether there were significant differences in the effectiveness of exploring textual contexts and in technology acceptance, and used semi-structured in-depth interviews to understand the research participants' viewpoints and experiences with the ATA.

Findings

The results of the experiment revealed that the effectiveness of text exploration using the DHP-LCLW with and without the ATA varied significantly depending on the topic of the text being explored. The DHP-LCLW with the ATA was found to be more suitable for exploring historical texts, while the DHP-LCLW without the ATA was more suitable for exploring educational texts. The DHP-LCLW with the ATA was also rated significantly higher in perceived usefulness than the DHP-LCLW without the ATA, indicating that the research participants believed the ATA helped them more effectively grasp the related texts and topics during text exploration.

Practical implications

The study’s practical implications lie in the development of an ATA for digital humanities, offering a valuable tool for efficiently exploring historical texts. The ATA enhances users’ ability to grasp and interpret large volumes of text, facilitating contextual relationship identification. Its practical utility is evident in the improved effectiveness of text exploration, particularly for historical content, as indicated by users’ perceived usefulness.

Originality/value

This study proposes an ATA for digital humanities, enhancing text exploration by offering association recommendations and efficient linking between distant and close readings. The study contributes by providing a specialized tool and demonstrating its perceived usefulness in facilitating efficient exploration of related texts in digital humanities.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 2 May 2023

Giovanna Aracri, Antonietta Folino and Stefano Silvestri

Abstract

Purpose

The purpose of this paper is to propose a methodology for the enrichment and tailoring of a knowledge organization system (KOS), in order to support the information extraction (IE) task for the analysis of documents in the tourism domain. In particular, the KOS is used to develop a named entity recognition (NER) system.

Design/methodology/approach

A method to improve and customize an available thesaurus by leveraging documents related to tourism in Italy is first presented. Then, the obtained thesaurus is used to create an annotated NER corpus, exploiting distant supervision, deep learning and light human supervision.
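
As a sketch of the distant-supervision step (the thesaurus entries and the BIO label scheme below are assumptions, not the authors' data), thesaurus terms can be projected onto raw text to produce silver-standard annotations:

```python
# Distant supervision sketch: project thesaurus terms onto text as BIO labels
# (terms and the label scheme are invented for illustration).
thesaurus = {"Amalfi Coast": "DESTINATION", "boutique hotel": "ACCOMMODATION"}

def silver_annotate(tokens):
    labels = ["O"] * len(tokens)
    for term, etype in thesaurus.items():
        term_toks = term.split()
        for i in range(len(tokens) - len(term_toks) + 1):
            if [t.lower() for t in tokens[i:i + len(term_toks)]] == \
               [t.lower() for t in term_toks]:
                labels[i] = f"B-{etype}"
                for j in range(i + 1, i + len(term_toks)):
                    labels[j] = f"I-{etype}"
    return labels

tokens = "We stayed in a boutique hotel on the Amalfi Coast".split()
print(list(zip(tokens, silver_annotate(tokens))))
```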

Findings

The study shows that a customized KOS can effectively support IE tasks when applied to documents belonging to the same domains and types used for its construction. Moreover, the proposed methodology is very useful for supporting and easing the annotation task, allowing a corpus to be annotated with a fraction of the effort required for fully manual annotation.

Originality/value

The paper explores an alternative use of a KOS, proposing an innovative NER corpus annotation methodology. Moreover, the KOS and the annotated NER data set will be made publicly available.

Details

Journal of Documentation, vol. 79 no. 6
Type: Research Article
ISSN: 0022-0418
