Search results

1 – 10 of 159
Article
Publication date: 20 July 2023

Elaheh Hosseini, Kimiya Taghizadeh Milani and Mohammad Shaker Sabetnasab

Abstract

Purpose

This research aimed to visualize and analyze the co-word network and thematic clusters of the intellectual structure in the field of linked data during 1900–2021.

Design/methodology/approach

This applied research employed a descriptive and analytical method, scientometric indicators, co-word techniques, and social network analysis. VOSviewer, SPSS, Python programming, and UCINet software were used for data analysis and network structure visualization.
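
The abstract does not reproduce the analysis code; as a minimal illustrative sketch, the pairwise keyword co-occurrence counting at the core of a co-word analysis (the step that precedes handing a matrix to a tool such as VOSviewer or UCINet) can be expressed in plain Python. The sample documents and keywords below are hypothetical:

```python
from collections import Counter
from itertools import combinations

def coword_matrix(keyword_lists):
    """Count pairwise keyword co-occurrences across documents.

    Each inner list holds one article's keywords; a pair is counted
    at most once per article, the usual co-word convention.
    """
    pairs = Counter()
    for kws in keyword_lists:
        for a, b in combinations(sorted(set(kws)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical keyword sets for three articles
docs = [
    ["ontology", "semantic", "linked data"],
    ["ontology", "semantic"],
    ["linked data", "rdf"],
]
pairs = coword_matrix(docs)
print(pairs[("ontology", "semantic")])  # 2: the strongest co-word pair here
```

The resulting pair counts are exactly the edge weights of a co-word network.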

Findings

The top ranks of the Web of Science (WOS) subject categorization belonged to various fields of computer science, and the USA was the most prolific country. The keyword ontology had the highest co-occurrence frequency, and ontology and semantic formed the most frequent co-word pair. In terms of network structure, nine major topic clusters were identified based on co-occurrence, and 29 thematic clusters were identified based on hierarchical clustering. A comparison of the two clustering techniques showed that three clusters were common to both: semantic bioinformatics, knowledge representation and semantic tools. The most mature and mainstream thematic clusters were natural language processing techniques to boost modeling and visualization, context-aware knowledge discovery, probabilistic latent semantic analysis (PLSA), semantic tools, latent semantic indexing, web ontology language (OWL) syntax and ontology-based deep learning.

Originality/value

This study adopted various techniques, such as co-word analysis, social network analysis, network structure visualization and hierarchical clustering, to present a suitable, visual, methodical and comprehensive perspective on linked data.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 28 December 2023

Na Xu, Yanxiang Liang, Chaoran Guo, Bo Meng, Xueqing Zhou, Yuting Hu and Bo Zhang

Abstract

Purpose

Safety management plays an important part in coal mine construction. Because the relevant data are complex, implementing the construction safety knowledge scattered across standards poses a challenge. This paper aims to develop a knowledge extraction model that automatically and efficiently extracts domain knowledge from unstructured texts.

Design/methodology/approach

A bidirectional encoder representations from transformers (BERT)-bidirectional long short-term memory (BiLSTM)-conditional random field (CRF) method based on a pre-trained language model was applied to carry out knowledge entity recognition in the field of coal mine construction safety. First, 80 safety standards for coal mine construction were collected, sorted and annotated as a descriptive corpus. Then, the BERT pre-trained language model was used to obtain dynamic word vectors. Finally, the BiLSTM-CRF model derived the entity's optimal tag sequence.
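
The abstract names the model components but includes no code. The final CRF step, choosing the optimal tag sequence over the BiLSTM's per-token scores, is Viterbi decoding, sketched below in plain Python with hypothetical emission and transition scores (a real implementation learns both score tables during training):

```python
import math

def viterbi(emissions, transitions, tags):
    """CRF decoding: return the highest-scoring tag sequence.

    emissions:   per-token {tag: score} dicts (what a BiLSTM would output)
    transitions: {(prev_tag, tag): score}; unseen pairs default to 0.0
    """
    best = {t: emissions[0].get(t, -math.inf) for t in tags}  # best score ending in t
    back = []  # backpointers, one dict per token after the first
    for em in emissions[1:]:
        prev, best, ptr = best, {}, {}
        for t in tags:
            cand = {p: prev[p] + transitions.get((p, t), 0.0) for p in tags}
            p_star = max(cand, key=cand.get)
            best[t] = cand[p_star] + em.get(t, -math.inf)
            ptr[t] = p_star
        back.append(ptr)
    last = max(best, key=best.get)
    path = [last]
    for ptr in reversed(back):  # trace the best path backwards
        path.append(ptr[path[-1]])
    return path[::-1]

# Hypothetical scores for a three-token span, BIO tagging
tags = ["B", "I", "O"]
emissions = [{"B": 2.0, "I": 0.0, "O": 0.5},
             {"B": 0.0, "I": 1.0, "O": 0.8},
             {"B": 0.1, "I": 0.2, "O": 2.0}]
transitions = {("B", "I"): 1.0, ("O", "I"): -10.0}  # reward B->I, penalize O->I
print(viterbi(emissions, transitions, tags))  # ['B', 'I', 'O']
```

The transition scores are what let the CRF overrule locally plausible but globally invalid tag sequences, such as an I tag following an O tag.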

Findings

Accordingly, 11,933 entities and 2,051 relationships were identified in the standard specification texts, and a language model suitable for coal mine construction safety management was proposed. The experiments showed that F1 values were above 60% for all nine entity types, such as safety management, and that the model identified and extracted entities more accurately than conventional methods.

Originality/value

This work completed the domain knowledge query and built a Q&A platform from the entities and relationships identified in the standard specifications for coal mines. The paper proposes a systematic framework for texts in coal mine construction safety that improves the efficiency and accuracy of domain-specific entity extraction. In addition, the pre-trained language model was introduced into coal mine construction safety to realize dynamic entity recognition, which provides technical support and a theoretical reference for the optimization of safety management platforms.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Article
Publication date: 4 August 2023

Can Uzun and Raşit Eren Cangür

Abstract

Purpose

This study presents an ontological approach to assess the architectural outputs of generative adversarial networks. This paper aims to assess the performance of the generative adversarial network in representing building knowledge.

Design/methodology/approach

The proposed ontological assessment consists of five steps: creating an architectural data set; developing an ontology for that data set; training the You Only Look Once (YOLO) object detector with labels from the proposed ontology; training the StyleGAN algorithm with the images in the data set; and, finally, detecting the ontological labels and calculating the ontological relations of StyleGAN-generated pixel-based architectural images. The authors propose and calculate ontological identity and ontological inclusion metrics to assess the StyleGAN-generated ontological labels. The study uses 300 bay window images as the architectural data set for the ontological assessment experiments.
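
The ontological identity and inclusion metrics are not defined in the abstract; the sketch below is one plausible set-based reading, not the paper's actual formulas, with hypothetical bay-window labels: identity as the overlap between the detected and expected label sets, inclusion as the fraction of detected part labels whose container element was also detected.

```python
import math

def ontological_identity(generated, reference):
    """Assumed reading: Jaccard overlap between the label set detected in a
    generated image and the reference ontology's expected labels."""
    g, r = set(generated), set(reference)
    return len(g & r) / len(g | r) if g | r else 1.0

def ontological_inclusion(detected, part_of):
    """Assumed reading: fraction of detected part labels whose required
    container element (per the ontology's part-of relations) was also detected."""
    parts = [p for p in detected if p in part_of]
    if not parts:
        return 1.0
    return sum(1 for p in parts if part_of[p] in detected) / len(parts)

# Hypothetical bay-window ontology fragment
part_of = {"glass pane": "window frame", "sill": "bay window"}
detected = ["glass pane", "window frame", "sill"]
print(ontological_identity(detected, ["glass pane", "window frame"]))  # 2/3
print(ontological_inclusion(detected, part_of))  # 0.5: the sill's bay window is missing
```

Under this reading, a low inclusion score flags exactly the label-specific failures the Findings mention: a part was generated without its containing element.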

Findings

The ontological assessment provides semantic-based queries on StyleGAN-generated architectural images by checking the validity of the building knowledge representation. Moreover, this ontological validity reveals the building element label-specific failure and success rates simultaneously.

Originality/value

This study contributes to the assessment process of the generative adversarial networks through ontological validity checks rather than only conducting pixel-based similarity checks; semantic-based queries can introduce the GAN-generated, pixel-based building elements into the architecture, engineering and construction industry.

Details

Construction Innovation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1471-4175

Article
Publication date: 8 March 2024

Feng Zhang, Youliang Wei and Tao Feng

Abstract

Purpose

GraphQL is an open API query specification that allows clients to send queries and obtain data flexibly according to their needs. However, a high-complexity GraphQL query may produce an excessively large result, which can overload the API server's resources. Therefore, this paper aims to address this issue by predicting the response data volume of a GraphQL query statement.

Design/methodology/approach

This paper proposes a GraphQL response data volume prediction approach based on Code2Vec and AutoML. First, a GraphQL query statement is transformed into a collection of abstract syntax tree paths following the idea of Code2Vec, and the query is then aggregated into a fixed-length vector. The response data volume is predicted by a fully connected neural network. To further improve prediction accuracy, the predictions from these embedded features are combined with the field features and summary features of the query statement, and an AutoML model predicts the final response data volume.
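
As an illustration of the Code2Vec idea applied to a query tree, the sketch below extracts path contexts, pairs of leaves joined by the path through their lowest shared ancestor, from a toy GraphQL query represented as nested dicts. A real pipeline would operate on the actual parsed abstract syntax tree and embed the paths before the fully connected network; the query and leaf values here are hypothetical:

```python
def leaf_paths(tree, prefix=()):
    """Yield (path, leaf) pairs from a toy AST given as nested dicts."""
    if not isinstance(tree, dict):
        yield prefix, tree
        return
    for node, child in tree.items():
        yield from leaf_paths(child, prefix + (node,))

def path_contexts(tree):
    """Code2Vec-style contexts: (leaf_a, path via shared ancestor, leaf_b)."""
    leaves = list(leaf_paths(tree))
    contexts = []
    for i in range(len(leaves)):
        for j in range(i + 1, len(leaves)):
            pa, a = leaves[i]
            pb, b = leaves[j]
            k = 0  # length of the common prefix = path down to the shared ancestor
            while k < min(len(pa), len(pb)) and pa[k] == pb[k]:
                k += 1
            # climb from leaf a to the ancestor, then descend to leaf b
            contexts.append((a, pa[k:][::-1] + pb[k:], b))
    return contexts

# Hypothetical GraphQL query: { repository { name stars } }, leaves are scalar types
query = {"query": {"repository": {"name": "String", "stars": "Int"}}}
print(path_contexts(query))
```

Each context is one token of the "sentence" that gets embedded and pooled into the fixed-length query vector.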

Findings

Experiments on two public GraphQL API data sets, GitHub and Yelp, show that the accuracy of the proposed approach is 15.85% and 50.31% higher than existing GraphQL response volume prediction approaches based on machine learning techniques, respectively.

Originality/value

This paper proposes an approach that combines Code2Vec and AutoML for GraphQL query response data volume prediction with higher accuracy.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 30 August 2023

Yi-Hung Liu, Sheng-Fong Chen and Dan-Wei (Marian) Wen

Abstract

Purpose

Online medical repositories provide a platform for users to share information and dynamically access abundant electronic health data. It is important to determine whether case report information can assist the general public in appropriately managing their diseases. Therefore, this paper aims to introduce a novel deep learning-based method that allows non-professionals to make inquiries using ordinary vocabulary, retrieving the most relevant case reports for accurate and effective health information.

Design/methodology/approach

The dataset of case reports was collected from both the patient-generated research network and the digital medical journal repository. To enhance the accuracy of obtaining relevant case reports, the authors propose a retrieval approach that combines BERT and BiLSTM methods. The authors identified representative health-related case reports and analyzed the retrieval performance, as well as user judgments.
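
The retrieval step described here ultimately ranks case reports against the encoded query. Assuming the BERT and BiLSTM components yield fixed-length vectors, the ranking itself reduces to cosine similarity, sketched below with hypothetical two-dimensional stand-in embeddings and made-up report ids:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_reports(query_vec, report_vecs, top_k=3):
    """Rank case report ids by similarity of their vectors to the query vector."""
    return sorted(report_vecs,
                  key=lambda rid: cosine(query_vec, report_vecs[rid]),
                  reverse=True)[:top_k]

# Hypothetical 2-d embeddings standing in for real encoder outputs
reports = {"burn care": [1.0, 0.1], "insomnia": [0.0, 1.0], "scar care": [0.9, 0.9]}
print(rank_reports([1.0, 0.0], reports))  # ['burn care', 'scar care', 'insomnia']
```

In the full system, user feedback and field weights would adjust these scores before the final ranking.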

Findings

This study aims to provide the necessary functionalities to deliver relevant health case reports based on input from ordinary terms. The proposed framework includes features for health management, user feedback acquisition and ranking by weights to obtain the most pertinent case reports.

Originality/value

This study contributes to health information systems by analyzing patients' experiences and treatments with the case report retrieval model. The results of this study can provide immense benefit to the general public who intend to find treatment decisions and experiences from relevant case reports.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 14 November 2023

Shaodan Sun, Jun Deng and Xugong Qin

Abstract

Purpose

This paper aims to amplify the retrieval and utilization of historical newspapers through the application of semantic organization, all from the vantage point of a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while simultaneously furnishing invaluable guidance within methodological paradigms for research in the humanities domain.

Design/methodology/approach

According to the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study carries out entity recognition and relationship extraction, identifying entities within the historical newspaper contents and capturing the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.
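
Once entities and relationships have been extracted, they are typically stored as subject-relation-object triples that a platform can query and associate. A minimal sketch, independent of Flask and with hypothetical example data, might look like:

```python
class TripleStore:
    """Minimal store for extracted (subject, relation, object) knowledge elements."""

    def __init__(self):
        self.triples = set()

    def add(self, s, r, o):
        self.triples.add((s, r, o))

    def query(self, s=None, r=None, o=None):
        """Return matching triples, sorted; None acts as a wildcard."""
        return sorted(t for t in self.triples
                      if (s is None or t[0] == s)
                      and (r is None or t[1] == r)
                      and (o is None or t[2] == o))

# Hypothetical knowledge elements extracted from a newspaper article
store = TripleStore()
store.add("Changchun Chamber of Commerce", "located_in", "Changchun")
store.add("Changchun", "reported_in", "Shengjing Times")
print(store.query(r="located_in"))
```

Wildcard queries over such a store are what make semantic retrieval and association services over the newspaper contents possible.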

Findings

This article utilized the Shengjing Times·Changchun Compilation as the dataset for describing, extracting, associating and applying newspaper contents. In knowledge element extraction, BERT + BS consistently outperforms Bi-LSTM, CRF++ and even BERT in recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which achieves the highest scores across all metrics, including an exceptional F1 score in knowledge element relationship recognition.

Originality/value

Historical newspapers transcend their status as mere artifacts, evolving into invaluable reservoirs safeguarding the societal and historical memory. Through semantic organization from a fine-grained knowledge element perspective, it can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Article
Publication date: 10 January 2024

Sanjay Saifi and Ramiya M. Anandakumar

Abstract

Purpose

In an era overshadowed by the alarming consequences of climate change and the escalating peril of recurring floods for communities worldwide, the significance of proficient disaster risk management has reached unprecedented levels. Successful disaster risk management depends on the ability to make informed decisions, and three-dimensional (3D) visualization and Web-based rendering offer decision-makers the opportunity to engage with interactive data representations. This study focuses on Thiruvananthapuram, India, where the analysis of flooding caused by the Karamana River furnishes valuable insights for well-informed decision-making in disaster management.

Design/methodology/approach

This work introduces a systematic procedure for evaluating the influence of flooding on 3D building models through the utilization of Web-based visualization and rendering techniques. To ensure precision, aerial light detection and ranging (LiDAR) data is used to generate accurate 3D building models in CityGML format, adhering to the standards set by the Open Geospatial Consortium. By using one-meter digital elevation models derived from LiDAR data, flood simulations are conducted to analyze flow patterns at different discharge levels. The integration of 3D building maps with geographic information system (GIS)-based vector maps and a flood risk map enables the assessment of the extent of inundation. To facilitate visualization and querying tasks, a Web-based graphical user interface (GUI) is developed.
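
At its simplest, the inundation assessment described above amounts to comparing each building's ground elevation, taken from the LiDAR-derived models, with the simulated water surface for a given discharge level. A toy sketch with hypothetical elevations:

```python
def inundated_buildings(ground_elevation, water_surface):
    """Flag buildings whose ground elevation lies below the simulated water
    surface, and report the flood depth (metres) at each.

    ground_elevation: {building_id: elevation in m above datum}
    water_surface:    simulated water surface elevation (m) for one discharge level
    """
    return {bid: round(water_surface - z, 2)
            for bid, z in ground_elevation.items() if z < water_surface}

# Hypothetical elevations (m above datum) for three buildings
buildings = {"B1": 4.2, "B2": 6.0, "B3": 5.1}
print(inundated_buildings(buildings, 5.5))  # {'B1': 1.3, 'B3': 0.4}
```

A real pipeline would intersect the flood raster with each CityGML building footprint rather than use a single water level, but the per-building depth output is the quantity the Web GUI would visualize and query.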

Findings

The research established the effectiveness of comprehensive 3D building maps in evaluating flood consequences in Thiruvananthapuram. Merging them with GIS-based vector maps and a flood risk map makes it possible to scrutinize the extent of inundation and the affected structures. Furthermore, the Web-based GUI facilitates interactive data exploration, visualization and querying, thereby assisting decision-making.

Originality/value

The study introduces an innovative approach that merges LiDAR data, 3D building mapping, flood simulation and Web-based visualization, which can be advantageous for decision-makers in disaster risk management and may have practical use in various regions and urban areas.

Details

International Journal of Disaster Resilience in the Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1759-5908

Article
Publication date: 25 March 2024

Bronwyn Eager, Craig Deegan and Terese Fiedler

Abstract

Purpose

The purpose of this study is to provide a detailed demonstration of how artificial intelligence (AI) can be used to potentially generate valuable insights and recommendations regarding the role of accounting in addressing key sustainability-related issues.

Design/methodology/approach

The study offers a novel method for leveraging AI tools to augment traditional scoping study techniques. The method was used to show how the authors can produce recommendations for potentially enhancing organisational accountability pertaining to seasonal workers.

Findings

Through the use of AI and informed by the knowledge base that the authors created, the authors have developed prescriptions that have the potential to advance the interests of seasonal workers. In doing so, the authors have focussed on developing a useful and detailed guide to assist their colleagues to apply AI to various research questions.

Originality/value

This study demonstrates the ability of AI to assist researchers in efficiently finding solutions to social problems. By augmenting traditional scoping study techniques with AI tools, the authors present a framework to assist future research in such areas as accounting and accountability.

Details

Meditari Accountancy Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-372X

Article
Publication date: 19 January 2024

Meng Zhu and Xiaolong Xu

Abstract

Purpose

Intent detection (ID) and slot filling (SF) are two important tasks in natural language understanding. ID identifies the main intent of a text, while SF extracts from the input sentence the information that is important to that intent. However, most existing methods use sentence-level intent recognition, which risks error propagation, and the relationship between intent recognition and SF is not explicitly modeled. To address this problem, this paper proposes a collaborative model of ID and SF for intelligent spoken language understanding called ID-SF-Fusion.

Design/methodology/approach

ID-SF-Fusion uses Bidirectional Encoder Representation from Transformers (BERT) and Bidirectional Long Short-Term Memory (BiLSTM) to extract effective word embeddings and context vectors containing whole-sentence information, respectively. A fusion layer provides intent-slot fusion information for the SF task, so that the relationship between the ID and SF tasks is explicitly modeled; this layer takes the ID result and the slot context vectors as input and produces fusion information containing both. To further reduce error propagation, the authors use word-level ID in the ID-SF-Fusion model. Finally, the ID and SF tasks are realized by joint optimization training.
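
The abstract does not specify how word-level intent predictions are combined into a single sentence intent; one common and simple scheme, shown here as an assumption rather than the paper's method, is majority voting over tokens, sketched with a hypothetical ATIS-style example:

```python
from collections import Counter

def sentence_intent(word_intents):
    """Collapse word-level intent predictions into one sentence intent by
    majority vote, so a single mis-tagged token cannot flip the result."""
    return Counter(word_intents).most_common(1)[0][0]

# Hypothetical per-token predictions for "show me flights to boston"
tokens = ["flight", "flight", "flight", "airfare", "flight"]
print(sentence_intent(tokens))  # flight
```

This illustrates why word-level ID limits error propagation: one bad token vote ("airfare") is outweighed by the rest.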

Findings

The authors conducted experiments on two public datasets, Airline Travel Information Systems (ATIS) and Snips. The results show that the intent accuracy and slot F1 scores of ID-SF-Fusion are 98.0 per cent and 95.8 per cent on ATIS, and 98.6 per cent and 96.7 per cent on Snips, respectively. These results are superior to slot-gated, SF-ID network, Stack-Propagation and other models. In addition, ablation experiments were performed to further analyze the proposed model.

Originality/value

This paper uses word-level intent recognition and introduces intent information into the SF process, yielding significant improvements on both datasets.

Details

Data Technologies and Applications, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 25 September 2023

José Félix Yagüe, Ignacio Huitzil, Carlos Bobed and Fernando Bobillo

Abstract

Purpose

There is an increasing interest in the use of knowledge graphs to represent real-world knowledge and a common need to manage imprecise knowledge in many real-world applications. This paper aims to study approaches to solve flexible queries over knowledge graphs.

Design/methodology/approach

By introducing fuzzy logic into the query answering process, the authors obtain a novel algorithm for solving flexible queries over knowledge graphs. The approach is implemented in the FUzzy Knowledge Graphs system, a software tool with an intuitive graphical user interface.
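
As an illustration of the general idea (not the FUzzy Knowledge Graphs implementation), a flexible restriction such as "cheap" can be modelled as a trapezoidal fuzzy membership function over an attribute, and several restrictions fused with a chosen operator such as the minimum t-norm. All values below are hypothetical:

```python
import math

def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c], linear between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def satisfaction(degrees, fuse=min):
    """Degree to which an entity satisfies all flexible conditions, under a
    chosen fusion operator (min is a t-norm; the arithmetic mean is also common)."""
    return fuse(degrees)

# Hypothetical flexible query: a "cheap" (price) and "nearby" (distance) hotel
cheap = trapezoid(60, 0, 0, 50, 100)   # price 60  -> degree 0.8
nearby = trapezoid(1.5, 0, 0, 1, 2)    # 1.5 km    -> degree 0.5
print(satisfaction([cheap, nearby]))                                  # 0.5 (min)
print(satisfaction([cheap, nearby], fuse=lambda d: sum(d) / len(d)))  # 0.65 (mean)
```

Swapping the fusion operator changes how strictly the conditions combine, which mirrors the abstract's point about adapting aggregation to different user needs.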

Findings

This approach makes it possible to reuse semantic web standards (RDF, SPARQL and OWL 2) and builds a fuzzy layer on top of them. The application to a use case shows that the system can aggregate information in different ways by selecting different fusion operators and adapting to different user needs.

Originality/value

This approach is more general than similar previous works in the literature and provides a specific way to represent the flexible restrictions (using fuzzy OWL 2 datatypes).

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473
