Search results

1 – 10 of 223
Article
Publication date: 28 December 2023

Na Xu, Yanxiang Liang, Chaoran Guo, Bo Meng, Xueqing Zhou, Yuting Hu and Bo Zhang

Abstract

Purpose

Safety management plays an important part in coal mine construction. Because the relevant data are complex and construction safety knowledge is scattered across standards, putting that knowledge into practice poses a challenge. This paper aims to develop a knowledge extraction model that automatically and efficiently extracts domain knowledge from unstructured texts.

Design/methodology/approach

In this paper, a bidirectional encoder representations from transformers (BERT)-bidirectional long short-term memory (BiLSTM)-conditional random field (CRF) method based on a pre-trained language model was applied to knowledge entity recognition in the field of coal mine construction safety. Firstly, 80 safety standards for coal mine construction were collected, sorted and annotated to form a descriptive corpus. Then, the BERT pre-trained language model was used to obtain dynamic word vectors. Finally, the BiLSTM-CRF model decoded the optimal tag sequence for each entity.
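
The abstract does not include the authors' code; the sketch below only illustrates the general shape of a BERT-BiLSTM-CRF tagger of the kind described, assuming the transformers and pytorch-crf packages. The checkpoint name, tag set and dimensions are placeholders, not the authors' choices.

```python
# Illustrative BERT-BiLSTM-CRF tagger (not the authors' code).
# Assumes: torch, transformers and pytorch-crf are installed.
import torch.nn as nn
from torchcrf import CRF
from transformers import BertModel, BertTokenizerFast


class BertBiLstmCrf(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese", lstm_hidden=256):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)    # dynamic word vectors
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * lstm_hidden, num_tags)     # emission scores per token
        self.crf = CRF(num_tags, batch_first=True)           # optimal tag sequence

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        emissions = self.emit(self.lstm(hidden)[0])
        mask = attention_mask.bool()
        if tags is not None:                                  # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask)
        return self.crf.decode(emissions, mask=mask)          # inference: best tag path


# Placeholder BIO tag set covering two of the nine entity types.
TAGS = ["O", "B-SafetyManagement", "I-SafetyManagement", "B-Equipment", "I-Equipment"]
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertBiLstmCrf(num_tags=len(TAGS))
batch = tokenizer(["井下爆破作业必须遵守安全规程"], return_tensors="pt")
print(model(batch["input_ids"], batch["attention_mask"]))    # predicted tag indices
```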

Findings

Accordingly, 11,933 entities and 2,051 relationships were identified in the standard specification texts examined in this paper, and a language model suitable for coal mine construction safety management was proposed. The experiments showed that F1 values exceeded 60% for all nine entity types, such as security management, and the model identified and extracted entities more accurately than conventional methods.

Originality/value

This work enabled domain knowledge queries and built a Q&A platform from the entities and relationships identified in the standard specifications for coal mines. This paper proposed a systematic framework for texts in coal mine construction safety to improve the efficiency and accuracy of domain-specific entity extraction. In addition, the pre-trained language model was introduced into coal mine construction safety to realize dynamic entity recognition, which provides technical support and a theoretical reference for the optimization of safety management platforms.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988

Keywords

Article
Publication date: 3 October 2023

Haklae Kim

Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
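
The paper's data pipeline is not reproduced here; as a minimal sketch of describing one record with RiC-O and Schema.org terms and answering a semantic query over it, the snippet below uses rdflib with invented URIs and record values, and the rico:hasCreator property name is an assumption about the RiC-O vocabulary rather than a citation of the paper.

```python
# Minimal knowledge-graph sketch with rdflib; URIs and record data are invented,
# and rico:hasCreator is assumed here rather than quoted from the paper.
from rdflib import Graph, Literal, Namespace, RDF

RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")
SCHEMA = Namespace("https://schema.org/")
ARCH = Namespace("https://example.org/97imf/")        # placeholder archive namespace

g = Graph()
g.bind("rico", RICO)
g.bind("schema", SCHEMA)

record = ARCH["record/0001"]
agent = ARCH["agent/ministry-of-finance"]

g.add((record, RDF.type, RICO.Record))
g.add((record, SCHEMA.name, Literal("IMF bailout memorandum (example record)")))
g.add((record, RICO.hasCreator, agent))                # assumed RiC-O relation
g.add((agent, RDF.type, RICO.CorporateBody))
g.add((agent, SCHEMA.name, Literal("Ministry of Finance (example agent)")))

# Semantic query: which records were created by which agents?
QUERY = """
PREFIX rico: <https://www.ica.org/standards/RiC/ontology#>
PREFIX schema: <https://schema.org/>
SELECT ?recordName ?agentName WHERE {
  ?r a rico:Record ; schema:name ?recordName ; rico:hasCreator ?a .
  ?a schema:name ?agentName .
}
"""
for row in g.query(QUERY):
    print(f"{row.recordName} -- created by --> {row.agentName}")
```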

Findings

This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitated comprehensive descriptions of the different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search.

Originality/value

Most digital archives operated in Korea make only limited, proper use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw archival data, preprocessing and enriching the data, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with the semantic queries it supports, can supplement keyword search in conventional digital archives.

Details

The Electronic Library, vol. 42 no. 1
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 26 April 2024

Mawloud Titah and Mohammed Abdelghani Bouchaala

Abstract

Purpose

This paper aims to establish an efficient maintenance management system tailored for healthcare facilities, recognizing the crucial role of medical equipment in providing timely and precise patient care.

Design/methodology/approach

The system is designed to function both as an information portal and a decision-support system. A knowledge-based approach is adopted centered on Semantic Web Technologies (SWTs), leveraging a customized ontology model for healthcare facilities’ knowledge capitalization. Semantic Web Rule Language (SWRL) is integrated to address decision-support aspects, including equipment criticality assessment, maintenance strategies selection and contracting policies assignment. Additionally, Semantic Query-enhanced Web Rule Language (SQWRL) is incorporated to streamline the retrieval of decision-support outcomes and other useful information from the system’s knowledge base. A real-life case study conducted at the University Hospital Center of Oran (Algeria) illustrates the applicability and effectiveness of the proposed approach.
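
The study's ontology and rule base are not available in the abstract; the sketch below only shows, with owlready2, the general pattern of encoding one toy criticality rule in SWRL. The class names, property names and threshold are invented for illustration and do not come from the paper.

```python
# Toy SWRL criticality rule in owlready2; class names, property names and the
# threshold are invented for illustration and are not taken from the paper.
from owlready2 import (DataProperty, FunctionalProperty, Imp, Thing,
                       get_ontology, sync_reasoner_pellet)

onto = get_ontology("http://example.org/maintenance.owl")       # placeholder IRI

with onto:
    class Equipment(Thing):
        pass

    class HighlyCriticalEquipment(Equipment):
        pass

    class failureFrequency(DataProperty, FunctionalProperty):    # failures per year (assumed unit)
        domain = [Equipment]
        range = [int]

    # Equipment failing more than 10 times a year is classified as highly critical.
    rule = Imp()
    rule.set_as_rule("Equipment(?e), failureFrequency(?e, ?f), greaterThan(?f, 10)"
                     " -> HighlyCriticalEquipment(?e)")

scanner = onto.Equipment("ct_scanner_01")
scanner.failureFrequency = 12

# Apply the rule with the bundled Pellet reasoner (requires a Java runtime).
sync_reasoner_pellet(infer_property_values=True, infer_data_property_values=True)
print(list(onto.HighlyCriticalEquipment.instances()))   # expected to list ct_scanner_01
```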

Findings

Case study results reveal that 40% of the processed equipment is highly critical, 40% is of medium criticality and 20% is of negligible criticality. The system demonstrates significant efficacy in determining optimal maintenance strategies and contracting policies for the equipment, leveraging combined knowledge- and data-driven inference. Overall, SWTs show substantial potential in addressing maintenance management challenges within healthcare facilities.

Originality/value

An innovative model for healthcare equipment maintenance management is introduced, incorporating ontology, SWRL and SQWRL and providing efficient data integration, coordinated workflows and data-driven, context-aware decisions while maintaining optimal flexibility and cross-departmental interoperability. These qualities give the model substantial potential for further development.

Details

Journal of Quality in Maintenance Engineering, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1355-2511

Keywords

Article
Publication date: 31 August 2023

Faycal Touazi and Amel Boustil

Abstract

Purpose

The purpose of this paper is to address the need for new approaches to locating items that closely match user preference criteria, given the rise in the data volume of knowledge bases resulting from Open Data initiatives. Specifically, the paper focuses on evaluating qualitative preference queries over user preferences in SPARQL.

Design/methodology/approach

The paper outlines a novel approach for handling SPARQL preference queries by representing preferences through symbolic weights using the possibilistic logic (PL) framework. This approach allows for the management of symbolic weights without relying on numerical values, using a partial ordering system instead. The paper compares this approach with numerous other approaches, including those based on skylines, fuzzy sets and conditional preference networks.
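
The formal possibilistic semantics is not reproduced here; the toy sketch below merely illustrates the idea of ranking query answers by symbolic weights under a partial order, with no numeric values. The weights, the order and the answer set are invented and are not the paper's examples.

```python
# Toy ranking of query answers by symbolic preference weights under a partial order;
# the weights, the order and the answer set are invented (no numeric scores are used).

# User-declared preferences: w1 is preferred to w2, w2 to w3, and w1 to w4;
# w4 is incomparable with w2 and w3.
PREFERRED_TO = {("w1", "w2"), ("w2", "w3"), ("w1", "w4")}


def strictly_preferred(a, b):
    """True if weight a is preferred to b under the transitive closure of PREFERRED_TO."""
    frontier, seen = {a}, set()
    while frontier:
        x = frontier.pop()
        seen.add(x)
        for p, q in PREFERRED_TO:
            if p == x and q == b:
                return True
            if p == x and q not in seen:
                frontier.add(q)
    return False


# Each answer is annotated with the symbolic weight of its least-satisfied preference.
answers = {"hotel_A": "w1", "hotel_B": "w3", "hotel_C": "w2", "hotel_D": "w4"}

# Keep answers not dominated by another answer carrying a strictly preferred weight.
best = [a for a, w in answers.items()
        if not any(strictly_preferred(other, w)
                   for other in answers.values() if other != w)]
print("best answers:", best)   # only hotel_A, since w1 dominates every other weight
```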

Findings

The paper highlights the advantages of the proposed approach, which enables the representation of preference criteria through symbolic weights and qualitative considerations. This approach offers a more intuitive way to convey preferences and manage rankings.

Originality/value

The paper demonstrates the usefulness and originality of the proposed SPARQL language in the PL framework. The approach extends SPARQL by incorporating symbolic weights and qualitative preferences.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 8 March 2024

Feng Zhang, Youliang Wei and Tao Feng

Abstract

Purpose

GraphQL is a new Open API specification that allows clients to send queries and obtain data flexibly according to their needs. However, a high-complexity GraphQL query may lead to an excessive data volume of the query result, which causes problems such as resource overload of the API server. Therefore, this paper aims to address this issue by predicting the response data volume of a GraphQL query statement.

Design/methodology/approach

This paper proposes a GraphQL response data volume prediction approach based on Code2Vec and AutoML. First, a GraphQL query statement is transformed into a path collection of its abstract syntax tree, following the idea of Code2Vec, and the query is then aggregated into a fixed-length vector. Finally, the response data volume is predicted by a fully connected neural network. To further improve the prediction accuracy, the prediction results of the embedded features are combined with the field features and summary features of the query statement to predict the final response data volume with the AutoML model.
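
The feature pipeline is only summarised in the abstract; the sketch below, assuming the graphql-core package, shows one crude way to turn a GraphQL query into root-to-leaf field paths of the kind a Code2Vec-style embedding could consume. The query text is invented, and the embedding and AutoML stages are omitted.

```python
# Crude sketch: turning a GraphQL query into root-to-leaf field paths with graphql-core.
# The query text is invented; embedding and AutoML prediction are not shown.
from graphql import parse

QUERY = """
query {
  repository(owner: "octocat", name: "hello-world") {
    issues(first: 50) {
      nodes { title comments(first: 10) { totalCount } }
    }
  }
}
"""


def field_paths(query):
    """Collect root-to-leaf field name paths from the query's abstract syntax tree."""
    paths = []

    def walk(node, prefix):
        selection_set = getattr(node, "selection_set", None)
        if selection_set is None:            # leaf field: record the accumulated path
            paths.append(prefix)
            return
        for child in selection_set.selections:
            name = getattr(child, "name", None)
            if name is not None:             # descend into named selections
                walk(child, prefix + [name.value])

    for definition in parse(query).definitions:
        walk(definition, [])
    return paths


for path in field_paths(QUERY):
    print(" / ".join(path))                  # e.g. repository / issues / nodes / title
```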

Findings

Experiments on two public GraphQL API data sets, GitHub and Yelp, show that the accuracy of the proposed approach is 15.85% and 50.31% higher, respectively, than that of existing GraphQL response volume prediction approaches based on machine learning techniques.

Originality/value

This paper proposes an approach that combines Code2Vec and AutoML for GraphQL query response data volume prediction with higher accuracy.

Details

International Journal of Web Information Systems, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 18 March 2024

Raj Kumar Bhardwaj, Ritesh Kumar and Mohammad Nazim

Abstract

Purpose

This paper evaluates the precision of four metasearch engines (MSEs) – DuckDuckGo, Dogpile, Metacrawler and Startpage – to determine which exhibits the highest level of precision and which is most likely to return the most relevant search results.

Design/methodology/approach

The research is divided into two parts: the first phase involves four queries categorized into two segments (4-Q-2-S), while the second phase includes six queries divided into three segments (6-Q-3-S). These queries vary in complexity, falling into three types: simple, phrase and complex. The precision, average precision and the presence of duplicates across all the evaluated metasearch engines are determined.
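
The scoring sheet itself is not given in the abstract; the snippet below is a generic precision-at-k calculation of the kind such an evaluation typically relies on, with invented relevance judgements rather than the study's data.

```python
# Generic precision-at-k calculation; the relevance judgements below are invented
# examples, not the study's data.
def precision_at_k(judgements, k=10):
    """Fraction of the first k results judged relevant (1 = relevant, 0 = not)."""
    top_k = judgements[:k]
    return sum(top_k) / len(top_k) if top_k else 0.0


# One hypothetical query judged on each metasearch engine's first ten results.
judged = {
    "Startpage":   [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    "DuckDuckGo":  [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],
    "Dogpile":     [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],
    "Metacrawler": [1, 1, 0, 1, 1, 0, 1, 1, 0, 0],
}

for engine, marks in judged.items():
    print(f"{engine:<12} precision@10 = {precision_at_k(marks):.2f}")
```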

Findings

The study clearly demonstrated that Startpage returned the most relevant results and achieved the highest precision (0.98) among the four MSEs, while DuckDuckGo exhibited consistent performance across both phases of the study.

Research limitations/implications

The study only evaluated four metasearch engines, which may not be representative of all available metasearch engines. Additionally, a limited number of queries were used, which may not be sufficient to generalize the findings to all types of queries.

Practical implications

The findings of this study can be valuable for accreditation agencies in managing duplicates, improving their search capabilities and obtaining more relevant and precise results. These findings can also assist users in selecting the best metasearch engine based on precision rather than interface.

Originality/value

The study is the first of its kind to evaluate these four metasearch engines; no similar study has been conducted in the past to measure the performance of metasearch engines.

Details

Performance Measurement and Metrics, vol. 25 no. 1
Type: Research Article
ISSN: 1467-8047

Keywords

Article
Publication date: 14 November 2023

Shaodan Sun, Jun Deng and Xugong Qin

Abstract

Purpose

This paper aims to amplify the retrieval and utilization of historical newspapers through semantic organization from a fine-grained knowledge element perspective. This endeavor seeks to unlock the latent value embedded within newspaper contents while furnishing methodological guidance for research in the humanities domain.

Design/methodology/approach

Following the semantic organization process and the knowledge element concept, this study proposes a holistic framework comprising four pivotal stages: knowledge element description, extraction, association and application. Initially, a semantic description model dedicated to knowledge elements is devised. Subsequently, harnessing advanced deep learning techniques, the study carries out entity recognition and relationship extraction, identifying entities within the historical newspaper contents and capturing the interdependencies among them. Finally, an online platform based on Flask is developed to enable the recognition of entities and relationships within historical newspapers.
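
The platform's code is not published in the abstract; the sketch below is a minimal Flask endpoint of the kind described, with the entity-recognition step stubbed out because the paper's BERT + BS and Bi-LSTM-Pro models are not available here.

```python
# Minimal Flask sketch of an entity-recognition endpoint; the recognition function is a
# stub because the paper's BERT + BS and Bi-LSTM-Pro models are not available here.
from flask import Flask, jsonify, request

app = Flask(__name__)


def recognize_entities(text):
    """Placeholder: a real implementation would return spans predicted by the models."""
    return [{"text": text[:2], "type": "PLACEHOLDER", "start": 0, "end": 2}] if text else []


@app.post("/api/entities")
def extract_entities():
    payload = request.get_json(force=True) or {}
    text = payload.get("text", "")
    return jsonify({"text": text, "entities": recognize_entities(text)})


if __name__ == "__main__":
    # POST {"text": "..."} to http://127.0.0.1:5000/api/entities
    app.run(debug=True)
```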

Findings

This article used the Shengjing Times·Changchun Compilation as the dataset for describing, extracting, associating and applying newspaper contents. For knowledge element extraction, the BERT + BS model consistently outperforms Bi-LSTM, CRF++ and even BERT in terms of recall and F1 scores, making it a favorable choice for entity recognition in this context. Particularly noteworthy is the Bi-LSTM-Pro model, which achieves the highest scores across all metrics, notably an exceptional F1 score in knowledge element relationship recognition.

Originality/value

Historical newspapers transcend their status as mere artifacts, evolving into invaluable reservoirs safeguarding societal and historical memory. Semantic organization from a fine-grained knowledge element perspective can facilitate semantic retrieval, semantic association, information visualization and knowledge discovery services for historical newspapers. In practice, it can empower researchers to unearth profound insights within the historical and cultural context, broadening the landscape of digital humanities research and practical applications.

Details

Aslib Journal of Information Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 10 January 2024

Sanjay Saifi and Ramiya M. Anandakumar

Abstract

Purpose

In an era overshadowed by the alarming consequences of climate change and the escalating peril of recurring floods for communities worldwide, the significance of proficient disaster risk management has reached unprecedented levels. Its successful implementation requires the ability to make informed decisions. To this end, three-dimensional (3D) visualization and Web-based rendering offer decision-makers the opportunity to engage with interactive data representations. This study focuses on Thiruvananthapuram, India, where analysis of flooding caused by the Karamana River furnishes valuable insights for well-informed decision-making in disaster management.

Design/methodology/approach

This work introduces a systematic procedure for evaluating the influence of flooding on 3D building models through the utilization of Web-based visualization and rendering techniques. To ensure precision, aerial light detection and ranging (LiDAR) data is used to generate accurate 3D building models in CityGML format, adhering to the standards set by the Open Geospatial Consortium. By using one-meter digital elevation models derived from LiDAR data, flood simulations are conducted to analyze flow patterns at different discharge levels. The integration of 3D building maps with geographic information system (GIS)-based vector maps and a flood risk map enables the assessment of the extent of inundation. To facilitate visualization and querying tasks, a Web-based graphical user interface (GUI) is developed.
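
As a rough illustration of the inundation-overlay step only (not the authors' workflow), the sketch below flags building footprints whose centroid samples a flood-depth raster above a threshold, assuming geopandas and rasterio; the file names and threshold are placeholders.

```python
# Rough sketch of the inundation-overlay step; file names, layers and the depth
# threshold are placeholders, not the study's data. Assumes geopandas and rasterio.
import geopandas as gpd
import rasterio

BUILDINGS_FILE = "building_footprints.gpkg"    # e.g. footprints derived from the CityGML models
FLOOD_DEPTH_RASTER = "flood_depth.tif"         # e.g. a depth grid exported from the simulation
DEPTH_THRESHOLD_M = 0.1                        # flag buildings with more than 10 cm of water

buildings = gpd.read_file(BUILDINGS_FILE)

with rasterio.open(FLOOD_DEPTH_RASTER) as depth:
    # Reproject the footprints to the raster's coordinate reference system before sampling.
    buildings = buildings.to_crs(depth.crs)
    centroids = [(point.x, point.y) for point in buildings.geometry.centroid]
    depths = [sample[0] for sample in depth.sample(centroids)]

buildings["flood_depth_m"] = depths
buildings["inundated"] = buildings["flood_depth_m"] > DEPTH_THRESHOLD_M
print(f"{int(buildings['inundated'].sum())} of {len(buildings)} buildings flagged as inundated")
```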

Findings

The research established the effectiveness of comprehensive 3D building maps for evaluating flood consequences in Thiruvananthapuram. Merging them with GIS-based vector maps and a flood risk map makes it possible to scrutinize the extent of inundation and the affected structures. Furthermore, the Web-based GUI facilitates interactive data exploration, visualization and querying, thereby assisting decision-making.

Originality/value

The study introduces an innovative approach that merges LiDAR data, 3D building mapping, flood simulation and Web-based visualization, which can be advantageous for decision-makers in disaster risk management and may have practical use in various regions and urban areas.

Details

International Journal of Disaster Resilience in the Built Environment, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1759-5908

Keywords

Article
Publication date: 14 February 2024

Yaxi Liu, Chunxiu Qin, Yulong Wang and XuBu Ma

Abstract

Purpose

Exploratory search activities are ubiquitous in various information systems. Much potentially useful or even serendipitous information is discovered during the exploratory search process. Given its irreplaceable role in information systems, exploratory search has attracted growing attention from the information system community. Since few studies have methodically reviewed current publications, researchers and practitioners are unable to take full advantage of existing achievements, which, in turn, limits their progress in this field. Through a literature review, this study aims to recapitulate important research topics of exploratory search in information systems, providing a research landscape of exploratory search.

Design/methodology/approach

Automatic and manual searches were performed on seven reputable databases to collect relevant literature published between January 2005 and July 2023. The literature pool contains 146 primary studies on exploratory search in information system research.

Findings

This study recapitulated five important topics of exploratory search, namely, conceptual frameworks, theoretical frameworks, influencing factors, design features and evaluation metrics. Moreover, this review revealed research gaps in current studies and proposed a knowledge framework and a research agenda for future studies.

Originality/value

This study has important implications for beginners to quickly get a snapshot of exploratory search studies, for researchers to re-align current research or discover new interesting issues, and for practitioners to design information systems that support exploratory search.

Details

The Electronic Library, vol. 42 no. 2
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 4 May 2023

Zulma Valedon Westney, Inkyoung Hur, Ling Wang and Junping Sun

Abstract

Purpose

Disinformation on social media is a serious issue. This study examines the effects of disinformation on COVID-19 vaccination decision-making to understand how social media users make healthcare decisions when disinformation is presented in their social media feeds. It examines trust in post owners as a moderator of the relationship between information types (i.e. disinformation and factual information) and vaccination decision-making.

Design/methodology/approach

This study conducts a scenario-based web survey experiment to collect extensive survey data from social media users.

Findings

This study reveals that information types differently affect social media users' COVID-19 vaccination decision-making and finds a moderating effect of trust in post owners on the relationship between information types and vaccination decision-making. For those who have a high degree of trust in post owners, the effect of information types on vaccination decision-making becomes large. In contrast, information types do not affect the decision-making of those who have a very low degree of trust in post owners. In addition, identification and compliance are found to affect trust in post owners.

Originality/value

This study contributes to the literature on online disinformation and individual healthcare decision-making by demonstrating the effect of disinformation on vaccination decision-making and providing empirical evidence on how trust in post owners impacts the effects of information types on vaccination decision-making. This study focuses on trust in post owners, unlike prior studies that focus on trust in information or social media platforms.

Details

Information Technology & People, vol. 37 no. 3
Type: Research Article
ISSN: 0959-3845

Keywords
