Search results

1 – 10 of over 19,000
Article
Publication date: 8 July 2022

Chuanming Yu, Zhengang Zhang, Lu An and Gang Li

In recent years, knowledge graph completion has gained increasing research focus and shown significant improvements. However, most existing models only use the structures of…

Abstract

Purpose

In recent years, knowledge graph completion has gained increasing research attention and shown significant improvements. However, most existing models use only the structure of knowledge graph triples when obtaining entity and relationship representations, ignoring the integration of entity descriptions with the knowledge graph network structure. This paper aims to investigate how to leverage both entity descriptions and the network structure to enhance knowledge graph completion with high generalization ability across different datasets.

Design/methodology/approach

The authors propose an entity-description augmented knowledge graph completion model (EDA-KGC), which incorporates the entity description and the network structure. It consists of three modules, i.e. representation initialization, deep interaction and reasoning. The representation initialization module utilizes entity descriptions to obtain pre-trained representations of entities. The deep interaction module acquires the features of the deep interaction between entities and relationships. The reasoning module performs matrix manipulations with the deep interaction feature vector and the entity representation matrix, thus obtaining the probability distribution over target entities. The authors conduct extensive experiments on the FB15K, WN18, FB15K-237 and WN18RR data sets to validate the effectiveness of the proposed model.
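As a rough illustration of the reasoning step described above, the following sketch combines a head-entity vector and a relation vector through an interaction network and multiplies the result by the entity representation matrix to obtain a probability distribution over candidate tail entities. It is a minimal stand-in written in PyTorch; the module sizes, the plain embedding table used in place of description-based pre-training and the class name are assumptions, not the authors' EDA-KGC implementation.

```python
# Minimal sketch of a description-augmented completion pipeline. All names,
# dimensions and the interaction layer are illustrative assumptions.
import torch
import torch.nn as nn

class ToyDescriptionAugmentedKGC(nn.Module):
    def __init__(self, num_entities, num_relations, dim=64):
        super().__init__()
        # Representation initialization: in the paper this comes from pre-trained
        # encodings of entity descriptions; here it is a plain embedding table.
        self.entity_emb = nn.Embedding(num_entities, dim)
        self.relation_emb = nn.Embedding(num_relations, dim)
        # Deep interaction between the head entity and the relation.
        self.interaction = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, head_idx, rel_idx):
        h = self.entity_emb(head_idx)                       # (batch, dim)
        r = self.relation_emb(rel_idx)                      # (batch, dim)
        feat = self.interaction(torch.cat([h, r], dim=-1))  # (batch, dim)
        # Reasoning: multiply the interaction feature by the entity matrix and
        # normalize to a probability distribution over candidate tail entities.
        scores = feat @ self.entity_emb.weight.T            # (batch, num_entities)
        return torch.softmax(scores, dim=-1)

model = ToyDescriptionAugmentedKGC(num_entities=1000, num_relations=50)
probs = model(torch.tensor([3]), torch.tensor([7]))
print(probs.shape)  # torch.Size([1, 1000])
```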

Findings

The experiments demonstrate that the proposed model outperforms both traditional structure-based knowledge graph completion models and entity-description-enhanced knowledge graph completion models. The experiments also suggest that the model remains effective in scenarios such as sparse data, dynamic entities and limited training epochs. The study shows that integrating entity descriptions with the network structure can significantly improve performance on the knowledge graph completion task.

Originality/value

The research provides a significant reference for completing missing information in knowledge graphs and for improving the application of knowledge graphs in information retrieval, question answering and other fields.

Details

Aslib Journal of Information Management, vol. 75 no. 3
Type: Research Article
ISSN: 2050-3806


Book part
Publication date: 10 July 2019

Tianxing Wu, Guilin Qi and Cheng Li

With the continuous development of intelligent technologies, knowledge graph, the backbone of artificial intelligence, has attracted much attention from both academic and…

Abstract

With the continuous development of intelligent technologies, the knowledge graph, the backbone of artificial intelligence, has attracted much attention from both the academic and industrial communities due to its powerful capability of knowledge representation and reasoning. In addition, knowledge graphs have been widely applied in different kinds of applications, such as semantic search, question answering and knowledge management. In recent years, knowledge graph techniques in China have also been developing rapidly, and different Chinese knowledge graphs have been built to support various applications. Against the background of the "One Belt One Road" (OBOR) initiative, cooperating with the countries along OBOR on studying knowledge graph techniques and applications will greatly promote the development of artificial intelligence. At the same time, China's accumulated experience in developing knowledge graphs is also a useful reference. Thus, in this chapter, the authors mainly introduce the development of Chinese knowledge graphs and their applications. The authors first describe the background of OBOR, then introduce the concept of the knowledge graph and three typical Chinese knowledge graphs, namely Zhishi.me, CN-DBpedia and XLORE. Finally, the authors demonstrate several applications of Chinese knowledge graphs.

Details

The New Silk Road Leads through the Arab Peninsula: Mastering Global Business and Innovation
Type: Book
ISBN: 978-1-78756-680-4


Article
Publication date: 12 September 2023

Wenjing Wu, Caifeng Wen, Qi Yuan, Qiulan Chen and Yunzhong Cao

Learning from safety accidents and sharing safety knowledge has become an important part of accident prevention and improving construction safety management. Considering the…

Abstract

Purpose

Learning from safety accidents and sharing safety knowledge have become an important part of accident prevention and of improving construction safety management. Given the difficulty of reusing unstructured data in the construction industry, the knowledge it contains is hard to use directly for safety analysis. The purpose of this paper is to explore the construction of a construction safety knowledge representation model and a safety accident knowledge graph through deep learning methods, to extract construction safety knowledge entities with a BERT-BiLSTM-CRF model and to propose a data–knowledge–services data management model.

Design/methodology/approach

The ontology model for knowledge representation of construction safety accidents is constructed by integrating entity relations and logic evolution. Then, a database of safety incidents in the architecture, engineering and construction (AEC) industry is established based on collected construction safety incident reports and related dispute cases. The construction method of the construction safety accident knowledge graph is studied, and the precision of the BERT-BiLSTM-CRF algorithm in information extraction is verified through comparative experiments. Finally, a safety accident report is used as an example to construct the AEC-domain construction safety accident knowledge graph (AEC-KG), which provides a visual knowledge query service and verifies the operability of the knowledge management approach.
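For readers unfamiliar with the extraction architecture named above, the following skeleton shows one common way to assemble a BERT-BiLSTM-CRF tagger with the Hugging Face transformers library and the pytorch-crf package. The model name, tag-set size and layer dimensions are placeholder assumptions rather than the authors' exact configuration.

```python
# Skeleton of a BERT-BiLSTM-CRF tagger; names and sizes are illustrative.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer
from torchcrf import CRF

class BertBiLSTMCRF(nn.Module):
    def __init__(self, num_tags, bert_name="bert-base-chinese", lstm_hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * lstm_hidden, num_tags)  # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)        # label transition modelling

    def forward(self, input_ids, attention_mask, tags=None):
        hidden = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.lstm(hidden)
        emissions = self.emit(lstm_out)
        mask = attention_mask.bool()
        if tags is not None:                              # training: negative log-likelihood
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        return self.crf.decode(emissions, mask=mask)      # inference: best tag paths

# Usage with a hypothetical BIO tag set for accident-report entities.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
batch = tokenizer(["某工地发生高处坠落事故"], return_tensors="pt")
model = BertBiLSTMCRF(num_tags=7)
print(model(batch["input_ids"], batch["attention_mask"]))  # predicted tag ids per token
```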

Findings

The experimental results show that the combined BERT-BiLSTM-CRF algorithm achieves a precision of 84.52%, a recall of 92.35% and an F1 value of 88.26% in named entity recognition on the AEC domain database. The construction safety knowledge representation model and the safety incident knowledge graph enable knowledge visualization.

Originality/value

The proposed framework provides a new knowledge management approach to improve practitioners' safety management and also enriches the application scenarios of knowledge graphs. On the one hand, it innovatively proposes a data application method and a knowledge management method for safety accident reports that integrate entity relationships and matter evolution logic. On the other hand, the legal adjudication dimension is innovatively added to the knowledge graph in the construction safety field as the basis for post-incident disposal measures, which provides a reference for safety managers' decision-making in all aspects.

Details

Engineering, Construction and Architectural Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0969-9988


Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts the model…

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts model performance. Uncertainty inherently affects experimental measurements and is often missing from the available data sets due to its estimation cost. For similar reasons, experiments are scarce compared to other data sources. Discarding experiments because of missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental for assessing data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments' missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiment database. The validity of the methodology is first tested under multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetics domain as a case study.
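The following toy sketch illustrates the general idea of treating missing uncertainty as a link prediction problem: experiments, their metadata and discretised uncertainty levels become nodes, a TransE-style embedding is trained on the known triples, and candidate "hasUncertainty" tails are ranked for the experiment whose value is missing. The tiny graph, relation names and hyperparameters are invented for illustration and do not reproduce the authors' methodology.

```python
# TransE-style link prediction over a toy experiment graph (illustrative only).
import torch
import torch.nn as nn

triples = [  # (head, relation, tail) facts about two experiments
    ("exp1", "usesInstrument", "shock_tube"),
    ("exp1", "hasUncertainty", "low"),
    ("exp2", "usesInstrument", "shock_tube"),   # exp2's uncertainty is missing
]
entities = sorted({x for h, _, t in triples for x in (h, t)} | {"low", "high"})
relations = sorted({r for _, r, _ in triples})
e_id = {e: i for i, e in enumerate(entities)}
r_id = {r: i for i, r in enumerate(relations)}

dim = 16
E = nn.Embedding(len(entities), dim)
R = nn.Embedding(len(relations), dim)
opt = torch.optim.Adam(list(E.parameters()) + list(R.parameters()), lr=0.05)

def score(h, r, t):  # TransE: smaller ||h + r - t|| means more plausible
    return (E(h) + R(r) - E(t)).norm(dim=-1)

for _ in range(200):  # margin-based training with randomly corrupted tails
    h = torch.tensor([e_id[x[0]] for x in triples])
    r = torch.tensor([r_id[x[1]] for x in triples])
    t = torch.tensor([e_id[x[2]] for x in triples])
    t_neg = torch.randint(len(entities), t.shape)
    loss = torch.relu(1.0 + score(h, r, t) - score(h, r, t_neg)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Link prediction: rank candidate uncertainty levels for exp2 (lower is better).
h = torch.tensor([e_id["exp2"], e_id["exp2"]])
r = torch.tensor([r_id["hasUncertainty"]] * 2)
cands = torch.tensor([e_id["low"], e_id["high"]])
print(dict(zip(["low", "high"], score(h, r, cands).tolist())))
```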

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The employment of knowledge graph embeddings to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Open Access
Article
Publication date: 13 October 2022

Neha Keshan, Kathleen Fontaine and James A. Hendler

This paper aims to describe the “InDO: Institute Demographic Ontology” and demonstrates the InDO-based semiautomated process for both generating and extending a knowledge graph to…

Abstract

Purpose

This paper aims to describe the "InDO: Institute Demographic Ontology" and to demonstrate the InDO-based semiautomated process for both generating and extending a knowledge graph to provide a comprehensive resource for marginalized US graduate students. The knowledge graph currently consists of instances related to the semistructured National Science Foundation Survey of Earned Doctorates (NSF SED) 2019 analysis report data tables. These tables contain summary statistics of an institute's doctoral recipients based on a variety of demographics. Incorporating institute Wikidata links ultimately produces a table of unique, clearly readable data.

Design/methodology/approach

The authors use a customized Semantic Extract, Transform and Loader (SETLr) script to ingest data from the 2019 US doctoral-granting institute tables and preprocessed NSF SED Tables 1, 3, 4 and 9. The generated InDO knowledge graph is evaluated using two methods. First, the authors compare the SPARQL results of competency questions on both the semiautomatically and the manually generated graphs. Second, the authors expand the questions to provide a better picture of an institute's doctoral-recipient demographics within study fields.
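As a simplified picture of the table-to-graph step, the sketch below uses rdflib in place of the authors' SETLr script to turn one summary-table row into triples and to answer a competency-question-style SPARQL query. The namespace, property names and the single data row are invented for illustration.

```python
# Toy table-to-graph ingestion and competency-question query with rdflib.
from rdflib import Graph, Literal, Namespace, RDF

INDO = Namespace("http://example.org/indo#")   # hypothetical namespace
g = Graph()

# One (institute, field, demographic, count) cell from a preprocessed SED-style table.
row = {"institute": "SampleUniversity", "field": "ComputerScience",
       "demographic": "Women", "doctorates": 42}
obs = INDO["obs1"]
g.add((obs, RDF.type, INDO.DoctoralRecipientObservation))
g.add((obs, INDO.institute, INDO[row["institute"]]))
g.add((obs, INDO.studyField, INDO[row["field"]]))
g.add((obs, INDO.demographicGroup, INDO[row["demographic"]]))
g.add((obs, INDO.recipientCount, Literal(row["doctorates"])))

# A competency-question-style SPARQL query: doctorate counts per field for one group.
q = """
PREFIX indo: <http://example.org/indo#>
SELECT ?field ?count WHERE {
  ?obs indo:demographicGroup indo:Women ;
       indo:studyField ?field ;
       indo:recipientCount ?count .
}"""
for field, count in g.query(q):
    print(field, count)
```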

Findings

With some preprocessing and restructuring of the highly interlinked NSF SED tables into a more parsable format, one can build the required knowledge graph using a semiautomated process.

Originality/value

The InDO knowledge graph allows the integration of US doctoral-granting institutes' demographic data based on the NSF SED data tables and its presentation in machine-readable form using a new semiautomated methodology.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 8 June 2021

Hui Yuan and Weiwei Deng

Recommending suitable doctors to patients on healthcare consultation platforms is important to both the patients and the platforms. Although doctor recommendation methods have…


Abstract

Purpose

Recommending suitable doctors to patients on healthcare consultation platforms is important to both the patients and the platforms. Although doctor recommendation methods have been proposed, they fail to explain recommendations and to address the data sparsity problem, i.e. most patients on the platforms are new and provide little information except disease descriptions. This research aims to develop an interpretable doctor recommendation method based on knowledge graph and interpretable deep learning techniques to fill these research gaps.

Design/methodology/approach

This research proposes an advanced doctor recommendation method that leverages a health knowledge graph to overcome the data sparsity problem and uses deep learning techniques to generate accurate and interpretable recommendations. The proposed method extracts interactive features from the knowledge graph to indicate implicit interactions between patients and doctors and identifies individual features that signal the doctors' service quality. Then, the authors feed the features into a deep neural network with layer-wise relevance propagation to generate readily usable and interpretable recommendation results.
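The interpretability mechanism mentioned above, layer-wise relevance propagation (LRP), can be illustrated with a small numpy example that redistributes a recommendation score back onto the input features using the epsilon rule. The two-layer network, feature names and random weights below are placeholders, not the authors' trained recommendation model.

```python
# Epsilon-rule layer-wise relevance propagation for a tiny feed-forward scorer.
import numpy as np

rng = np.random.default_rng(0)
features = ["shared_disease_path", "past_interactions", "doctor_rating", "response_speed"]
x = rng.random(len(features))                    # one patient-doctor feature vector
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)    # hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output: recommendation score

# Forward pass with ReLU.
a1 = np.maximum(0, x @ W1 + b1)
score = a1 @ W2 + b2

def lrp_eps(a_in, W, z_out, R_out, eps=1e-6):
    """Redistribute relevance R_out from a layer's output onto its input (epsilon rule)."""
    z = z_out + eps * np.sign(z_out)
    s = R_out / z
    return a_in * (W @ s)

# Backward relevance pass: start with the output score as the total relevance.
R2 = score
R1 = lrp_eps(a1, W2, score, R2)
R0 = lrp_eps(x, W1, x @ W1 + b1, R1)

for name, rel in zip(features, R0):
    print(f"{name}: {rel:+.3f}")   # per-feature contribution to the score
```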

Findings

The proposed method produces more accurate recommendations than diverse baseline methods and can provide interpretations for the recommendations.

Originality/value

This study proposes a novel doctor recommendation method. Experimental results demonstrate the effectiveness and robustness of the method in generating accurate and interpretable recommendations. The research provides a practical solution and managerial implications for online platforms that confront information overload and transparency issues.

Details

Internet Research, vol. 32 no. 2
Type: Research Article
ISSN: 1066-2243


Article
Publication date: 3 October 2023

Haklae Kim

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a…

Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as the Records in Contexts Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
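The sketch below gives a rough sense of the kind of record description and semantic query discussed above, using rdflib with RiC-O-style and Schema.org terms and a Wikidata reconciliation link. The record, agent and QID are placeholders, and the exact RiC-O property names should be checked against the published ontology.

```python
# Hedged sketch of a RiC-O-style record description and semantic query with rdflib.
from rdflib import Graph, Literal, Namespace, URIRef, RDF

RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")  # check against RiC-O
SCHEMA = Namespace("https://schema.org/")
g = Graph()

record = URIRef("http://example.org/97imf/record/1")   # placeholder record IRI
agent = URIRef("http://example.org/97imf/agent/MOF")   # placeholder creator IRI
g.add((record, RDF.type, RICO.Record))
g.add((record, SCHEMA.name, Literal("IMF bailout press release")))
g.add((record, RICO.hasCreator, agent))
g.add((agent, SCHEMA.sameAs, URIRef("http://www.wikidata.org/entity/Q0")))  # placeholder QID

q = """
PREFIX rico: <https://www.ica.org/standards/RiC/ontology#>
PREFIX schema: <https://schema.org/>
SELECT ?title ?wikidata WHERE {
  ?rec a rico:Record ;
       schema:name ?title ;
       rico:hasCreator ?creator .
  ?creator schema:sameAs ?wikidata .
}"""
for title, wikidata in g.query(q):
    print(title, wikidata)
```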

Findings

This study presents the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive's records. This approach facilitates comprehensive descriptions of different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search.

Originality/value

Most digital archives operated in Korea make limited use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw archival data, preprocessing and enriching the data, and demonstrates how to build a knowledge graph connected to external data. In particular, the knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with the semantic queries over it, can supplement keyword search in conventional digital archives.

Details

The Electronic Library, vol. 42 no. 1
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 5 October 2022

Michael DeBellis and Biswanath Dutta

The purpose of this paper is to describe the CODO ontology (COviD-19 Ontology) that captures epidemiological data about the COVID-19 pandemic in a knowledge graph that follows the…

Abstract

Purpose

The purpose of this paper is to describe the CODO ontology (COviD-19 Ontology) that captures epidemiological data about the COVID-19 pandemic in a knowledge graph that follows the FAIR principles. This study took information from spreadsheets and integrated it into a knowledge graph that could be queried with SPARQL and visualized with the Gruff tool in AllegroGraph.

Design/methodology/approach

The knowledge graph was designed with the Web Ontology Language (OWL). The methodology was a hybrid approach integrating the YAMO methodology for ontology design with Agile methods to define the iterations and the approach to requirements, testing and implementation.

Findings

The hybrid approach demonstrated that Agile can bring the same benefits to knowledge graph projects as it has to other projects. The two-person team went from an ontology to a large knowledge graph with approximately 5 million triples in a few months. The authors gathered useful real-world experience on how to most effectively transform “from strings to things.”

Originality/value

To the best of the authors’ knowledge, this is the only FAIR model that addresses epidemiological data for the COVID-19 pandemic. The study also brought to light several practical issues that generalize to other projects wishing to go from an ontology to a large knowledge graph. This study is one of the first to document how the Agile approach can be used for knowledge graph development.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084


Article
Publication date: 25 September 2023

José Félix Yagüe, Ignacio Huitzil, Carlos Bobed and Fernando Bobillo

There is an increasing interest in the use of knowledge graphs to represent real-world knowledge and a common need to manage imprecise knowledge in many real-world applications…

Abstract

Purpose

There is increasing interest in the use of knowledge graphs to represent real-world knowledge, and many real-world applications share a common need to manage imprecise knowledge. This paper aims to study approaches for answering flexible queries over knowledge graphs.

Design/methodology/approach

By introducing fuzzy logic into the query answering process, the authors obtain a novel algorithm for answering flexible queries over knowledge graphs. The approach is implemented in the FUzzy Knowledge Graphs system, a software tool with an intuitive graphical user interface.

Findings

This approach makes it possible to reuse semantic web standards (RDF, SPARQL and OWL 2) and builds a fuzzy layer on top of them. The application to a use case shows that the system can aggregate information in different ways by selecting different fusion operators and adapting to different user needs.
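A toy example of the aggregation idea in the preceding paragraph: each candidate entity receives a fuzzy satisfaction degree per flexible condition, and different fusion operators (minimum, product, weighted mean) combine the degrees into one ranking score. The membership functions, attributes and weights are invented and are not taken from the FUzzy Knowledge Graphs system.

```python
# Fuzzy satisfaction degrees and alternative fusion operators (illustrative only).

def close_to(value, target, tolerance):
    """Triangular membership: 1 at the target, 0 beyond the tolerance."""
    return max(0.0, 1.0 - abs(value - target) / tolerance)

hotels = {  # candidate entities from a knowledge graph, with two numeric attributes
    "hotel_a": {"price": 80, "distance_km": 0.5},
    "hotel_b": {"price": 55, "distance_km": 3.0},
}

fusion = {  # fusion operators the user can choose between
    "minimum": lambda ds: min(ds),
    "product": lambda ds: ds[0] * ds[1],
    "weighted_mean": lambda ds: 0.7 * ds[0] + 0.3 * ds[1],
}

for name, attrs in hotels.items():
    degrees = [close_to(attrs["price"], 60, 40),        # "price around 60"
               close_to(attrs["distance_km"], 0, 2.5)]  # "close to the centre"
    scores = {op: round(f(degrees), 3) for op, f in fusion.items()}
    print(name, scores)
```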

Originality/value

This approach is more general than similar previous works in the literature and provides a specific way to represent the flexible restrictions (using fuzzy OWL 2 datatypes).

Details

The Electronic Library, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0264-0473


Article
Publication date: 1 February 1978

Eldo C. Koenig

Important to the performance of intelligent systems is the ability of their members to deduce conclusions as responses from premises received as knowledge during different time…

Abstract

Important to the performance of intelligent systems is the ability of their members to deduce conclusions as responses from premises received as knowledge during different time periods. Two types of knowledge associations are established for combining knowledge structures received during different time periods into fewer coherent structures. The knowledge system used employs graphs for a general automaton as a formal way of storing knowledge in a computer. Basic types of arguments arising from the natural deductive processes are identified and established as valid through the procedures of formal logic.
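A very loose modern sketch of the idea of combining knowledge structures received at different times and then deducing new conclusions from the merged structure is shown below. The triple notation and the transitive "is_a" rule are illustrative assumptions and do not reproduce Koenig's automaton-based formalism.

```python
# Merge knowledge received in different time periods, then deduce a conclusion.
from itertools import product

period_1 = {("socrates", "is_a", "human")}   # knowledge received earlier
period_2 = {("human", "is_a", "mortal")}     # knowledge received later

merged = period_1 | period_2                 # association via shared nodes

def deduce(graph):
    """Close the graph under the rule: is_a is transitive."""
    facts = set(graph)
    changed = True
    while changed:
        changed = False
        for (a, r1, b), (c, r2, d) in product(facts, repeat=2):
            if r1 == r2 == "is_a" and b == c and (a, "is_a", d) not in facts:
                facts.add((a, "is_a", d))
                changed = True
    return facts

print(("socrates", "is_a", "mortal") in deduce(merged))  # True
```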

Details

Kybernetes, vol. 7 no. 2
Type: Research Article
ISSN: 0368-492X
