Search results

1 – 10 of over 27,000
Article
Publication date: 3 October 2023

Haklae Kim

Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
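
As a rough illustration of this kind of semantic query (not the authors' code; the RiC-O class and property names below are assumptions for illustration, so consult the RiC-O specification for the exact terms), a small rdflib sketch in Python:

    # Sketch only: a tiny archival knowledge graph queried with SPARQL.
    from rdflib import Graph, Namespace, RDF

    RICO = Namespace("https://www.ica.org/standards/RiC/ontology#")
    EX = Namespace("http://example.org/97imf/")

    g = Graph()
    g.add((EX.record1, RDF.type, RICO.Record))
    g.add((EX.agent1, RDF.type, RICO.Agent))
    g.add((EX.record1, RICO.hasCreator, EX.agent1))  # illustrative property

    query = """
        PREFIX rico: <https://www.ica.org/standards/RiC/ontology#>
        SELECT ?record ?agent
        WHERE { ?record a rico:Record ; rico:hasCreator ?agent . }
    """
    for record, agent in g.query(query):
        print(record, "was created by", agent)

A query along these lines retrieves records through their creating agent, the sort of context-preserving relationship a plain keyword index cannot express.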

Findings

This study modelled the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitated comprehensive record descriptions spanning different record entities. Combining archival ontologies with general-purpose vocabularies when describing digital records is recommended to enhance metadata coherence and semantic search.

Originality/value

Most digital archives operated in Korea make limited use of archival metadata standards. The contribution of this study is a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw archival data, preprocessing and enriching the data, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with semantic queries over it, can supplement keyword search in conventional digital archives.

Details

The Electronic Library, vol. 42 no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 12 May 2023

Chang-Sup Park

Abstract

Purpose

This paper studies keyword search over graph-structured data, which is used in various fields such as the semantic web, linked open data and social networks. This study aims to propose an efficient keyword search algorithm on graph data that finds the top-k answers that are most relevant to the query and have diverse content nodes for the input keywords.

Design/methodology/approach

Based on an aggregative measure of the diversity of an answer set, this study proposes an approach to searching for the top-k diverse answers to a query on graph data, which finds a set of the most relevant answer trees whose average dissimilarity is no lower than a given threshold. This study defines a diversity constraint that a subset of answer trees must satisfy to be included in the solution. Then, an enumeration algorithm and a heuristic search algorithm are proposed to find an optimal solution efficiently based on the diversity constraint and an A* heuristic. This study also provides strategies for improving the performance of the heuristic search method.
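
The diversity constraint can be sketched concretely. The following Python fragment is a minimal, assumed interpretation (Jaccard distance over content-node sets, with a greedy scan standing in for the paper's enumeration and A*-based algorithms):

    from itertools import combinations

    def dissimilarity(a, b):
        """Jaccard distance between two answers' content-node sets."""
        union = len(a | b)
        return 1 - len(a & b) / union if union else 0.0

    def satisfies_diversity(answers, threshold):
        """True if the average pairwise dissimilarity is >= threshold."""
        pairs = list(combinations(answers, 2))
        if not pairs:
            return True
        return sum(dissimilarity(a, b) for a, b in pairs) / len(pairs) >= threshold

    def top_k_diverse(candidates, k, threshold):
        """Greedy baseline: keep an answer only if the set stays diverse."""
        chosen = []
        for nodes in candidates:               # assumed sorted by relevance
            if satisfies_diversity(chosen + [nodes], threshold):
                chosen.append(nodes)
            if len(chosen) == k:
                break
        return chosen

    answers = [frozenset({"n1", "n2"}), frozenset({"n1", "n3"}), frozenset({"n4"})]
    print(top_k_diverse(answers, k=2, threshold=0.5))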

Findings

The results of experiments using a real data set demonstrate that the proposed search algorithm can find top-k diverse and relevant answers to a query on large-scale graph data efficiently and outperforms the previous methods.

Originality/value

This study proposes a new keyword search method for graph data that finds an optimal solution with diverse and relevant answers to the query. It can provide users with query results that satisfy their various information needs on large graph data.

Details

International Journal of Web Information Systems, vol. 19 no. 1
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 28 April 2020

Siham Eddamiri, Asmaa Benghabrit and Elmoukhtar Zemmouri

Abstract

Purpose

The purpose of this paper is to present a generic pipeline for Resource Description Framework (RDF) graph mining, providing a comprehensive review of each step in the knowledge discovery from data process. The authors also investigate different approaches and their combinations for extracting feature vectors from RDF graphs to support the clustering and theme identification tasks.

Design/methodology/approach

The proposed methodology comprises four steps. First, the authors generate several graph substructures (Walks, Set of Walks, Walks with backward and Set of Walks with backward). Second, the authors build neural language models to extract numerical vectors from the generated sequences, using word embedding techniques (Word2Vec and Doc2Vec) combined with term frequency-inverse document frequency (TF-IDF). Third, the authors use the well-known K-means algorithm to cluster the RDF graph. Finally, the authors extract the most relevant rdf:type from the grouped vertices to describe the semantics of each theme by generating the labels.
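
A compressed sketch of steps two and three, assuming toy walks have already been extracted (the walks, hyperparameters and cluster count are illustrative, not the authors'):

    import numpy as np
    from gensim.models import Word2Vec
    from sklearn.cluster import KMeans

    # Each "sentence" is one walk: alternating entity and predicate labels.
    walks = [
        ["ex:alice", "foaf:knows", "ex:bob", "rdf:type", "foaf:Person"],
        ["ex:bob", "foaf:knows", "ex:carol", "rdf:type", "foaf:Person"],
        ["ex:paper1", "dc:creator", "ex:alice", "rdf:type", "ex:Article"],
    ]

    # Step 2: learn token vectors from the walk sequences.
    model = Word2Vec(sentences=walks, vector_size=32, window=5,
                     min_count=1, sg=1, epochs=50, seed=1)

    # Step 3: cluster the entity vectors with K-means.
    entities = ["ex:alice", "ex:bob", "ex:carol", "ex:paper1"]
    X = np.array([model.wv[e] for e in entities])
    labels = KMeans(n_clusters=2, n_init=10, random_state=1).fit_predict(X)
    print(dict(zip(entities, labels)))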

Findings

The experimental evaluation on state-of-the-art data sets (AIFB, BGS and Conference) shows that combining Set of Walks-with-backward with the TF-IDF and Doc2Vec techniques gives excellent results. In fact, the clustering results reach more than 97% and 90% in terms of purity and F-measure, respectively. Concerning theme identification, the results show that, with the same combination, the purity and F-measure criteria exceed 90% for all the considered data sets.

Originality/value

The originality of this paper lies in two aspects: first, a new machine learning pipeline for RDF data is presented; second, an efficient process to identify and extract relevant graph substructures from an RDF graph is proposed. The proposed techniques were combined with different neural language models to improve the accuracy and relevance of the obtained feature vectors that will be fed to the clustering mechanism.

Details

International Journal of Web Information Systems, vol. 16 no. 2
Type: Research Article
ISSN: 1744-0084

Open Access
Article
Publication date: 29 June 2020

Paolo Manghi, Claudio Atzori, Michele De Bonis and Alessia Bardi

Abstract

Purpose

Several online services offer functionalities to access information from “big research graphs” (e.g. Google Scholar, OpenAIRE, Microsoft Academic Graph), which correlate scholarly/scientific communication entities such as publications, authors, datasets, organizations, projects, funders, etc. Depending on the target users, access can vary from searching and browsing content to consuming statistics for monitoring and provision of feedback. Such graphs are populated over time as aggregations of multiple sources and therefore suffer from major entity-duplication problems. Although graph deduplication is a known and pressing problem, existing solutions are dedicated to specific scenarios, operate on flat collections or address local topology-driven challenges, and therefore cannot be re-used in other contexts.

Design/methodology/approach

This work presents GDup, an integrated, scalable, general-purpose system that can be customized to address deduplication over arbitrarily large information graphs. The paper presents its high-level architecture, describes its implementation as a service used within the OpenAIRE infrastructure system and reports figures from real-case experiments.
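
A conceptual sketch of such a deduplication pass, using generic Python libraries rather than GDup's actual API (the records, similarity function and threshold are placeholders):

    import itertools
    from difflib import SequenceMatcher
    import networkx as nx

    records = {
        1: "Knowledge graphs for digital archives",
        2: "Knowledge Graphs for Digital Archives.",
        3: "Dynamic graph neural networks for video engagement",
    }

    # Score all candidate pairs and link probable duplicates.
    match = nx.Graph()
    match.add_nodes_from(records)
    for a, b in itertools.combinations(records, 2):
        sim = SequenceMatcher(None, records[a].lower(), records[b].lower()).ratio()
        if sim > 0.9:                          # placeholder threshold
            match.add_edge(a, b)

    # Each connected component is merged into one disambiguated entity.
    for group in nx.connected_components(match):
        print(sorted(group))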

Findings

GDup provides the functionalities required to deliver a fully-fledged entity deduplication workflow over a generic input graph. The system offers out-of-the-box Ground Truth management, acquisition of feedback from data curators and algorithms for identifying and merging duplicates, to obtain an output disambiguated graph.

Originality/value

To our knowledge, GDup is the only system in the literature that offers an integrated and general-purpose solution for the deduplication of graphs while targeting big data scalability issues. GDup is today one of the key modules of the OpenAIRE infrastructure production system, which monitors Open Science trends on behalf of the European Commission, national funders and institutions.

Details

Data Technologies and Applications, vol. 54 no. 4
Type: Research Article
ISSN: 2514-9288

Article
Publication date: 8 September 2023

Xiancheng Ou, Yuting Chen, Siwei Zhou and Jiandong Shi

Abstract

Purpose

With the continuous growth of online education, the quality of online educational videos has become an increasingly prominent issue, leaving students in online learning facing the dilemma of knowledge confusion. The existing mechanisms for controlling the quality of online educational videos suffer from subjectivity and low timeliness. An important aspect of monitoring the quality of online educational videos is the analysis of metadata features and log data. With the development of artificial intelligence technology, deep learning techniques with strong predictive capabilities can provide new methods for predicting the quality of online educational videos, effectively overcoming the shortcomings of existing methods. The purpose of this study is to find a deep neural network that can model the dynamic and static features of the video itself, as well as the relationships between videos, to achieve dynamic monitoring of the quality of online educational videos.

Design/methodology/approach

The quality of a video cannot be directly measured. According to previous research, the authors use engagement to represent the level of video quality. Engagement is the normalized participation time, which represents the degree to which learners tend to participate in the video. Based on existing public data sets, this study designs an online educational video engagement prediction model based on dynamic graph neural networks (DGNNs). The model is trained based on the video’s static features and dynamic features generated after its release by constructing dynamic graph data. The model includes a spatiotemporal feature extraction layer composed of DGNNs, which can effectively extract the time and space features contained in the video's dynamic graph data. The trained model is used to predict the engagement level of learners with the video on day T after its release, thereby achieving dynamic monitoring of video quality.
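
As a hedged illustration of the dynamic graph construction step (the feature values and similarity threshold below are placeholders, not the paper's settings), one day's adjacency snapshot might be built as follows:

    import numpy as np

    def cosine_adjacency(features, threshold=0.8):
        """features: (n_videos, n_dims) per-day feature matrix."""
        norms = np.linalg.norm(features, axis=1, keepdims=True)
        unit = features / np.clip(norms, 1e-12, None)
        sim = unit @ unit.T
        adj = (sim >= threshold).astype(float)
        np.fill_diagonal(adj, 0.0)             # drop self-loops
        return adj

    rng = np.random.default_rng(0)
    day_features = rng.random((5, 8))          # 5 videos, 8 features each
    print(cosine_adjacency(day_features))

A sequence of such daily snapshots, paired with engagement labels, is what the spatiotemporal feature extraction layer would consume.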

Findings

Models with spatiotemporal feature extraction layers consisting of four types of DGNNs can accurately predict the engagement level of online educational videos. Of these, the model using the temporal graph convolutional neural network has the smallest prediction error. In dynamic graph construction, using cosine similarity and Euclidean distance functions with reasonable threshold settings can construct a structurally appropriate dynamic graph. In the training of this model, the amount of historical time series data used will affect the model’s predictive performance. The more historical time series data used, the smaller the prediction error of the trained model.

Research limitations/implications

A limitation of this study is that not all video data in the data set was used to construct the dynamic graph due to memory constraints. In addition, the DGNNs used in the spatiotemporal feature extraction layer are relatively conventional.

Originality/value

In this study, the authors propose an online educational video engagement prediction model based on DGNNs, which can achieve the dynamic monitoring of video quality. The model can be applied as part of a video quality monitoring mechanism for various online educational resource platforms.

Details

International Journal of Web Information Systems, vol. 19 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 5 October 2022

Michael DeBellis and Biswanath Dutta

Abstract

Purpose

The purpose of this paper is to describe the CODO ontology (COviD-19 Ontology) that captures epidemiological data about the COVID-19 pandemic in a knowledge graph that follows the FAIR principles. This study took information from spreadsheets and integrated it into a knowledge graph that could be queried with SPARQL and visualized with the Gruff tool in AllegroGraph.

Design/methodology/approach

The knowledge graph was designed with the Web Ontology Language. The methodology was a hybrid approach integrating the YAMO methodology for ontology design and Agile methods to define iterations and approach to requirements, testing and implementation.
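
To illustrate the kind of SPARQL access described (the codo: property name below is a hypothetical placeholder, not necessarily a term from the published ontology), a small rdflib sketch:

    from rdflib import Graph, Namespace, RDF

    CODO = Namespace("http://www.isibang.ac.in/ns/codo#")
    EX = Namespace("http://example.org/cases/")

    g = Graph()
    g.add((EX.p1, RDF.type, CODO.Patient))
    g.add((EX.p2, RDF.type, CODO.Patient))
    g.add((EX.p2, CODO.contractedVirusFrom, EX.p1))   # hypothetical property

    query = """
        PREFIX codo: <http://www.isibang.ac.in/ns/codo#>
        SELECT ?case ?source
        WHERE { ?case codo:contractedVirusFrom ?source . }
    """
    for case, source in g.query(query):
        print(case, "was infected by", source)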

Findings

The hybrid approach demonstrated that Agile can bring the same benefits to knowledge graph projects as it has to other projects. The two-person team went from an ontology to a large knowledge graph with approximately 5 million triples in a few months. The authors gathered useful real-world experience on how to most effectively transform “from strings to things.”

Originality/value

This study is the only FAIR model (to the best of the authors’ knowledge) to address epidemiology data for the COVID-19 pandemic. It also brought to light several practical issues that generalize to other studies wishing to go from an ontology to a large knowledge graph. This study is one of the first studies to document how the Agile approach can be used for knowledge graph development.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 1 November 2021

Maren Parnas Gulnes, Ahmet Soylu and Dumitru Roman

Abstract

Purpose

Neuroscience data are spread across a variety of sources, typically provisioned through ad hoc and non-standard approaches and formats, and often have no connection to related data sources. This makes it difficult for researchers to understand, integrate and reuse brain-related data. The aim of this study is to show that a graph-based approach offers an effective means for representing, analysing and accessing brain-related data, which are highly interconnected, evolving over time and often needed in combination.

Design/methodology/approach

The authors present an approach for organising brain-related data in a graph model. The approach is exemplified in the case of a unique data set of quantitative neuroanatomical data about the murine basal ganglia, a group of nuclei in the brain essential for processing information related to movement. Specifically, the murine basal ganglia data set is modelled as a graph, integrated with relevant data from third-party repositories, published through a Web-based user interface and API, and analysed from exploratory and confirmatory perspectives using popular graph algorithms to extract new insights.
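
A minimal sketch of this style of graph modelling and exploratory analysis, with invented node and edge names (not the study's actual schema):

    import networkx as nx

    g = nx.Graph()
    g.add_node("striatum", kind="region")
    g.add_node("globus_pallidus", kind="region")
    g.add_node("record_42", kind="quantitation", cell_count=11500)
    g.add_edge("record_42", "striatum", relation="measured_in")
    g.add_edge("striatum", "globus_pallidus", relation="projects_to")

    # Exploratory analysis: which nodes sit most centrally in the data graph?
    print(nx.degree_centrality(g))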

Findings

The evaluation of the graph model and the results of the graph data analysis and usability study of the user interface suggest that graph-based data management in the neuroscience domain is a promising approach, since it enables integration of various disparate data sources and improves understanding and usability of data.

Originality/value

The study provides a practical and generic approach for representing, integrating, analysing and provisioning brain-related data and a set of software tools to support the proposed approach.

Details

Data Technologies and Applications, vol. 56 no. 3
Type: Research Article
ISSN: 2514-9288

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications. The quality of the experiments directly impacts model performance. Uncertainty inherently affects experimental measurements but is often missing from the available data sets because of its estimation cost. For similar reasons, experiments are scarce compared to other data sources. Discarding experiments because their uncertainty values are missing would preclude the development of predictive models. Data profiling techniques are fundamental for assessing data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of the experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.
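
For illustration, a TransE-style scoring function shows the link-prediction idea of ranking candidate uncertainty values for an experiment. The untrained random vectors, relation name and entity labels below are all assumptions; a real pipeline would first train the embeddings on the knowledge graph:

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16
    # Untrained placeholder embeddings; a real pipeline learns these.
    entity = {name: rng.normal(size=dim)
              for name in ("exp1", "unc_low", "unc_high")}
    relation = {"hasUncertainty": rng.normal(size=dim)}   # hypothetical relation

    def transe_score(h, r, t):
        """Lower ||h + r - t|| means the triple is more plausible."""
        return np.linalg.norm(entity[h] + relation[r] - entity[t])

    # Link prediction: rank candidate uncertainty values for "exp1".
    candidates = ["unc_low", "unc_high"]
    print(sorted(candidates,
                 key=lambda t: transe_score("exp1", "hasUncertainty", t)))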

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The employment of knowledge graph embedding to predict the missing experimental uncertainty is a novel alternative to the current and more costly techniques in the literature. Such contribution permits a better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.

Article
Publication date: 20 August 2018

Chang-Sup Park

Abstract

Purpose

This paper aims to propose a new keyword search method on graph data that improves the relevance of search results and reduces duplication of content nodes in the answer trees obtained by previous approaches based on distinct root semantics. The previous approaches are restricted to finding answer trees having different root nodes and thus often generate a result consisting of answer trees with low relevance to the query or duplicate content nodes. The proposed method allows limited redundancy in the root nodes of the top-k answer trees to produce more effective query results.

Design/methodology/approach

A measure of the redundancy in a set of answer trees with respect to their root nodes is defined, and according to this metric, a set of answer trees with limited root redundancy is proposed as the result of a keyword query on graph data. For efficient query processing, an index over the useful paths in the graph, built on inverted lists and a hash map, is suggested. Then, based on the path index, a top-k query processing algorithm is presented to find the most relevant and diverse answer trees given a maximum amount of root redundancy allowed in the answer set.
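
A toy sketch of such a path index (the layout is an assumption for illustration, not the paper's exact structure):

    from collections import defaultdict

    graph = {"r": ["a", "b"], "a": ["c"], "b": [], "c": []}
    node_text = {"r": "graph", "a": "keyword", "b": "search",
                 "c": "keyword search"}

    # Inverted lists: each keyword maps to the nodes that contain it.
    inverted = defaultdict(set)
    for node, text in node_text.items():
        for word in text.split():
            inverted[word].add(node)

    # Hash map: (root, node) -> the root-to-node path, precomputed once.
    paths = {}
    stack = [("r", ["r"])]
    while stack:
        node, path = stack.pop()
        paths[("r", node)] = path
        for child in graph[node]:
            stack.append((child, path + [child]))

    print(sorted(inverted["keyword"]), paths[("r", "c")])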

Findings

The results of experiments using real graph datasets show that the proposed approach produces query answers that are more diverse in their content nodes and more relevant to the query than those of the previous approach based on distinct root semantics.

Originality/value

This paper is the first to take redundancy in the root nodes of answer trees into account, improving the relevance and reducing the content-node redundancy of query results compared with the previous distinct root semantics. It can satisfy users’ various information needs over large and complex graph data using keyword-based queries.

Details

International Journal of Web Information Systems, vol. 14 no. 3
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 6 July 2023

Mengda Xing, Weilong Ding, Tianpu Zhang and Han Li

Abstract

Purpose

Remaining useful life (RUL) prediction for power transformer maintenance is a challenging task on heterogeneous data. Monitoring data of power transformers are not always compatible or in an identical format; therefore, RUL predictions have traditionally been made separately on different data. Moreover, the chemical molecules used in RUL prediction can be transformed into each other under different conditions, thus forming a complete graph with an uncertain adjacency matrix (UAM). This study aims to find and evaluate a new model that achieves better RUL prediction results than the baselines.

Design/methodology/approach

In this work, the authors propose a spatiotemporal complete graph convolutional network (STCGCN) for RUL prediction with two branches, in which daily and hourly features are extracted separately from correlated heterogeneous data. This study provides a thorough evaluation of the proposed model on real-world data and compares it with state-of-the-art RUL prediction models.
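
A hedged sketch of a combined Euclidean-plus-cosine similarity that could populate the UAM; the paper's exact EucCos aggregation formula is not reproduced here, and this blend (with its alpha weight and toy features) is only an assumed illustration:

    import numpy as np

    def euccos_similarity(x, alpha=0.5):
        """x: (n, d) feature matrix; returns an (n, n) similarity blend."""
        unit = x / np.clip(np.linalg.norm(x, axis=1, keepdims=True), 1e-12, None)
        cos = unit @ unit.T                       # cosine term, in [-1, 1]
        dist = np.linalg.norm(x[:, None] - x[None, :], axis=-1)
        euc = 1.0 / (1.0 + dist)                  # Euclidean term, in (0, 1]
        return alpha * cos + (1 - alpha) * euc

    rng = np.random.default_rng(0)
    features = rng.random((4, 6))                 # 4 gas indicators, 6 features
    print(euccos_similarity(features))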

Findings

By using the multibranch structure and EucCos similarity aggregation, STCGCN captures dynamic spatiotemporal patterns across a variety of heterogeneous data and obtains more accurate prediction results than other time series prediction methods.

Originality/value

In this work, the authors propose a novel multibranch structure to compute feature maps from two heterogeneous data sources efficiently and a novel similarity aggregation method to compute the spatial UAM within the complete graph. Compared with traditional time series prediction models, the model pays attention to the spatial relationships in time series data.

Details

International Journal of Web Information Systems, vol. 19 no. 2
Type: Research Article
ISSN: 1744-0084
