Search results

1 – 10 of 22
Article
Publication date: 15 March 2024

Florian Rupp, Benjamin Schnabel and Kai Eckert

Abstract

Purpose

The purpose of this work is to explore the new possibilities enabled by the recent introduction of RDF-star, an extension that allows for statements about statements within the Resource Description Framework (RDF). Alongside Named Graphs, this approach offers opportunities to leverage a meta-level for data modeling and data applications.

Design/methodology/approach

In this extended paper, the authors build on three modeling use cases published in a previous paper: (1) providing provenance information, (2) maintaining backwards compatibility for existing models and (3) reducing the complexity of a data model. The authors present two scenarios in which they use the meta-level to extend a data model with meta-information.

Findings

The authors present three abstract patterns for actively using the meta-level in data modeling. They showcase the implementation of the meta-level through two scenarios from their research project: (1) they introduce a workflow for triple annotation that uses the meta-level to let users comment on individual statements, such as for reporting errors or adding supplementary information; (2) they demonstrate how adding meta-information to a data model can accommodate highly specialized data while keeping the underlying model simple.

Practical implications

Through the formulation of data modeling patterns with RDF-star and the demonstration of their application in two scenarios, the authors advocate for data modelers to embrace the meta-level.

Originality/value

With RDF-star being a very new extension to RDF, to the best of the authors’ knowledge, they are among the first to relate it to other meta-level approaches and demonstrate its application in real-world scenarios.
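The statement-about-statement idea at the heart of RDF-star can be sketched in plain Python (a conceptual illustration only; all names such as `ex:Alice` and `prov:wasDerivedFrom` are invented, and a real implementation would use an RDF-star-aware triple store). A quoted triple is modeled as a tuple that itself serves as the subject of further triples:

```python
# A base statement: ex:Alice ex:worksFor ex:ACME
base = ("ex:Alice", "ex:worksFor", "ex:ACME")

# Meta-level statements about that statement, e.g. provenance or a user comment
annotations = [
    (base, "prov:wasDerivedFrom", "ex:HRDatabase"),
    (base, "ex:comment", "Possibly outdated; please verify."),
]

def statements_about(triple, graph):
    """Return all (predicate, object) pairs asserted about a quoted triple."""
    return [(p, o) for (s, p, o) in graph if s == triple]

print(statements_about(base, annotations))
```

This mirrors the triple-annotation scenario above: provenance and user comments attach to an individual statement rather than to the graph as a whole.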

Details

The Electronic Library, vol. ahead-of-print, no. ahead-of-print
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 3 October 2023

Haklae Kim

Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.

Findings

This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitated comprehensive descriptions of the different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records is advised to enhance metadata coherence and semantic search.

Originality/value

Most digital archives in Korea make limited and improper use of archival metadata standards. The contribution of this study is a practical application of knowledge graph technology for linking and exploring digital records. The study details the process of collecting raw archival data, preprocessing and enriching it, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with semantic queries over it, can supplement keyword search in conventional digital archives.
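The relationship-based retrieval described above can be illustrated with a minimal sketch (identifiers such as `agent:finMinistry` are invented, not taken from the 97imf.kr archive; a real system would use RiC-O terms in a triple store):

```python
# A toy archive knowledge graph as a set of (subject, predicate, object) triples.
triples = {
    ("record:r1", "ric:hasCreator", "agent:finMinistry"),
    ("record:r2", "ric:hasCreator", "agent:finMinistry"),
    ("record:r3", "ric:hasCreator", "agent:centralBank"),
    ("record:r1", "ric:documents", "event:bailout1997"),
    ("agent:finMinistry", "schema:sameAs", "wd:Q_EXAMPLE"),  # link to external data
}

def objects(s, p):
    """All objects reachable from subject s via predicate p."""
    return {o for (s2, p2, o) in triples if (s2, p2) == (s, p)}

# "Records created by the same agent as record:r1" -- a relationship join
# that a plain keyword search over titles cannot express.
related = {rec for creator in objects("record:r1", "ric:hasCreator")
           for (rec, p, o) in triples
           if p == "ric:hasCreator" and o == creator}
print(sorted(related))
```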

Details

The Electronic Library, vol. 42, no. 1
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 31 August 2023

Faycal Touazi and Amel Boustil

Abstract

Purpose

The purpose of this paper is to address the need for new approaches to locating items that closely match user preference criteria, given the rise in the data volume of knowledge bases resulting from Open Data initiatives. Specifically, the paper focuses on evaluating qualitative preference queries over user preferences in SPARQL.

Design/methodology/approach

The paper outlines a novel approach for handling SPARQL preference queries by representing preferences through symbolic weights using the possibilistic logic (PL) framework. This approach allows for the management of symbolic weights without relying on numerical values, using a partial ordering system instead. The paper compares this approach with numerous other approaches, including those based on skylines, fuzzy sets and conditional preference networks.

Findings

The paper highlights the advantages of the proposed approach, which enables the representation of preference criteria through symbolic weights and qualitative considerations. This approach offers a more intuitive way to convey preferences and manage rankings.

Originality/value

The paper demonstrates the usefulness and originality of the proposed SPARQL language in the PL framework. The approach extends SPARQL by incorporating symbolic weights and qualitative preferences.
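A rough sketch of the qualitative idea, with no claim to reproduce the authors' possibilistic logic semantics: symbolic weights are ordered only partially, and one answer is preferred to another when the other violates a strictly more important preference. The weight names and the dominance criterion below are simplifications:

```python
# Partial order on symbolic weights: w1 outranks w2 and w3; w2, w3 incomparable.
# No numeric values are assigned to the weights.
stronger_than = {("w1", "w2"), ("w1", "w3")}

def preferred(a_violates, b_violates):
    """Answer a is preferred to answer b if some weight violated by b is
    strictly stronger than every weight violated by a (a simplified,
    hedged dominance criterion, not the paper's exact PL ranking)."""
    return any(all((w, v) in stronger_than for v in a_violates)
               for w in b_violates)

print(preferred({"w3"}, {"w1"}))  # b violates the most important preference
print(preferred({"w1"}, {"w3"}))
```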

Details

International Journal of Web Information Systems, vol. 19, no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 16 April 2024

Henrik Dibowski

Abstract

Purpose

Adequate means for easily viewing, browsing and searching knowledge graphs (KGs) remain a crucial limiting factor. Therefore, this paper aims to present virtual properties as a valuable user interface (UI) concept for ontologies and KGs that can alleviate these issues. Virtual properties provide shortcuts on a KG that can enrich the scope of a class with information beyond its direct neighborhood.

Design/methodology/approach

Virtual properties can be defined as enhancements of Shapes Constraint Language (SHACL) property shapes. Their values are computed on demand via SPARQL Protocol and RDF Query Language (SPARQL) queries. An approach is demonstrated that can help to identify suitable virtual property candidates. Virtual properties can be realized as an integral functionality of generic, frame-based UIs, which can automatically provide views and masks for viewing and searching a KG.

Findings

The virtual property approach has been implemented at Bosch and is usable by more than 100,000 Bosch employees in a production deployment, which attests to the maturity and relevance of the approach for Bosch. Virtual properties have been shown to significantly improve KG UIs by enriching the scope of a class with information beyond its direct neighborhood.

Originality/value

SHACL-defined virtual properties and their automatic identification are a novel concept. To the best of the author’s knowledge, no such approach has been established nor standardized so far.
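The shortcut idea behind virtual properties can be sketched as follows (a hypothetical illustration, not Bosch's implementation; in practice the path would be a SPARQL query stored in a SHACL property shape and evaluated on demand):

```python
# Toy graph: adjacency indexed by (subject, property). All names are invented.
graph = {
    ("ex:alice", "ex:worksIn"): ["ex:dept42"],
    ("ex:dept42", "ex:locatedAt"): ["ex:siteA"],
}

def follow(subject, path):
    """Evaluate a property path -- the analogue of the stored query behind a
    SHACL-defined virtual property -- starting from `subject`."""
    frontier = [subject]
    for prop in path:
        frontier = [o for s in frontier for o in graph.get((s, prop), [])]
    return frontier

# Virtual property "site" on a person = shortcut worksIn / locatedAt, so a UI
# can show the site directly on the person, beyond its direct neighborhood.
print(follow("ex:alice", ["ex:worksIn", "ex:locatedAt"]))
```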

Details

International Journal of Web Information Systems, vol. ahead-of-print, no. ahead-of-print
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 22 February 2022

Apostolos Vlachos, Maria Perifanou and Anastasios A. Economides

Abstract

Purpose

The purpose of this paper is to review ontologies and data models currently in use for augmented reality (AR) applications in the cultural heritage (CH) domain, specifically in urban environments. The aim is to identify current trends in the ontologies and data models used and to investigate their applications in real-world scenarios. Some special cases of applications or ontologies, interesting enough to merit special consideration, are also discussed.

Design/methodology/approach

A search of Google Scholar, Scopus, ScienceDirect and IEEE Xplore was conducted to find articles describing ontologies and data models in AR CH applications. The authors identified the articles that analyze the use of ontologies and/or data models, as well as articles deemed to be of special interest.

Findings

This review found that CIDOC-CRM is the most popular ontology closely followed by Historical Context Ontology (HiCO). Also, a combination of current ontologies seems to be the most complete way to fully describe a CH object or site. A layered ontology model is suggested, which can be expanded according to the specific project.

Originality/value

This study provides an overview of ontologies and data models for AR CH applications in urban environments. Several ontologies are currently in use in the CH domain, none of which has been universally adopted, while new ontologies or extensions to existing ones are being created in the attempt to fully describe a CH object or site. This study also suggests a combination of popular ontologies in a multi-layer model.

Details

Journal of Cultural Heritage Management and Sustainable Development, vol. 14, no. 2
Type: Research Article
ISSN: 2044-1266

Open Access
Article
Publication date: 5 April 2024

Miquel Centelles and Núria Ferran-Ferrer

Abstract

Purpose

This study aims to develop a comprehensive framework for assessing knowledge organization systems (KOSs), including the taxonomy of Wikipedia and the ontologies of Wikidata, with a specific focus on enhancing management and retrieval from a gender-nonbinary perspective.

Design/methodology/approach

This study employs heuristic and inspection methods to assess Wikipedia’s KOS, ensuring compliance with international standards. It evaluates the efficiency of retrieving non-masculine gender-related articles using the Catalan Wikipedia category scheme, identifying its limitations. Additionally, a novel assessment of Wikidata ontologies examines their structure and coverage of gender-related properties, comparing them with Wikipedia’s taxonomy to identify advantages and possible enhancements.

Findings

This study evaluates Wikipedia’s taxonomy and Wikidata’s ontologies, establishing evaluation criteria for gender-based categorization and exploring their structural effectiveness. The evaluation process suggests that Wikidata ontologies may offer a viable solution to address Wikipedia’s categorization challenges.

Originality/value

The assessment of Wikipedia categories (taxonomy) based on KOS standards leads to the conclusion that there is ample room for improvement, not only in matters concerning gender identity but also in the overall KOS to enhance search and retrieval for users. These findings bear relevance for the design of tools to support information retrieval on knowledge-rich websites, as they assist users in exploring topics and concepts.

Abstract

Details

Business and Management Doctorates World-Wide: Developing the Next Generation
Type: Book
ISBN: 978-1-78973-500-0

Article
Publication date: 9 November 2023

Gustavo Candela, Nele Gabriëls, Sally Chambers, Milena Dobreva, Sarah Ames, Meghan Ferriter, Neil Fitzgerald, Victor Harbo, Katrine Hofmann, Olga Holownia, Alba Irollo, Mahendra Mahey, Eileen Manchester, Thuy-An Pham, Abigail Potter and Ellen Van Keer

Abstract

Purpose

The purpose of this study is to offer a checklist that can be used both for creating and for evaluating digital collections suitable for computational use, which are sometimes referred to as data sets within the collections as data movement.

Design/methodology/approach

The checklist was built by synthesising and analysing the results of relevant research literature, articles and studies and the issues and needs obtained in an observational study. The checklist was tested and applied both as a tool for assessing a selection of digital collections made available by galleries, libraries, archives and museums (GLAM) institutions as proof of concept and as a supporting tool for creating collections as data.

Findings

Over the past few years, there has been growing interest in making the digital collections published by GLAM organisations available for computational use. Building on previous work, the authors defined a methodology to build a checklist for the publication of collections as data. The authors’ evaluation showed several examples of applications that can be useful in encouraging other institutions to publish their digital collections for computational use.

Originality/value

While some work exists on making digital collections suitable for computational use available, with particular attention to data quality, planning and experimentation, to the best of the authors’ knowledge, none of the work to date provides an easy-to-follow and robust checklist for publishing collection data sets in GLAM institutions. This checklist is intended to encourage small- and medium-sized institutions to adopt the collections as data principles in their daily workflows, following best practices and guidelines.

Details

Global Knowledge, Memory and Communication, vol. ahead-of-print, no. ahead-of-print
Type: Research Article
ISSN: 2514-9342

Open Access
Article
Publication date: 8 February 2023

Edoardo Ramalli and Barbara Pernici

Abstract

Purpose

Experiments are the backbone of the development process of data-driven predictive models for scientific applications, and their quality directly impacts model performance. Uncertainty inherently affects experimental measurements but is often missing from the available data sets due to its estimation cost. For similar reasons, experimental data are scarce compared to other data sources. Discarding experiments because of missing uncertainty values would preclude the development of predictive models. Data profiling techniques are fundamental to assessing data quality, but some data quality dimensions are challenging to evaluate without knowing the uncertainty. In this context, this paper aims to predict the missing uncertainty of experiments.

Design/methodology/approach

This work presents a methodology to forecast the experiments’ missing uncertainty, given a data set and its ontological description. The approach is based on knowledge graph embeddings and leverages the task of link prediction over a knowledge graph representation of the experiments database. The validity of the methodology is first tested in multiple conditions using synthetic data and then applied to a large data set of experiments in the chemical kinetic domain as a case study.

Findings

The analysis results of different test case scenarios suggest that knowledge graph embedding can be used to predict the missing uncertainty of the experiments when there is a hidden relationship between the experiment metadata and the uncertainty values. The link prediction task is also resilient to random noise in the relationship. The knowledge graph embedding outperforms the baseline results if the uncertainty depends upon multiple metadata.

Originality/value

The employment of knowledge graph embeddings to predict missing experimental uncertainty is a novel alternative to the current, more costly techniques in the literature. This contribution permits better data quality profiling of scientific repositories and improves the development process of data-driven models based on scientific experiments.
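The link-prediction idea can be illustrated with a toy TransE-style scoring function (TransE is one common knowledge graph embedding; the paper's exact model and data are not reproduced here, and all vectors below are invented). A triple (h, r, t) is plausible when h + r lies close to t, so a missing uncertainty value is predicted by ranking candidate tail entities:

```python
# Hand-picked 2-D embeddings for a tiny example (learned in a real system).
entity = {
    "exp1":     [0.0, 0.0],
    "low_unc":  [1.0, 0.0],
    "high_unc": [0.0, 1.0],
}
relation = {"hasUncertainty": [1.0, 0.1]}

def score(h, r, t):
    """Negative squared distance between h + r and t; higher is more plausible."""
    return -sum((hv + rv - tv) ** 2
                for hv, rv, tv in zip(entity[h], relation[r], entity[t]))

# Link prediction: fill the missing tail of (exp1, hasUncertainty, ?) by
# ranking candidate uncertainty entities.
best = max(["low_unc", "high_unc"],
           key=lambda t: score("exp1", "hasUncertainty", t))
print(best)  # low_unc, since exp1 + hasUncertainty lies closest to it
```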
