Search results

21 – 30 of 355
Article
Publication date: 18 November 2013

Jorge Luis Morato, Sonia Sanchez-Cuadrado, Christos Dimou, Divakar Yadav and Vicente Palacios

Abstract

Purpose

This paper seeks to analyze and evaluate different types of semantic web retrieval systems, with respect to their ability to manage and retrieve semantic documents.

Design/methodology/approach

The authors provide a brief overview of knowledge modeling and semantic retrieval systems in order to identify their major problems. They classify a set of characteristics for evaluating the management of semantic documents. To do so, the authors select 12 retrieval systems classified according to these features. The evaluation methodology followed in this work is the one used in the DESMET project for the evaluation of qualitative characteristics.

Findings

A review of the literature has shown deficiencies in the current state of the semantic web in coping with known problems. Additionally, semantic retrieval systems show discrepancies in the way they are implemented. The authors analyze the presence of a set of functionalities in different types of semantic retrieval systems and find a low degree of implementation of important specifications, as well as of the criteria used to evaluate them. The results of this evaluation indicate that, at present, the semantic web is characterized by a lack of usability deriving from problems in the management of semantic documents.

Originality/value

This proposal shows a simple way to qualitatively compare requirements of semantic retrieval systems based on the DESMET methodology. The functionalities chosen to test the methodology are based on the problems and relevant criteria discussed in the literature. This work provides functionalities for designing semantic retrieval systems in different scenarios.

Details

Library Hi Tech, vol. 31 no. 4
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 30 August 2011

Hannes Mühleisen, Tilman Walther and Robert Tolksdorf

Abstract

Purpose

The purpose of this paper is to show the potential of self‐organized semantic storage services. The semantic web has provided a vision of how to build the applications of the future. A software component dedicated to the storage and retrieval of semantic information is an important but generic part of these applications. Apart from mere functionality, these storage components also have to provide good performance regarding the non‐functional requirements of scalability, adaptability and robustness. Distributing the task of storing and querying semantic information onto multiple computers is a way of achieving this performance. However, distributing a task onto a set of computers connected by a communication network is not trivial. Self‐organized technologies, in which no central entity coordinates the system's operation, are one solution.
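The distribution problem described above can be sketched with consistent hashing, one common building block of coordinator-free storage systems (an illustrative sketch only; it is not any surveyed system's actual algorithm, and the node names are invented):

```python
import hashlib
from bisect import bisect_right

def _hash(key: str) -> int:
    # Stable hash so every node computes the same placement independently.
    return int(hashlib.sha256(key.encode()).hexdigest(), 16)

class ConsistentHashRing:
    def __init__(self, nodes, replicas=64):
        # Each node is placed at `replicas` points on the ring so load
        # stays balanced when nodes join or leave.
        self._ring = sorted(
            (_hash(f"{node}:{i}"), node)
            for node in nodes for i in range(replicas)
        )
        self._keys = [h for h, _ in self._ring]

    def node_for(self, triple_key: str) -> str:
        # The responsible node is the first one clockwise from the key's hash,
        # so no central entity is needed to assign triples to nodes.
        idx = bisect_right(self._keys, _hash(triple_key)) % len(self._ring)
        return self._ring[idx][1]

ring = ConsistentHashRing(["node-a", "node-b", "node-c"])
owner = ring.node_for("(ex:Alice, foaf:knows, ex:Bob)")
```

When a node leaves, only the keys it owned are remapped, which is what lets such systems react to changing network infrastructure without global coordination.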

Design/methodology/approach

Based on the available literature on large‐scale semantic storage systems, the paper analyzes the underlying distribution algorithms, with special focus on the properties of semantic information and corresponding queries. The paper compares the approaches and identifies their shortcomings.

Findings

None of the analyzed approaches or their underlying technologies was able to distribute large amounts of semantic information and queries in a generic way while still reacting to changes in the network infrastructure. Nonetheless, as each concept represents a unique trade‐off between these goals, the paper points out how self‐organization is crucial to performing well in at least a subset of them.

Originality/value

The contribution of this paper is a literature review aimed at showing the potential of self‐organized semantic storage services. A case is made for self‐organization in a distributed storage system as the key to excellence in the relevant non‐functional requirements: scalability, adaptability and robustness.

Details

International Journal of Web Information Systems, vol. 7 no. 3
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 25 October 2022

Samir Sellami and Nacer Eddine Zarour

Abstract

Purpose

Massive amounts of data, manifesting in various forms, are being produced on the Web every minute and becoming the new standard. Exploring these information sources, distributed across different Web segments, in a unified way is becoming a core task in a variety of user and company scenarios. However, knowledge creation and exploration from distributed Web data sources is a challenging task. Several data integration conflicts need to be resolved, and the knowledge needs to be visualized in an intuitive manner. The purpose of this paper is to extend the authors’ previous integration works to address semantic knowledge exploration of enterprise data combined with heterogeneous social and linked Web data sources.

Design/methodology/approach

The authors synthesize information in the form of a knowledge graph to resolve interoperability conflicts at integration time. They begin by describing KGMap, a mapping model for leveraging knowledge graphs to bridge heterogeneous relational, social and linked web data sources. The mapping model relies on semantic similarity measures to connect the knowledge graph schema with the sources' metadata elements. Then, based on KGMap, this paper proposes KeyFSI, a keyword-based semantic search engine. KeyFSI provides a responsive faceted navigating Web user interface designed to facilitate the exploration and visualization of embedded data behind the knowledge graph. The authors implemented their approach for a business enterprise data exploration scenario where inputs are retrieved on the fly from a local customer relationship management database combined with the DBpedia endpoint and the Facebook Web application programming interface (API).
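The mapping idea can be sketched as scoring candidate links between knowledge-graph schema terms and source metadata elements (a hypothetical reconstruction: the paper relies on semantic similarity measures, for which a plain token-overlap score stands in here, and all names below are invented):

```python
def jaccard(a: str, b: str) -> float:
    # Token-overlap score between two underscore-separated identifiers.
    ta, tb = set(a.lower().split("_")), set(b.lower().split("_"))
    return len(ta & tb) / len(ta | tb)

def map_schema_to_metadata(schema_terms, metadata_fields, threshold=0.5):
    # For each schema term, keep the best-scoring metadata field
    # whose similarity clears the threshold.
    mapping = {}
    for term in schema_terms:
        best = max(metadata_fields, key=lambda f: jaccard(term, f))
        if jaccard(term, best) >= threshold:
            mapping[term] = best
    return mapping

mapping = map_schema_to_metadata(
    ["customer_name", "company_address"],
    ["name", "customer_name", "address_line"],
)
# customer_name matches exactly; company_address only overlaps partially
```

A genuinely semantic measure (e.g. one based on word embeddings or a thesaurus) would replace `jaccard` here, which is what the study's empirical comparison of similarity measures evaluates.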

Findings

The authors conducted an empirical study to test the effectiveness of their approach using different similarity measures. The observed results showed better efficiency when using a semantic similarity measure. In addition, a usability evaluation was conducted to compare KeyFSI features with recent knowledge exploration systems. The obtained results demonstrate the added value and usability of the contributed approach.

Originality/value

Most state-of-the-art interfaces allow users to browse one Web segment at a time. The originality of this paper lies in proposing a cost-effective virtual on-demand knowledge creation approach, a method that enables organizations to explore valuable knowledge across multiple Web segments simultaneously. In addition, the responsive components implemented in KeyFSI allow the interface to adequately handle the uncertainty imposed by the nature of Web information, thereby providing a better user experience.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 3 October 2023

Haklae Kim

Abstract

Purpose

Despite ongoing research into archival metadata standards, digital archives are unable to effectively represent records in their appropriate contexts. This study aims to propose a knowledge graph that depicts the diverse relationships between heterogeneous digital archive entities.

Design/methodology/approach

This study introduces and describes a method for applying knowledge graphs to digital archives in a step-by-step manner. It examines archival metadata standards, such as Records in Context Ontology (RiC-O), for characterising digital records; explains the process of data refinement, enrichment and reconciliation with examples; and demonstrates the use of knowledge graphs constructed using semantic queries.
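The reconciliation step mentioned above can be sketched as grouping variant labels of the same entity under a normalised key (an illustrative assumption; the normalisation rules and sample labels are not taken from the study):

```python
import unicodedata

def normalise(label: str) -> str:
    # Strip accents, case and redundant whitespace to get a merge key.
    text = unicodedata.normalize("NFKD", label)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return " ".join(text.lower().split())

def reconcile(labels):
    # Group raw labels that normalise to the same key, so each group
    # can become a single entity in the knowledge graph.
    groups = {}
    for label in labels:
        groups.setdefault(normalise(label), []).append(label)
    return groups

groups = reconcile(["IMF", "imf ", "Bank of  Korea", "Bank of Korea"])
# two groups: one for "imf", one for "bank of korea"
```

In practice this step is usually followed by linking each merged group to an external identifier (e.g. a Wikidata item), which is the enrichment the study describes.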

Findings

This study introduced the 97imf.kr archive as a knowledge graph, enabling meaningful exploration of relationships within the archive’s records. This approach facilitated comprehensive descriptions of different record entities. Applying archival ontologies together with general-purpose vocabularies to digital records was advised to enhance metadata coherence and semantic search.

Originality/value

Most digital archives serviced in Korea make limited and improper use of archival metadata standards. The contribution of this study is to propose a practical application of knowledge graph technology for linking and exploring digital records. This study details the process of collecting raw data on archives, data preprocessing and data enrichment, and demonstrates how to build a knowledge graph connected to external data. In particular, a knowledge graph built with the RiC-O, Wikidata and Schema.org vocabularies, together with semantic queries over it, can supplement keyword search in conventional digital archives.

Details

The Electronic Library, vol. 42 no. 1
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 2 March 2012

Thomas Baker

Abstract

Purpose

Library‐world “languages of description” are increasingly being expressed using the resource description framework (RDF) for compatibility with linked data approaches. This article aims to look at how issues around the Dublin Core, a small “metadata element set,” exemplify issues that must be resolved in order to ensure that library data meet traditional standards for quality and consistency while remaining broadly interoperable with other data sources in the linked data environment.

Design/methodology/approach

The article focuses on how the Dublin Core – originally seen, in traditional terms, as a simple record format – came increasingly to be seen as an RDF vocabulary for use in metadata based on a “statement” model, and how new approaches to metadata evolved to bridge the gap between these models.

Findings

The translation of library standards into RDF involves the separation of languages of description, per se, from the specific data formats into which they have for so long been embedded. When defined with “minimal ontological commitment,” languages of description lend themselves to the sort of adaptation that is inevitably a part of any human linguistic activity. With description set profiles, the quality and consistency of data traditionally required for sharing records among libraries can be ensured by placing precise constraints on the content of data records – without compromising the interoperability of the underlying vocabularies in the wider linked data context.
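The idea of a description set profile can be sketched as record-level constraints layered over an open vocabulary (a minimal illustration; the profile format and field rules below are invented and are not DCMI's actual profile syntax):

```python
# The vocabulary (Dublin Core terms) stays open and interoperable;
# the profile constrains how a conforming record may use it.
PROFILE = {
    "dc:title":   {"required": True,  "max_occurs": 1},
    "dc:creator": {"required": True,  "max_occurs": 5},
    "dc:subject": {"required": False, "max_occurs": 10},
}

def validate(record: dict, profile: dict):
    # Check each field of the record against the profile's rules and
    # collect human-readable violations.
    errors = []
    for field, rule in profile.items():
        values = record.get(field, [])
        if rule["required"] and not values:
            errors.append(f"{field}: required but missing")
        if len(values) > rule["max_occurs"]:
            errors.append(f"{field}: too many values")
    return errors

errors = validate({"dc:title": ["Linked Data"], "dc:creator": []}, PROFILE)
# dc:creator is required but empty, so validation reports one error
```

The design point the article makes is visible here: tightening `PROFILE` changes nothing about the `dc:` terms themselves, so data quality is enforced without compromising vocabulary-level interoperability.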

Practical implications

In today's environment, library data must continue to meet high standards of consistency and quality, yet it must be possible to link or merge the data with sources that follow other standards. Placing constraints on the data created, more than on the underlying vocabularies, allows both requirements to be met.

Originality/value

This paper examines how issues around the Dublin Core exemplify issues that must be resolved to ensure library data meet quality and consistency standards while remaining interoperable with other data sources.

Details

Library Hi Tech, vol. 30 no. 1
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 15 June 2012

Shohei Ohsawa, Toshiyuki Amagasa and Hiroyuki Kitagawa

Abstract

Purpose

The purpose of this paper is to improve the performance of reasoning and querying over large‐scale Resource Description Framework (RDF) data. When processing RDF(S) data, RDFS entailment is performed, which often generates a large number of additional triples and causes poor performance. To deal with large‐scale RDF data, it is important to develop a scheme that enables large RDF data to be processed in an efficient manner.

Design/methodology/approach

The authors propose RDF packages, a space-efficient format for RDF data. In an RDF package, a set of triples of the same class, or triples sharing the same predicate, is grouped into a dedicated node named a Package. Any RDF data can be represented using RDF packages, and vice versa.
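The grouping idea can be sketched in a few lines (an illustrative in-memory reconstruction assuming predicate-based grouping; the paper defines its own package format):

```python
from collections import defaultdict

def pack_by_predicate(triples):
    # Triples sharing a predicate are folded into one "package" that
    # stores the predicate once and lists only the subject-object pairs,
    # which is where the space saving comes from.
    packages = defaultdict(list)
    for s, p, o in triples:
        packages[p].append((s, o))
    return dict(packages)

triples = [
    ("ex:alice", "rdf:type",   "foaf:Person"),
    ("ex:bob",   "rdf:type",   "foaf:Person"),
    ("ex:alice", "foaf:knows", "ex:bob"),
]
packages = pack_by_predicate(triples)
# the two rdf:type triples collapse into one package node
```

The saving grows with repetition: RDFS entailment emits many triples with the same predicate (e.g. `rdf:type`), which is exactly the case this grouping compresses well.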

Findings

It is found that using RDF packages can significantly reduce the size of RDF data, even after RDFS entailment. The authors experimentally evaluate the performance of the proposed scheme in terms of triple size, reasoning speed, and querying speed.

Research limitations/implications

The proposed scheme is useful in processing RDF(S) data, but it needs further development to deal with an ontological language such as OWL.

Originality/value

An important feature of RDF packages is that, when performing RDFS reasoning, there is no need to modify either the reasoning rules or the reasoning engine, whereas other related schemes require such modifications.

Details

International Journal of Web Information Systems, vol. 8 no. 2
Type: Research Article
ISSN: 1744-0084

Keywords

Article
Publication date: 3 November 2014

Nikolaos Konstantinou, Dimitrios-Emmanuel Spanos, Nikos Houssos and Nikolaos Mitrou

Abstract

Purpose

This paper aims to introduce a transformation engine which can be used to convert an existing institutional repository installation into a Linked Open Data repository.

Design/methodology/approach

The authors describe how the data that exist in a DSpace repository can be semantically annotated to serve as a Semantic Web (meta)data repository.
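The annotation step can be sketched as a field-by-field mapping from flat repository metadata to RDF triples (a hedged illustration; the field names, item URI and mapping table below are assumptions, not the authors' actual mappings):

```python
DC = "http://purl.org/dc/terms/"

# Hypothetical mapping from DSpace-style metadata fields to DC terms.
FIELD_MAP = {
    "title":  DC + "title",
    "author": DC + "creator",
    "date":   DC + "issued",
}

def item_to_triples(item_uri: str, metadata: dict):
    # Rewrite each mapped field value as an (item, predicate, value) triple.
    triples = []
    for field, values in metadata.items():
        predicate = FIELD_MAP.get(field)
        if predicate is None:
            continue  # fields without a mapping are skipped
        for value in values:
            triples.append((item_uri, predicate, value))
    return triples

triples = item_to_triples(
    "http://repo.example.org/item/42",
    {"title": ["Semantic Repositories"], "author": ["A. Author"]},
)
```

Because the transformation only reads the repository's existing metadata, it can run alongside normal operation, which matches the non-intrusive approach the authors describe.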

Findings

The authors present a non-intrusive, standards-compliant approach that can run alongside current practices while incorporating state-of-the-art methodologies.

Originality/value

The authors also propose a set of mappings between domain vocabularies that can be (re)used toward this goal, thus offering an approach that covers both the technical and semantic aspects of the procedure.

Details

The Electronic Library, vol. 32 no. 6
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 21 October 2019

Priyadarshini R., Latha Tamilselvan and Rajendran N.

Abstract

Purpose

The purpose of this paper is to propose a fourfold semantic similarity that achieves higher accuracy than the existing literature. Change detection in the URL and recommendation of the source documents are facilitated by means of a framework in which the fourfold semantic similarity is applied. The latest trends in technology emerge with the continuous growth of resources on the collaborative web. This interactive and collaborative web presents big challenges for recent technologies such as cloud and big data.

Design/methodology/approach

The enormous growth of resources requires that they be accessed more efficiently, which calls for clustering and classification techniques. The resources on the web are described in a more meaningful manner.

Findings

Resources can be described in the form of metadata constituted by the resource description framework (RDF). A fourfold similarity is proposed, compared with the threefold similarity proposed in the existing literature. The fourfold similarity comprises semantic annotation based on named entity recognition in the user interface; domain-based concept matching with improvised score-based classification based on ontology; a sequence-based word-sensing algorithm; and RDF-based updating of triples. These similarity measures are aggregated across the components of the system: the semantic user interface, semantic clustering, sequence-based classification and a semantic recommendation system with RDF updating for change detection.
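The aggregation step can be sketched as a weighted combination of the four component scores (a purely illustrative reconstruction; the paper does not publish this formula, and the weights and sample scores below are invented):

```python
def fourfold_similarity(annotation, concept, sequence, rdf_update,
                        weights=(0.25, 0.25, 0.25, 0.25)):
    # Combine the four component scores (each assumed to lie in [0, 1])
    # into one overall similarity; equal weights give the plain mean.
    scores = (annotation, concept, sequence, rdf_update)
    return sum(w * s for w, s in zip(weights, scores))

score = fourfold_similarity(0.8, 0.6, 0.9, 0.7)
```

In a deployed system the weights would be tuned empirically, so that components that predict relevance better contribute more to the final ranking.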

Research limitations/implications

The existing work suggests that linking resources semantically increases the retrieving and searching ability. Previous literature shows that keywords can be used to retrieve linked information from the article to determine the similarity between the documents using semantic analysis.

Practical implications

Traditional systems also suffer from scalability and efficiency issues. The proposed study designs a model that pulls and prioritizes knowledge-based content from the Hadoop distributed framework. This study also proposes a Hadoop-based pruning system and recommendation system.

Social implications

The pruning system gives an alert about the dynamic changes in the article (virtual document). The changes in the document are automatically updated in the RDF document. This helps in semantic matching and retrieval of the most relevant source with the virtual document.

Originality/value

The recommendation and detection of changes in the blogs are performed semantically using n-triples and automated data structures. The user-focused, choice-based crawling proposed in this system also assists collaborative filtering, which in turn recommends user-focused source documents. The entire clustering and retrieval system is deployed on multi-node Hadoop in the Amazon AWS environment, and graphs are plotted and analyzed.

Details

International Journal of Intelligent Unmanned Systems, vol. 7 no. 4
Type: Research Article
ISSN: 2049-6427

Keywords

Article
Publication date: 1 April 2006

Yan Han

Abstract

Purpose

To research a resource description framework (RDF) based digital library system that facilitates digital resource management and supports knowledge management for an interoperable information environment.

Design/methodology/approach

The paper first introduces some of the issues with metadata management and knowledge management and describes the need for a truly interoperable environment for transferring information across domains. A journal delivery application has been implemented as a proof‐of‐concept project to demonstrate the usefulness of RDF in digital library systems.

Findings

The RDF‐based digital library system at the University of Arizona Libraries provides an easy way to manage digital resources by integrating other applications regardless of metadata formats and web presence.

Practical implications

A journal delivery application has been running in the RDF‐based digital library system since April 2005. An electronic theses and dissertation application will be handled by the same system.

Originality/value

The paper suggests using RDF, a semantic web technology, as a new approach to facilitating knowledge management and metadata management. Using RDF technology brings new ways to manage and discover information for libraries.

Details

Library Hi Tech, vol. 24 no. 2
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 4 August 2020

Junzhi Jia

Abstract

Purpose

The purpose of this paper is to identify the concepts, component parts and relationships between vocabularies, linked data and knowledge graphs (KGs) from the perspectives of data and knowledge transitions.

Design/methodology/approach

This paper uses conceptual analysis methods. This study focuses on distinguishing concepts and analyzing composition and intercorrelations to explore data and knowledge transitions.

Findings

Vocabularies are the cornerstone for accurately building understanding of the meaning of data. Vocabularies provide for a data-sharing model and play an important role in supporting the semantic expression of linked data and defining the schema layer; they are also used for entity recognition, alignment and linkage for KGs. KGs, which consist of a schema layer and a data layer, are presented as cubes that organically combine vocabularies, linked data and big data.

Originality/value

This paper first describes the composition of vocabularies, linked data and KGs. More importantly, this paper innovatively analyzes and summarizes the interrelatedness of these factors, which comes from frequent interactions between data and knowledge. The three factors empower each other and can ultimately empower the Semantic Web.

Details

Journal of Documentation, vol. 77 no. 1
Type: Research Article
ISSN: 0022-0418

Keywords
