Search results

1 – 10 of 239
Article
Publication date: 8 July 2022

Vania Vidal, Valéria Magalhães Pequeno, Narciso Moura Arruda Júnior and Marco Antonio Casanova

Abstract

Purpose

Enterprise knowledge graphs (EKG) in resource description framework (RDF) consolidate and semantically integrate heterogeneous data sources into a comprehensive dataspace. However, to make an external relational data source accessible through an EKG, an RDF view of the underlying relational database, called an RDB2RDF view, must be created. The RDB2RDF view should be materialized in situations where live access to the data source is not possible, or the data source imposes restrictions on the type of query forms and the number of results. In this case, a mechanism for maintaining the materialized view data up-to-date is also required. The purpose of this paper is to address the problem of the efficient maintenance of externally materialized RDB2RDF views.

Design/methodology/approach

This paper proposes a formal framework for the incremental maintenance of externally materialized RDB2RDF views, in which the server computes and publishes changesets, indicating the difference between the two states of the view. The EKG system can then download the changesets and synchronize the externally materialized view. The changesets are computed based solely on the update and the source database state and require no access to the content of the view.
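The changeset idea described above can be illustrated with a minimal sketch. All names below (the mapping function, IRIs, and table layout) are invented for illustration and are not the paper's formal framework; the sketch only shows the key property that the delta is computed from the update and the source state, never from the view itself.

```python
# Sketch of incremental changeset computation for an RDB2RDF view.
# The mapping is "object-preserving": each row maps to one subject IRI
# derived from its primary key, so deltas can be computed row-locally.

def row_to_triples(table, row):
    """Map a relational row to a set of RDF triples (illustrative mapping)."""
    subject = f"http://example.org/{table}/{row['id']}"
    return {(subject, f"http://example.org/vocab#{col}", str(val))
            for col, val in row.items() if col != "id"}

def changeset(table, old_row, new_row):
    """Triples to delete/insert for one updated row, computed solely from
    the update and the source state -- no access to the materialized view."""
    before = row_to_triples(table, old_row) if old_row else set()
    after = row_to_triples(table, new_row) if new_row else set()
    return {"delete": before - after, "insert": after - before}

cs = changeset("Employee",
               {"id": 7, "name": "Ana", "dept": "Sales"},
               {"id": 7, "name": "Ana", "dept": "R&D"})
```

The consumer (here, the EKG system) would apply the `delete` set before the `insert` set to move the external view from the old state to the new one.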

Findings

The central result of this paper shows that changesets computed according to the formal framework correctly maintain the externally materialized RDB2RDF view. The experiments indicate that the proposed strategy supports live synchronization of large RDB2RDF views and that the time taken to compute the changesets with the proposed approach was almost three orders of magnitude smaller than that of partial rematerialization and three orders of magnitude smaller than that of full rematerialization.

Originality/value

The main idea that differentiates the proposed approach from previous work on incremental view maintenance is to explore the object-preserving property of typical RDB2RDF views so that the solution can deal with views with duplicates. The algorithms for the incremental maintenance of relational views with duplicates published in the literature require querying the materialized view data to precisely compute the changesets. By contrast, the approach proposed in this paper requires no access to view data. This is important when the view is maintained externally, because accessing a remote data source may be too slow.

Details

International Journal of Web Information Systems, vol. 18 no. 5/6
Type: Research Article
ISSN: 1744-0084

Article
Publication date: 2 March 2012

Thomas Baker

Abstract

Purpose

Library‐world “languages of description” are increasingly being expressed using the resource description framework (RDF) for compatibility with linked data approaches. This article aims to look at how issues around the Dublin Core, a small “metadata element set,” exemplify issues that must be resolved in order to ensure that library data meet traditional standards for quality and consistency while remaining broadly interoperable with other data sources in the linked data environment.

Design/methodology/approach

The article focuses on how the Dublin Core – originally seen, in traditional terms, as a simple record format – came increasingly to be seen as an RDF vocabulary for use in metadata based on a “statement” model, and how new approaches to metadata evolved to bridge the gap between these models.
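The shift from a "record" view to a "statement" view of Dublin Core can be sketched in a few lines. The `dcterms` property IRIs below are the real DCMI terms namespace; the resource IRI and record contents are illustrative only.

```python
# A Dublin Core description as a flat "record" (the traditional view) ...
record = {
    "title": "Linked Data: Evolving the Web into a Global Data Space",
    "creator": "Tom Heath",
    "date": "2011",
}

# ... and the same description under the RDF "statement" model: each
# field becomes an independent subject-predicate-object statement
# about an identified resource.
DCTERMS = "http://purl.org/dc/terms/"
resource = "http://example.org/book/1"   # illustrative IRI

statements = [(resource, DCTERMS + prop, value)
              for prop, value in record.items()]
```

Under the statement model, each triple stands on its own and can be merged with statements from other sources about the same resource, which is what makes the vocabulary interoperable in the linked data environment.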

Findings

The translation of library standards into RDF involves the separation of languages of description, per se, from the specific data formats into which they have for so long been embedded. When defined with “minimal ontological commitment,” languages of description lend themselves to the sort of adaptation that is inevitably a part of any human linguistic activity. With description set profiles, the quality and consistency of data traditionally required for sharing records among libraries can be ensured by placing precise constraints on the content of data records – without compromising the interoperability of the underlying vocabularies in the wider linked data context.
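The "constraints on records, not on vocabularies" idea behind description set profiles can be illustrated with a toy cardinality check. The profile structure below is invented for illustration; only the property IRIs are real DCMI terms.

```python
# Toy "description set profile": the quality constraints live in the
# profile, while the property IRIs (the vocabulary) stay globally shared.
DCTERMS = "http://purl.org/dc/terms/"

profile = {
    DCTERMS + "title":   {"min": 1, "max": 1},    # exactly one title
    DCTERMS + "creator": {"min": 1, "max": None}  # at least one creator
}

def conforms(description, profile):
    """Check the property cardinalities of one description against a profile."""
    for prop, card in profile.items():
        n = len(description.get(prop, []))
        if n < card["min"] or (card["max"] is not None and n > card["max"]):
            return False
    return True

good = {DCTERMS + "title": ["Library Hi Tech"], DCTERMS + "creator": ["T. Baker"]}
bad = {DCTERMS + "creator": ["T. Baker"]}   # missing the required title
```

A record failing the profile is rejected locally, yet its triples, were they published, would still be valid uses of the underlying vocabulary.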

Practical implications

In today's environment, library data must continue to meet high standards of consistency and quality, yet it must be possible to link or merge the data with sources that follow other standards. Placing constraints on the data created, more than on the underlying vocabularies, allows both requirements to be met.

Originality/value

This paper examines how issues around the Dublin Core exemplify issues that must be resolved to ensure library data meet quality and consistency standards while remaining interoperable with other data sources.

Details

Library Hi Tech, vol. 30 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 8 January 2018

Miel Vander Sande, Ruben Verborgh, Patrick Hochstenbach and Herbert Van de Sompel

Abstract

Purpose

The purpose of this paper is to detail a low-cost, low-maintenance publishing strategy aimed at unlocking the value of Linked Data collections held by libraries, archives and museums (LAMs).

Design/methodology/approach

The shortcomings of commonly used Linked Data publishing approaches are identified, and the current lack of substantial collections of Linked Data exposed by LAMs is considered. To improve on the discussed status quo, a novel approach for publishing Linked Data is proposed and demonstrated by means of an archive of DBpedia versions, which is queried in combination with other Linked Data sources.
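The low-cost interface at the heart of this approach restricts servers to answering simple triple-pattern lookups with paging, leaving joins to the client. The in-memory sketch below is illustrative only (invented data and function names), not the authors' actual implementation.

```python
# Minimal "triple pattern" interface: the server answers only single
# (s, p, o) pattern lookups, paged, keeping per-request cost low and
# predictable; clients combine fragments to evaluate richer queries.
TRIPLES = [
    ("ex:dbpedia2015", "ex:versionOf", "ex:dbpedia"),
    ("ex:dbpedia2016", "ex:versionOf", "ex:dbpedia"),
    ("ex:dbpedia2016", "ex:label", "DBpedia 2016 snapshot"),
]

def fragment(s=None, p=None, o=None, page=0, page_size=2):
    """Return one page of triples matching the pattern (None = wildcard)."""
    matches = [t for t in TRIPLES
               if (s is None or t[0] == s)
               and (p is None or t[1] == p)
               and (o is None or t[2] == o)]
    start = page * page_size
    return {"data": matches[start:start + page_size],
            "total": len(matches)}

versions = fragment(p="ex:versionOf")
```

Because every request is a cheap filtered scan (in practice, a lookup against a compressed index), the server's load stays tenable even under federated querying.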

Findings

The authors show that the approach makes publishing Linked Data archives easy and affordable, and supports distributed querying without causing untenable load on the Linked Data sources.

Research limitations/implications

The proposed approach significantly lowers the barrier for publishing, maintaining, and making Linked Data collections queryable. As such, it offers the potential to substantially grow the distributed network of queryable Linked Data sources. Because the approach supports querying without causing unacceptable load on the sources, the queryable interfaces are expected to be more reliable, allowing them to become integral building blocks of robust applications that leverage distributed Linked Data sources.

Originality/value

The novel publishing strategy significantly lowers the technical and financial barriers that LAMs face when attempting to publish Linked Data collections. The proposed approach yields Linked Data sources that can reliably be queried, paving the way for applications that leverage distributed Linked Data sources through federated querying.

Details

Journal of Documentation, vol. 74 no. 1
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 8 January 2018

Chunqiu Li and Shigeo Sugimoto

Abstract

Purpose

Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.

Design/methodology/approach

The DSP-PROV model is developed through applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. Metadata Application Profile of Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparison between formal provenance description in DSP-PROV and semi-formal change log description in English.
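Tracking schema revisions with PROV can be sketched as follows. The `prov:wasGeneratedBy` and `prov:wasDerivedFrom` relation IRIs are real W3C PROV properties; the version identifiers and log structure are simplified illustrations, not the DSP-PROV model itself.

```python
# A schema version history recorded as PROV-style statements, in the
# spirit of DSP-PROV (structure simplified for illustration).
PROV = "http://www.w3.org/ns/prov#"

log = []

def record_revision(new_version, old_version, activity):
    """Record that new_version was generated by a revision activity
    and derived from old_version, using W3C PROV relation IRIs."""
    log.append((new_version, PROV + "wasGeneratedBy", activity))
    log.append((new_version, PROV + "wasDerivedFrom", old_version))

record_revision("ex:dpla-map-v4", "ex:dpla-map-v3", "ex:revision-2015")
```

Unlike a free-text change log in English, such statements are machine-processable, so the derivation chain of a schema can be queried and verified automatically.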

Findings

Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English for keeping metadata schemas consistent over time.

Research limitations/implications

The DSP-PROV model is applicable to keeping track of the structural changes of metadata schemas over time. Provenance description of other features of metadata schemas, such as vocabulary and encoding syntax, is not covered.

Originality/value

This study proposes a simple model for the provenance description of structural features of metadata schemas, based on a few standards widely accepted on the Web, and shows the advantage of the proposed model over conventional semi-formal provenance description.

Article
Publication date: 1 May 2006

Rajugan Rajagopalapillai, Elizabeth Chang, Tharam S. Dillon and Ling Feng

Abstract

In data engineering, view formalisms are used to provide flexibility to users and user applications by allowing them to extract and elaborate data from the stored data sources. Meanwhile, since its introduction, EXtensible Markup Language (XML) has fast emerged as the dominant standard for storing, describing, and interchanging data among various web and heterogeneous data sources. In combination with XML Schema, XML provides rich facilities for defining and constraining user‐defined data semantics and properties, a feature that is unique to XML. In this context, it is interesting to investigate traditional database features, such as view models and view design techniques, for XML. However, traditional view formalisms are strongly coupled to the data language and its syntax, so it proves to be a difficult task to support views for semi‐structured data models. Therefore, in this paper we propose a Layered View Model (LVM) for XML with conceptual and schemata extensions. Our work here is three‐fold: first, we propose an approach to separate the implementation and conceptual aspects of the views, which provides a clear separation of concerns and thus allows the analysis and design of views to be separated from their implementation. Secondly, we define representations to express and construct these views at the conceptual level. Thirdly, we define a view transformation methodology for XML views in the LVM, which carries out automated transformation to a view schema and a view query expression in an appropriate query language. Also, to validate and apply the LVM concepts, methods and transformations developed, we propose a view‐driven application development framework with the flexibility to develop web and database applications for XML at varying levels of abstraction.
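The separation of a view's definition from its implementation can be illustrated with Python's standard library: a view over an XML source is declared as an XPath expression and materialized on demand. This is a simplified illustration of the general idea, not the paper's Layered View Model; the document and function names are invented.

```python
import xml.etree.ElementTree as ET

# Illustrative XML source.
source = ET.fromstring("""
<catalog>
  <book status="in-print"><title>XML Views</title></book>
  <book status="out-of-print"><title>Old Book</title></book>
</catalog>""")

def materialize_view(doc, xpath, root_tag="view"):
    """Evaluate a declaratively defined view (an XPath expression)
    and wrap the matching nodes in a new document root."""
    view = ET.Element(root_tag)
    view.extend(doc.findall(xpath))
    return view

# The view definition is just data; the implementation is generic.
in_print = materialize_view(source, ".//book[@status='in-print']")
```

Keeping the view definition declarative means the same conceptual view could be re-targeted to another query language (e.g. XQuery) without touching the application code that consumes it, which is the motivation for separating the two levels.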

Details

International Journal of Web Information Systems, vol. 2 no. 2
Type: Research Article
ISSN: 1744-0084

Book part
Publication date: 8 January 2021

Misu Kim, Mingyu Chen and Debbie Montgomery

Abstract

The library metadata of the twenty-first century is moving toward a linked data model. BIBFRAME, which stands for Bibliographic Framework Initiative, was launched in 2011 with the goal of making bibliographic descriptions sharable and interoperable on the web. Since its inception, BIBFRAME development has made remarkable progress. The focus of BIBFRAME discussions has now shifted from experimentation to implementation. The library community is collaborating with all stakeholders to build the infrastructure for BIBFRAME production in order to provide an environment where BIBFRAME data can be easily created, reused, and shared. This chapter addresses the library community's BIBFRAME endeavors, with a focus on the Library of Congress, the Program for Cooperative Cataloging, Linked Data for Production Phase 2, and OCLC. This chapter discusses BIBFRAME's major differences from the MARC standard with the hope of helping metadata practitioners gain a general understanding of future metadata activity. While the BIBFRAME landscape is beginning to take shape and its practical implications are beginning to develop, it is anticipated that MARC records will continue to be circulated for the foreseeable future. Upcoming multistandard metadata environments will bring new challenges to metadata practitioners, and this chapter addresses the knowledge and skills required for this transitional and multistandard metadata landscape. Finally, this chapter explores BIBFRAME's remaining challenges in realizing the BIBFRAME production environment and asserts that BIBFRAME's ultimate goal is to deliver a value-added next-web search experience to our users.

Article
Publication date: 1 June 2006

Ching‐Jen Huang, Amy J.C. Trappey and Yin‐Ho Yao

Abstract

Purpose

The purpose of this research is to develop a prototype of agent‐based intelligent workflow system for product design collaboration in a distributed network environment.

Design/methodology/approach

This research separates the collaborative workflow enactment mechanisms from the collaborative workflow building tools for flexible workflow management. Applying the XML/RDF (resource description framework) ontology schema, workflow logic is described in a standard representation. Lastly, a case study in collaborative system‐on‐chip (SoC) design is depicted to demonstrate the agent‐based workflow system for the design collaboration on the web.

Findings

Agent technology can overcome the difficulty of interoperability in a cross‐platform, distributed environment with a standard RDF data schema. Control and update of workflow functions become flexible and versatile by simply modifying agent reasoning and behaviors.

Research limitations/implications

When business partners want to collaborate, how to integrate agents in different workflows becomes a critical issue.

Practical implications

Agent technology can facilitate design cooperation and teamwork communication in a collaborative, transparent product development environment.

Originality/value

This research establishes generalized flow logic RDF models and an agent‐based intelligent workflow management system, called AWfMS, based on the RDF schema of the workflow definition. AWfMS minimizes barriers in the distributed design process and hence increases design cooperation among partners.

Details

Industrial Management & Data Systems, vol. 106 no. 5
Type: Research Article
ISSN: 0263-5577

Article
Publication date: 16 November 2012

Getaneh Alemu, Brett Stevens, Penny Ross and Jane Chandler

Abstract

Purpose

The purpose of this paper is to provide recommendations for making a conceptual shift from current document‐centric to data‐centric metadata. The importance of adjusting current library models such as Resource Description and Access (RDA) and Functional Requirements for Bibliographic Records (FRBR) to models based on Linked Data principles is discussed. In relation to technical formats, the paper suggests the need to leapfrog from machine readable cataloguing (MARC) to Resource Description Framework (RDF), without disrupting current library metadata operations.
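The suggested leapfrog from MARC to RDF amounts to remapping field-based records onto statements about identified resources. In the sketch below, the MARC tag meanings (245 = title statement, 100 = main entry personal name) are standard, but the target property IRIs and the flat record structure are simplified illustrations, not a complete crosswalk.

```python
# Illustrative mapping from flat MARC-like fields to RDF statements.
DCTERMS = "http://purl.org/dc/terms/"
FIELD_MAP = {"245": DCTERMS + "title", "100": DCTERMS + "creator"}

def marc_to_triples(record_iri, fields):
    """fields: list of (tag, value) pairs from a flat MARC-like record.
    Unmapped tags are simply skipped in this sketch."""
    return [(record_iri, FIELD_MAP[tag], value)
            for tag, value in fields if tag in FIELD_MAP]

triples = marc_to_triples("http://example.org/rec/1",
                          [("245", "Linked Data for Libraries"),
                           ("100", "Alemu, Getaneh"),
                           ("300", "xii, 210 p.")])   # 300 left unmapped
```

Running such a mapping alongside existing MARC workflows is one way the transition can proceed without disrupting current library metadata operations.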

Design/methodology/approach

This paper identified and reviewed relevant works on overarching topics that include standards‐based metadata, Web 2.0 and Linked Data. The review of these works is contextualised to inform the recommendations identified in this paper. Articles were retrieved from databases such as Emerald and D‐Lib Magazine. Books, electronic articles and relevant blog posts were also used to support the arguments put forward in this paper.

Findings

Contemporary library standards and models carried forward some of the constraints from the traditional card catalogue system. The resultant metadata are mainly attuned to human consumption rather than machine processing. In view of current user needs and technological development such as the interest in Linked Data, it is found important that current metadata models such as FRBR and RDA are re‐conceptualised.

Practical implications

This paper discusses the implications of re‐conceptualising current metadata models in light of Linked Data principles, with emphasis on metadata sharing, facilitation of serendipity, identification of Zeitgeist and emergent metadata, provision of faceted navigation, and enriching metadata with links.

Originality/value

Most of the literature on Linked Data for libraries focuses on answering the “how to” questions of using RDF/XML and SPARQL technologies; this paper, however, focuses mainly on answering the “why” questions of Linked Data, thus providing an underlying rationale for using it. The discussion of mixed‐metadata approaches, serendipity, Zeitgeist and emergent metadata is considered to provide an important rationale for the role of Linked Data in libraries.

Article
Publication date: 1 October 2005

Anton Naumenko, Sergiy Nikitin, Vagan Terziyan and Andriy Zharko

Abstract

Purpose

To identify cases in the design of ICT platforms for industrial alliances where the use of ontology‐driven architectures based on Semantic Web standards is more advantageous than the application of conventional modeling together with XML standards.

Design/methodology/approach

A comparative analysis of two recent and prominent use cases (NASA and the Nordic Process Industry Data Exchange Alliance), both concerned with developing an environment for the integration and collaboration of industrial partners, has been used as the basis for the research results. Additionally, the dynamics of changes in a domain data model and their consequences have been analyzed in a couple of typical use cases.

Findings

Ontology‐driven architectures for collaboration and integration ICT platforms have been recognized as more appropriate for the technical support of industrial alliances built around supply chains with long life cycles.

Research limitations/implications

More typical cases related to changes in domain data/knowledge models, and to the necessity of their integration, have to be considered and analyzed in search of advantages of ontological modeling over conventional modeling approaches. Ways of gradually moving from conventional domain models to ontological ones in ICT systems have to be studied. The significance of existing XML‐based tools and the popularity of XML have to be assessed with respect to the wide adoption of Semantic Web principles.

Practical implications

The modeling approach that will be used as the core for building collaboration and integration ICT platforms has to be carefully selected. An incorrect choice (e.g. UML together with XML) can cause consequences that will be hard to remedy. The paper is anticipated to facilitate faster adoption of the Semantic Web approach by industry.

Originality/value

A serious revision of existing and emerging domain modeling approaches has been undertaken, and new arguments in favor of ontological modeling have been discovered. The paper is intended for serious consideration by emerging industrial alliances with regard to their choice of a core technology that will technically enable integration and collaboration between partners.

Details

The Learning Organization, vol. 12 no. 5
Type: Research Article
ISSN: 0969-6474

Article
Publication date: 3 May 2011

Gordon Dunsire and Mirna Willer

Abstract

Purpose

There has been a significant increase in activity over the past few years to integrate library metadata with the Semantic Web. While much of this has involved the development of controlled vocabularies as “linked data”, there have recently been concerted attempts to represent standard library models for bibliographic metadata in forms that are compatible with Semantic Web technologies. This paper aims to give an overview of these initiatives, describing relationships between them in the context of the Semantic Web.

Design/methodology/approach

The paper focusses on standards created and maintained by the International Federation of Library Associations and Institutions, including Functional Requirements for Bibliographic Records, Functional Requirements for Authority Data, and International Standard Bibliographic Description. It also covers related standards and models such as RDA – Resource Description and Access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model, and the technical infrastructure for supporting relationships between them, including the RDA/ONIX framework for resource categorization, and Vocabulary Mapping Framework.

Findings

The paper discusses the importance of these developments for releasing the rich metadata held by libraries as linked data, addressing semantic and statistical inferencing, integration with user‐ and machine‐generated metadata, and authenticity, veracity and trust. It also discusses the representation of controlled vocabularies, including subject classifications and headings, name authorities, and terminologies for descriptive content, in a multilingual environment.

Practical implications

Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.

Originality/value

The paper provides a general review of recent activity for those interested in the development of library standards, the Semantic Web, and universal bibliographic control.

Details

Library Hi Tech News, vol. 28 no. 3
Type: Research Article
ISSN: 0741-9058
