Search results

1 – 10 of 628
Article
Publication date: 2 March 2012

Thomas Baker

Abstract

Purpose

Library‐world “languages of description” are increasingly being expressed using the resource description framework (RDF) for compatibility with linked data approaches. This article aims to look at how issues around the Dublin Core, a small “metadata element set,” exemplify issues that must be resolved in order to ensure that library data meet traditional standards for quality and consistency while remaining broadly interoperable with other data sources in the linked data environment.

Design/methodology/approach

The article focuses on how the Dublin Core – originally seen, in traditional terms, as a simple record format – came increasingly to be seen as an RDF vocabulary for use in metadata based on a “statement” model, and how new approaches to metadata evolved to bridge the gap between these models.
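The shift from the "record" model to the "statement" model can be sketched in plain Python: a flat set of element/value pairs becomes a set of subject–predicate–object triples. The resource URI and values below are hypothetical; the namespace is the published Dublin Core terms namespace.

```python
# A Dublin Core description in the traditional "record" view: one
# resource, a flat set of element/value pairs (values are hypothetical).
record = {
    "title": "Metadata Basics",
    "creator": "Doe, Jane",
    "date": "2012",
}

# The same description in the RDF "statement" view: each element/value
# pair becomes a (subject, predicate, object) triple, with predicates
# drawn from the Dublin Core vocabulary namespace.
DC = "http://purl.org/dc/terms/"
subject = "http://example.org/doc/1"  # hypothetical resource URI

triples = [(subject, DC + element, value) for element, value in record.items()]

for s, p, o in triples:
    print(s, p, o)
```

In the statement view the same triples can be merged with triples from any other source that uses the same URIs, which is what makes the model attractive for linked data.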

Findings

The translation of library standards into RDF involves the separation of languages of description, per se, from the specific data formats into which they have for so long been embedded. When defined with “minimal ontological commitment,” languages of description lend themselves to the sort of adaptation that is inevitably a part of any human linguistic activity. With description set profiles, the quality and consistency of data traditionally required for sharing records among libraries can be ensured by placing precise constraints on the content of data records – without compromising the interoperability of the underlying vocabularies in the wider linked data context.

Practical implications

In today's environment, library data must continue to meet high standards of consistency and quality, yet it must be possible to link or merge the data with sources that follow other standards. Placing constraints on the data created, more than on the underlying vocabularies, allows both requirements to be met.
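The idea of constraining the data created rather than the underlying vocabularies can be sketched as a simple conformance check. The profile below is an illustrative toy, not an actual description set profile: the cardinality rules are invented for the example, while the property URIs are unchanged Dublin Core terms.

```python
DC = "http://purl.org/dc/terms/"

# Hypothetical application profile: cardinality constraints on records,
# expressed over unmodified Dublin Core property URIs.
profile = {
    DC + "title":   {"min": 1, "max": 1},     # exactly one title
    DC + "creator": {"min": 1, "max": None},  # at least one creator
    DC + "date":    {"min": 0, "max": 1},     # optional, at most one
}

def validate(record, profile):
    """Check a record (property URI -> list of values) against the profile."""
    errors = []
    for prop, rule in profile.items():
        count = len(record.get(prop, []))
        if count < rule["min"]:
            errors.append(f"too few values for {prop}: {count}")
        if rule["max"] is not None and count > rule["max"]:
            errors.append(f"too many values for {prop}: {count}")
    return errors

record = {
    DC + "title": ["Linked Library Data"],
    DC + "creator": ["Doe, Jane", "Roe, Richard"],
}
assert validate(record, profile) == []  # record conforms to the profile
```

The constraints live entirely in the profile; a record that fails validation is still well-formed RDF using the shared vocabulary, which is why the check does not compromise interoperability in the wider linked data context.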

Originality/value

This paper examines how issues around the Dublin Core exemplify issues that must be resolved to ensure library data meet quality and consistency standards while remaining interoperable with other data sources.

Details

Library Hi Tech, vol. 30 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 25 April 2018

Deborah Maron and Melanie Feinberg

Abstract

Purpose

The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality, and reliability. In the Omeka example, the authors illustrate a conceptual disconnect in how two metadata stakeholders – standards creators and standards users – operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles.

Design/methodology/approach

The paper uses an approach based on rhetorical criticism. The paper aims to establish whether Omeka’s given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka’s guiding ends (Omeka’s actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does).

Findings

The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka’s mission, the platform’s lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation.

Originality/value

This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka’s position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does. This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared amongst all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to “adopt a standard” is different from the way that standards users (Omeka) understand what it means to “adopt a standard.”

Article
Publication date: 20 September 2011

Shirley Lim and Chern Li Liew

Abstract

Purpose

This study aims to explore how metadata have been applied in GLAM (galleries, libraries, archives and museums) institutions in New Zealand (NZ) and to analyse their overall quality, with the interoperability of the metadata element sets especially in mind.

Design/methodology/approach

The first stage of data collection involved an analysis of the metadata records of 16 institutions from the NZ GLAM sector to examine the types and extent of metadata used. However, publicly accessible metadata records cannot reveal the full extent of metadata created, since some metadata may be kept in‐house. The analysis was therefore complemented by interviews with staff from the institutions concerned.

Findings

The study found that metadata records for digital images in the four types of institutions place different emphases on metadata functions, and that a variety of metadata are not applied on a consistent basis. The lack of technical data in metadata records means that digital visual images are not always well protected. There is a consensus among those interviewed that metadata sharing is important. However, the wide use of a proprietary system that comes with pre‐existing metadata fields could result in a lack of flexibility, and a risk that institutions adapt cataloguing practices to accommodate their collection management systems rather than the requirements for interoperability and long‐term preservation.

Originality/value

In addition to studying metadata quality in GLAM digital image repositories, the study also examined the rationale and factors affecting the current practice via interviews with representatives from the institutions concerned. This shed light on potential barriers to interoperability that warranted further examination.

Details

Aslib Proceedings, vol. 63 no. 5
Type: Research Article
ISSN: 0001-253X

Article
Publication date: 17 April 2007

Eun G. Park

Abstract

Purpose

The purpose of this research is to assess the current descriptions of architecture collections housed at the McGill University Library in preparation for building an interoperable metadata and search interface for Canadian architecture collections.

Design/methodology/approach

The names and frequencies of tables and fields of 11 architecture databases were analyzed and summarized into the most commonly used groups. In addition, typologies of buildings by purpose of construction were presented as subject headings.

Findings

Current metadata schemes are diverse and heterogeneous across the 11 databases.

Research limitations/implications

This study is at the pilot stage and is limited to Canadian architecture collections at McGill University. The observations provide insights into metadata normalization that can be used as a basis for building architecture collections or image collections.

Originality/value

This is the first metadata assessment of architecture collections undertaken for the purpose of building a single, uniform point of access.

Details

The Electronic Library, vol. 25 no. 2
Type: Research Article
ISSN: 0264-0473

Book part
Publication date: 8 January 2021

Misu Kim, Mingyu Chen and Debbie Montgomery

Abstract

The library metadata of the twenty-first century is moving toward a linked data model. BIBFRAME, the Bibliographic Framework Initiative, was launched in 2011 with the goal of making bibliographic descriptions sharable and interoperable on the web. Since its inception, BIBFRAME development has made remarkable progress, and the focus of BIBFRAME discussions has shifted from experimentation to implementation. The library community is collaborating with all stakeholders to build the infrastructure for BIBFRAME production, in order to provide an environment where BIBFRAME data can be easily created, reused, and shared. This chapter addresses the library community's BIBFRAME endeavors, with a focus on the Library of Congress, the Program for Cooperative Cataloging, Linked Data for Production Phase 2, and OCLC. It discusses BIBFRAME's major differences from the MARC standard, with the hope of helping metadata practitioners gain a general understanding of future metadata activity. While the BIBFRAME landscape is beginning to take shape and its practical implications are beginning to develop, it is anticipated that MARC records will continue to circulate for the foreseeable future. The coming multistandard metadata environment will bring new challenges to metadata practitioners, and this chapter addresses the knowledge and skills required in this transitional landscape. Finally, the chapter explores the remaining challenges in realizing a BIBFRAME production environment and asserts that BIBFRAME's ultimate goal is to deliver a value-added, next-generation web search experience to users.

Article
Publication date: 7 March 2008

Thomas Baker

Abstract

Purpose

The International Conference on Dublin Core and Metadata Applications (DC‐2008) is being held this year in Berlin. The purpose of this paper is to describe the evolution of the Dublin Core effort from an initial focus on “core” elements for resource description towards a more comprehensive framework for developing application profiles that use multiple vocabularies on the basis of the W3C resource description framework (RDF) model.

Design/methodology/approach

A Dublin Core application profile describes a metadata application, from functional requirements, via a domain model of entities to be described, to the formal specification of constraints on the basis of the DCMI Abstract Model.

Findings

Dublin Core application profiles are designed to be interoperable on the basis of W3C's RDF model and principles of Web architecture, such as consistent use of URIs, in order to facilitate the integration of metadata from multiple sources – a common requirement in today's Web.

Originality/value

The paper offers insights into the evolution of the Dublin Core.

Details

Library Hi Tech News, vol. 25 no. 2/3
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 1 September 2015

Constanze Curdt and Dirk Hoffmeister

Abstract

Purpose

Research data management (RDM) comprises all processes that ensure research data are well-organized, documented, stored, backed up, accessible, and reusable. RDM systems form the technical framework for these processes. The purpose of this paper is to present the design and implementation of an RDM system for an interdisciplinary, collaborative, long-term research project with a focus on Soil-Vegetation-Atmosphere data.

Design/methodology/approach

The presented RDM system is based on a three-tier (client-server) architecture comprising file-based data storage, database-based metadata storage, and a self-designed, user-friendly web interface. The system was designed in cooperation with the local computing centre, where it is also hosted. A self-designed, interoperable, project-specific metadata schema ensures the accurate documentation of all data.

Findings

An RDM system has to be designed and implemented according to the requirements of the project participants, and general challenges and problems of RDM should be considered. Close cooperation with the scientists thus secures the acceptance and use of the system.

Originality/value

This paper provides evidence that implementing an RDM system within infrastructure provided and maintained by a computing centre offers many advantages: the designed system is independent of the project funding, and access to and re-use of all project data are ensured. The approach has already been transferred successfully to another interdisciplinary research project. Furthermore, the designed metadata schema can be expanded to meet changing project requirements.

Details

Program: electronic library and information systems, vol. 49 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 17 April 2007

Jenn Riley and Michelle Dalmau

Abstract

Purpose

The purpose of this paper is to describe a user‐centered approach to developing a metadata model for an inter‐institutional project to describe and digitize sheet music collections.

Design/methodology/approach

Query log analysis, card sorting, and task scenario studies were used to explore users' needs for the discovery of sheet music. Findings from these studies were used to design an interoperable metadata model for sheet music that meets the needs of libraries, archives, and museums.

Findings

The user studies conducted demonstrated to the project team the need and methods for recording titles, names, dates, subjects, and cover art for sheet music described as part of the IN Harmony project. It was also learned that tying user studies directly to the design of metadata models can be an effective approach for digital library projects.

Practical implications

The metadata model developed by the IN Harmony project will be reusable for other sheet music collections at a wide variety of institutions. The user‐centered methodologies used to develop the metadata model will similarly be reusable for other digital library projects in the future.

Originality/value

The approach described in this paper brings together standard user study methodologies with metadata design in a novel way, and demonstrates the effectiveness of a methodology that can be reused to plan metadata creation in future digital projects.

Details

The Electronic Library, vol. 25 no. 2
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 6 July 2015

Chariya Nonthakarn and Vilas Wuwongse

Abstract

Purpose

The purpose of this paper is to design an application profile that will enable interoperability among research management systems, support research collaboration, and facilitate the management of research information.

Design/methodology/approach

The approach is based on the Singapore Framework for Dublin Core Application Profiles, a framework for designing metadata schemas for maximum interoperability. The application profile is built from stakeholders' requirements gathered in the research community and integrates four types of research information – information on researchers, research projects, research outputs, and research reports – to the benefit of researchers, research managers, and funding agencies.

Findings

The resultant application profile is evaluated against widely used similar metadata schemas and against the collected requirements; it is found to be more comprehensive than the existing schemas and to meet those requirements. Furthermore, the application profile is deployed in a prototype research management system and is found to work appropriately.

Practical implications

The designed application profile has implications for further development of research management systems that would lead to the enhancement of research collaboration and the efficiency of research information management.

Originality/value

The proposed application profile covers information across the entire research development lifecycle. Both the schema and the information can be represented in Resource Description Framework format for reuse and for linking with other information. This enables users to share research information and co-operate with other researchers, funding agencies, and the community at large, thereby allowing a research management system to increase collaboration and the efficiency of research management. Furthermore, researchers and research information can be linked by means of Linked Open Data technology.

Details

Program, vol. 49 no. 3
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 1 June 2003

Christopher J. Prom

Abstract

The Open Archives Initiative (OAI) Protocol for Metadata Harvesting presents one promising method by which metadata regarding archives and manuscripts can be shared and made more interoperable with metadata from other sources. Against the background of archival descriptive theory and practice, this article outlines a method for exposing deep, hierarchical metadata from encoded archival description (EAD) files and assesses some theoretical and practical issues that will need to be confronted by institutions choosing to provide or harvest OAI records generated from EAD files. Using OAI on top of existing EAD implementations would allow institutions to repurpose their data and potentially reach more users but would also accelerate the process of reengineering archival access mechanisms. Archivists and technologists using OAI with EAD must pay careful attention to the necessity of preserving archival context and provenance.
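OAI-PMH harvesting is carried out through a small set of HTTP requests. A minimal sketch of building a ListRecords request for Dublin Core records derived from EAD files might look like the following; the repository base URL and set name are hypothetical, while the verb and the oai_dc metadata prefix come from the protocol itself.

```python
from urllib.parse import urlencode

# Hypothetical repository endpoint: every OAI-PMH repository exposes
# its records at a single base URL plus query parameters.
base_url = "http://archives.example.edu/oai"

# ListRecords is one of the six OAI-PMH verbs; oai_dc (simple Dublin
# Core) is the metadata format every repository must support. The set
# name "ead" is an invented example for grouping EAD-derived records.
params = {
    "verb": "ListRecords",
    "metadataPrefix": "oai_dc",
    "set": "ead",
}

request_url = base_url + "?" + urlencode(params)
print(request_url)
```

A harvester issues this request over plain HTTP and receives an XML response containing the records, which is what lets institutions expose EAD-derived metadata without changing their underlying archival systems.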

Details

Library Hi Tech, vol. 21 no. 2
Type: Research Article
ISSN: 0737-8831
