Search results

1 – 10 of over 1000
Article
Publication date: 1 June 2004

Martin Kurth, David Ruddy and Nathan Rupp

Abstract

Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non‐MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non‐MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
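The MARC repurposing the authors describe can be pictured with a short script. The sketch below is not from the article: it maps a few MARC fields to unqualified Dublin Core using the pymarc library, with common crosswalk conventions (245 to title, 100 to creator) and a hypothetical input file.

```python
# A minimal sketch of MARC-to-Dublin-Core repurposing, assuming the
# pymarc library is installed; "catalog.mrc" is a hypothetical file
# of MARC 21 records.
from pymarc import MARCReader

# Common (but by no means the only) MARC-to-DC crosswalk choices.
MARC_TO_DC = {
    "245": "title",      # title statement
    "100": "creator",    # main entry, personal name
    "260": "publisher",  # publication statement
    "650": "subject",    # topical subject headings (repeatable)
}

def marc_record_to_dc(record):
    """Collect mapped MARC subfield values into a Dublin Core dict."""
    dc = {}
    for tag, element in MARC_TO_DC.items():
        for field in record.get_fields(tag):
            # Join a field's $a and $b values into one DC value.
            value = " ".join(field.get_subfields("a", "b")).strip()
            if value:
                dc.setdefault(element, []).append(value)
    return dc

with open("catalog.mrc", "rb") as handle:
    for record in MARCReader(handle):
        print(marc_record_to_dc(record))
```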

Details

Library Hi Tech, vol. 22 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 26 August 2014

Nikos Houssos, Kostas Stamatis, Panagiotis Koutsourakis, Sarantos Kapidakis, Emmanouel Garoufallou and Alexandros Koulouris

Abstract

Purpose

Managers of repositories and digital collections face the challenge of exposing their data via the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) to multiple aggregators and conforming to their possibly differing requirements, for example on output metadata schemas and selective harvesting. This paper aims to propose a toolset that enables individual digital collection owners to satisfy these requirements even when their IT and software infrastructure is limited and does not support them inherently.

Design/methodology/approach

The authors developed a software server that can wrap existing systems, or even metadata records held in plain files, as OAI-PMH sources. They decomposed the functionality of an OAI-PMH data provider into a flow of discrete steps and used a software library to modularise the implementation of those steps, so that the whole process can easily be customised to the needs of each pair of OAI-PMH data provider and service provider. The server includes a mechanism for implementing schema mappings through an XML specification that can be written by non-IT personnel, for example metadata experts. It has been applied in various real-life use cases, in particular for providing content to Europeana.
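To picture that modularisation, here is a toy pipeline of discrete, replaceable provider steps. It is an illustrative reconstruction, not the authors' toolkit: the step names, the record structure, and the dict standing in for their XML mapping specification are all invented for the example.

```python
# An illustrative reconstruction (not the authors' toolkit) of an
# OAI-PMH data provider decomposed into discrete, replaceable steps,
# so each data-provider/service-provider pair can swap in its own
# implementations.
from typing import Callable, Iterable

Step = Callable[[Iterable[dict]], Iterable[dict]]

def select_set(spec: str) -> Step:
    """Selective harvesting: keep only records in the requested set."""
    def step(records):
        return (r for r in records if spec in r.get("sets", ()))
    return step

def map_schema(mapping: dict) -> Step:
    """Rename input elements to the aggregator's output schema. In the
    paper this mapping is an XML specification that a metadata expert
    can edit; a plain dict plays that role here."""
    def step(records):
        for r in records:
            yield {mapping.get(k, k): v for k, v in r.items()}
    return step

def run_pipeline(records, steps):
    for step in steps:
        records = step(records)
    return list(records)

# One hypothetical provider/aggregator configuration:
steps = [select_set("europeana"), map_schema({"title": "dc:title"})]
print(run_pipeline([{"title": "Example", "sets": ["europeana"]}], steps))
```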

Findings

Real-life use cases confirmed that it is feasible in practice to expose the metadata records of digital collections via OAI-PMH even when the data sources do not support the required protocols and standards. Even advanced OAI-PMH features such as selective harvesting can be supported. In many practical cases, mappings between input and output schemas can be implemented entirely, or to a large extent, as XML specifications written by metadata experts rather than software developers.

Practical implications

Exposing data via OAI-PMH to aggregators such as Europeana becomes feasible, or at least considerably easier, for digital collection owners, even when their software infrastructure does not inherently support the required protocols and standards.

Originality/value

The approach is original and applicable in practice to diverse technology environments, effectively addressing the heterogeneity of the software and systems used to implement digital repositories and collections worldwide.

Article
Publication date: 20 March 2017

Timothy W. Cole, Myung-Ja K. Han, Maria Janina Sarol, Monika Biel and David Maus

Abstract

Purpose

Early Modern emblem books are primary sources for scholars studying the European Renaissance. Linked Open Data (LOD) is an approach for organizing and modeling information in a data-centric manner compatible with the emerging Semantic Web. The purpose of this paper is to examine ways in which LOD methods can be applied to facilitate emblem resource discovery, better reveal the structure and connectedness of digitized emblem resources, and enhance scholar interactions with digitized emblem resources.

Design/methodology/approach

This research encompasses an analysis of the existing XML-based Spine (emblem-specific) metadata schema; the design of a new, domain-specific, Resource Description Framework compatible ontology; the mapping and transformation of metadata from Spine to both the new ontology and (separately) to the pre-existing Schema.org ontology; and the (experimental) modification of the Emblematica Online portal as a proof of concept to illustrate enhancements supported by LOD.
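As a rough illustration of the Schema.org side of such a mapping (not the project's actual Spine ontology or transformation), the sketch below uses the rdflib library to describe a digitized emblem book; the example.org URIs are hypothetical.

```python
# A minimal sketch of emblem-book metadata as Linked Open Data, using
# rdflib and the Schema.org vocabulary. The URIs are hypothetical and
# this is not the project's actual Spine-to-Schema.org mapping.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SCHEMA = Namespace("https://schema.org/")
g = Graph()
g.bind("schema", SCHEMA)

book = URIRef("https://example.org/emblem-books/alciato-1531")
g.add((book, RDF.type, SCHEMA.Book))
g.add((book, SCHEMA.name, Literal("Emblematum liber")))
g.add((book, SCHEMA.author, Literal("Andrea Alciato")))
g.add((book, SCHEMA.datePublished, Literal("1531")))
# A hasPart link is one way to expose the internal structure
# (individual emblems) that the paper argues LOD reveals well.
g.add((book, SCHEMA.hasPart, URIRef("https://example.org/emblems/A31a001")))

print(g.serialize(format="turtle"))
```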

Findings

LOD is viable as an approach for facilitating discovery and enhancing the value of digitized emblem books to scholars; however, metadata must first be enriched with additional uniform resource identifiers, and the workflow upgrades required to normalize and transform existing emblem metadata are substantial and have yet to be fully worked out.

Practical implications

The research described demonstrates the feasibility of transforming existing, special collections metadata to LOD. Although considerable work and further study will be required, preliminary findings suggest potential benefits of LOD for both users and libraries.

Originality/value

This research is unique in the context of emblem studies and adds to the emerging body of work examining the application of LOD best practices to library special collections.

Details

Library Hi Tech, vol. 35 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 27 August 2014

Paolo Manghi, Michele Artini, Claudio Atzori, Alessia Bardi, Andrea Mannocci, Sandro La Bruzzo, Leonardo Candela, Donatella Castelli and Pasquale Pagano

Abstract

Purpose

The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, can configure them to handle data according to given data models, and can construct autonomic workflows to obtain personalized aggregative infrastructures.
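The idea of configuring services into workflows can be made concrete with a toy sketch. This imitates the concept only and is not D-NET's actual API; all service names and parameters are invented.

```python
# A toy sketch of the aggregative-infrastructure idea: named data
# management services wired together by a declarative workflow
# definition. This imitates the concept, not D-NET's actual API.

def collect(records, source):
    print(f"collecting from {source}")
    return records

def transform(records, target_model):
    return [{**r, "model": target_model} for r in records]

def index(records, store):
    print(f"indexing {len(records)} records into {store}")
    return records

SERVICES = {"collect": collect, "transform": transform, "index": index}

# A designer customizes the infrastructure by editing this definition,
# not the service code; every name and parameter here is hypothetical.
WORKFLOW = [
    ("collect", {"source": "oai:example.org"}),
    ("transform", {"target_model": "common-data-model"}),
    ("index", {"store": "search-index"}),
]

records = [{"id": "rec-1"}]
for service_name, params in WORKFLOW:
    records = SERVICES[service_name](records, **params)
```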

Design/methodology/approach

The paper provides a definition of aggregative infrastructures, sketching their architecture and components as inspired by real-world examples. It then describes the limits of current solutions, whose shortcomings lie in the realization and maintenance costs of such complex software. Finally, it proposes D-NET as an optimal solution for designers and developers who want to realize aggregative infrastructures. The D-NET architecture and services are presented, drawing a parallel with those of aggregative infrastructures, and real-world uses of D-NET are presented to illustrate the claim.

Findings

The D-NET software toolkit is a general-purpose service-oriented framework in which designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures in diverse application domains, and other organizations are inquiring about or experimenting with its adoption. Its customizability and extensibility make D-NET a suitable candidate for creating aggregative infrastructures that mediate between different scientific domains and therefore support multi-disciplinary research.

Originality/value

D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature but focus on specific use cases and therefore suffer from limited re-use in different contexts. Due to its maturity, D-NET can also be used by third-party organizations not necessarily involved in the software's design and maintenance.

Details

Program, vol. 48 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 8 August 2016

Myung-Ja K. Han

Abstract

Purpose

Academic and research libraries have experienced many changes over the last two decades. Users have become technology savvy and want to discover and use library collections via web portals instead of coming to library gateways. To meet these rapidly changing user needs, academic and research libraries are busy identifying new service models and areas of improvement. Cataloging and metadata services units in academic and research libraries are no exception. As discovery of library collections largely depends on the quality and design of metadata, cataloging and metadata services units must identify new areas of work and establish new roles by building sustainable workflows that utilize available metadata technologies. The paper aims to discuss these issues.

Design/methodology/approach

This paper discusses a list of challenges that academic libraries' cataloging and metadata services units have encountered over the years, and ways to build sustainable workflows, including collaborations between units within and outside the institution and in the cloud; tools, technologies, metadata standards and semantic web technologies; and, most importantly, exploration and research. The paper also includes examples and use cases of both traditional metadata workflows and experimentation with linked open data that were built upon metadata technologies and will ultimately support emerging user needs.
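As one small, concrete example of the linked-open-data experimentation described here (not taken from the paper), the sketch below reconciles a name heading against the Library of Congress id.loc.gov suggest service; the endpoint and its response shape are assumptions based on the service's documented OpenSearch behaviour and may change.

```python
# A small example of linked-data work in a cataloging workflow:
# reconciling a heading against the Library of Congress id.loc.gov
# "suggest" service. The endpoint and its JSON shape are assumptions
# and may change; only the standard library is used.
import json
import urllib.parse
import urllib.request

def suggest_lcnaf(heading: str):
    """Return candidate authorized headings and their URIs."""
    url = ("https://id.loc.gov/authorities/names/suggest/?q="
           + urllib.parse.quote(heading))
    with urllib.request.urlopen(url) as response:
        # OpenSearch Suggestions format: [query, labels, notes, uris]
        _query, labels, _notes, uris = json.load(response)
        return list(zip(labels, uris))

for label, uri in suggest_lcnaf("Twain, Mark"):
    print(label, uri)
```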

Findings

To develop sustainable and scalable workflows that meet users' changing needs, cataloging and metadata professionals must not only work with new information technologies but also be equipped with soft skills and in-depth professional knowledge.

Originality/value

This paper discusses how cataloging and metadata services units have been exploiting information technologies and creating new scalable workflows to adapt to these changes, and what is required to establish and maintain these workflows.

Book part
Publication date: 8 January 2021

Misu Kim, Mingyu Chen and Debbie Montgomery

Abstract

The library metadata of the twenty-first century is moving toward a linked data model. BIBFRAME, which stands for Bibliographic Framework Initiative, was launched in 2011 with the goal of making bibliographic descriptions sharable and interoperable on the web. Since its inception, BIBFRAME development has made remarkable progress, and the focus of BIBFRAME discussions has shifted from experimentation to implementation. The library community is collaborating with all stakeholders to build the infrastructure for BIBFRAME production, in order to provide an environment where BIBFRAME data can be easily created, reused, and shared. This chapter addresses the library community's BIBFRAME endeavors, focusing on the Library of Congress, the Program for Cooperative Cataloging, Linked Data for Production Phase 2, and OCLC. It discusses BIBFRAME's major differences from the MARC standard, with the hope of helping metadata practitioners gain a general understanding of future metadata activity. While the BIBFRAME landscape is beginning to take shape and its practical implications are beginning to develop, it is anticipated that MARC records will continue to circulate for the foreseeable future. The coming multistandard metadata environment will bring new challenges to metadata practitioners, and this chapter addresses the knowledge and skills required for this transitional, multistandard metadata landscape. Finally, the chapter explores the challenges that remain in realizing the BIBFRAME production environment and asserts that BIBFRAME's ultimate goal is to deliver a value-added, next-generation web search experience to users.
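The Work/Instance split is arguably BIBFRAME's most visible departure from MARC's single flat record, and it can be sketched in a few triples. The example below uses the rdflib library and the BIBFRAME 2.0 vocabulary; the example.org resource URIs are hypothetical.

```python
# A minimal sketch of BIBFRAME's main departure from MARC: one flat
# MARC record becomes a Work (the conceptual content) linked to an
# Instance (a particular publication). Built with rdflib; the
# example.org URIs are hypothetical.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

BF = Namespace("http://id.loc.gov/ontologies/bibframe/")
g = Graph()
g.bind("bf", BF)

work = URIRef("https://example.org/works/moby-dick")
instance = URIRef("https://example.org/instances/moby-dick-1851")

g.add((work, RDF.type, BF.Work))
g.add((instance, RDF.type, BF.Instance))
g.add((instance, BF.instanceOf, work))  # links the edition to the content
g.add((instance, BF.provisionActivityStatement,
       Literal("New York : Harper & Brothers, 1851")))

print(g.serialize(format="turtle"))
```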

Article
Publication date: 8 February 2013

Stefan Dietze, Salvador Sanchez‐Alonso, Hannes Ebner, Hong Qing Yu, Daniela Giordano, Ivana Marenzi and Bernardo Pereira Nunes

Abstract

Purpose

Research in the area of technology-enhanced learning (TEL) throughout the last decade has largely focused on sharing and reusing educational resources and data. This effort has led to a fragmented landscape of competing metadata schemas and interface mechanisms. More recently, semantic technologies have been taken into account to improve interoperability, and the linked data approach has emerged as the de facto standard for sharing data on the web. The application of linked data principles therefore offers large potential for solving interoperability issues in the field of TEL. This paper aims to address this issue.

Design/methodology/approach

This paper surveys approaches that work towards a vision of linked education, i.e. education that exploits educational web data. It particularly considers exploiting the wealth of TEL data already on the web by exposing it as linked data and by applying automated enrichment and interlinking techniques to provide rich, well-interlinked data for the educational domain.

Findings

So far, web-scale integration of educational resources has not been achieved, mainly due to a lack of take-up of shared principles, datasets and schemas. However, linked data principles are increasingly recognized by the TEL community. The paper provides a structured assessment and classification of existing challenges and approaches, serving as a potential guideline for researchers and practitioners in the field.

Originality/value

Being one of the first comprehensive surveys on the topic of linked data for education, the paper has the potential to become a widely recognized reference publication in the area.

Article
Publication date: 15 May 2009

Sai Deng and Terry Reese

Abstract

Purpose

The purpose of this paper is to present methods for customized mapping and metadata transfer from DSpace to the Online Computer Library Center (OCLC), aiming to improve the Electronic Theses and Dissertations (ETD) workflow at libraries that use DSpace to store theses and dissertations by automating the generation of MARC records from Dublin Core (DC) metadata in DSpace and their export to OCLC.

Design/methodology/approach

This paper discusses how the Shocker Open Access Repository (SOAR) at Wichita State University (WSU) Libraries and ScholarsArchive at Oregon State University (OSU) Libraries harvest theses data from the DSpace platform using the Metadata Harvester in MarcEdit, developed by Terry Reese at OSU Libraries. It analyzes challenges in transforming the harvested data, including handling authorized data, dealing with data ambiguity, and string processing. It then describes how the two institutions customize the Library of Congress's XSLT (eXtensible Stylesheet Language Transformations) mapping to transform DC metadata to MARCXML and how they export MARC data to OCLC and Voyager.
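The crosswalk step can be pictured in a few lines; the paper performs it with customized Library of Congress XSLT inside MarcEdit, so the standard-library sketch below, with simplified field choices, is only an analogy.

```python
# A standard-library sketch of the Dublin Core to MARCXML crosswalk
# that the paper implements with customized Library of Congress XSLT.
# The tag choices (245 for title, 720 for an uncontrolled creator
# name, 653 for uncontrolled subjects) follow common crosswalk
# conventions but are simplified; real ETD records need much more.
import xml.etree.ElementTree as ET

NS = "http://www.loc.gov/MARC21/slim"
DC_TO_MARC = {"title": ("245", "a"), "creator": ("720", "a"),
              "subject": ("653", "a")}

def dc_to_marcxml(dc: dict) -> ET.Element:
    record = ET.Element(f"{{{NS}}}record")
    for element, values in dc.items():
        tag, code = DC_TO_MARC[element]
        for value in values if isinstance(values, list) else [values]:
            field = ET.SubElement(record, f"{{{NS}}}datafield",
                                  {"tag": tag, "ind1": " ", "ind2": " "})
            ET.SubElement(field, f"{{{NS}}}subfield", {"code": code}).text = value
    return record

etd = {"title": "A Sample Thesis", "creator": "Doe, Jane",
       "subject": ["Metadata", "Digital libraries"]}
print(ET.tostring(dc_to_marcxml(etd), encoding="unicode"))
```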

Findings

The customized mapping and data transformation of ETD data can be largely standardized, though they still require case-by-case analysis. By presenting two institutions' experiences, the paper provides information on the benefits and limitations for institutions interested in using MarcEdit and customized XSLT to move their ETDs from DSpace to OCLC and Voyager.

Originality/value

The method described in the paper can eliminate the need for double entry in DSpace and OCLC, meet local needs, and significantly improve the ETD workflow. It offers perspectives on repurposing and managing metadata in a standard yet customizable way.

Details

New Library World, vol. 110 no. 5/6
Type: Research Article
ISSN: 0307-4803

Article
Publication date: 1 June 2004

Corey Keith

Abstract

This paper describes the MARCXML architecture implemented at the Library of Congress. It gives an overview of the component pieces of the architecture, including the MARCXML schema and the MARCXML toolkit, together with a brief tutorial on their use. Several applications of the architecture and tools are discussed to illustrate the features of the toolkit developed thus far. Nearly any metadata format can take advantage of the toolkit's features, and the process of enabling the toolkit to support a new format is discussed. Finally, the paper intends to foster new ideas with regard to the transformation of descriptive metadata, especially using XML tools. In this paper the following conventions are used: MARC21 refers to MARC 21 records in the ISO 2709 record structure used today, and MARCXML refers to MARC 21 records in an XML structure.
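The paper's two conventions correspond to a conversion that other tools also expose. As an analogy (not the LC toolkit itself), the sketch below uses the pymarc library to read MARC21 records in the ISO 2709 structure and serialize each one as MARCXML; the file name is hypothetical.

```python
# The two conventions, shown as a conversion: pymarc reads "MARC21"
# records (ISO 2709 structure) and serializes each one as "MARCXML".
# This is an analogy to the LC toolkit, not the toolkit itself, and
# "records.mrc" is a hypothetical file.
from pymarc import MARCReader, record_to_xml

with open("records.mrc", "rb") as handle:            # MARC21 (ISO 2709)
    for record in MARCReader(handle):
        xml = record_to_xml(record, namespace=True)  # MARCXML (bytes)
        print(xml.decode("utf-8"))
```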

Details

Library Hi Tech, vol. 22 no. 2
Type: Research Article
ISSN: 0737-8831
