Search results
1 – 10 of 34
Sai Deng and Terry Reese
Abstract
Purpose
The purpose of this paper is to present methods for customized mapping and metadata transfer from DSpace to the Online Computer Library Center (OCLC), with the aim of improving the Electronic Theses and Dissertations (ETD) workflow at libraries that use DSpace to store theses and dissertations, by automating the generation of MARC records from Dublin Core (DC) metadata in DSpace and their export to OCLC.
Design/methodology/approach
This paper discusses how the Shocker Open Access Repository (SOAR) at Wichita State University (WSU) Libraries and ScholarsArchive at Oregon State University (OSU) Libraries harvest thesis data from the DSpace platform using the Metadata Harvester in MarcEdit, developed by Terry Reese at OSU Libraries. It analyzes challenges in transforming the harvested data, including handling authorized data, dealing with data ambiguity and string processing. It addresses how these two institutions customize the Library of Congress's XSLT (eXtensible Stylesheet Language Transformations) mapping to transform DC metadata into MARCXML and how they export MARC data to OCLC and Voyager.
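The customized crosswalk step described above (an XSLT that maps DC elements to MARC fields) can be sketched in Python, purely for illustration; the element-to-tag choices below are assumptions, not SOAR's or ScholarsArchive's actual mapping:

```python
# Illustrative Dublin Core -> MARC crosswalk (a sketch only; the
# tag/subfield choices are assumptions, not the institutions' mapping).
DC_TO_MARC = {
    "title":       ("245", "a"),
    "creator":     ("100", "a"),
    "subject":     ("650", "a"),
    "description": ("520", "a"),
}

def dc_record_to_marc_fields(dc_record):
    """Turn {dc_element: [values]} into a list of (tag, subfield, value)."""
    fields = []
    for element, values in dc_record.items():
        if element not in DC_TO_MARC:
            continue  # unmapped elements are dropped, as in a selective crosswalk
        tag, subfield = DC_TO_MARC[element]
        for value in values:
            fields.append((tag, subfield, value))
    return fields

etd = {
    "title": ["A Study of Metadata Workflows"],
    "creator": ["Doe, Jane"],
    "subject": ["Metadata", "Cataloging"],
}
print(dc_record_to_marc_fields(etd))
```

In the workflow the paper describes, this mapping role is played by a customized Library of Congress XSLT applied inside MarcEdit; the sketch only illustrates the general shape of such a crosswalk.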
Findings
The customized mapping and data transformation for ETD data can be standardized, while still requiring case-by-case analysis. Drawing on two institutions' experiences, the paper outlines the benefits and limitations for institutions interested in using MarcEdit and customized XSLT to transform their ETDs from DSpace to OCLC and Voyager.
Originality/value
The new method described in the paper can eliminate the need for double entry in DSpace and OCLC, meet local needs and significantly improve the ETD workflow. It offers perspectives on repurposing and managing metadata in a standard yet customizable way.
David N. Nelson, Larry Hansard and Linda Turney
Abstract
Purpose
The purpose of this paper is to describe the process and the personnel skills required for converting a non-MARC database file into a MARC file for uploading to both OCLC and a local catalog. It also examines the various decisions that need to be made when mapping from one file structure to another.
Design/methodology/approach
Applied: database record conversion.
Findings
While MarcEdit is a remarkably powerful tool for cataloging and database maintenance purposes, dealing with non-MARC records requires additional programming skills and tools for the successful completion of a file conversion project.
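The extra programming step these findings refer to might look like the following sketch: reading a delimited database export and emitting MarcEdit-style mnemonic MARC lines. The column names and tag choices are assumptions for illustration, not the project's actual schema.

```python
import csv
import io

def row_to_mnemonic(row):
    """Convert one database row into MarcEdit-style mnemonic MARC lines."""
    lines = []
    if row.get("author"):
        lines.append("=100  1\\$a" + row["author"])
    if row.get("title"):
        lines.append("=245  10$a" + row["title"])
    if row.get("year"):
        lines.append("=264  \\1$c" + row["year"])
    return lines

# A two-line export standing in for the local database file.
export = io.StringIO('author,title,year\n"Smith, A.",Local Histories,1999\n')
for row in csv.DictReader(export):
    for line in row_to_mnemonic(row):
        print(line)
```

The mnemonic text output can then be compiled to binary MARC with MarcEdit before uploading, which is the division of labor the abstract implies: custom code for the non-MARC side, MarcEdit for the MARC side.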
Practical implications
Discusses the importance of converting locally produced databases, especially those with bibliographic content, to national and international standards to significantly increase their discoverability.
Originality/value
Provides an overview of file conversion issues and considerations.
Abstract
Purpose
This case study aims to demonstrate that in-house integrated library systems migration can be accomplished by a dedicated team of librarians without advanced tools or prior experience with data migration or systems integration.
Design/methodology/approach
This migration was accomplished by academic librarians using freely available tools: OpenOffice Calc, MarcEdit and the Koha Integrated Library System.
Findings
The data migration pathway presented here was developed and successfully used to transfer over 48,000 records in less than two months.
Practical implications
This case study presents an original process that is particularly effective for smaller libraries.
Originality/value
While similar case studies exist, most employ expensive third-party contractors for data migration or rely heavily on institutional IT departments.
Vyacheslav I. Zavalin and Shawne D. Miksa
Abstract
Purpose
This paper aims to discuss the challenges encountered in collecting, cleaning and analyzing the large data set of bibliographic metadata records in machine-readable cataloging [MARC 21] format. Possible solutions are presented.
Design/methodology/approach
This mixed method study relied on content analysis and social network analysis. The study examined subject representation in MARC 21 metadata records created in 2020 in WorldCat – the largest international database of “big smart data.” The methodological challenges that were encountered and solutions are examined.
Findings
In this general review paper with a focus on methodological issues, the discussion of challenges is followed by a discussion of solutions developed and tested as part of this study. Data collection, processing, analysis and visualization are addressed separately. Lessons learned and conclusions related to challenges and solutions for the design of a large-scale study evaluating MARC 21 bibliographic metadata from WorldCat are given. Overall recommendations for the design and implementation of future research are suggested.
Originality/value
There are no previous publications that address the challenges and solutions of data collection and analysis of WorldCat’s “big smart data” in the form of MARC 21 data. This is the first study to use a large data set to systematically examine MARC 21 library metadata records created after the most recent addition of new fields and subfields to the MARC 21 Bibliographic Format standard in 2019, based on Resource Description and Access rules. It is also the first to focus its analyses on the networks formed by subject terms shared by MARC 21 bibliographic records in a data set extracted from a heterogeneous centralized database, WorldCat.
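The subject-term networks described above can be modeled as a co-occurrence graph in which two records are linked when they share subject headings. A minimal sketch (the sample records are invented for illustration):

```python
from itertools import combinations
from collections import Counter

def subject_edges(records):
    """records: {record_id: set of subject terms}.
    Returns a Counter of record pairs weighted by shared subject terms."""
    edges = Counter()
    for (id_a, subj_a), (id_b, subj_b) in combinations(records.items(), 2):
        shared = len(subj_a & subj_b)
        if shared:
            edges[(id_a, id_b)] = shared
    return edges

records = {
    "rec1": {"Climate change", "Oceanography"},
    "rec2": {"Climate change", "Policy"},
    "rec3": {"Oceanography", "Climate change"},
}
print(subject_edges(records))
```

At WorldCat scale the pairwise comparison above would need an inverted index from subject term to record IDs rather than all-pairs iteration, which is exactly the kind of scaling concern the paper's methodological discussion addresses.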
Misu Kim, Mingyu Chen and Debbie Montgomery
Abstract
The library metadata of the twenty-first century is moving toward a linked data model. BIBFRAME, which stands for the Bibliographic Framework Initiative, was launched in 2011 with the goal of making bibliographic descriptions sharable and interoperable on the web. Since its inception, BIBFRAME development has made remarkable progress, and the focus of BIBFRAME discussions has shifted from experimentation to implementation. The library community is collaborating with all stakeholders to build the infrastructure for BIBFRAME production, in order to provide an environment where BIBFRAME data can be easily created, reused and shared. This chapter addresses the library community's BIBFRAME endeavors, with a focus on the Library of Congress, the Program for Cooperative Cataloging, Linked Data for Production Phase 2 and OCLC. It discusses BIBFRAME's major differences from the MARC standard, with the hope of helping metadata practitioners gain a general understanding of future metadata activity. While the BIBFRAME landscape is beginning to take shape and its practical implications are beginning to develop, it is anticipated that MARC records will continue to circulate for the foreseeable future. The coming multistandard metadata environment will bring new challenges to metadata practitioners, and this chapter addresses the knowledge and skills required for this transitional landscape. Finally, it explores the challenges that remain in realizing a BIBFRAME production environment and asserts that BIBFRAME's ultimate goal is to deliver a value-added, next-generation web search experience to users.
Abstract
Purpose
The purpose of this paper is to present a process, as a proof-of-concept, that automates the tracking of updates to name authority records (NARs), the downloading of revised NARs into the local catalog system, and subsequent bibliographic file maintenance (BFM), in response to the programmatic manipulation of the Library of Congress Name Authority File (LCNAF).
Design/methodology/approach
A proof-of-concept process to automate NAR updates and BFM in the local catalog, using the OCLC LCNAF SRU Service, MarcEdit, XSLT and AutoIt, was built and subsequently tested using data from both test and production catalog servers at Michigan State University Libraries.
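Constructing the SRU request that such a process would issue can be sketched as follows. The base URL below is a placeholder (not the actual OCLC LCNAF SRU endpoint) and the CQL index name is an assumption; only the parameter names follow the standard SRU searchRetrieve convention.

```python
from urllib.parse import urlencode

# Placeholder endpoint, for illustration only -- not OCLC's actual service URL.
BASE = "https://example.org/lcnaf/sru"

def build_sru_query(name_heading, max_records=10):
    """Build an SRU searchRetrieve URL for a name-authority lookup."""
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        # CQL query; the index name "local.FamilyName" is an assumption.
        "query": f'local.FamilyName = "{name_heading}"',
        "maximumRecords": max_records,
        "recordSchema": "marcxml",
    }
    return BASE + "?" + urlencode(params)

print(build_sru_query("Austen, Jane"))
```

In the workflow the paper describes, the MARCXML returned by such a request would then be transformed with XSLT in MarcEdit and loaded into the local catalog, with AutoIt scripting the client-side steps.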
Findings
The proof-of-concept process proved generally successful, though scalability and diacritics issues must be addressed before it can become fully operational in a production environment.
Originality/value
This process enables libraries, especially those without third-party authority control service, to handle the phased reissuance of LCNAF and related BFM in an automatic fashion with minimal human intervention.
Abstract
Purpose
The purpose of this paper is to suggest that cataloguing departments and agencies could benefit from the Fourth Industrial Revolution (4IR) and advanced technologies.
Design/methodology/approach
Desk research based on a literature review drawn from different information sources; the literature was interpreted in light of the researcher's search on key concepts of the topic.
Findings
The literature indicates new trends in cataloguing and shows how these technologies could be incorporated into cataloguing practice.
Practical implications
This study can inform policies on artificial intelligence adoption in cataloguing.
Originality/value
4IR and artificial intelligence do not often appear in the cataloguing literature. Linking 4IR and advanced technologies, artificial intelligence and robotic science with cataloguing could inform cataloguing policy and practice.
Abstract
Purpose
The purpose of this paper is to describe several projects which made use of new technologies in the cataloguing environment at the University of Auckland Library, and emphasise the need for quality bibliographic data, as the basis of successful information retrieval.
Design/methodology/approach
The University of Auckland Library is continually looking for ways to improve access to its resources. Particular attention has been given to exploring opportunities offered by modern technology. The paper describes how tools like MARC Report and MARC Global can be used to improve the quality of existing bibliographic data in library catalogues. It looks at strategies for automated bibliographic data creation. It also describes processes involved in creating gateways to specific parts of existing collections. Emphasis is also given to initiatives aimed at providing access to material that was not traditionally described in the catalogue.
Findings
The need to improve library catalogues is obvious, and metadata quality remains essential to effective information retrieval. Advances in computers and information technology have created huge potential for cataloguing staff to increase efficiency and accuracy, and to hold down costs.
Practical implications
The University of Auckland Library believes that empowering cataloguing staff with new technology is critical to efficiently providing access to a wide range of information sources. The Cataloguing Department utilizes technology to automate and manage many of its functions and to streamline its procedures.
Originality/value
The paper argues that it is important to recognise the continued value of the library catalogue. The catalogue is still the main representation of the library's resources, both print and electronic, and an essential aid in finding relevant material on a particular subject. Efficient utilisation of the catalogue means improved access to library collections and better service to patrons.
Abstract
Purpose
Academic and research libraries have been experiencing a lot of changes over the last two decades. The users have become technology savvy and want to discover and use library collections via web portals instead of coming to library gateways. To meet these rapidly changing users’ needs, academic and research libraries are busy identifying new service models and areas of improvement. Cataloging and metadata services units in academic and research libraries are no exception. As discovery of library collections largely depends on the quality and design of metadata, cataloging and metadata services units must identify new areas of work and establish new roles by building sustainable workflows that utilize available metadata technologies. The paper aims to discuss these issues.
Design/methodology/approach
This paper discusses a list of challenges that academic libraries’ cataloging and metadata services units have encountered over the years, and ways to build sustainable workflows, including collaborations between units in and outside of the institution, and in the cloud; tools, technologies, metadata standards and semantic web technologies; and, most importantly, exploration and research. The paper also includes examples and use cases of both traditional metadata workflows and experimentation with linked open data that were built upon metadata technologies and will ultimately support emerging user needs.
Findings
To develop sustainable and scalable workflows that meet users’ changing needs, cataloging and metadata professionals must not only work with new information technologies but also be equipped with soft skills and in-depth professional knowledge.
Originality/value
This paper discusses how cataloging and metadata services units have been exploiting information technologies and creating new scalable workflows to adapt to these changes, and what is required to establish and maintain these workflows.