Search results
1 – 10 of over 3000
Abstract
Purpose
The purpose of this paper is to propose a Resource Description Framework (RDF)-based approach to transform metadata crosswalking from equivalent lexical element mapping into semantic mapping with various contextual relationships. RDF is used as a crosswalk model to represent the contextual relationships implicitly embedded between described objects and their elements, including semantic, hierarchical, granular, syntactic and multiple object relationships to achieve semantic metadata interoperability at the data element level.
Design/methodology/approach
This paper uses RDF to translate metadata elements and their relationships into semantic expressions, and also as a data model to define the syntax for element mapping. The feasibility of the proposed approach for semantic metadata crosswalking is examined based on two use cases – the Archives of Navy Ships Project and the Digital Artifacts Project of National Palace Museum in Taipei – both from the Taiwan e-Learning and Digital Archives Program.
Findings
Because the model developed is based on RDF expressions, previously unresolved crosswalking issues, such as sets of shared terms and the contextual relationships embedded between described objects and their metadata elements, can be made explicit in a semantic representation. Corresponding element mappings and mapping rules can then be specified without ambiguity to achieve semantic metadata interoperability.
Research limitations/implications
Five steps were developed to clarify the details of the RDF-based crosswalk. The RDF-based expressions can also serve as a basis from which to develop linked data and Semantic Web applications. More use cases, including biodiversity artifacts from natural history museums and literary works from libraries, as well as the conditions, constraints and cardinality of metadata elements, will be required to revise and fine-tune the proposed RDF-based metadata crosswalk.
Originality/value
In addition to reviving contextual relationships embedded between described objects and their metadata elements, nine types of mapping rules were developed to achieve a semantic metadata crosswalk which will facilitate the design of related mapping software. Furthermore, the proposed approach complements existing crosswalking documents provided by authoritative organizations, and enriches mapping language developed by the CIDOC community.
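The crosswalk idea above, element mappings carrying typed contextual relationships rather than bare lexical equivalences, can be sketched as RDF-style triples. This is a minimal illustration, assuming invented element names and a made-up relation vocabulary; it is not the paper's actual model or its nine mapping rules.

```python
# Sketch: a metadata crosswalk expressed as RDF-style (subject, predicate,
# object) triples, so that each element mapping states *what kind* of
# relationship holds, not just that two elements "match". All prefixes,
# element names and relation terms below are illustrative assumptions.
crosswalk = [
    # Semantic equivalence between two schemas' title-like elements.
    ("navy:shipName", "map:equivalentTo", "dc:title"),
    # Hierarchical relationship: the source element is narrower than the target.
    ("npm:dynasty", "map:narrowerThan", "dc:coverage"),
    # Granular relationship: two source elements combine into one target.
    ("npm:creatorGivenName", "map:partOf", "dc:creator"),
    ("npm:creatorFamilyName", "map:partOf", "dc:creator"),
]

def targets_for(source_element):
    """Return the (relation, target) pairs defined for a source element."""
    return [(p, o) for s, p, o in crosswalk if s == source_element]

print(targets_for("npm:creatorGivenName"))  # → [('map:partOf', 'dc:creator')]
```

Because each mapping is a triple rather than a bare pair, downstream mapping software can branch on the relation type (e.g. concatenate `partOf` sources, or qualify a `narrowerThan` target).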
Martin Kurth, David Ruddy and Nathan Rupp
Abstract
Metadata and information technology staff in libraries that are building digital collections typically extract and manipulate MARC metadata sets to provide access to digital content via non‐MARC schemes. Metadata processing in these libraries involves defining the relationships between metadata schemes, moving metadata between schemes, and coordinating the intellectual activity and physical resources required to create and manipulate metadata. Actively managing the non‐MARC metadata resources used to build digital collections is something most of these libraries have only begun to do. This article proposes strategies for managing MARC metadata repurposing efforts as the first step in a coordinated approach to library metadata management. Guided by lessons learned from Cornell University library mapping and transformation activities, the authors apply the literature of data resource management to library metadata management and propose a model for managing MARC metadata repurposing processes through the implementation of a metadata management design.
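A single repurposing step of the kind the article proposes to manage, moving metadata from MARC into a non-MARC scheme, might look like the following sketch. The record, the tag and subfield choices, and the punctuation handling are illustrative assumptions, not Cornell's actual workflow.

```python
# Sketch: repurposing MARC metadata into Dublin Core via a small crosswalk.
# The record structure (tag -> list of subfield dicts) and the field choices
# are invented for illustration.
marc_record = {
    "245": [{"a": "Digital collections at Cornell :", "b": "a case study."}],
    "100": [{"a": "Doe, Jane."}],
    "260": [{"c": "2004."}],
}

# Minimal crosswalk from MARC tag/subfield pairs to DC elements.
MARC_TO_DC = {
    ("245", "a"): "dc:title",
    ("100", "a"): "dc:creator",
    ("260", "c"): "dc:date",
}

def repurpose(record):
    dc = {}
    for (tag, sub), dc_element in MARC_TO_DC.items():
        for field in record.get(tag, []):
            if sub in field:
                # Strip trailing ISBD punctuation carried over from MARC practice.
                dc.setdefault(dc_element, []).append(field[sub].rstrip(" :;/.,"))
    return dc

print(repurpose(marc_record))
```

Managing such transformations, as the article argues, means documenting the crosswalk table itself (here `MARC_TO_DC`) as a first-class, versioned resource rather than burying it in one-off scripts.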
Laurent Remy, Dragan Ivanović, Maria Theodoridou, Athina Kritsotaki, Paul Martin, Daniele Bailo, Manuela Sbarra, Zhiming Zhao and Keith Jeffery
Abstract
Purpose
The purpose of this paper is to boost multidisciplinary research by building an integrated catalogue of research assets metadata. Such an integrated catalogue should enable researchers to solve problems or analyse phenomena that require a view across several scientific domains.
Design/methodology/approach
There are two main approaches for integrating metadata catalogues provided by different e-science research infrastructures (e-RIs): centralised and distributed. The authors decided to implement a central metadata catalogue that describes, provides access to and records actions on the assets of a number of e-RIs participating in the system. The authors chose the CERIF data model for description of assets available via the integrated catalogue. Analysis of popular metadata formats used in e-RIs has been conducted, and mappings between popular formats and the CERIF data model have been defined using an XML-based tool for description and automatic execution of mappings.
Findings
An integrated catalogue of research assets metadata has been created. Metadata from e-RIs supporting Dublin Core, ISO 19139, DCAT-AP, EPOS-DCAT-AP, OIL-E and CKAN formats can be integrated into the catalogue. Metadata are stored in CERIF RDF in the integrated catalogue. A web portal for searching this catalogue has been implemented.
Research limitations/implications
Only five formats are supported at this moment. However, description of mappings between other source formats and the target CERIF format can be defined in the future using the 3M tool, an XML-based tool for describing X3ML mappings that can then be automatically executed on XML metadata records. The approach and best practices described in this paper can thus be applied in future mappings between other metadata formats.
Practical implications
The integrated catalogue is a part of the eVRE prototype, which is a result of the VRE4EIC H2020 project.
Social implications
The integrated catalogue should boost the performance of multi-disciplinary research; thus it has the potential to enhance the practice of data science and so contribute to an increasingly knowledge-based society.
Originality/value
A novel approach for creation of the integrated catalogue has been defined and implemented. The approach includes definition of mappings between various formats. Defined mappings are effective and shareable.
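The centralised-catalogue approach described above can be caricatured as a set of per-format mapping functions that normalise records into one common model. The field names below are invented; the actual system maps into CERIF using declaratively described X3ML mappings executed by the 3M tool rather than hand-written functions.

```python
# Sketch: integrating records from different e-RI metadata formats into one
# common target representation. Source and target field names are
# illustrative assumptions.

def from_dublin_core(rec):
    return {"name": rec["title"], "creator": rec.get("creator")}

def from_ckan(rec):
    return {"name": rec["name"], "creator": rec.get("author")}

# One mapper per supported source format; adding a format means adding a mapper.
MAPPERS = {"dc": from_dublin_core, "ckan": from_ckan}

def ingest(source_format, record):
    """Normalise a source record into the integrated catalogue's common model."""
    return MAPPERS[source_format](record)

print(ingest("ckan", {"name": "Seismic waveform dataset", "author": "EPOS"}))
```

The design point the paper makes is that these mappings should be declarative and shareable (X3ML documents), so they can be reviewed and reused without touching the ingestion code.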
Sai Deng and Terry Reese
Abstract
Purpose
The purpose of this paper is to present methods for customized mapping and metadata transfer from DSpace to Online Computer Library Center (OCLC), which aims to improve Electronic Theses and Dissertations (ETD) work flow at libraries using DSpace to store theses and dissertations by automating the process of generating MARC records from Dublin Core (DC) metadata in DSpace and exporting them to OCLC.
Design/methodology/approach
This paper discusses how the Shocker Open Access Repository (SOAR) at Wichita State University (WSU) Libraries and ScholarsArchive at Oregon State University (OSU) Libraries harvest theses data from the DSpace platform using the Metadata Harvester in MarcEdit, developed by Terry Reese at OSU Libraries. It analyzes challenges in transforming the harvested data, including the handling of authorized data, data ambiguity and string processing. It also addresses how these two institutions customize the Library of Congress's XSLT (eXtensible Stylesheet Language Transformations) mapping to transform DC metadata into MARCXML, and how they export MARC data to OCLC and Voyager.
Findings
The customized mapping and data transformation for ETD data can be standardized while still requiring case-by-case analysis. By presenting the experiences of two institutions, the paper provides information on the benefits and limitations for institutions interested in using MarcEdit and customized XSLT to transform their ETDs from DSpace to OCLC and Voyager.
Originality/value
The new method described in the paper can eliminate the need for double entry in DSpace and OCLC, meet local needs and significantly improve ETD work flow. It offers perspectives on repurposing and managing metadata in a standard and customizable way.
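Since the workflow above hinges on an XSLT transformation of harvested DC XML into MARCXML, here is a rough standard-library Python approximation of one such mapping (dc:title into MARC field 245, subfield a). It stands in for, and is far simpler than, the customized Library of Congress stylesheet the paper describes.

```python
# Sketch: one DC-to-MARCXML mapping step, done with ElementTree because the
# Python standard library has no XSLT engine. The record content is invented;
# the dc namespace is the real OAI-DC element namespace.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"

dc_record = ET.fromstring(
    f'<record xmlns:dc="{DC}"><dc:title>An ETD title</dc:title></record>'
)

marc = ET.Element("record")
title = dc_record.find(f"{{{DC}}}title")
if title is not None:
    # Map dc:title to MARC 245 $a (indicators chosen for illustration only).
    datafield = ET.SubElement(marc, "datafield", tag="245", ind1="0", ind2="0")
    ET.SubElement(datafield, "subfield", code="a").text = title.text

print(ET.tostring(marc, encoding="unicode"))
```

A real stylesheet also handles non-filing indicators, repeated elements and authority-controlled headings, which is exactly where the paper's case-by-case customization comes in.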
Marilyn Lutz and Curtis Meadow
Abstract
Purpose
To describe the evolution of a content management system at the University of Maine Library that would function as a universal, extensible metadata repository, thereby eliminating the need to build separate databases for new digital collections, and facilitating both end‐user access and the management of electronic resources in an integrated technology environment.
Design/methodology/approach
Beginning with the development of a prototype system that mapped EAD-encoded finding aids to a relational database, this paper discusses the evolution of this prototype into the design and implementation of an RDBMS (and the continuing development of an object-oriented database management system (OODBMS)) to actively manage digital objects and associated metadata. The key to the system design is metadata: extracting, mapping, transforming and managing the processing of MARC-based metadata into non-MARC schemes to build digital collections. Other relevant CMS architecture issues discussed are the design of a functional bibliographic structure and utilities for metadata harvesting and indexing.
Findings
Provides information on the use of the Dublin Core Abstract Model and a flexible and adaptable collection‐centric approach in the overall CMS architecture as implemented on a non‐MARC RDBMS, and provides an explanation of the advantages of an object oriented database system over the complexity of evolving relational database tables.
Practical implications
A useful source for the development of an in‐house CMS, and a contribution to the growing body of literature about the transformation of MARC‐based metadata for database design.
Originality/value
This paper is a case study of actual work conducted at the University of Maine Library. The RDBMS manages digital collections; the OODBMS manages digital video and other multimedia resources.
Chunqiu Li and Shigeo Sugimoto
Abstract
Purpose
Provenance information is crucial for consistent maintenance of metadata schemas over time. The purpose of this paper is to propose a provenance model named DSP-PROV to keep track of structural changes of metadata schemas.
Design/methodology/approach
The DSP-PROV model is developed through applying the general provenance description standard PROV of the World Wide Web Consortium to the Dublin Core Application Profile. Metadata Application Profile of Digital Public Library of America is selected as a case study to apply the DSP-PROV model. Finally, this paper evaluates the proposed model by comparison between formal provenance description in DSP-PROV and semi-formal change log description in English.
Findings
Formal provenance description in the DSP-PROV model has advantages over semi-formal provenance description in English to keep metadata schemas consistent over time.
Research limitations/implications
The DSP-PROV model is applicable to keep track of the structural changes of metadata schema over time. Provenance description of other features of metadata schema such as vocabulary and encoding syntax are not covered.
Originality/value
This study proposes a simple model for provenance description of structural features of metadata schemas based on a few standards widely accepted on the Web and shows the advantage of the proposed model to conventional semi-formal provenance description.
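The core idea above, recording each structural change to a schema as a derivation from the previous version in the PROV spirit rather than as free-text change-log entries, can be sketched as follows. The class names, fields and activity strings are illustrative assumptions, not the DSP-PROV vocabulary itself.

```python
# Sketch: PROV-style provenance of schema versions. Each version is an
# "entity" derived from its predecessor via a recorded "activity", so the
# full lineage of structural changes is machine-traversable.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SchemaVersion:
    version: str
    elements: tuple
    derived_from: Optional[str] = None  # version this one was derived from
    activity: Optional[str] = None      # what change produced this version

v1 = SchemaVersion("1.0", ("title", "creator"))
# Unlike a free-text change log, the activity and derivation are explicit.
v2 = SchemaVersion(
    "2.0",
    ("title", "creator", "rights"),
    derived_from="1.0",
    activity="add element 'rights'",
)

def lineage(version, versions):
    """Walk the derived_from links back to the original schema version."""
    chain = [version.version]
    while version.derived_from:
        version = versions[version.derived_from]
        chain.append(version.version)
    return chain

print(lineage(v2, {"1.0": v1}))  # → ['2.0', '1.0']
```

This is what makes formal provenance "consistent over time": any version's ancestry and the changes along it can be queried mechanically instead of read out of prose.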
Sue McKemmish, Glenda Acland and Barbara Reed
Abstract
In July 1999 the Australian Recordkeeping Metadata Schema (RKMS) was approved by its academic and industry steering group. The RKMS has inherited elements from, and built on, many other metadata standards associated with information management. It has also contributed to the development of subsequent sector-specific recordkeeping metadata sets. The importance of the RKMS as a framework for mapping or reading other sets, and also as a standardised set of metadata available for adoption in diverse implementation environments, is now emerging. This paper explores the context of the Australian SPIRT Recordkeeping Metadata Project and the conceptual models developed by the SPIRT Research Team as a framework for standardising and defining recordkeeping metadata. It then introduces the elements of the SPIRT Recordkeeping Metadata Schema and explores its functionality, before discussing implementation issues and future directions.
Abstract
Purpose
The purpose of this paper is to develop an understanding of the issues surrounding the cataloguing of maps in archives and libraries. An investigation into appropriate metadata formats, such as MARC21, EAD and Dublin Core with RDF, shows how particular map data can be stored. Mathematical map elements, specifically co‐ordinates, are explored as a source of optimal retrieval.
Design/methodology/approach
This paper is based on both the personal experiences of map cataloguers as well as previous literature on map retrieval elements, metadata formats and map retrieval systems.
Findings
The difficulties behind map cataloguing do not lie in metadata file formats but rather in maps themselves, staff and budget. They also lie in the lack of map‐appropriate retrieval systems and the lack of co‐ordinate search capabilities.
Practical implications
The practical implications of this work reflect the necessity for strong map‐retrieval systems and strength of available metadata formats to store essential map data for retrieval. Future map cataloguers should secure appropriate systems for retrieval and include geographical location information, specifically numerical co‐ordinates.
Originality/value
This paper provides insight into current issues in map data and the file formats currently used for storing this data. It also investigates current map‐friendly systems in use by libraries and archives.
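The co-ordinate search the authors find lacking amounts to bounding-box retrieval. A minimal sketch, assuming maps are stored with (west, south, east, north) bounding boxes in decimal degrees and using invented catalogue entries:

```python
# Sketch: co-ordinate-based map retrieval. A query succeeds when a map's
# bounding box intersects the query box. Catalogue entries are invented.
maps = {
    "Map of Maine, 1850": (-71.1, 42.9, -66.9, 47.5),
    "City plan of Taipei": (121.45, 24.96, 121.67, 25.21),
}

def intersects(a, b):
    """Test whether two (west, south, east, north) boxes overlap."""
    aw, as_, ae, an = a
    bw, bs, be, bn = b
    return aw <= be and bw <= ae and as_ <= bn and bs <= an

def search(query_box):
    return [title for title, box in maps.items() if intersects(box, query_box)]

print(search((-70.0, 43.0, -69.0, 44.0)))  # → ['Map of Maine, 1850']
```

Storing the mathematical map elements (the corner co-ordinates) as numbers rather than display strings is what makes this kind of spatial retrieval possible at all, which is the paper's practical point.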
Lesly Huxley, Leona Carpenter and Marianne Peereboom
Abstract
Renardus was developed under the EU’s User‐friendly Information Society programme by partners from national libraries, university research and technology centres and subject information gateways Europe‐wide. Since January 2000, those partners have been working towards realisation of their aim to build a single Web‐based “broker service” providing cross‐search/cross‐browse access to existing Internet‐accessible scientific and cultural resource collections distributed across Europe. This paper describes Renardus’ key concepts and highlights some of the collaborative frameworks and tools developed and deployed during the project, and the existing technical and information standards used, particularly in support of metadata modelling, mapping and sharing and the information architecture. Issues, implications and benefits for end users and information professionals are presented through illustrations of the interface design. We conclude with an outline of organisational arrangements and strategies, outstanding issues and next steps in encouraging future collaboration with other services.
Shu Liu and Yongli Zhou
Abstract
Purpose
This paper aims to inform library professionals on technical issues relating to implementing and using DigiTool, proprietary software by Ex Libris, to develop an institutional repository (IR).
Design/methodology/approach
This paper describes Colorado State University Libraries' experience to date in developing an IR using DigiTool. Topics discussed are based on the processes and workflows, and include local customization; metadata and object ingest; implementation of handles; incorporation with web discovery; and management of statistical data.
Findings
DigiTool is found to be a powerful, complex and relatively mature out-of-the-box IR platform that fulfils the needs of establishing and maintaining an IR.
Originality/value
The experiential information and technical details on implementing and using DigiTool will be valuable to institutions that are interested in adopting this product for a similar purpose.