Search results
1 – 10 of 850
Abstract
Purpose
Academic and research libraries have undergone considerable change over the last two decades. Users have become technology savvy and want to discover and use library collections via web portals instead of coming to library gateways. To meet these rapidly changing user needs, academic and research libraries are busy identifying new service models and areas for improvement. Cataloging and metadata services units in academic and research libraries are no exception. As discovery of library collections depends largely on the quality and design of metadata, cataloging and metadata services units must identify new areas of work and establish new roles by building sustainable workflows that use available metadata technologies. The paper aims to discuss these issues.
Design/methodology/approach
This paper discusses the challenges that academic libraries’ cataloging and metadata services units have encountered over the years, and ways to build sustainable workflows, including collaboration between units within and outside the institution and in the cloud; tools, technologies, metadata standards and semantic web technologies; and, most importantly, exploration and research. The paper also includes examples and use cases of both traditional metadata workflows and experimentation with linked open data that were built upon metadata technologies and will ultimately support emerging user needs.
Findings
To develop sustainable and scalable workflows that meet users’ changing needs, cataloging and metadata professionals need not only to work with new information technologies, but must also be equipped with soft skills and in-depth professional knowledge.
Originality/value
This paper discusses how cataloging and metadata services units have been exploiting information technologies and creating new scalable workflows to adapt to these changes, and what is required to establish and maintain these workflows.
Gordon Dunsire and Mirna Willer
Abstract
Purpose
There has been a significant increase in activity over the past few years to integrate library metadata with the Semantic Web. While much of this has involved the development of controlled vocabularies as “linked data”, there have recently been concerted attempts to represent standard library models for bibliographic metadata in forms that are compatible with Semantic Web technologies. This paper aims to give an overview of these initiatives, describing relationships between them in the context of the Semantic Web.
Design/methodology/approach
The paper focusses on standards created and maintained by the International Federation of Library Associations and Institutions, including Functional Requirements for Bibliographic Records, Functional Requirements for Authority Data, and International Standard Bibliographic Description. It also covers related standards and models such as RDA – Resource Description and Access, REICAT (the new Italian cataloguing rules) and CIDOC Conceptual Reference Model, and the technical infrastructure for supporting relationships between them, including the RDA/ONIX framework for resource categorization, and Vocabulary Mapping Framework.
Findings
The paper discusses the importance of these developments for releasing the rich metadata held by libraries as linked data, addressing semantic and statistical inferencing, integration with user‐ and machine‐generated metadata, and authenticity, veracity and trust. It also discusses the representation of controlled vocabularies, including subject classifications and headings, name authorities, and terminologies for descriptive content, in a multilingual environment.
Practical implications
Finally, the paper discusses the potential collective impact of these initiatives on metadata workflows and management systems.
Originality/value
The paper provides a general review of recent activity for those interested in the development of library standards, the Semantic Web, and universal bibliographic control.
Mohammad Nasir Uddin and Paul Janecek
Abstract
Purpose
The aim of this paper is to develop and implement a multidimensional classification system on the web that provides an alternative, convenient structure for organising and finding information content.
Design/methodology/approach
A prototype system is developed following Ranganathan’s principles of faceted classification: it provides multiple classifications of web documents through content‐oriented metadata organised under different facets (orthogonal groups of categories).
Findings
Based on an architectural framework, this study demonstrates a prototype faceted classification system (FCS) that is integrated into a general open‐source content management system and populated with a sample collection of institutional web pages/documents.
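The core idea of a faceted classification system can be sketched as documents carrying values in several orthogonal facets, with retrieval as the intersection of selected facet values. This is a minimal illustrative sketch only; the facet names and documents are hypothetical, not drawn from the prototype described above.

```python
# Minimal faceted-classification sketch: each document carries values in
# several orthogonal facets; a query selects the intersection of criteria.
# Facet names and documents are hypothetical examples.

docs = [
    {"title": "Admissions FAQ",  "facets": {"audience": "students", "topic": "admissions", "format": "html"}},
    {"title": "Research policy", "facets": {"audience": "faculty",  "topic": "research",   "format": "pdf"}},
    {"title": "Library hours",   "facets": {"audience": "students", "topic": "library",    "format": "html"}},
]

def facet_filter(docs, **criteria):
    """Return documents whose facet values match every given criterion."""
    return [d for d in docs
            if all(d["facets"].get(f) == v for f, v in criteria.items())]

hits = facet_filter(docs, audience="students", format="html")
```

Because the facets are orthogonal, any combination of criteria can be applied without a predefined hierarchy, which is what distinguishes this from a single enumerative classification tree.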
Originality/value
The study provides significant grounds for the IR community to improve interface structure for easy access, management, and retrieval of web information. In addition, the integration of content management tools with multidimensional taxonomies can be a new instance of a corporate web system for easy content creation, organisation, and navigation.
Abstract
This chapter helps us to understand the staffing and workflow ramifications of Linked Data. A survey of the current state of metadata work, compared to the possibilities and intentions of Linked Data modeling and technology, allows us to make a needs assessment for future planning. Findings are that current trends in metadata work – distributed production alongside centralized management, iterative and collaborative resource description – are appropriate in a Linked Data environment, and should be further cultivated. A plan for training staff on the conceptual modeling of Linked Data is also outlined, together providing a launching pad to begin organizational planning for Linked Data.
Xiaocan (Lucy) Wang, Natalie Bulick and Valentine Muyumba
Abstract
Purpose
The purpose of this paper is to describe the Electronic Theses and Dissertations (ETD) program implemented and managed by Indiana State University since 2009. The paper illustrates issues relating to the background, policies, platform, workflow and cataloging, as well as the publication and preservation of graduate scholarship.
Design/methodology/approach
The authors examined many aspects of the Electronic Theses and Dissertations program and addressed issues dealt with before, during and after the publication of the electronic theses and dissertations collection. The approaches used are a literature review and the authors’ management experience from working on the program.
Findings
Implementing an Electronic Theses and Dissertations program involves providing a series of management services. These services include developing relevant policies, implementing an archiving and publication platform and creating submission and publishing workflows, as well as cataloging, disseminating and preserving the student collection. Openly publishing the collection through a range of access points significantly increases its visibility and accessibility. Adopting several archival and preservation strategies ensures the long-term readiness of the collection.
Originality/value
This paper will provide useful practices for implementing an ETD program to those institutions new to the ETD initiative process. It also contributes to the current body of literature and to the overall improvement of ETD programs globally.
Abstract
Purpose
This paper examines the role of artificial intelligence (AI) in automating library cataloging and classification processes, covering current applications, challenges and future possibilities. It aims to provide insights into how AI technologies are reshaping traditional library practices and their implications for the future of information organization and access.
Design/methodology/approach
The paper presents a comprehensive review, analyzing recent research and developments in AI applications for library cataloging and classification. It covers traditional methods, relevant AI technologies, implementation challenges, impacts on library workflows and future directions.
Findings
AI technologies, particularly machine learning and natural language processing, offer significant potential for enhancing efficiency, consistency and depth in metadata creation and classification. However, implementation challenges include data quality issues, integration with legacy systems and the need for new skill sets among library professionals. The impact on library workflows is profound, necessitating a reimagining of traditional librarian responsibilities. Future developments promise more advanced capabilities in personalized discovery, adaptive classification schemes and predictive collection development.
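To make the idea of automated subject assignment concrete, the following toy sketch assigns a subject heading by term overlap with a controlled vocabulary. The headings and term sets are hypothetical, and real systems of the kind surveyed above would use trained machine learning or NLP models rather than simple keyword counts.

```python
# Toy illustration of automated subject assignment via term overlap.
# SUBJECT_TERMS is a hypothetical controlled vocabulary; production
# systems would use trained ML/NLP models instead of keyword matching.

SUBJECT_TERMS = {
    "Machine learning": {"training", "model", "classifier", "neural"},
    "Cataloging":       {"marc", "record", "authority", "heading"},
}

def suggest_subject(text):
    """Return the best-matching subject heading, or None if no overlap."""
    tokens = set(text.lower().split())
    scores = {subject: len(tokens & terms)
              for subject, terms in SUBJECT_TERMS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Even this crude approach shows why human review remains essential: a zero-overlap record gets no heading at all, and near-ties between headings would need a cataloger's judgment.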
Originality/value
This paper provides a holistic overview of AI’s impact on library cataloging and classification, synthesizing current research and future trends. It highlights the delicate balance required in leveraging AI to enhance library services while upholding core library values. The paper emphasizes the need for ongoing critical engagement with these technologies to shape the future of library services in the AI era.
Janet Kahkonen Smith, Roger L. Cunningham and Stephen P. Sarapata
Abstract
This paper describes the way in which the USMARC cataloging schema is used at the Eisenhower National Clearinghouse (ENC). The discussion includes how ENC MARC extensions were developed for cataloging mathematics and science curriculum resources, and how the ENC workflow is integrated into the cataloging interface. It concludes with a historical look at the in‐house data transfer from ENC MARC to the current production of IEEE LOM XML encoding for record sharing and OAI compliance, required under the NSDL project guidelines.
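The kind of in-house MARC-to-LOM transfer described above can be sketched as emitting a LOM-style XML fragment from MARC-like fields. The element names follow the LOM "general" category, but the MARC tag choices here are simplified assumptions for illustration, not ENC's actual mapping.

```python
# Hedged sketch of emitting an IEEE LOM-style XML fragment from a
# MARC-like record. Tag choices (245 = title, 520 = description) are
# simplified assumptions, not the ENC production mapping.
import xml.etree.ElementTree as ET

def marc_to_lom(fields):
    """fields: list of (marc_tag, value) pairs; returns a <lom> XML string."""
    lom = ET.Element("lom")
    general = ET.SubElement(lom, "general")
    for tag, value in fields:
        if tag == "245":
            ET.SubElement(general, "title").text = value
        elif tag == "520":
            ET.SubElement(general, "description").text = value
    return ET.tostring(lom, encoding="unicode")

xml_out = marc_to_lom([("245", "Algebra unit"), ("520", "Grade 8 module")])
```

A production pipeline would also carry LOM namespaces, language attributes and the repeatable-field semantics that MARC records require.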
Sai Deng and Terry Reese
Abstract
Purpose
The purpose of this paper is to present methods for customized mapping and metadata transfer from DSpace to the Online Computer Library Center (OCLC). The aim is to improve the Electronic Theses and Dissertations (ETD) workflow at libraries that use DSpace to store theses and dissertations, by automating the generation of MARC records from Dublin Core (DC) metadata in DSpace and their export to OCLC.
Design/methodology/approach
This paper discusses how the Shocker Open Access Repository (SOAR) at Wichita State University (WSU) Libraries and ScholarsArchive at Oregon State University (OSU) Libraries harvest theses data from the DSpace platform using the Metadata Harvester in MarcEdit, developed by Terry Reese at OSU Libraries. It analyzes challenges in the transformation of harvested data, including the handling of authorized data, data ambiguity and string processing. It also addresses how these two institutions customize the Library of Congress’s XSLT (eXtensible Stylesheet Language Transformations) mapping to transform DC metadata to MARCXML, and how they export MARC data to OCLC and Voyager.
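The essence of such a crosswalk is a table from Dublin Core elements to MARC tags applied over each harvested record. This minimal sketch follows the spirit of the Library of Congress DC-to-MARC crosswalk (title → 245, creator → 720, description → 520), but the actual workflow above uses MarcEdit and customized XSLT, which additionally handle indicators, authority data and repeatable fields.

```python
# Simplified Dublin Core -> MARC crosswalk sketch. Field choices follow
# the LC DC-to-MARC crosswalk; indicators, subfield codes and authority
# handling are omitted for clarity.

DC_TO_MARC = {
    "title":       "245",
    "creator":     "720",
    "description": "520",
}

def dc_to_marc(dc_record):
    """Map a flat Dublin Core dict to a sorted list of (tag, value) fields."""
    fields = []
    for element, value in dc_record.items():
        tag = DC_TO_MARC.get(element)
        if tag:
            fields.append((tag, value))
    return sorted(fields)

fields = dc_to_marc({"title": "Sample thesis", "creator": "Doe, Jane"})
```

Elements without a mapping are silently dropped here; in practice those are exactly the cases that require the case‐by‐case analysis the Findings below mention.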
Findings
The customized mapping and data transformation for ETD data can be standardized while also requiring a case‐by‐case analysis. By offering two institutions' experiences, it provides information on the benefits and limitations for those institutions that are interested in using MarcEdit and customized XSLT to transform their ETDs from DSpace to OCLC and Voyager.
Originality/value
The new method described in the paper can eliminate the need for double entry in DSpace and OCLC, meet local needs and significantly improve the ETD workflow. It offers perspectives on repurposing and managing metadata in a standard and customizable way.
Laurent Remy, Dragan Ivanović, Maria Theodoridou, Athina Kritsotaki, Paul Martin, Daniele Bailo, Manuela Sbarra, Zhiming Zhao and Keith Jeffery
Abstract
Purpose
The purpose of this paper is to boost multidisciplinary research by building an integrated catalogue of research assets metadata. Such an integrated catalogue should enable researchers to solve problems or analyse phenomena that require a view across several scientific domains.
Design/methodology/approach
There are two main approaches for integrating metadata catalogues provided by different e-science research infrastructures (e-RIs): centralised and distributed. The authors decided to implement a central metadata catalogue that describes, provides access to and records actions on the assets of a number of e-RIs participating in the system. The authors chose the CERIF data model for description of assets available via the integrated catalogue. Analysis of popular metadata formats used in e-RIs has been conducted, and mappings between popular formats and the CERIF data model have been defined using an XML-based tool for description and automatic execution of mappings.
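The declaratively defined, automatically executed mappings described above can be sketched as a mapping table applied over source records to emit triples for the target model. This is a loose analogue of the X3ML/3M approach only; the property names and record fields below are hypothetical, not the project's actual CERIF vocabulary.

```python
# Minimal analogue of a declaratively defined mapping executed over
# source metadata records, loosely in the spirit of X3ML/3M. The
# predicate URIs and record fields are hypothetical examples.

MAPPING = {  # source field -> target predicate
    "dc:title":   "cerif:name",
    "dc:creator": "cerif:creator",
}

def apply_mapping(subject_uri, record, mapping):
    """Emit (subject, predicate, object) triples per the mapping spec."""
    return [(subject_uri, mapping[field], value)
            for field, value in record.items() if field in mapping]

triples = apply_mapping(
    "urn:asset:42",
    {"dc:title": "Seismic dataset", "dc:creator": "EPOS"},
    MAPPING,
)
```

Keeping the mapping as data rather than code is what makes such mappings shareable: supporting a new source format means writing a new mapping table, not new transformation logic.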
Findings
An integrated catalogue of research assets metadata has been created. Metadata from e-RIs supporting Dublin Core, ISO 19139, DCAT-AP, EPOS-DCAT-AP, OIL-E and CKAN formats can be integrated into the catalogue. Metadata are stored in CERIF RDF in the integrated catalogue. A web portal for searching this catalogue has been implemented.
Research limitations/implications
Only five formats are supported at this moment. However, description of mappings between other source formats and the target CERIF format can be defined in the future using the 3M tool, an XML-based tool for describing X3ML mappings that can then be automatically executed on XML metadata records. The approach and best practices described in this paper can thus be applied in future mappings between other metadata formats.
Practical implications
The integrated catalogue is a part of the eVRE prototype, which is a result of the VRE4EIC H2020 project.
Social implications
The integrated catalogue should boost the performance of multi-disciplinary research; thus it has the potential to enhance the practice of data science and so contribute to an increasingly knowledge-based society.
Originality/value
A novel approach for creation of the integrated catalogue has been defined and implemented. The approach includes definition of mappings between various formats. Defined mappings are effective and shareable.