Search results

1 – 10 of over 4000
Article
Publication date: 20 September 2011

Mangala Anil Hirwade

Downloads
3656

Abstract

Purpose

The purpose of this paper is to investigate the metadata standards available worldwide and to study those in detail. The paper presents the detailed analysis of these standards.

Design/methodology/approach

The methodology adopted was investigative. Data were collected through e‐mails and visiting the relevant web sites.

Findings

No single existing metadata standard covers all the required facets. In such cases, an implementer is required to create a new scheme with the desired metadata elements, but such schemes raise a number of interoperability issues. Another solution to this problem is to create extra elements to fill gaps in coverage. Interoperability problems can be overcome by publishing the new elements along with their definitions, formats and so on.

Originality/value

Few studies have compared some of the well‐known standards. This study gives a detailed analysis of the metadata standards available worldwide and it will help in comparing and adopting the required standard for the repositories.

Details

Library Hi Tech News, vol. 28 no. 7
Type: Research Article
ISSN: 0741-9058

Keywords

Article
Publication date: 10 August 2021

Hirak Jyoti Hazarika, S. Ravikumar and Akash Handique

Downloads
42

Abstract

Purpose

This paper aims to present a novel DSpace-based medical image repository system planned explicitly for storing and retrieving clinical images using digital imaging and communication in medicine (DICOM) metadata standards. DSpace institutional repository software is widely used in an academic environment for accessing and mainly storing text-related files. DICOM images are particular types of images embedded with much system-generated metadata and organised using DICOM metadata standards.

Design/methodology/approach

The present paper discusses the use of institutional repository software (DSpace) for archiving DICOM images. In the current study, the authors integrated the DICOM metadata standard with DSpace, which is compatible with Dublin Core (DC) and the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH). After combining the DICOM standard with DSpace and testing the repository with a sample of 5,000 images, the retrieval results using various DICOM tags were very satisfactory. This study paves the way for the use of open source software (OSS) in storing and retrieving medical images.

Findings

In the first stage, the authors enabled DSpace to recognise DICOM (.dcm) files. In the second stage, a patch was developed so that DSpace, which has inbuilt DC metadata standards, could identify the DICOM metadata standard. Finally, in the third stage, retrieval efficiency was tested with 5,000 .dcm images using DICOM tags, and the results were very encouraging.

Research limitations/implications

A major limitation of this study is the size of the data set (5,000 DICOM images) with which the authors tested the system. System scalability has to be tested on various fronts, such as cloud and local servers with different configurations, for which a separate study would be needed.

Practical implications

Once this system is in place, DICOM users can store, retrieve and access images from the web platform. Furthermore, the proposed repository will serve as a warehouse of various DICOM images at a reasonable storage cost.

Originality/value

In addition to exploring the opportunities for free and open source software (FOSS) implementation in medical science, this study covers issues related to the performance of an open-source repository in retrieving and preserving medical images. The study created an open-source DICOM medical image library built on the DICOM metadata standard with the help of DSpace. It will thus generate value for library professionals, medical professionals and FOSS vendors seeking to understand the medical market in the context of FOSS.

Details

Collection and Curation, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2514-9326

Keywords
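
The DICOM-to-Dublin Core crosswalk described in the entry above can be sketched in a few lines. This is an illustrative sketch, not the authors' implementation: the tag numbers are standard DICOM data-element tags, but the mapping to DC fields and the `crosswalk` function are assumptions for illustration.

```python
# Illustrative sketch: crosswalking a few standard DICOM header tags to
# Dublin Core-style fields, as a repository ingest step might do.
# The DC mapping itself is a hypothetical choice, not the paper's.

# Standard DICOM (group, element) tags -> DC field names (assumed mapping)
DICOM_TO_DC = {
    (0x0008, 0x1030): "dc.title",       # Study Description
    (0x0008, 0x0020): "dc.date",        # Study Date
    (0x0008, 0x0060): "dc.type",        # Modality (CT, MR, ...)
    (0x0008, 0x0018): "dc.identifier",  # SOP Instance UID
}

def crosswalk(dicom_elements):
    """Map a dict of parsed DICOM tag values to a DC metadata record."""
    record = {"dc.format": "application/dicom"}
    for tag, dc_field in DICOM_TO_DC.items():
        if tag in dicom_elements:
            record[dc_field] = str(dicom_elements[tag])
    return record

# Example: header values as they might come out of a DICOM parser
sample = {
    (0x0008, 0x1030): "CT Abdomen",
    (0x0008, 0x0060): "CT",
    (0x0008, 0x0020): "20210810",
}
record = crosswalk(sample)
print(record["dc.type"])  # -> CT
```

A real deployment would parse the .dcm header with a DICOM library rather than receive a pre-parsed dict, but the crosswalk step itself would look much like this.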

Article
Publication date: 1 June 2004

Magda El‐Sherbini and George Klim

Downloads
7013

Abstract

Metadata standards existing today range from very complex to very simple. The relative simplicity or complexity of a metadata standard depends in large part on the resources for which it was created and the depth of description deemed necessary to make those resources accessible. This paper reviews the differences between metadata standards and current cataloging practices, and discusses how the various metadata standards are applied in libraries. In addressing these issues, the authors introduce definitions of key concepts of metadata and cataloging standards and provide an overview of the most common metadata schemes. The discussion of current cataloging practices includes an overview of the most commonly used cataloging practices and standards, the impact of metadata on library practice and the role of librarians in relation to metadata. The authors discuss the OhioLINK Electronic Theses and Dissertations (ETD) center as an example of how the Anglo‐American Cataloguing Rules, 2nd edition (AACR2) and Machine-Readable Cataloging (MARC21) are used as metadata to store, describe and access this unique information resource.

Details

The Electronic Library, vol. 22 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords

Article
Publication date: 18 June 2019

María Montenegro

Downloads
1196

Abstract

Purpose

The purpose of this paper is to investigate the underlying meanings, effects and cultural patterns of metadata standards, focusing on Dublin Core (DC), and explore the ways in which anticolonial metadata tools can be applied to exercise and promote Indigenous data sovereignty.

Design/methodology/approach

Applying an anticolonial approach, this paper examines the assumptions underpinning the stated roles of two of DC’s metadata elements, rights and creator. Based on that examination, the paper considers the limitations of DC for appropriately documenting Indigenous traditional knowledge (TK). Introduction of the TK labels and their implementation are put forward as an alternative method to such limitations in metadata standards.

Findings

The analysis of the rights and creator elements revealed that DC’s universality and supposed neutrality threaten the rightful attribution, specificity and dynamism of TK, undermining Indigenous data sovereignty. The paper advocates for alternative descriptive methods grounded within tribal sovereignty values while recognizing the difficulties of dealing with issues of interoperability by means of metadata standards given potentially innate tendencies to customization within communities.

Originality/value

This is the first paper to directly examine the implications of DC’s rights and creator elements for documenting TK. The paper identifies ethical practices and culturally appropriate tools that unsettle the universality claims of metadata standards. By introducing the TK labels, the paper contributes to the efforts of Indigenous communities to regain control and ownership of their cultural and intellectual property.

Details

Journal of Documentation, vol. 75 no. 4
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 18 January 2021

Chelsea Renshaw and Chern Li Liew

Abstract

Purpose

This paper aims to examine the attitudes and experiences of information professionals with the descriptive standards and collection management systems (CMSs) used for managing documentary heritage collections held by cultural heritage institutions in New Zealand (NZ). The aim is that such insights will inform decision-making around promoting the discoverability and accessibility of documentary heritage collections, in terms of advocating for appropriate system requirements when procuring or updating CMSs and applying descriptive standards.

Design/methodology/approach

A qualitative design was applied to investigate the attitudes and experiences of information professionals working in libraries, archives and records management institutions, museums and public galleries. Data were collected through semi-structured interviews with thirteen participants who worked across ten different cultural heritage institutions.

Findings

The findings reveal that variances among metadata in libraries, museums, public galleries, archives and records management institutions continue to lead to challenges around discovery and access of documentary heritage. If opportunities for connecting documentary heritage collections in the age of linked data are to be realized, the sector needs to work collectively to address these variances along with consideration of the CMSs used. The study findings highlight issues currently affecting the NZ cultural heritage sector goal to make collections discoverable and more widely accessible.

Originality/value

The findings highlight a need for deeper research into CMSs used by the cultural heritage sector as these systems have an impact on metadata management including constraining the application of appropriate descriptive standards for documentary heritage collections.

Article
Publication date: 25 April 2018

Deborah Maron and Melanie Feinberg

Downloads
1662

Abstract

Purpose

The purpose of this paper is to employ a case study of the Omeka content management system to demonstrate how the adoption and implementation of a metadata standard (in this case, Dublin Core) can result in contrasting rhetorical arguments regarding metadata utility, quality and reliability. In the Omeka example, the authors illustrate a conceptual disconnect in how two metadata stakeholders – standards creators and standards users – operationalize metadata quality. For standards creators such as the Dublin Core community, metadata quality involves implementing a standard properly, according to established usage principles; in contrast, for standards users like Omeka, metadata quality involves mere adoption of the standard, with little consideration of proper usage and accompanying principles.

Design/methodology/approach

The paper uses an approach based on rhetorical criticism. The paper aims to establish whether Omeka’s given ends (the position that Omeka claims to take regarding Dublin Core) align with Omeka’s guiding ends (Omeka’s actual argument regarding Dublin Core). To make this assessment, the paper examines both textual evidence (what Omeka says) and material-discursive evidence (what Omeka does).

Findings

The evidence shows that, while Omeka appears to argue that adopting the Dublin Core is an integral part of Omeka’s mission, the platform’s lack of support for Dublin Core implementation makes an opposing argument. Ultimately, Omeka argues that the appearance of adopting a standard is more important than its careful implementation.

Originality/value

This study contributes to our understanding of how metadata standards are understood and used in practice. The misalignment between Omeka’s position and the goals of the Dublin Core community suggests that Omeka, and some portion of its users, do not value metadata interoperability and aggregation in the same way that the Dublin Core community does. This indicates that, although certain values regarding standards adoption may be pervasive in the metadata community, these values are not equally shared amongst all stakeholders in a digital library ecosystem. The way that standards creators (Dublin Core) understand what it means to “adopt a standard” is different from the way that standards users (Omeka) understand what it means to “adopt a standard.”

Article
Publication date: 1 February 2001

Magda El‐Sherbini

Downloads
3715

Abstract

This article is a survey of representative metadata efforts comparing them to MARC 21 metadata in order to determine if new electronic formats require the development of a new set of standards. This study surveys the ongoing metadata projects in order to identify what types of metadata exist and how they are used and also compares and analyzes selected metadata elements in an attempt to illustrate how they are related to MARC 21 metadata format elements.

Details

Library Review, vol. 50 no. 1
Type: Research Article
ISSN: 0024-2535

Keywords

Article
Publication date: 6 January 2012

Getaneh Alemu, Brett Stevens and Penny Ross

Downloads
2793

Abstract

Purpose

With the aim of developing a conceptual framework to facilitate semantic metadata interoperability, this paper explores overarching conceptual issues on how traditional library information organisation schemes such as online public access catalogues (OPACs), taxonomies, thesauri and ontologies, on the one hand, and Web 2.0 technologies such as social tagging (folksonomies), on the other, can be harnessed to provide users with satisfying experiences.

Design/methodology/approach

This paper reviews works in relation to current metadata creation, utilisation and interoperability approaches, focusing on how a social constructivist philosophical perspective can be employed to underpin metadata decisions in digital libraries. Articles are retrieved from databases such as EBSCO host and Emerald and online magazines such as D‐Lib and Ariadne. Books, news articles and blog posts that are deemed relevant are also used to support the arguments put forward in this paper.

Findings

Current metadata approaches are deeply authoritative and metadata deployments in digital libraries tend to favour an objectivist approach with focus on metadata simplicity. It is argued that unless information objects are enriched with metadata generated through a collaborative and user‐driven approach, achieving semantic metadata interoperability in digital libraries will remain difficult.

Practical implications

In this paper, it is indicated that the number of metadata elements (fields) constituting a standard has a direct bearing on metadata richness, which in turn directly affects semantic interoperability. It is expected that this paper will contribute towards a better understanding of harnessing user‐driven metadata.

Originality/value

As suggested in this paper, a conceptual metadata framework underpinned by a social constructivist approach substantially contributes to semantic interoperability in digital libraries.

Details

New Library World, vol. 113 no. 1/2
Type: Research Article
ISSN: 0307-4803

Keywords

Article
Publication date: 13 June 2008

Joanne Evans, Barbara Reed and Sue McKemmish

Downloads
2334

Abstract

Purpose

The ability to establish sustainable frameworks for creating and managing recordkeeping metadata is one of the key challenges for recordkeeping in digital and networked environments. The purpose of this article is to give an overview of the Clever Recordkeeping Metadata Project, an Australian research project which sought to investigate how the movement of recordkeeping metadata between systems could be automated.

Design/methodology/approach

The project adopted an action research approach to the research, utilising a systems development method within this framework to iteratively build a prototype demonstrating how recordkeeping metadata could be created once in particular application environments, then used many times to meet a range of business and recordkeeping purposes.

Findings

Recordkeeping metadata interoperability, like recordkeeping metadata itself, is complex and dynamic. The research identifies the need for standards and tools to reflect and have the capacity to handle this complexity.

Originality/value

This paper provides insights into the complex nature of recordkeeping metadata and the kind of infrastructure that needs to be developed to support its automated capture and re‐use in integrated systems environments.

Details

Records Management Journal, vol. 18 no. 2
Type: Research Article
ISSN: 0956-5698

Keywords

Article
Publication date: 16 November 2012

Getaneh Alemu, Brett Stevens, Penny Ross and Jane Chandler

Downloads
6965

Abstract

Purpose

The purpose of this paper is to provide recommendations for making a conceptual shift from current document‐centric to data‐centric metadata. The importance of adjusting current library models such as Resource Description and Access (RDA) and Functional Requirements for Bibliographic Records (FRBR) to models based on Linked Data principles is discussed. In relation to technical formats, the paper suggests the need to leapfrog from machine readable cataloguing (MARC) to Resource Description Framework (RDF), without disrupting current library metadata operations.

Design/methodology/approach

This paper identified and reviewed relevant works on overarching topics that include standards‐based metadata, Web 2.0 and Linked Data. The review of these works is contextualised to inform the recommendations identified in this paper. Articles were retrieved from databases such as Emerald and D‐Lib Magazine. Books, electronic articles and relevant blog posts were also used to support the arguments put forward in this paper.

Findings

Contemporary library standards and models carried forward some of the constraints from the traditional card catalogue system. The resultant metadata are mainly attuned to human consumption rather than machine processing. In view of current user needs and technological development such as the interest in Linked Data, it is found important that current metadata models such as FRBR and RDA are re‐conceptualised.

Practical implications

This paper discusses the implications of re‐conceptualising current metadata models in light of Linked Data principles, with emphasis on metadata sharing, facilitation of serendipity, identification of Zeitgeist and emergent metadata, provision of faceted navigation, and enriching metadata with links.

Originality/value

Most of the literature on Linked Data for libraries focuses on answering the "how to" questions of using RDF/XML and SPARQL technologies; this paper, however, focuses mainly on answering the "why" questions of Linked Data, thus providing an underlying rationale for its use. The discussion of mixed‐metadata approaches, serendipity, Zeitgeist and emergent metadata provides an important rationale for the role of Linked Data in libraries.
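
The shift from document-centric to data-centric metadata that this entry argues for can be illustrated with a minimal example: one bibliographic record expressed as RDF triples using Dublin Core terms, so that each field becomes machine-processable data rather than a line in a display record. This is a hand-rolled illustrative sketch (a real system would use an RDF library); the subject URI and field values are hypothetical.

```python
# Sketch: serialise one bibliographic record as Turtle triples using
# Dublin Core terms. The DCTERMS namespace URI is the real DC terms
# namespace; the record content and subject URI are made up.

DCTERMS = "http://purl.org/dc/terms/"

def to_turtle(subject_uri, fields):
    """Serialise a dict of DC term -> literal value as Turtle triples."""
    props = [f'    <{DCTERMS}{term}> "{value}"' for term, value in fields.items()]
    return f"<{subject_uri}>\n" + " ;\n".join(props) + " ."

# Hypothetical record: a document-centric catalogue line becomes three triples
record = {
    "title": "Linked Data for libraries",
    "creator": "Alemu, Getaneh",
    "date": "2012",
}
print(to_turtle("http://example.org/item/1", record))
```

Once a record is in this form, each statement can be queried with SPARQL or linked to external vocabularies, which is the interoperability gain the paper points to.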
