Search results

1 – 10 of 21
Article
Publication date: 2 October 2018

Senan Kiryakos and Shigeo Sugimoto

Abstract

Purpose

Multiple studies have illustrated that the needs of various users seeking descriptive bibliographic data for pop culture resources (e.g. manga, anime, video games) have not been properly met by cultural heritage institutions and traditional models. With a focus on manga as the central resource, the purpose of this paper is to address these issues to better meet user needs.

Design/methodology/approach

Based on an analysis of existing bibliographic metadata, this paper proposes a unique bibliographic hierarchy for manga that is also extendable to other pop culture resources. To better meet user requirements for descriptive data, an aggregation-based approach relying on the Open Archives Initiative Object Reuse and Exchange (OAI-ORE) model utilized existing, fan-created data on the web.
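As a rough illustration (not taken from the paper), the aggregation-based approach can be sketched as an OAI-ORE Aggregation that groups fan-created web resources describing one manga series into a single bibliographic entity, with a Resource Map enumerating what is aggregated. The URLs, class names and property keys below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class AggregatedResource:
    """One fan-created web resource (URL and a short note on what it holds)."""
    uri: str
    description: str

@dataclass
class Aggregation:
    """An OAI-ORE Aggregation: a set of related web resources treated as a
    single bibliographic entity (here, one manga series)."""
    uri: str
    resources: list = field(default_factory=list)

    def aggregate(self, resource: AggregatedResource) -> None:
        self.resources.append(resource)

    def resource_map(self) -> dict:
        # A Resource Map (ReM) describes the Aggregation and lists
        # its ore:aggregates links; rendered here as a plain dict
        # rather than a full RDF serialization.
        return {
            "@id": self.uri + "/rem",
            "ore:describes": self.uri,
            "ore:aggregates": [r.uri for r in self.resources],
        }

# Hypothetical fan-created sources for a single series.
series = Aggregation("http://example.org/aggregation/some-manga")
series.aggregate(AggregatedResource("http://example.org/fanwiki/characters",
                                    "character data"))
series.aggregate(AggregatedResource("http://example.org/fandb/volumes",
                                    "volume summaries"))
rem = series.resource_map()
# rem["ore:aggregates"] now lists both fan-created resources.
```

A real implementation would serialize the Resource Map in an RDF format such as RDF/XML or Atom, as the OAI-ORE specification describes.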

Findings

Compared with existing models, the proposed hierarchy better portrays the multiple entities of manga as they exist across data providers, while the use of OAI-ORE-based aggregation to build and provide bibliographic metadata for this hierarchy resulted in levels of description that more adequately meet user demands.

Originality/value

Though studies have proposed alternative models for resources such as games and comics, manga has remained unexamined. As manga is a major component of many popular multimedia franchises, focusing on it while building a model intended to support other resource types provides a foundation for future work seeking to incorporate these resources.

Details

Journal of Documentation, vol. 75 no. 2
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 8 February 2013

Hesamedin Hakimjavadi and Mohamad Noorman Masrek

Abstract

Purpose

The purpose of this study is to evaluate the status of eight interoperability protocols within repositories of electronic theses and dissertations (ETDs) as an introduction to further studies on feasibility of deploying these protocols in upcoming areas of interoperability.

Design/methodology/approach

Three surveys of 266 ETD repositories, 15 common ETD management software solutions, and 136 ETD experts were conducted in order to appraise the protocols. These protocols were evaluated in four categories of aggregation, syndication, distributed search, and publishing protocols.

Findings

This study revealed that, despite its drawbacks, the Protocol for Metadata Harvesting (PMH) is still the most utilized interoperability protocol among ETD providers, ETD software developers, and implementers, followed by the ATOM and Object Reuse and Exchange (ORE) protocols. However, in all competitive areas related to performance and functionality, ORE surpasses the other protocols. It was also found that ATOM, PMH, and ORE could be used interchangeably in the most common repository interoperability use cases.

Practical implications

In this research, a combination of methods was employed to evaluate the status of protocols, from the perspectives of data providers, software providers, and implementers. Practitioners may use these methods to assess other protocols in terms of effectiveness and efficiency.

Originality/value

This study involved three types of surveys, through which different aspects of interoperability protocols were evaluated. No previous study of this topic has adopted the multi-method approach used here.

Article
Publication date: 15 June 2015

Miquel Termens, Mireia Ribera and Anita Locher

Abstract

Purpose

The purpose of this paper is to analyze the file formats of the digital objects stored in two of the largest open-access repositories in Spain, DDUB and TDX, and to determine the implications of these formats for long-term preservation, focusing in particular on the different versions of PDF.

Design/methodology/approach

To be able to study the two repositories, the authors harvested all the files corresponding to every digital object and some of their associated metadata using the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and Open Archives Initiative Object Reuse and Exchange (OAI-ORE) protocols. The file formats were analyzed with DROID software and some additional tools.
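As a rough sketch of the harvesting-and-analysis step (not the authors' actual code), an OAI-PMH ListRecords response can be parsed to tally the declared file formats per record. The sample XML below is fabricated for illustration; a real harvest would request records from the repository's OAI-PMH endpoint and page through resumptionToken responses before handing the downloaded files to a tool such as DROID.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Abridged, fabricated ListRecords response (not data from DDUB or TDX).
SAMPLE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record><metadata>
      <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:format>application/pdf</dc:format>
      </oai_dc:dc>
    </metadata></record>
    <record><metadata>
      <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                 xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:format>application/pdf</dc:format>
        <dc:format>image/tiff</dc:format>
      </oai_dc:dc>
    </metadata></record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def format_counts(xml_text: str) -> Counter:
    """Count dc:format values across all records in a ListRecords response."""
    root = ET.fromstring(xml_text)
    counts = Counter()
    for record in root.findall(".//oai:record", NS):
        for fmt in record.findall(".//dc:format", NS):
            counts[fmt.text.strip()] += 1
    return counts

counts = format_counts(SAMPLE)
# counts: Counter({'application/pdf': 2, 'image/tiff': 1})
```

Note that dc:format reflects what the repository declares; the paper's point is precisely that such declarations must be checked against the actual files with identification tools like DROID.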

Findings

The results show that there is no alignment between the preservation policies declared by institutions, the technical tools available, and the actual stored files.

Originality/value

The results show that the file controls currently applied to institutional repositories do not suffice to fulfil their stated mission of long-term preservation of scientific literature.

Details

Library Hi Tech, vol. 33 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 25 January 2008

Abstract

Details

Library Hi Tech News, vol. 25 no. 1
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 27 August 2014

Paolo Manghi, Michele Artini, Claudio Atzori, Alessia Bardi, Andrea Mannocci, Sandro La Bruzzo, Leonardo Candela, Donatella Castelli and Pasquale Pagano

Abstract

Purpose

The purpose of this paper is to present the architectural principles and the services of the D-NET software toolkit. D-NET is a framework where designers and developers find the tools for constructing and operating aggregative infrastructures (systems for aggregating data sources with heterogeneous data models and technologies) in a cost-effective way. Designers and developers can select from a variety of D-NET data management services, can configure them to handle data according to given data models, and can construct autonomic workflows to obtain personalized aggregative infrastructures.

Design/methodology/approach

The paper provides a definition of aggregative infrastructures, sketching their architecture and components as inspired by real-case examples. It then describes the limits of current solutions, whose main shortcomings lie in the realization and maintenance costs of such complex software. It proposes D-NET as an optimal solution for designers and developers seeking to realize aggregative infrastructures, presenting the D-NET architecture and services and drawing a parallel with those of aggregative infrastructures. Finally, real cases of D-NET use are presented to showcase this claim.

Findings

The D-NET software toolkit is a general-purpose service-oriented framework in which designers can construct customized, robust, scalable, autonomic aggregative infrastructures in a cost-effective way. D-NET is today adopted by several EC projects, national consortia and communities to create customized infrastructures in diverse application domains, and other organizations are inquiring about or experimenting with its adoption. Its customizability and extensibility make D-NET a suitable candidate for creating aggregative infrastructures that mediate between different scientific domains and therefore support multi-disciplinary research.

Originality/value

D-NET is the first general-purpose framework of this kind. Other solutions are available in the literature but focus on specific use cases and therefore suffer from limited re-use in other contexts. Owing to its maturity, D-NET can also be used by third-party organizations not necessarily involved in its design and maintenance.

Details

Program, vol. 48 no. 4
Type: Research Article
ISSN: 0033-0337

Article
Publication date: 1 December 2006

Abstract

Details

Library Hi Tech News, vol. 23 no. 10
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 8 August 2008

Abstract

Details

Library Hi Tech News, vol. 25 no. 7
Type: Research Article
ISSN: 0741-9058

Article
Publication date: 28 June 2022

Sneha Bharti and Ranjeet Kumar Singh

Abstract

Purpose

While the obstacles to archiving endangered languages are significant, the question of which platform is best for building a digital language archive is ever-present. The purpose of this study is to evaluate and analyse digital language archive development platforms, such as content management systems (CMSs), digital repositories and archival collections management systems (ACMSs), using a defined set of parameters. The authors selected Mukurtu CMS, which is based on Drupal; DSpace as the digital repository software; and ArchivesSpace as the ACMS.

Design/methodology/approach

The current research is supported by a study of the literature and a detailed exploration of different systems used to develop digital language archives. The whole research is carried out in three steps: literature searching; identification of relevant literature; and parameter identification, exploration of tools and data reporting and analysis.

Findings

The technical and feature analysis shows that these tools are broadly comparable; all are constantly evolving, regularly updated and backed by growing community bases. DSpace is the most popular platform, but the other two, particularly ArchivesSpace, are strong competitors.

Research limitations/implications

This study outlines the technical prerequisites for creating a digital language archive, which will be useful to IT personnel working on these projects. The research is also useful for tool developers as it allows them to incorporate missing functionality and technical standards by comparing them to alternatives. The parameters established in this study can be used for similar studies in other domains, as well as for evaluating existing digital language archives.

Practical implications

The findings of this study have broad practical implications, and they can assist archivists, linguists, language communities and library and information science professionals in choosing an appropriate platform for building a digital language archive.

Originality/value

This study finds that relatively little effort has been made to review digital language archiving and the systems used for it; this study therefore assesses and analyses digital language archive creation systems against defined parameters. The parameters were identified through a combination of the available literature and tool exploration. A parametric approach to evaluating tools yields unique insights and quickly reveals system flaws.

Article
Publication date: 30 September 2020

Lisa Kruesi, Frada Burstein and Kerry Tanner

Abstract

Purpose

The purpose of this study is to assess the opportunity for a distributed, networked open biomedical repository (OBR) using a knowledge management system (KMS) conceptual framework. An innovative KMS conceptual framework is proposed to guide the transition from a traditional, siloed approach to a sustainable OBR.

Design/methodology/approach

This paper reports on a cycle of action research, involving literature review, interviews and focus group with leaders in biomedical research, open science and librarianship, and an audit of elements needed for an Australasian OBR; these, along with an Australian KM standard, informed the resultant KMS framework.

Findings

The proposed KMS framework aligns the requirements for an OBR with the people, process, technology and content elements of the KM standard. It identifies and defines nine processes underpinning biomedical knowledge – discovery, creation, representation, classification, storage, retrieval, dissemination, transfer and translation. The results comprise an explanation of these processes and examples of the people, process, technology and content dimensions of each process. While the repository is an integral cog within the collaborative, distributed open science network, its effectiveness depends on understanding the relationships and linkages between system elements and achieving an appropriate balance between them.

Research limitations/implications

The current research has focused on biomedicine. This research builds on the worldwide effort to reduce barriers, in particular paywalls to health knowledge. The findings present an opportunity to rationalize and improve a KMS integral to biomedical knowledge.

Practical implications

Adoption of the KMS framework for a distributed, networked OBR will facilitate open science through reducing duplication of effort, removing barriers to the flow of knowledge and ensuring effective management of biomedical knowledge.

Social implications

Ongoing use of the framework by researchers, industry and consumers makes it possible to achieve quality, permanence and discoverability of a region's digital assets.

Originality/value

The framework demonstrates the dependencies and interplay of elements and processes to frame an OBR KMS.

Details

Journal of Knowledge Management, vol. 24 no. 10
Type: Research Article
ISSN: 1367-3270

Article
Publication date: 7 August 2009

Abstract

Details

Library Hi Tech News, vol. 26 no. 7
Type: Research Article
ISSN: 0741-9058
