Search results
1 – 10 of 11
Getaneh Alemu, Brett Stevens, Penny Ross and Jane Chandler
Abstract
Purpose
The purpose of this paper is to provide recommendations for making a conceptual shift from current document‐centric to data‐centric metadata. The importance of adjusting current library models such as Resource Description and Access (RDA) and Functional Requirements for Bibliographic Records (FRBR) to models based on Linked Data principles is discussed. In relation to technical formats, the paper suggests the need to leapfrog from machine readable cataloguing (MARC) to Resource Description Framework (RDF), without disrupting current library metadata operations.
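The data-centric shift the authors describe can be illustrated with a minimal sketch: instead of a self-contained MARC record, each bibliographic statement becomes an RDF triple with its own URI-identified subject. The URIs and field values below are hypothetical examples, not from the paper; the serialisation shown is plain N-Triples built with the standard library.

```python
# Minimal sketch of a bibliographic record expressed as RDF triples
# (N-Triples serialisation), illustrating the document-centric to
# data-centric shift. All URIs and literals here are illustrative.

TRIPLES = [
    ("<http://example.org/book/1>",
     "<http://purl.org/dc/terms/title>",
     '"Linked Data for Libraries"'),
    ("<http://example.org/book/1>",
     "<http://purl.org/dc/terms/creator>",
     "<http://example.org/person/a1>"),
    ("<http://example.org/person/a1>",
     "<http://xmlns.com/foaf/0.1/name>",
     '"Example Author"'),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) tuples as N-Triples lines."""
    return "\n".join(f"{s} {p} {o} ." for s, p, o in triples)

print(to_ntriples(TRIPLES))
```

Because each triple stands alone, the creator statement can link out to a shared person URI rather than repeating a text string inside one record, which is the interoperability gain the paper argues for.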
Design/methodology/approach
This paper identified and reviewed relevant works on overarching topics that include standards‐based metadata, Web 2.0 and Linked Data. The review of these works is contextualised to inform the recommendations identified in this paper. Articles were retrieved from databases such as Emerald and D‐Lib Magazine. Books, electronic articles and relevant blog posts were also used to support the arguments put forward in this paper.
Findings
Contemporary library standards and models carried forward some of the constraints from the traditional card catalogue system. The resultant metadata are mainly attuned to human consumption rather than machine processing. In view of current user needs and technological development such as the interest in Linked Data, it is found important that current metadata models such as FRBR and RDA are re‐conceptualised.
Practical implications
This paper discusses the implications of re‐conceptualising current metadata models in light of Linked Data principles, with emphasis on metadata sharing, facilitation of serendipity, identification of Zeitgeist and emergent metadata, provision of faceted navigation, and enriching metadata with links.
Originality/value
Most of the literature on Linked Data for libraries focuses on answering the “how to” questions of using RDF/XML and SPARQL technologies; this paper, however, focuses mainly on answering the “why” questions, thus providing an underlying rationale for using Linked Data. The discussion of mixed‐metadata approaches, serendipity, Zeitgeist and emergent metadata provides an important rationale for the role of Linked Data in libraries.
Getaneh Alemu, Brett Stevens and Penny Ross
Abstract
Purpose
With the aim of developing a conceptual framework to facilitate semantic metadata interoperability, this paper explores overarching conceptual issues on how traditional library information organisation schemes such as online public access catalogues (OPACs), taxonomies, thesauri and ontologies, on the one hand, and Web 2.0 technologies such as social tagging (folksonomies), on the other, can be harnessed to provide users with satisfying experiences.
Design/methodology/approach
This paper reviews works in relation to current metadata creation, utilisation and interoperability approaches, focusing on how a social constructivist philosophical perspective can be employed to underpin metadata decisions in digital libraries. Articles are retrieved from databases such as EBSCO host and Emerald and online magazines such as D‐Lib and Ariadne. Books, news articles and blog posts that are deemed relevant are also used to support the arguments put forward in this paper.
Findings
Current metadata approaches are deeply authoritative and metadata deployments in digital libraries tend to favour an objectivist approach with focus on metadata simplicity. It is argued that unless information objects are enriched with metadata generated through a collaborative and user‐driven approach, achieving semantic metadata interoperability in digital libraries will remain difficult.
Practical implications
In this paper, it is indicated that the number of metadata elements (fields) constituting a standard has a direct bearing on metadata richness, which in turn directly affects semantic interoperability. It is expected that this paper will contribute towards a better understanding of harnessing user‐driven metadata.
Originality/value
As suggested in this paper, a conceptual metadata framework underpinned by a social constructivist approach substantially contributes to semantic interoperability in digital libraries.
Soohyung Joo, Darra Hofman and Youngseek Kim
Abstract
Purpose
The purpose of this paper is to explore the breadth of the challenges and issues facing institutional repositories in academic libraries, based on a survey of academic librarians. Particularly, this study covers the challenges and barriers related to data management facing institutional repositories.
Design/methodology/approach
The study uses a survey method to identify the relative significance of major challenges facing institutional repositories across six dimensions, including: data, metadata, technological requirements, user needs, ethical concerns and administrative challenges.
Findings
The results of the survey reveal that academic librarians identify limited resources, including insufficient budget and staff, as the major factor preventing the development and/or deployment of services in institutional repositories. The study also highlights crucial challenges in different dimensions of institutional repositories, including the sheer amount of data, institutional support for metadata creation and the sensitivity of data.
Originality/value
This study is one of the few to comprehensively identify the variety of challenges that institutional repositories operating in academic libraries face, with a focus on data management. In this study, 37 types of challenges were identified across six dimensions of institutional repositories. More importantly, the significance of those challenges was assessed from the perspective of academic librarians involved in institutional repository services.
William Y. Arms, Naomi Dushay, Dave Fulker and Carl Lagoze
Abstract
This paper describes the use of the Open Archives Initiative Protocol for Metadata Harvesting in the NSF’s National Science Digital Library (NSDL). The protocol is used both as a method to ingest metadata into a central Metadata Repository and also as the means by which the repository exports metadata to service providers. The NSDL Search Service is used to illustrate this architecture. An early version of the Metadata Repository was an alpha test site for version 1 of the protocol and the production repository was a beta test site for version 2. This paper describes the implementation experience and early practical tests. Despite some teething troubles and the long‐term difficulties of semantic compatibility, the overall conclusion is optimism that the Open Archive Initiative will be a successful part of the NSDL.
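The harvesting exchange the paper describes follows a simple request-response pattern: a service issues an OAI-PMH verb such as ListRecords and parses the Dublin Core metadata in the XML reply. The sketch below, using only the standard library, shows that pattern; the endpoint URL and sample response are illustrative assumptions, not real NSDL data.

```python
# Sketch of an OAI-PMH ListRecords exchange: building the harvest URL
# and extracting Dublin Core titles from a response. The endpoint and
# the sample XML are illustrative, not taken from the NSDL.
from urllib.parse import urlencode
import xml.etree.ElementTree as ET

def list_records_url(base_url, metadata_prefix="oai_dc"):
    """Build an OAI-PMH ListRecords request URL."""
    return base_url + "?" + urlencode(
        {"verb": "ListRecords", "metadataPrefix": metadata_prefix})

SAMPLE = """<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords><record><metadata>
    <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
               xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>An Example Resource</dc:title>
    </oai_dc:dc>
  </metadata></record></ListRecords></OAI-PMH>"""

def harvest_titles(xml_text):
    """Pull dc:title values out of a ListRecords response."""
    root = ET.fromstring(xml_text)
    dc_title = "{http://purl.org/dc/elements/1.1/}title"
    return [el.text for el in root.iter(dc_title)]

print(list_records_url("https://repo.example.org/oai"))
print(harvest_titles(SAMPLE))
```

The same protocol serves both ingest into the central Metadata Repository and export to service providers; only the base URL changes, which is what makes the architecture symmetric.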
Abstract
Purpose
The purpose of this paper is to investigate the search behavior of institutional repository (IR) users in regard to subjects as a means of estimating the potential impact of applying a controlled subject vocabulary to an IR.
Design/methodology/approach
Google Analytics data were used to record cases where users arrived at an IR item page from an external web search and subsequently downloaded content. Search queries were compared against the Faceted Application of Subject Terminology (FAST) schema to determine the topical nature of the queries. Queries were also compared against the item’s metadata values for title and subject using approximate string matching to determine the alignment of the queries with current metadata values.
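The approximate string matching step can be sketched as follows: each query is scored against an item's title and subject values, and a match is counted when the similarity clears a threshold. This is a minimal sketch using the standard library's sequence matcher; the threshold and sample record are illustrative assumptions, not the study's actual method or data.

```python
# Sketch of approximate string matching between a user search query
# and an item's metadata values. Threshold and sample record are
# illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a, b):
    """Normalised similarity ratio between two strings (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def best_match(query, metadata_values, threshold=0.6):
    """Return the metadata value most similar to the query, if any
    value's similarity reaches the threshold; otherwise None."""
    scored = [(similarity(query, v), v) for v in metadata_values]
    score, value = max(scored)
    return value if score >= threshold else None

record = {"title": "Climate change and coastal erosion",
          "subject": "Coastal ecology"}
print(best_match("coastal ecology", record.values()))  # → Coastal ecology
```

Comparing queries both against existing metadata and against FAST headings in this way is what lets the study quantify which vocabulary aligns better with real user searches.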
Findings
A substantial portion of successful user search queries to an IR appear to be topical in nature. User search queries matched values from FAST at a higher rate than existing subject metadata. Increased attention to subject description in IR records may provide an opportunity to improve the search visibility of the content.
Research limitations/implications
The study is limited to a particular IR. Data from Google Analytics does not provide comprehensive search query data.
Originality/value
The study presents a novel method for analyzing user search behavior to assist IR managers in determining whether to invest in applying controlled subject vocabularies to IR content.
Sumeer Gul, Tariq Ahmad Shah, Suhail Ahmad, Farzana Gulzar and Taseen Shabir
Abstract
Purpose
The study aims to showcase the developmental perspective of “grey literature” and its importance to different sectors of the society. Furthermore, issues, challenges and possibilities concerned with the existence of “grey literature” have also been discoursed.
Design/methodology/approach
The study is based on the existing literature published in the field of “grey literature”, which was identified with the aid of three leading indexing and abstracting services: Web of Science, SciVerse Scopus and Google Scholar. Keywords such as grey literature, black literature, The Grey Journal, The International Journal on Grey Literature, International Conference on Grey Literature, non-conventional literature, semi-published literature, System for Information on Grey Literature in Europe (SIGLE), European Association for the Exploitation of Grey Literature (EAGLE), white literature, white papers, theses and dissertations, GreyNet, grey literature-electronic media, grey market, open access, OpenNet, open access repositories, institutional repositories, open archives, electronic theses and dissertations, institutional libraries, scholarly communication, access to knowledge, metadata standards for grey literature, metadata heterogeneity and disciplinary grey literature were searched in the selected databases. Both simple and advanced search features of the databases were used. Moreover, for more recent and updated information on the topic, the “citing articles” feature of the databases was also used; citing articles were consulted on the basis of their relevance to the subject content.
Findings
The study helps to understand the definitive framework and developmental perspective of “grey literature”. “Grey literature” has emerged as a promising vehicle for enhancing the visibility of ideas that were previously unexplored and little used. “Grey literature” has also overcome the problems and issues with its existence and adoption. Technology has played a catalytic role in eradicating, to a great extent, the issues and problems pertinent to “grey literature”.
Research limitations/implications
The study is based on the published literature that is indexed by only three databases, i.e. Web of Science, SciVerse Scopus and Google Scholar. Furthermore, some limited aspects of “grey literature” have been covered.
Practical implications
The study will be of great help to various stakeholders and policymakers in showcasing the value and importance of “grey literature” for better access and exploitation. It will also be of importance to those interested in knowing how literature tagged as grey has changed over time and how, through its unseen characteristics, it has evolved into an important source of information on a par with “white literature”.
Originality/value
The study tries to provide a demarcated and segregated outlook of the “grey literature”. It also focuses on various issues, problems and possibilities pertinent to the adoption and existence of “grey literature”.
Abstract
Purpose
The terms “digital curation” and “cyberinfrastructure” have been coined in the last decade to describe distinct but related concepts of how data can be managed, preserved, manipulated and made available for long‐term use. This paper aims to examine these two concepts.
Design/methodology/approach
The paper considers the origins of both terms and the communities that have been engaged with each of them, traces the development of the present digital environment in the USA and considers what this may mean for the future.
Findings
The paper reveals that each term has important attributes that contribute to a comprehensive understanding of the digital knowledge universe.
Originality/value
The paper reveals information about the development of digital preservation.
Abstract
Purpose
Digital preservation is a term that is a bit of an enigma to many people both in and out of the digital arena, but it will undoubtedly be important in an increasingly all-digital world. The underlying work relating to digital preservation is essential to the long-term care of digital media, but who is charged with addressing this type of work, and can policy serve to structure and also reflect this complex concept? The main point of interest for this study is to examine existing digital preservation policies at Association of Research Libraries (ARL) institutions and analyze the content of the policies. The purpose will be to determine if these policies are able to provide a robust framework for true digital preservation work at this point in time. First, an introduction is made to provide the structure of the study and background. Next, a literature review is provided, followed by an outline of the methods and results of the study, and finally a conclusion with recommendations for future research.
Design/methodology/approach
An analysis of digital preservation policy at ARL institutions is conducted, with recommendations provided for further research.
Findings
This study attempts to highlight the current state of digital preservation policies, reviewing both the positive elements and the shortcomings of policies at ARL member institutions. Of the institutions that responded to the call for policies (a 58 per cent response rate), 32 (26 per cent) currently have a digital preservation policy in place. In total, 23 of the 40 institutions without a current policy indicate there is, or will be, work to complete a policy within the coming year (2016-2017). A call can be made at this time for more in-depth research and analysis of the policies. Both effective (University of Houston, University of Florida, York University) and ineffective (Colorado State University, University of Texas, Virginia Tech) digital preservation policies were discovered during the course of the study, with many policies falling somewhere in the middle. Many institutions provided a good template for digital preservation but lacked details on how this work would be addressed and who would be completing it.
Research limitations/implications
Limited to ARL member institutions at the time of the study (January 2016).
Originality/value
There is currently a gap in analysis and research of digital preservation policies. This is an area of active policy creation for many institutions, and it will likely be a growing area for researchers to examine.
Abstract
As educational technology becomes pervasive, demand will grow for library content to be incorporated into courseware. Among the barriers impeding interoperability between libraries and educational tools is the difference in specifications commonly used for the exchange of digital objects and metadata. Among libraries, Metadata Encoding and Transmission Standard (METS) is a new but increasingly popular standard; the IMS content‐package (IMS‐CP) plays a parallel role in educational technology. This article describes how METS‐encoded library content can be converted into digital objects for IMS‐compliant systems through an XSLT‐based crosswalk. The conceptual models behind METS and IMS‐CP are compared, the design and limitations of an XSLT‐based translation are described, and the crosswalks are related to other techniques to enhance interoperability.
Abstract
Purpose
This paper aims to show how information in digital collections that have been catalogued using high‐quality metadata can be retrieved more easily by users of search engines such as Google.
Design/methodology/approach
The research and proposals described arose from an investigation into the observed phenomenon that pages from the Glasgow Digital Library (gdl.cdlr.strath.ac.uk) were regularly appearing near the top of Google search results shortly after publication, without any deliberate effort to achieve this. The reasons for this phenomenon are now well understood and are described in the second part of the paper. The first part provides context with a review of the impact of Google and a summary of recent initiatives by commercial publishers to make their content more visible to search engines.
Findings
The literature research provides firm evidence of a trend amongst publishers to ensure that their online content is indexed by Google, in recognition of its popularity with internet users. The practical research demonstrates how search engine accessibility can be compatible with use of established collection management principles and high‐quality metadata.
Originality/value
The concept of data shoogling is introduced, involving some simple techniques for metadata optimisation. Details of its practical application are given, to illustrate how those working in academic, cultural and public‐sector organisations could make their digital collections more easily accessible via search engines, without compromising any existing standards and practices.
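One simple metadata optimisation technique of the kind the paper alludes to is exposing catalogue metadata in the HTML head of each collection page, where search engines index it. The sketch below is an illustrative assumption about what such a step might look like, not the paper's actual method; the field names are hypothetical.

```python
# Illustrative sketch of one metadata optimisation step: rendering a
# catalogue record as HTML head elements that search engines index.
# Field names and values are hypothetical.
from html import escape

def metadata_to_head(record):
    """Render a metadata record as HTML <title> and <meta> tags."""
    lines = [f"<title>{escape(record['title'])}</title>"]
    for name in ("description", "keywords"):
        if name in record:
            lines.append(
                f'<meta name="{name}" content="{escape(record[name])}">')
    return "\n".join(lines)

print(metadata_to_head({
    "title": "Glasgow Digital Library: Example Item",
    "description": "A digitised item from an example collection.",
    "keywords": "digital library, metadata",
}))
```

Because the tags are generated from the same catalogue records librarians already maintain, this kind of exposure requires no change to existing standards and practices, which is the paper's central point.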