Search results
1 – 10 of 89
Abstract
This paper describes the MARCXML architecture implemented at the Library of Congress. It gives an overview of the component pieces of the architecture, including the MARCXML schema and the MARCXML toolkit, while giving a brief tutorial on their use. Several applications of the architecture and tools are discussed to illustrate the features of the toolkit developed thus far. Nearly any metadata format can take advantage of the features of the toolkit, and the process of enabling a new format in the toolkit is discussed. Finally, this paper intends to foster new ideas with regard to the transformation of descriptive metadata, especially using XML tools. In this paper the following conventions are used: MARC21 refers to MARC 21 records in the ISO 2709 record structure used today; MARCXML refers to MARC 21 records in an XML structure.
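As an illustration of the record structure the schema defines, here is a minimal sketch in Python (standard library only) that builds a MARCXML record in the MARC 21 "slim" namespace. The leader value and the choice of fields are placeholders for illustration, not details taken from the paper.

```python
import xml.etree.ElementTree as ET

# the official MARCXML ("slim") namespace published by the Library of Congress
MARCXML_NS = "http://www.loc.gov/MARC21/slim"


def build_marcxml_record(title, author):
    """Build a minimal MARCXML record as an ElementTree element."""
    ET.register_namespace("marc", MARCXML_NS)
    record = ET.Element(f"{{{MARCXML_NS}}}record")

    # placeholder leader; a real record would carry computed lengths and codes
    leader = ET.SubElement(record, f"{{{MARCXML_NS}}}leader")
    leader.text = "00000nam a2200000 a 4500"

    # 100: main entry, personal name
    f100 = ET.SubElement(record, f"{{{MARCXML_NS}}}datafield",
                         {"tag": "100", "ind1": "1", "ind2": " "})
    ET.SubElement(f100, f"{{{MARCXML_NS}}}subfield", {"code": "a"}).text = author

    # 245: title statement
    f245 = ET.SubElement(record, f"{{{MARCXML_NS}}}datafield",
                         {"tag": "245", "ind1": "1", "ind2": "0"})
    ET.SubElement(f245, f"{{{MARCXML_NS}}}subfield", {"code": "a"}).text = title

    return record


xml_bytes = ET.tostring(build_marcxml_record("Example title", "Doe, Jane"))
```

The same element names (`record`, `leader`, `datafield`, `subfield`) and attributes (`tag`, `ind1`, `ind2`, `code`) are what the MARCXML schema validates, which is what makes round-tripping with MARC21 in ISO 2709 possible.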
Sayyed Mahdi Taheri and Nadjla Hariri
Abstract
Purpose
The purpose of this research was to assess and compare the indexing and ranking of XML‐based content objects containing MARCXML and XML‐based Dublin Core (DCXML) metadata elements by general search engines (Google and Yahoo!), in a comparative analytical study.
Design/methodology/approach
One hundred XML content objects in two groups were analyzed: those with MARCXML elements (50 records) and those with DCXML elements (50 records), published on two web sites (www.dcmixml.islamicdoc.org and www.marcxml.islamicdoc.org). The web sites were then introduced to the Google and Yahoo! search engines.
Findings
The indexing of the metadata records and the differences in their indexing and ranking were examined using descriptive statistics and a non-parametric Mann-Whitney U test. The findings show that content objects could be made visible through all of their metadata elements. There was no significant difference between the two groups' indexing, but a difference was observed in their ranking.
Practical implications
The findings of this research can help search engine designers in the optimum use of metadata elements to improve their indexing and ranking process with the aim of increasing availability. The findings can also help web content object providers in the proper and efficient use of metadata systems.
Originality/value
This is the first research to examine the interoperability between XML-based metadata and web search engines, and the first to compare the MARC format and DCMI empirically.
A. Hossein Farajpahlou and Faeze Tabatabai
Abstract
Purpose
The aim of this paper is to examine the indexing quality and ranking of XML content objects containing Dublin Core and MARC 21 metadata elements in dynamic online information environments by general search engines such as Google and Yahoo!
Design/methodology/approach
In total, 100 XML content objects were divided into two groups: those with DCXML elements and those with MARCXML elements. Both groups were published on the web site www.marcdcmi.ir in late July 2009 and were online until June 2010. The web site was introduced to Google and Yahoo! search engines. The indexing quality of metadata elements embedded in the content objects in a dynamic online information environment and their indexing and ranking capabilities were compared and examined.
Findings
The Google search engine was able to retrieve all the content objects in full through their Dublin Core and MARC 21 metadata elements; the Yahoo! search engine, however, did not respond at all. The results showed that all Dublin Core and MARC 21 metadata elements were indexed by Google. No difference was observed between the indexing quality and ranking of DCXML metadata elements and those of MARCXML. The results also revealed that neither the XML-based Dublin Core Metadata Initiative nor MARC 21 confers any advantage in terms of access in dynamic online information environments through the Google search engine.
Practical implications
The findings can provide useful information for search engine designers.
Originality/value
The present study was conducted for the first time in dynamic environments using XML‐based metadata elements. It can provide grounds for further studies of this kind.
Abstract
Purpose
The main purpose of this paper is to propose a solution for data interoperability between library programs in Iran.
Design/methodology/approach
The research proceeded by establishing the essential role of interoperability in library programs for metadata exchange. The current situation was then analyzed using a researcher-made checklist, the problems and shortcomings in the field of interoperability were highlighted, and ways to overcome them were identified.
Findings
The majority of library software in Iran does not support data exchange. Most packages use ISO 2709 as an export format and rarely support other formats. Moreover, most of them use a Z39.50 client to obtain information from the Library of Congress and the Iranian National Library. Because none of them provides a server-side service, they cannot exchange data with one another. The proposed model introduces harvesting metadata through an OAI-PMH service provider and searching metadata records through the SRU client-server model.
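The two halves of the proposed model, OAI-PMH harvesting and SRU searching, are both plain HTTP request protocols, so the requests each side would issue can be sketched with the Python standard library. The base URLs below are hypothetical; the parameter names come from the OAI-PMH 2.0 and SRU 1.2 specifications.

```python
from urllib.parse import urlencode


def oai_list_records_url(base_url, metadata_prefix="marc21", resumption_token=None):
    """Build an OAI-PMH ListRecords request URL (the harvesting side)."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        # per OAI-PMH, a resumptionToken is the exclusive argument
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return f"{base_url}?{urlencode(params)}"


def sru_search_url(base_url, cql_query, max_records=10):
    """Build an SRU searchRetrieve request URL (the search side)."""
    params = {
        "version": "1.2",
        "operation": "searchRetrieve",
        "query": cql_query,
        "maximumRecords": max_records,
    }
    return f"{base_url}?{urlencode(params)}"
```

A harvester would fetch the first URL, parse the XML response, and keep following `resumptionToken` values until the list is exhausted; an SRU client sends a CQL query and receives matching records in the requested schema.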
Originality/value
The findings indicate that Iranian libraries should be aware of the importance of interoperability. Using the proposed model would help them to exchange metadata in a cost-effective manner.
Peter Carini and Kelcy Shepherd
Abstract
This case study details the evolution of descriptive practices and standards used in the Mount Holyoke College Archives and the Five College Finding Aids Access Project, discusses the relationship of Encoded Archival Description (EAD) and the MARC standard in reference to archival description, and addresses the challenges and opportunities of transferring data from one metadata standard to another. The study demonstrates that greater standardization in archival description allows archivists to respond more effectively to technological change.
Sai Deng and Terry Reese
Abstract
Purpose
The purpose of this paper is to present methods for customized mapping and metadata transfer from DSpace to the Online Computer Library Center (OCLC), with the aim of improving the Electronic Theses and Dissertations (ETD) workflow at libraries that use DSpace to store theses and dissertations, by automating the process of generating MARC records from Dublin Core (DC) metadata in DSpace and exporting them to OCLC.
Design/methodology/approach
This paper discusses how the Shocker Open Access Repository (SOAR) at Wichita State University (WSU) Libraries and ScholarsArchive at Oregon State University (OSU) Libraries harvest theses data from the DSpace platform using the Metadata Harvester in MarcEdit, developed by Terry Reese at OSU Libraries. It analyzes challenges in the transformation of harvested data, including handling authority-controlled data, dealing with data ambiguity, and string processing. It addresses how these two institutions customize the Library of Congress's XSLT (eXtensible Stylesheet Language Transformations) mapping to transform DC metadata into MARCXML and how they export MARC data to OCLC and Voyager.
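The institutions' own mapping is written in XSLT, as described above; an equivalent crosswalk can be sketched in Python to show the shape of the transformation. The tag assignments below follow common simple-DC-to-MARC practice but are illustrative, not the customized mapping the paper describes.

```python
import xml.etree.ElementTree as ET

DC_NS = "http://purl.org/dc/elements/1.1/"
MARC_NS = "http://www.loc.gov/MARC21/slim"

# illustrative crosswalk: DC element -> (MARC tag, ind1, ind2, subfield code)
CROSSWALK = {
    "title":   ("245", "0", "0", "a"),
    "creator": ("720", " ", " ", "a"),
    "subject": ("653", " ", " ", "a"),
}


def dc_to_marcxml(dc_root):
    """Map simple Dublin Core elements onto MARCXML datafields."""
    record = ET.Element(f"{{{MARC_NS}}}record")
    for element, (tag, ind1, ind2, code) in CROSSWALK.items():
        for node in dc_root.findall(f"{{{DC_NS}}}{element}"):
            field = ET.SubElement(record, f"{{{MARC_NS}}}datafield",
                                  {"tag": tag, "ind1": ind1, "ind2": ind2})
            ET.SubElement(field, f"{{{MARC_NS}}}subfield",
                          {"code": code}).text = node.text
    return record
```

The hard parts the paper analyzes, authority control, ambiguous values, and string clean-up, are exactly what this naive one-to-one mapping glosses over; a production crosswalk adds conditional logic around each element.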
Findings
The customized mapping and data transformation for ETD data can be standardized, although case-by-case analysis is still required. By presenting two institutions' experiences, the paper outlines the benefits and limitations for institutions interested in using MarcEdit and customized XSLT to transfer their ETDs from DSpace to OCLC and Voyager.
Originality/value
The new method described in the paper can eliminate the need for double entry in DSpace and OCLC, meet local needs and significantly improve the ETD workflow. It offers perspectives on repurposing and managing metadata in a standard and customizable way.
Abstract
This paper provides an introduction to the Metadata Object Description Schema (MODS), a MARC21 compatible XML schema for descriptive metadata. It explains the requirements that the schema targets and the special features that differentiate it from MARC, such as user‐oriented tags, regrouped data elements, linking, recursion, and accommodations for electronic resources.
Sayyed Mahdi Taheri, Nadjla Hariri and Sayyed Rahmatollah Fattahi
Abstract
Purpose
The aim of this research was to examine the use of the data island method for creating metadata records based on DCXML, MARCXML, and MODS with indexability and visibility of element tag names in web search engines.
Design/methodology/approach
A total of 600 metadata records were developed in two groups: 300 HTML-based records in an experimental group, with a special structure embedded in the <pre> tag of HTML based on the data island method, and 300 XML-based records in the control group, with the normal structure. These records were analyzed through an experimental approach. The records of the two groups were published on two independent websites and were submitted to the Google and Bing search engines.
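The data island idea, escaping an XML record so that its tag names survive as indexable text inside an HTML <pre> block, can be sketched in a few lines of Python. The page skeleton below is illustrative, not the structure used in the study.

```python
import html


def metadata_record_page(xml_record: str) -> str:
    """Embed an XML metadata record in an HTML page inside a <pre> block,
    escaping the angle brackets so the element tag names appear as
    plain indexable text rather than being parsed as markup."""
    escaped = html.escape(xml_record)
    return (
        "<!DOCTYPE html>\n"
        "<html><head><title>Metadata record</title></head>\n"
        "<body><pre>\n"
        f"{escaped}\n"
        "</pre></body></html>\n"
    )
```

Because `<dc:title>` is rendered as the literal characters `&lt;dc:title&gt;`, a crawler sees the tag name as page text, which is what makes the element names themselves searchable in the experimental group.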
Findings
The findings show that all the tag names of the experimental group's metadata records, created using the data island method and indexed by Google and Bing, were visible in the search results, whereas the tag names in the control group's metadata records were not indexed by the search engines. It is therefore possible to index and retrieve metadata records by their tag names in these search engines, while the control group's records are accessible only through their element values. The research suggests some patterns for metadata creators and end users to improve indexing and retrieval.
Originality/value
The research used the data island method for creating the metadata records, and deals with the indexability and visibility of the metadata element tag names for the first time.
Lucas Mak, Devin Higgins, Aaron Collie and Shawn Nicholson
Abstract
Purpose
The purpose of this paper is to illustrate that Electronic Theses and Dissertation (ETD) metadata can be used as data for institutional assessment and to map an extended research landscape when connected to other data sets through linked data models.
Design/methodology/approach
This paper presents conceptual considerations of the ideas behind linked data architecture for leveraging ETDs and their attendant metadata to build a case for institutional assessment. Analysis of graph data supports these considerations.
Findings
The study reveals first and foremost that ETD metadata is in itself data. Concerns arise over creating URIs for data elements and over the general applicability of the linked data model. The analysis points to a rich environment of institutional relationships not readily found in traditional flat metadata records.
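A minimal sketch of what "ETD metadata as linked data" looks like in practice is emitting triples that connect an ETD to related entities by URI. In the N-Triples snippet built below, the `dcterms:title` predicate is a real Dublin Core term; the advisor and department predicates, and all of the URIs, are hypothetical placeholders for whatever vocabulary an institution would mint.

```python
def etd_triples(etd_uri, title, advisor_uri, department_uri):
    """Emit N-Triples statements linking an ETD to an advisor and a
    department by URI (illustrative predicates under example.org)."""
    return [
        f'<{etd_uri}> <http://purl.org/dc/terms/title> "{title}" .',
        f'<{etd_uri}> <http://example.org/vocab/advisor> <{advisor_uri}> .',
        f'<{etd_uri}> <http://example.org/vocab/department> <{department_uri}> .',
    ]
```

Once advisors and departments are first-class URIs rather than strings in a flat record, queries can traverse the graph (all theses supervised in a department, co-advising networks, and so on), which is the kind of institutional relationship the findings describe.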
Originality/value
This paper provides a new perspective for examining the research landscape through the ETDs produced by graduate students in the higher education sector.
Abstract
Purpose
The purpose of this paper is to argue that academic librarians must learn to use web service APIs and to introduce APIs to a non-technical audience.
Design/methodology/approach
This paper is a viewpoint that argues for the importance of APIs by identifying the shifting paradigms of libraries in the digital age. Arguing that the primary function of librarians will be to share and curate digital content, the paper shows how APIs empower librarians to do so.
Findings
The implementation of web service APIs is within the reach of librarians who are not trained as software developers. Online documentation and free courses offer sufficient training for librarians to learn these new ways of sharing and curating digital content.
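As a taste of the skills involved, the sketch below parses a web service API response with nothing but the Python standard library. The response body and its field names are invented for illustration; they do not belong to any specific vendor's API.

```python
import json

# a canned response shaped like a typical bibliographic web service reply
# (endpoint and field names are hypothetical)
SAMPLE_RESPONSE = """
{
  "results": [
    {"title": "Metadata basics", "year": 2010},
    {"title": "Linked data for libraries", "year": 2014}
  ]
}
"""


def titles_since(raw_json, year):
    """Pull titles published in or after a given year from an API response."""
    payload = json.loads(raw_json)
    return [r["title"] for r in payload["results"] if r["year"] >= year]
```

In a real workflow the JSON string would come from an HTTP request rather than a literal, but the core skill, reading documentation to learn the response shape and then filtering it, is exactly what the free courses mentioned above teach.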
Research limitations/implications
The argument of this paper depends on an assumed shift in the paradigm of libraries away from collections of materials and towards access points for information. The need for librarians to learn APIs depends on a new role for librarians that, according to anecdotal evidence, is emerging.
Practical implications
By learning a few technical skills, librarians can help patrons find relevant information within a world of proliferating information sources.
Originality/value
The literature on APIs is highly technical and overwhelming for those without training in software development. This paper translates technical language for those who have not programmed before.