Search results
1 – 10 of 121
Danijela Boberić-Krstićev and Danijela Tešendić
Abstract
Purpose
The purpose of this paper is to present the software architecture of the university’s union catalogue in Novi Sad, Serbia. The university’s union catalogue would comprise the collections of 14 academic libraries.
Design/methodology/approach
The basis of this paper is a case study of developing a software solution for the union catalogue of the University of Novi Sad in Serbia. The principles of object-oriented modelling are applied to describe the software architecture; specifically, unified modeling language (UML) component and sequence diagrams are used. The database model is described with a physical data model.
Findings
Through a review of related papers, and taking into consideration the problem of creating a university union catalogue, it is concluded that the best approach is to combine the ideas of a virtual and a physical union catalogue. Records are stored in one physical union catalogue, while the holdings data are stored in the local library management systems (LMSs) organized in the form of virtual union catalogues. Because academic libraries often use LMSs from different vendors, interoperable communication between those LMSs and the union catalogue is provided through standard library protocols for information retrieval (Search/Retrieve via URL [SRU], SRU Record Update and the NISO Circulation Interchange Protocol [NCIP]).
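To make the protocol layer concrete, here is a minimal sketch of how a client might assemble an SRU searchRetrieve request as a plain GET URL. The endpoint and the CQL index are hypothetical assumptions for illustration; they are not taken from the paper.

```python
from urllib.parse import urlencode

def sru_search_url(base, query, start=1, maximum=10):
    # Assemble an SRU 1.2 searchRetrieve request as a plain GET URL.
    params = {
        "version": "1.2",
        "operation": "searchRetrieve",
        "query": query,              # a CQL query string
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return base + "?" + urlencode(params)

# Hypothetical union-catalogue endpoint; the paper does not give a real URL.
url = sru_search_url("http://catalogue.example.rs/sru", 'dc.title = "databases"')
```

Because SRU is a plain HTTP GET interface, any LMS that exposes such an endpoint can join the union catalogue without vendor-specific integration, which is the interoperability point the abstract makes.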
Research limitations/implications
The development of a union catalogue for the University of Novi Sad is in its test phase, and, at this moment, only a software solution supporting the functionalities of a union catalogue has been created.
Practical implications
By introducing a university union catalogue, students would be able to search the collections of all the university libraries through a single portal. The search results would indicate whether a book is available and from which library it can be borrowed.
Originality/value
The originality of this software architecture lies in its use of standard library protocols. The described architecture enables the addition of new members to the university union catalogue, regardless of which LMS a library uses.
Abstract
Purpose
The purpose of this paper is to present a process, as a proof-of-concept, that automates the tracking of updates to name authority records (NARs), the downloading of revised NARs into the local catalog system, and subsequent bibliographic file maintenance (BFM), in response to the programmatic manipulation of the Library of Congress Name Authority File (LCNAF).
Design/methodology/approach
A proof-of-concept process to automate NAR updates and BFM in the local catalog, using the OCLC LCNAF SRU Service, MarcEdit, XSLT and AutoIt, is built and then tested using data from both test and production catalog servers at Michigan State University Libraries.
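As an illustrative sketch only (the paper's actual pipeline uses the OCLC SRU service, MarcEdit, XSLT and AutoIt), the BFM trigger can be reduced to comparing a locally held heading with the freshly downloaded revision. The dict-based records below are a hypothetical simplification of parsed MARC 21 authority data.

```python
def heading_changed(local_record, revised_record):
    """True when the revised NAR carries a different 100 (personal name)
    heading, i.e. local bibliographic file maintenance is needed."""
    return local_record.get("100") != revised_record.get("100")

# Simplified records keyed by MARC field tag (illustrative data).
local = {"001": "n79021164", "100": "Twain, Mark, 1835-1910"}
revised = {"001": "n79021164", "100": "Twain, Mark, 1835-1910."}

changed = heading_changed(local, revised)  # even a punctuation change triggers BFM
```

In a real pipeline the records would be parsed from MARC 21 with a library such as pymarc, and a detected change would queue the affected bibliographic records for heading replacement.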
Findings
The proof-of-concept process proved generally successful, though scalability and diacritics issues must be addressed before it can become fully operational in a production environment.
Originality/value
This process enables libraries, especially those without third-party authority control service, to handle the phased reissuance of LCNAF and related BFM in an automatic fashion with minimal human intervention.
Miroslav Zarić, Danijela Boberić Krstićev and Dušan Surla
Abstract
Purpose
The aim of the research is modelling and implementation of a client application that enables parallel search and retrieval of bibliographic records from multiple servers. The client application supports simultaneous communication over Z39.50 and SRW/SRU protocols. The application design is flexible and later addition of other communication protocols for search/retrieval is envisioned and supported.
Design/methodology/approach
An object-oriented approach was used for modelling and implementing the client application. The CASE tool Sybase PowerDesigner, which supports Unified Modelling Language (UML 2.0), was used for modelling; the Java programming language and the Eclipse environment were used for implementation.
Findings
The result of the research is a client application that enables parallel search and retrieval from multiple Z39.50 and SRW/SRU servers. Additionally, the application supports conversion from the type-1 query language defined by the Z39.50 standard to the CQL query language required for search/retrieval from SRW/SRU servers. The application was verified by performing parallel search and retrieval from several publicly accessible Z39.50 and SRW/SRU servers.
Research limitations/implications
The application supports only the use of bib‐1 attribute set for type‐1 queries created according to Z39.50 standard. Hence, only such queries can be converted to CQL notation. The use of other attribute sets is not supported.
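The conversion constraint described above can be sketched as a lookup from bib-1 Use attributes to CQL indexes. The mapping below is a small illustrative assumption, not the application's actual conversion table.

```python
# Partial bib-1 Use-attribute to CQL index map (illustrative assumption).
BIB1_TO_CQL = {
    4: "dc.title",       # bib-1 Use attribute 4 = Title
    1003: "dc.creator",  # 1003 = Author
    7: "bath.isbn",      # 7 = ISBN
}

def type1_to_cql(use_attribute, term):
    """Convert a single type-1 term with a bib-1 Use attribute into CQL."""
    index = BIB1_TO_CQL.get(use_attribute)
    if index is None:
        # Mirrors the stated limitation: only bib-1 attributes are supported.
        raise ValueError(f"unsupported bib-1 Use attribute: {use_attribute}")
    return f'{index} = "{term}"'
```

A full converter would also map boolean operators and relation attributes; restricting the table to bib-1 is exactly why other attribute sets cannot be converted.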
Practical implications
The client application is integrated into the BISIS software system, version 4. This enables the cataloguing of bibliographic records retrieved over the Z39.50 and SRW/SRU protocols.
Originality/value
The contribution of this work is a client application architecture that enables parallel communication with multiple servers using different communication protocols, Z39.50 or SRW/SRU. Search/retrieval from servers using other protocols can also be supported by adding new classes that implement the protocol specification and, if required, classes for transforming queries into the notation of the new protocol.
Vyacheslav I. Zavalin and Shawne D. Miksa
Abstract
Purpose
This paper aims to discuss the challenges encountered in collecting, cleaning and analyzing the large data set of bibliographic metadata records in machine-readable cataloging [MARC 21] format. Possible solutions are presented.
Design/methodology/approach
This mixed method study relied on content analysis and social network analysis. The study examined subject representation in MARC 21 metadata records created in 2020 in WorldCat – the largest international database of “big smart data.” The methodological challenges that were encountered and solutions are examined.
Findings
In this general review paper with a focus on methodological issues, the discussion of challenges is followed by a discussion of solutions developed and tested as part of this study. Data collection, processing, analysis and visualization are addressed separately. Lessons learned and conclusions related to challenges and solutions for the design of a large-scale study evaluating MARC 21 bibliographic metadata from WorldCat are given. Overall recommendations for the design and implementation of future research are suggested.
Originality/value
There are no previous publications that address the challenges and solutions of data collection and analysis of WorldCat’s “big smart data” in the form of MARC 21 data. This is the first study to use a large data set to systematically examine MARC 21 library metadata records created after the most recent addition of new fields and subfields to the MARC 21 Bibliographic Format standard in 2019, based on resource description and access rules. It is also the first to focus its analyses on the networks formed by subject terms shared by MARC 21 bibliographic records in a data set extracted from the heterogeneous centralized database WorldCat.
Danijela Boberić Krstićev, Danijela Tešendić and Binay Kumar Verma
Abstract
Purpose
This paper aims to discuss the possibilities of using a mobile application in the process of conducting an inventory of library collection and present an application for the same. The application scans barcode labels on books and retrieves data about those books. Data regarding the status and call number of each book can be changed using this application.
Design/methodology/approach
This paper is based on a case study of developing an application for the Android platform, and this application is part of the BISIS library management system.
Findings
By analysing the procedure of conducting an inventory in the library of the Faculty of Science, University of Novi Sad, it is concluded that this procedure is tedious and can be simplified. To make this procedure more efficient, a mobile application enabling search and update of bibliographic records has been developed. That application communicates with the BISIS library management system using a specially designed service.
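A minimal sketch of the kind of message the mobile client might exchange with such a service after scanning a barcode. The field names and values are assumptions for illustration, not the actual interface of the BISIS service.

```python
import json

def inventory_update(barcode, status, call_number):
    """Assemble the JSON body a mobile client might send to the inventory
    service after scanning an item's barcode (hypothetical field names)."""
    return json.dumps({
        "barcode": barcode,
        "status": status,
        "callNumber": call_number,
    })

payload = inventory_update("0123456789", "checked", "025.3 BOB")
```

The service would look up the scanned barcode, return the item's bibliographic data, and apply the status and call-number changes to the catalogue record.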
Practical implications
By introducing this application in libraries, the process of taking inventory of a library collection can be simplified, the time needed for the inventory shortened and the physical effort reduced.
Originality/value
The application is designed to help librarians during an inventory of library collections. During this process, librarians have to check the status of every item on the shelves and update the catalogue with new information. This application gives librarians mobility and lets them update item information while checking the shelves.
Abstract
This article discusses the deficiencies of search engines and the importance of metadata before examining three models of metadata retrieval: distributed; distributed data with a centralised index; and a centralised union catalogue. In listing the advantages and disadvantages of the distributed model, the Z39.50 protocol is used as an example; the OAI harvesting protocol exemplifies the second model. Virtual union catalogues are compared with a physical one. A pan-European model is discussed as a way to combine the best of all three models, with EUCAT as its base.
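The second model, harvesting into a centralised index, can be sketched as the construction of an OAI-PMH ListRecords request; the repository base URL below is hypothetical.

```python
from urllib.parse import urlencode

def oai_list_records(base_url, metadata_prefix="oai_dc", resumption_token=None):
    """Build an OAI-PMH ListRecords request: the harvesting step behind
    the 'distributed data with a centralised index' model."""
    params = {"verb": "ListRecords"}
    if resumption_token:
        # Fetch the next page of a partial response.
        params["resumptionToken"] = resumption_token
    else:
        params["metadataPrefix"] = metadata_prefix
    return base_url + "?" + urlencode(params)

req = oai_list_records("http://repository.example.org/oai")
more = oai_list_records("http://repository.example.org/oai",
                        resumption_token="page-2")
```

The harvester repeats the resumption-token call until the repository stops returning a token, then merges the harvested records into the central index.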
K.T. Anuradha, R. Sivakaminathan and P. Arun Kumar
Abstract
Purpose
There are many library automation packages available as open-source software, comprising two modules: a staff-client module and an online public access catalogue (OPAC). Although the OPACs of these library automation packages provide advanced search and retrieval of bibliographic records, none of them facilitates full-text searching. Most of the available open-source digital library software facilitates indexing and searching of full-text documents in different formats. This paper describes an effort to enable full-text search in the widely used open-source library automation package Koha by integrating it with two open-source digital library software packages, Greenstone Digital Library Software (GSDL) and Fedora Generic Search Service (FGSS), independently.
Design/methodology/approach
The implementation makes use of the Search/Retrieve via URL (SRU) feature available in Koha, GSDL and FGSS. The full-text documents are indexed in both Koha and the digital library software (GSDL or FGSS).
Findings
Full-text searching capability in Koha is achieved by integrating either GSDL or FGSS into Koha and passing an SRU request from Koha to GSDL or FGSS. The full-text documents are indexed both in the library automation package (Koha) and in the digital library software (GSDL or FGSS).
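A sketch of the receiving side of such an integration, assuming a standard SRU searchRetrieve response; the sample XML below is hand-written and truncated for illustration.

```python
import xml.etree.ElementTree as ET

SRU_NS = "http://www.loc.gov/zing/srw/"

# Truncated sample searchRetrieve response (illustrative data).
SAMPLE = (
    f'<searchRetrieveResponse xmlns="{SRU_NS}">'
    "<version>1.1</version>"
    "<numberOfRecords>3</numberOfRecords>"
    "</searchRetrieveResponse>"
)

def number_of_records(xml_text):
    """Read the hit count from an SRU searchRetrieve response."""
    root = ET.fromstring(xml_text)
    return int(root.find(f"{{{SRU_NS}}}numberOfRecords").text)
```

After reading the hit count, a real client would iterate the response's record elements and merge the full-text hits into the OPAC result list.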
Originality/value
This is the first implementation enabling the full-text search feature in library automation software by integrating it with digital library software.
George Macgregor and Fraser Nicolaides
Abstract
Purpose
This paper details research undertaken to determine the key differences in the performance of certain centralised (physical) and distributed (virtual) bibliographic catalogue services, and suggests strategies for improving interoperability and performance in, and between, physical and virtual models.
Design/methodology/approach
Methodically defined searches of a centralised catalogue service and selected distributed catalogues were conducted using the Z39.50 information retrieval protocol, allowing search types to be semantically defined. The methodology also entailed the use of two workshops comprising systems librarians and cataloguers to inform suggested strategies for improving performance and interoperability within both environments.
Findings
Technical interoperability between centralised and distributed models was achieved easily; however, the various individual configurations permitted only limited semantic interoperability. More prescriptive cataloguing and indexing guidelines, greater participation in the program for collaborative cataloguing, consideration of future functional requirements for bibliographic records migration, and greater disclosure to end users are some of the suggested strategies for improving performance and semantic interoperability.
Practical implications
This paper not only informs the library and information science research community and union catalogue administrators, but also has numerous practical implications for those establishing distributed systems based on Z39.50 and search/retrieve web services as well as those establishing centralised systems.
Originality/value
The paper moves the discussion of Z39.50‐based systems away from anecdotal evidence and provides recommendations based on testing, and is intimately informed by the UK cataloguing and systems librarian community.
Hesamedin Hakimjavadi and Mohamad Noorman Masrek
Abstract
Purpose
The purpose of this study is to evaluate the status of eight interoperability protocols within repositories of electronic theses and dissertations (ETDs), as an introduction to further studies on the feasibility of deploying these protocols in upcoming areas of interoperability.
Design/methodology/approach
Three surveys of 266 ETD repositories, 15 common ETD management software solutions, and 136 ETD experts were conducted in order to appraise the protocols. These protocols were evaluated in four categories of aggregation, syndication, distributed search, and publishing protocols.
Findings
This study revealed that, despite its drawbacks, the Protocol for Metadata Harvesting (PMH) is still the interoperability protocol most utilized by ETD providers, ETD software developers and implementers, followed by the ATOM and Object Reuse and Exchange (ORE) protocols. However, in all competitive areas related to performance and functionality, ORE surpasses the other protocols. It was also found that the three protocols, ATOM, PMH and ORE, could be used interchangeably in the most common use cases for interoperability protocols in repositories.
Practical implications
In this research, a combination of methods was employed to evaluate the status of protocols, from the perspectives of data providers, software providers, and implementers. Practitioners may use these methods to assess other protocols in terms of effectiveness and efficiency.
Originality/value
This study involved three types of surveys, through which different aspects of interoperability protocols were evaluated. No previous study of this topic has adopted the multi-method approach employed here.
Abstract
Describes the use of safety attitudes as the basis for an intervention to improve safety performance in a power generation company. Following an initial survey using the safety attitude questionnaire developed by the SRU, a set of initiatives was developed. The initiatives included setting up safety teams, the introduction of written action plans, the provision of workforce safety budgets and an enhanced profile for management action. The initiatives were implemented by the SRU over a period of one year. Following the intervention, there were improvements in safety attitudes, lost time accident rates, self-reported accident rates and absenteeism levels.