Performance Measurement and Metrics

ISSN: 1467-8047

Article publication date: 29 November 2011



Thornton, S. (2011), "Editorial", Performance Measurement and Metrics, Vol. 12 No. 3. https://doi.org/10.1108/pmm.2011.27912caa.001




Copyright © 2011, Emerald Group Publishing Limited


Article Type: Editorial From: Performance Measurement and Metrics, Volume 12, Issue 3

Once more we have authors and papers from around the world, with Portugal, Bangladesh, Hungary, Malawi, Thailand, India and the United States, and with a range of topics just as varied.

Maria Vinagre and her colleagues from Portugal present us with the background to, and research in support of techniques created to evaluate the quality and performance of services delivered by the Portuguese Digital Library Consortium as part of the Digital Library Integrated Evaluation Programme.

Their Digital Library Service Quality Model and diQUAL scale have been developed with a view to assessing service quality gaps, à la Parasuraman, as perceived by four strategic user groups, and have proven to have good psychometric properties. Providing valuable information to decision makers and highlighting organisational deficiencies to the service providers, these tools should prove a useful addition to the performance toolkit.

Maidul Islam and Zabed Ahmed describe their approach to user assessment of a library OPAC in Dhaka University. A survey questionnaire was developed and used to collect data on students’ demographics, online catalogue use and their perceptions of ease of use and satisfaction with OPAC. In order to analyse the influence of students’ demographic and individual characteristics on their perceptions and satisfaction, Mann-Whitney and Kruskal-Wallis tests were carried out. The authors also present us with some heuristic guidelines for OPAC design.

Having worked with a wide range of OPACs in the past, I applaud any attempt to assess and redesign them in line with users’ perceptions. While not in favour of users’ opinions being taken as sacrosanct, I have used at least two systems where the user was not just an afterthought, but appeared never to have been considered at all!

Maria Borbely also takes a look at library system assessment, this time using ISO/IEC 9126, now the most well-known and widely applied international software quality standard. The goal of this study was to find out how task effectiveness, task completion, efficiency and task time affect general user satisfaction with the OPAC, and which factors have the most telling effects on that satisfaction. A group of volunteers, a mixture of library staff and end-users, was given a series of tasks to carry out on the OPAC, with log files tracking the time taken for each task, and a questionnaire on overall usability and satisfaction with the system.

Patrick Mapulanga gives us a salutary study of the budgeting, funding, acquisitions and services of the University of Malawi’s Libraries (UML). He notes that the libraries are heavily reliant on overseas donations and exchange schemes, that two libraries had no budget at all for acquisitions of library materials in 2006/2007, and that funding fluctuated inconsistently from one year to the next. Even inter-library loans from the British Library fail regularly, as returned loans sit in registries for lack of adequate funds to post them back.

Although the library services aim high, with a vision statement “to place UML in an environment that contributes to the intellectual activities of the University by supporting scholarly research and excellence in teaching and learning”, the negative assessments made by academic staff and students of the quality of stock and services, which seem largely due to a lack of adequate funding, make these lofty aspirations a “tall order”. Hopefully, highlighting these issues will lead to an improvement.

On the edge of the scope of this journal, I welcome the paper by Dr Binshan Lin and his colleagues, which addresses the somewhat thorny problem of the classification of higher education institutions. Assessment of performance and quality is easy when comparing like with identical like, but in the case of educational institutions what may appear to the outsider as comparable units can be very, very different. In fact, in several countries funding authorities have apparently had extreme difficulty taking this fact on board, introducing flawed and inadequate assessment programmes.

This paper examines the development of the Thai Higher Education Classification Model (THEC-Model), which was designed to classify universities and help select those to be part of the National Research Universities initiative. Its development stemmed from linking the classification criteria, where the nature and characteristics of a university are identified, with some of the current ranking criteria, which are based on research performance. It consists of five dimensions: research funding, instructional programs, levels of instructional programs, instructors and research staff body, and student body. Despite some minor differences, the CHE administrators will deploy the proposed model for the future NRU initiative, giving it the credibility essential for acceptance when the future announcement of the NRU initiative is made.

Steve Thornton
