Editorial

Performance Measurement and Metrics

ISSN: 1467-8047

Article publication date: 3 July 2009


Citation

Thornton, S. (2009), "Editorial", Performance Measurement and Metrics, Vol. 10 No. 2. https://doi.org/10.1108/pmm.2009.27910baa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited



Article Type: Editorial. From: Performance Measurement and Metrics, Volume 10, Issue 2

This issue contains a collection of articles covering a broad spectrum of performance measurement issues. You may disagree with some of them (I hope that you do), agree with others, or be stirred into thought or action. As Performance Measurement and Metrics' (PMM's) popularity grows, as can be seen from the rise in the number of articles downloaded, the variety of topics covered continues to expand.

Susan McKnight describes a technique for obtaining more effective feedback from her customer base than conventional questionnaires such as the Rodski Research Group's or LibQUAL+'s. The customer value method she describes is another tool drawn from commerce and is, as she herself points out, more expensive to use. However, she argues, the results are far more informative, and they helped to engage staff in the change process that was necessary to improve perceptions of value and to reduce irritations. Two independent satisfaction surveys showed that student satisfaction with library services had significantly improved.

One of the very interesting management techniques Susan McKnight uses is to include staff members in the workshops as observers, but to get them to vote as they expected their customers to vote. The gaps identified between staff assumptions of customer perceptions of service importance and performance were very noticeable, and helped make the feedback far more valuable. I remember discussing the question of customer perception and service satisfaction with one of my library managers. He felt he was completely aware of what customers thought, since he spoke to a proportion of them in the library. When I asked him about the ones who never came into the library, his reply chilled me to the bone: "They obviously have no need of our services, or they would come in, wouldn't they?" Susan McKnight's approach might have helped, but probably not.

While she offers an alternative to LibQUAL, which is nevertheless one of the most widely used tools in our armoury, Yahia Zare Mehrjerdi, Hossein Sayyadi Toranlo and Reza Jamali attempt to address an issue which has been raised before with customer satisfaction surveys of the LibQUAL and SERVQUAL type: namely, that these methods disregard the vagueness of individual judgments, that values change when converted to numbers, and that the subjectivity, judgment, selection and priorities of the evaluators must have a big impact on the results. They have applied a fuzzy approach to service quality and the gap between students' expectations and perceptions, using Wilcoxon's two-sample rank sum statistic, an analysis of fuzzy data which consequently reinforces the results in comparison with crisp data. An interesting, if somewhat contentious, approach.

Ulla Wimmer reports on a national workshop in Germany which has attempted to attain the Holy Grail: getting auditors to consider something other than the bottom line as a measurement of value to the community or a measure of efficiency. BIX is a national benchmarking project for public and academic libraries which has been around for over ten years and now has over 380 participants. However, during the audit of one German state's academic libraries, the auditors took no notice of the BIX results and concentrated on input and efficiency measures, looking for the library with the lowest cost per process. This audit became the driving force for an omnibus meeting of university management, representatives of the ministries for research and higher education, performance measurement experts for libraries, and representatives of the German Library Association and BIX management. The workshop was aimed at creating a set of measures that are meaningful to both state administrators and the professional community, and in the process hopefully making the audit authorities aware that cost does not equal value.

Adriana J. Gonzalez provides us with a case study of how library instruction has been improved at Texas A&M University Libraries. Although feedback from instruction sessions had been gathered for several years, initially very little was done with the information. Through a determined analysis of the data gathered, shortfalls in practice were identified, and a raft of improvements and policy changes has been introduced to redress those shortfalls.

Next we have two papers which were originally presented at the International Conference on Qualitative and Quantitative Methods in Libraries (QQML) which was held in Chania, Crete in May this year. This is a new conference on the scene, and its appearance was somewhat marred by a large number of speakers failing to show, largely due to a combination of the recession and swine flu. Other conferences have been hit the same way. Nevertheless, there were many good papers, and the venue was enchanting. Even with the problems they encountered, it has been decided to hold another conference there next year, which I for one will be putting down in my diary. I hope to include several more papers in the next issues of PMM.

The first of these papers is by David Streatfield and Sharon Markless and is an overview of what they define as "impact assessment". Others have different names and definitions for it (and the authors actually prefer the term "impact evaluation" instead!). Its precepts have been adopted by the Bill and Melinda Gates Foundation in relation to the Global Libraries Initiative. The authors give the philosophy behind the current programme, and several examples of it in practice. Although they assert "that rigorous impact assessment combined with lucid advocacy will lead to service sustainability", sadly both they and I have our doubts whether strategic decision-making will ever be influenced as much as it should be by evidence and "brute sanity".

The second paper from Chania is by David Hunter of the National Library of Scotland (NLS). Devolution of powers to the Scottish Government in 1999 has led to a radical reorganisation of government in Scotland, and the introduction of an outcome-based public performance framework. The appointment of a new chief executive led to the introduction of the library's first corporate strategy, which in turn has led to the development of strategic performance indicators. David describes the development of key performance indicators, and how they are used to meet the requirements of the Government's priorities and to ensure that there is alignment between the NLS's objectives and those of the Scottish Government.

The issue concludes with reviews from Roswitha Poll and Anna Maria Tammaro.

Once again, I hope you have as much pleasure in reading these articles as I have had in editing them.

Steve Thornton
