Emerald Group Publishing Limited
Ten Northumbria Conferences: the contribution to library management
Article Type: Guest editorial From: Library Management, Volume 36, Issue 3
Libraries have been collecting data on their performance for well over a century. The relevance of quantitative data as a response to pressure for the application of management science tools to libraries was noted forty years ago by Orr (1973). Orr offered one of the first measurement frameworks for libraries, and in doing so recognised the critical distinction between measuring “how good is the service” and “how much good does it do”. The dichotomy of quality and value measures remains with us today, and libraries require methods to assess both as part of their management toolkit.
The origins of the Northumbria International Conferences on Performance Measurement in Library and Information Services lay in a similar attempt 20 years ago to present a new framework for measurement. The SCONUL Advisory Committee on Performance Indicators was involved in the development of “The Effective Academic Library” (Ellard et al., 1995), and some of the members of that group, supported by the British Library Research & Development Department and Northumbria University, created the first conference. Ian Winkworth, Geoffrey Ford and Dick Hartley were on that first Board, with Roswitha Poll, who has remained an ever-present and energetic influence, providing a succession of keynotes to the conference.
The conference has moved its home venue southward over the period of existence, starting in what was ancient Bernicia and lately moving into Deira, but still within the former realm of Northumbria. The first three conferences were at Longhirst Hall, and this helped shape the sociable nature and the positive culture of exchange that has been an enduring feature of the event. The conference has always programmed itself with some reference to the IFLA world gathering, and this led to the fourth conference being held in Pittsburgh, Pennsylvania, sparking a stronger North American engagement that has grown substantially, and ultimately led to the foundation of the Library Assessment Conference. After forays to South Africa and Florence, the series moved its institutional sponsorship from Northumbria to the University of York for the ninth and tenth conferences.
This brief review celebrates and reflects on the contribution of the conference to measurement as an essential feature of library management by considering some of the keynote contributions to the ten conferences. It also introduces selected papers from the tenth conference, chosen to illustrate some contemporary themes in the measurement and management of libraries.
Themes, paradigms and frameworks
Does performance measurement have a relationship with library management? Those attending the conference would probably consider this to be self-evident, at least at the micro level of individual measurement activities or projects. It has, however, been a possible criticism of library measurement that librarians count everything they can and then apply this back to management questions in an inductive way, rather than seeking data to inform predefined needs. Of course any field of research will contain a mixture of “no stone unturned”, “fancy that” and “bandwagon” topics, and library measurement is not immune, but one contribution of the Northumbria series has been to critique the ideas and forms of measurement itself as well as its substance. There has been a constant theme of challenging data collection and measurement to be relevant not only to measurement itself, but to a more existential debate about what libraries are and what they might and should be during times of fundamental change in forms of information and communication. What libraries are defines what needs to be managed and therefore measured. Hence this brief consideration focuses on the broad picture of library performance measurement as demonstrated in the conference keynotes, and the attempts within those presentations to define paradigms and frameworks for measurement or for libraries themselves.
In the first conference Nancy Van House’s (1995) keynote, still memorable to those present, laid out some of the enduring issues and predicted some of the main trends which would form the core of future research and consequently Northumbria Conference topics. The broader environmental context of change and competition was recognised and defined. Crucially the basis of performance measurement as “constructing sense in the organisation” and the power of the combination of empirical evidence with narrative was recognised. This definition of evaluation as an iterative process of sense-making and “telling the story” has since underpinned the quest for measures that must also pass the test of relevance and importance. Van House predicted future competition for our turf, but recognised that our competitive advantage is the understanding we have of our users, and that user-centred measurement will avoid the risk of irrelevancy. The final caution was (in the Zen saying) “not to mistake the finger for the moon”; in this case mistaking the measure for the performance.
In the second conference Lancaster’s (1998) keynote expressed concern at what libraries were becoming and suggested a decline in the service ideal. The conference itself might have offered proof that this was at least not the case amongst attendees. Lancaster recognised the inevitability of digital developments, foreshadowing continuing debates about disintermediation and the measurement consequences of delivering services in the digital environment. Rowena Cullen’s (1998) “postmodern” keynote analysis in the same conference was a thought-provoking deconstruction of the discourse of evaluation, again seeking to answer the fundamental question of whether performance measurement activity improves organisational effectiveness. Cullen produced a values/focus/purpose matrix to achieve a definition of the positioning of any library that would then inform measurement choices. This brave attempt at a new paradigm may not have generated much practical take-up of the associated framework, but the challenges in the paper still apply.
Peter Brophy has provided two keynotes to the conference which picked up the theme of what a Library is in the digital age, how it relates to its users in this context, and what the consequences should be for measurement and evaluation. In the fourth conference, Brophy (2002) presented a potential framework for conceptualising what libraries might be in the twenty-first century, given that the networked environment had fundamentally changed our business. The five models for the future library were:
1. the traditional library;
2. the memory institution;
3. the learning centre;
4. the library as community resource; and
5. the invisible intermediary.
Brophy proposed measures for each of these, and referred back to Orr (1973) in suggesting that this might require a new interpretation of what “library goodness” means in a networked world. These potential distinctions may help inform measurement, but in the case of many research institutions, “The Library” now encompasses all of these functions and probably more.
In the seventh conference, Brophy’s (2007) keynote returned to Van House’s (1995) challenge of “telling the story” and combined this with his appreciation of the digital context. The proposition here was that libraries must meet and support users where their life and research occur, creating presence and support in those environments.
To gain users’ trust in this environment requires a deep understanding of their language and habits: their terminology and concepts, and how they share meaning. For academic and research libraries this includes recognising disciplines as social systems. Whether or not Brophy’s (2007) paper directly led to the subsequent ethnographic studies of users, drawing on expertise and methodology from that discipline, it correctly predicted a trend, and reinforced the importance of narrative in creating meaning about libraries out of the complexity of our modern context.
The focus on users, their expectations and opinions, and on quality methods for management and measurement has been a strong strand of the conference since its inception. The fifth conference hosted Ananthanarayanan Parasuraman (2004), reflecting the central importance of service quality across the two decades of the conference, and in particular the application of the SERVQUAL approach to measurement culminating in the LibQUAL+ tool. Booth’s keynote on evidence-based practice in the sixth conference reflected an approach which now has its own international conference (Booth, 2007). In the eighth conference Town attempted to return the community to the issue of “the good that libraries do” with a focus on value concepts and measurement as distinct from quality data (Town, 2009), and Tenopir (2011) in the ninth reflected on the substantial new programme of activity around ROI measurement. Jantti (2014) in the tenth conference describes the two-decade commitment of an Australian university library to quality and performance, demonstrating the need for a long-term approach to performance measurement, but also the need to periodically refresh the data collected to remain relevant to the University’s mission and to reflect the growing importance of value and impact measures. This exemplary description of a commitment to understanding and meeting user needs and tracking the performance of a library over the long term is at the heart of effective library management.
When we began our thinking about the Northumbria keynote on which this introduction is based, a more extensive survey of papers and trends from the conference series was envisaged. Our initial analysis, however, soon suggested that any simplistic view of the history and development of performance measurement over the nearly 20 years would not be valid. The assumption of a successive and progressive development away from the quantitative towards the more qualitative and strategic is also not supported: all were present from the outset, and all remain relevant. There is evidence of the development of many new methods and techniques and the desire to engage with measurement across all areas of library service and management. Some large areas of library activity (particularly staff-related measurement) are perhaps under-represented, but these reflect general difficulties with analysis and metrics in the broader management field.
From the papers from the first conference it is clear that library performance measurement was already a well-established concept in 1995. Many of the enduring topics were present from the start: organisational effectiveness, climate, morale, change and other staff issues were covered, as were quality management models and frameworks, and benchmarking. The challenge of new electronic measures and wider stakeholder perspectives was also recognised. The introduction to the second conference proceedings (Dixon and Parker, 1998) suggested that the third would move “firmly into the field of strategic management” and that “quantitative methods need to be complemented by interpretation and strategic vision”, but it would be hard to argue that a very broad vision and conception of performance measurement was not present from the start. Measurement appears to have been intrinsic to library management and strategic development across the time period of the conference series.
There are some general conclusions about library management and measurement that might be drawn from the Northumbria Conferences. These conferences have played a part in the continuing discourse about the future of libraries in the digital age. Despite many pessimistic predictions, libraries have survived. One message for library management and measurement from the Northumbria Conferences is that if there is a paradigm for performance measurement, it is “The Library” as it continues to exist and practice. The bulk of contributions to all the conferences have continued to arise from practitioners and researchers applying scientific research methodologies to management questions arising from real-world library contexts. It may be that the application of quantitative methods drives research towards library operations and services, rather than towards the “organism” and humanistic aspects of library practice. This balance probably reflects the level at which most conference attendees work. It has therefore been important for regular conference keynotes to arise from the library leadership level, ensuring a conference focus on strategic, political and advocacy questions. If there is a major change on the horizon, it may be that the focus on collecting evidence of library performance through disparate projects using single sets of data needs to be supplemented by a future of correlation, in which large library, institutional or broader datasets are combined to demonstrate at scale the impact and value of libraries to their user communities and parent organisations.
In conclusion, the management benefit from the Northumbria Conferences also arises from the creation of an international community of practice of library performance measurement and assessment. The fertile exchange between researchers from library and information schools and practitioners has been a positive feature, as has the cross-sectoral and international nature and the involvement of staff from all organisational levels. This is not a common experience in our profession. The conferences have always had a high percentage of contributors amongst the attendees, and the trust involved in sharing data has created social capital for those involved, and encouraged collaboration.
Library performance measurement is still an important and relevant idea, and we are ahead of many other professions in our thinking and practice. It could be argued that one of the reasons that libraries have survived into the digital age is because they have engaged in evidence-based change. Some of this is due to the diverse, curious, and biennially visible “invisible college” of Northumbria. At its simplest, this is a group of people who can make one feel less alone in the quest for better libraries, through a deeper collective understanding of their performance and management. Northumbria has achieved this for many librarians across the world, and not least for these two authors.
The tenth conference and selected papers
The papers selected here from the tenth conference have been chosen because of their originality and potential contribution to progress in library management.
Jantti’s keynote reflects on the two-decade commitment of the University of Wollongong’s Library to excellence, and to how effective management and measurement have underpinned this quest. Jantti reflects on developments in their Performance Indicator Framework across the same time period as the Northumbria Conferences, and how recently this has focused on better data to support value and impact proofs.
Through the period of the Northumbria Conferences, libraries have experienced profound change based on digital developments, particularly in scholarly communication. Creating distinctive value in the research workflow is an essential task of the modern research library, and Bakkalbasi, Jaggars and Rockenbach in their paper on “Re-skilling for the digital humanities” describe an outcome-based assessment of Columbia University’s creation of discipline-based digital centres and digital scholarship coordinator staff as a means to this end. The Scholarship and Collections Directorate at the British Library is the subject of Schofield, Sen and Vasconcelos’ paper on evaluating intangible assets. Schofield, Sen and Vasconcelos present a framework for intangible assets for the Arts and Humanities Department, based on the balanced scorecard, encompassing human, structural, collections and services, and relational assets.
Libraries are relationship organisations, and three further papers here continue the detailed analysis of this aspect of management and its measurement. The political and social success of libraries is critical to their future existence, but the literature has perhaps not fully recognised this dimension in the past, or focused on the performance of the relationships that underpin this success. Corrall explores the contribution of subject librarians using a similar conceptual framework to Schofield, Sen and Vasconcelos, and Town creates a framework for measuring a library’s relational capital as a component of the value scorecard. Giannopoulou and Tsakonas’ study of the library’s affective relationships with users in times of economic stress, using the stimulus-organism-response behavioural framework, is intended to help library managers prioritise using findings from a context of extreme pressure on resources.
Finally, the culture of a library remains critical to its performance, and library leadership must manage towards a positive service culture. Wilson describes and offers a tool for the assessment and achievement of this meta-level of performance through a quality maturity index developed over ten years from an idea presented at the sixth conference (Wilson and Town, 2007). This demonstrates the continuous and continuing contribution of the conference as a vehicle and voice for those committed to working in the field of performance measurement over the long term, and confirms the view that performance and its measurement is an intrinsic and essential element of library management.
Stephen Town and Ian Hall - University of York, York, UK
Booth, A. (2007), “Counting what counts: performance measurement and evidence-based practice”, Proceedings of the 6th Northumbria International Conference on Performance Measurement in Libraries and Information Services, Emerald Group Publishing, Bingley, pp. 52-62
Brophy, P. (2002), “Performance measures for twenty-first century libraries”, in Stein, J., Kyrillidou, M. and Davis, D. (Eds), Proceedings of the 4th Northumbria International Conference on Performance Measurement in Libraries and Information Services, Association of Research Libraries, Washington, DC, pp. 1-8
Brophy, P. (2007), “Telling the story: qualitative approaches to measuring the performance of emerging library services”, available at: www.lib.sun.ac.za/Northumbria7/Northumbria%202007%2003/PM7-Brophy.ppt (accessed 11 November 2014).
Cullen, R. (1998), “Does performance measurement improve organizational effectiveness? A postmodern analysis”, in Wressell, P. and Associates (Eds), Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services, Information North, Newcastle-Upon-Tyne, pp. 3-20
Dixon, P. and Parker, S. (1998), “Introduction”, in Wressell, P. and Associates (Eds), Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services, Information North, Newcastle-Upon-Tyne, p. iii
Ellard, K., Ford, M.G., Naylor, B., Lines, E. and Winkworth, I. (1995), The Effective Academic Library: A Framework for Evaluating the Performance of UK Academic Libraries, HEFCE
Jantti, M. (2014), “One score on – the past, present and future of measurement at UOW library”, in Hall, I., Thornton, S. and Town, S. (Eds), Proceedings of the 10th Northumbria International Conference on Performance Measurement in Libraries and Information Services, University of York, York, in press.
Lancaster, F.W. (1998), “Evaluating the digital library”, in Wressell, P. and Associates (Eds), Proceedings of the 2nd Northumbria International Conference on Performance Measurement in Libraries and Information Services, Information North, Newcastle-Upon-Tyne, pp. 47-57
Orr, R.H. (1973), “Measuring the goodness of library services: a general framework for considering quantitative measures”, Journal of Documentation, Vol. 29 No. 3, pp. 315-332
Parasuraman, A. (2004), “Assessing and improving service performance for maximum impact: insights from a two-decade-long research journey”, in Parker, S. (Ed.), Proceedings of the 5th Northumbria International Conference on Performance Measurement in Libraries and Information Services, Emerald Group Publishing, Bingley, pp. 31-38
Tenopir, C. (2011), “Measuring value and ROI of academic libraries: The IMLS Lib-Value Project”, available at: www.york.ac.uk/media/abouttheuniversity/supportservices/informationdirectorate/documents/northumbriapresentations/NorthumbriaTenopirfinal.2011.pptx (accessed 11 November 2014).
Town, J.S. (2009), “From values to value measurement: a metaphysical enquiry”, presentation, available at: www.slideshare.net/stephentown/value-north8v3-final (accessed 8 October 2014).
Van House, N. (1995), “Organisation politics and performance measurement”, in Wressell, P. (Ed.), Proceedings of the 1st Northumbria International Conference on Performance Measurement in Libraries and Information Services, Information North, Newcastle-Upon-Tyne, pp. 1-10
Wilson, F. and Town, S. (2007), “Benchmarking and library quality maturity”, Proceedings of the 6th Northumbria International Conference on Performance Measurement in Libraries and Information Services, Emerald Group Publishing, Bingley, pp. 401-406