Evaluating Electronic Information Services: A Review of Activity and Issues in the UK Academic Library Sector

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 1 January 2003

Citation

Hartland-Fox, R. and Thebridge, S. (2003), "Evaluating Electronic Information Services: A Review of Activity and Issues in the UK Academic Library Sector", Library Hi Tech News, Vol. 20 No. 1. https://doi.org/10.1108/lhtn.2003.23920aaf.002

Publisher

Emerald Group Publishing Limited

Copyright © 2003, MCB UP Limited


Evaluating Electronic Information Services: A Review of Activity and Issues in the UK Academic Library Sector

Rebecca Hartland-Fox and Stella Thebridge

Background

The eVALUEd project[1] aims to develop tools for evaluating electronic information services and to increase awareness of existing resources and initiatives that could help library and information managers in the higher education sector to evaluate their services more efficiently and effectively.

A survey of library and information services in all UK higher education institutions was undertaken in Spring 2002. It provided information on current evaluation activity and outlined practitioners' views on the kind of evaluation support they required. Follow-up interviews with a sample of survey respondents were undertaken in August and September, allowing more detail to emerge about practices, perceptions and needs.

Questionnaire: introduction

The survey was posted to every director of information services and was also made available online. A full analysis of the questionnaire responses is available at: http://www.cie.uce.ac.uk/evalued/Library/Paper1_QuAnalysis.pdf

There were 112 responses from the 194 institutions surveyed (28 of them completed online), giving an overall response rate of 58 percent.

Of institutions responding to this survey, 43 percent were operating converged services, while 55 percent were not.

Information was sought about the following:

  • institutional background;

  • evaluation of electronic information services;

  • evaluation issues; and

  • wider evaluation of institutional electronic services.

Interviews: introduction

Practitioners in 20 institutions took part in the follow-up exercise: 14 were interviewed by telephone, five in person, and one responded via e-mail. A full analysis of the interviews is available at: http://www.cie.uce.ac.uk/evalued/Library/Paper3_Interviews.pdf

The selection of interviewees was based on:

  • library and information services (LIS) staff who had identified themselves as willing to participate in follow-ups from the initial eVALUEd questionnaire; and

  • those who had indicated some level of evaluation activity.

Interviews were intended to cover the following broad aspects:

  • summary of evaluation of electronic information services carried out;

  • indication of obstacles, challenges and gaps in evaluation;

  • awareness of practical tools and identification of areas where help is needed; and

  • influences and issues surrounding the evaluation of electronic information services (EIS).

Some of the key points to emerge from both the questionnaire and subsequent interviews are summarised below.

Evaluation undertaken

Almost three-quarters (72 percent) of institutions responding to the questionnaire confirmed that they carried out evaluation of the electronic information services within their library and information service.

The most commonly used method of evaluation cited was the collection of management information systems (MIS) data, which includes the use of vendor statistics.

Given the disquiet expressed about the reliability, comparability and adequacy of vendor statistics, the responses show a high reliance on these figures, suggesting that, in the absence of any other measure, institutions are resorting to these data to evaluate their services.

The types of evaluation cited are listed below, with the number of times each was cited:

  • MIS data (133);

  • online feedback (general) (115);

  • printed questionnaires (113);

  • observations (105);

  • online questionnaires (87);

  • cost-benefit analysis (72);

  • printed feedback (general) (71);

  • focus groups (41);

  • IT performance monitoring (39);

  • external monitoring services (35);

  • interviews (32);

  • monitoring workflow (26);

  • impact analysis (22);

  • document analysis (20);

  • user profiles (9);

  • other (17).

Note: respondents could tick categories more than once in relation to a range of EIS.

Changes brought about by evaluation

The main changes effected through analysis of evaluation data, listed with the number of times each was cited in the questionnaire, were as follows:

  • collection management (68);

  • funding changes (13);

  • publicity (10);

  • Web display (9).

Barriers to evaluation of EIS

Barriers to conducting evaluation in the area of EIS were perceived as follows by questionnaire respondents:

  • lack of time (45);

  • lack of statistical data (34);

  • lack of staff (23);

  • lack of response (21);

  • lack of staff evaluation skills (13);

  • lack of money (8).

Interviewees were asked to give more details about obstacles to their evaluation efforts. As in the questionnaire, the two main factors were considered to be time and statistical data. Issues associated with vendor statistics in particular were often cited as a major obstacle to evaluation attempts across the board. For example, the implementation of performance measures was not considered possible without suitable data. It also appears that many practitioners use statistical data to make decisions on collection management (e.g. subscriptions to databases and journals).

MIS data

Several interviewees commented that measuring usage is a major issue. Comments included:

  • Need to know how to measure usage – we need a quick and easy way of analysing usage of journals and databases.

As the questionnaire highlighted, vendor statistics have presented a number of problems to libraries. In total, 15 interviewees referred to obstacles created by vendor statistics. The main problem cited was lack of comparability, as statistical data and their presentation vary widely between vendors. For example:

  • Can't compare like with like.

  • Lack of statistical base between suppliers.

Another issue is the unreliability of the data collected; technical factors can affect the statistics recorded:

  • Double clicks can count as two uses.
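
As an illustration only (no respondent described a specific method), the short Python sketch below shows one way such double counting might be collapsed locally: repeat clicks on the same resource by the same user within a short, assumed time window are counted as a single use. The field names and the ten-second threshold are assumptions made for the example.

```python
from datetime import datetime, timedelta

# Assumed threshold: clicks on the same resource by the same user within
# ten seconds are treated as one use (the value is illustrative).
DOUBLE_CLICK_WINDOW = timedelta(seconds=10)

def count_uses(events):
    """events: iterable of (user_id, resource_id, timestamp) tuples.
    Returns the number of uses after collapsing rapid repeat clicks."""
    last_counted = {}  # (user_id, resource_id) -> time of last counted use
    uses = 0
    for user, resource, ts in sorted(events, key=lambda e: e[2]):
        key = (user, resource)
        previous = last_counted.get(key)
        if previous is None or ts - previous > DOUBLE_CLICK_WINDOW:
            uses += 1
            last_counted[key] = ts  # only counted uses start a new window
    return uses

# A double click one second apart is counted once, so three events -> two uses.
events = [
    ("u1", "ejournal42", datetime(2002, 9, 1, 10, 0, 0)),
    ("u1", "ejournal42", datetime(2002, 9, 1, 10, 0, 1)),
    ("u2", "ejournal42", datetime(2002, 9, 1, 10, 5, 0)),
]
print(count_uses(events))  # prints 2
```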

Project COUNTER (set up in March 2002) is an international initiative that aims to ensure some standardisation of vendor statistics. In the first phase, the objective is to establish a code of practice to support librarians and publishers, which should also identify the needs of the various stakeholders (COUNTER, 2001-).

The responses to the questionnaire and follow-up interviews reflect themes in the literature. The capability offered by statistics for electronic products appears promising, particularly when one considers the potential problems that the collection of comparable statistics for printed journals might present. Statistics also offer the opportunity to break down usage further, for example by user type (Ashcroft, 2000).

While usage was considered to be one of the main areas where useful data could be gained swiftly if suitable methods and arrangements were in place, in reality the resultant data vary widely between vendors, as does the timing of any updates. These factors have a major impact on the time required to analyse any statistical data (Eaton, 2002).

  • Different journal hosts monitor different statistical information measures (searches performed, PDF documents downloaded, abstracts viewed, etc.), with little consistency between them (Wolf, 2001).
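
By way of illustration, a library wanting to compare such reports locally might map each vendor's labels onto a small common set of measures before totalling them. The sketch below is hypothetical: the vendor column labels, common measure names and figures are invented for the example and are not drawn from the project, any vendor or any standard.

```python
# Illustrative only: map differently named vendor measures onto a small
# common set so that monthly reports can be compared side by side.
COMMON_MEASURES = {
    "searches": {"Searches Run", "searches_performed", "Queries"},
    "full_text": {"PDF Downloads", "fulltext_requests", "Articles Viewed"},
    "abstracts": {"Abstract Views", "abstracts_displayed"},
}

def normalise(vendor_report):
    """vendor_report: dict of vendor-specific column label -> count.
    Returns a dict keyed by the common measures, summing any columns
    that map onto the same measure and ignoring the rest."""
    out = {measure: 0 for measure in COMMON_MEASURES}
    for label, count in vendor_report.items():
        for measure, aliases in COMMON_MEASURES.items():
            if label in aliases:
                out[measure] += count
    return out

# Two vendors reporting the same month in different vocabularies.
vendor_a = {"Searches Run": 420, "PDF Downloads": 180, "Abstract Views": 95}
vendor_b = {"searches_performed": 310, "fulltext_requests": 240}
print(normalise(vendor_a))
print(normalise(vendor_b))
```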

Many libraries lack control over their own management information systems, which do not allow accurate statistics of use to be extracted. In non-converged services, there also appears to be a lack of collaboration between IT and library and information services in developing in-house data.

Several higher education institutions (HEIs) are tackling this problem by developing their own solutions in-house. These include questionnaires asking users about their patterns of usage for specific databases (Wolf, 2001) and the development of a systems infrastructure to capture users' links to each electronic information service (Eaton, 2002).
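
The sketch below is a rough illustration of the kind of link-capture infrastructure referred to by Eaton (2002), not a description of any institution's actual system: each click on a locally published URL is logged with a timestamp before the user is redirected to the real service. The service names, target URLs, port number and log file are all illustrative assumptions.

```python
# Hypothetical link-capture sketch: users follow a local URL such as
# /go?service=jstor; the hit is logged locally, then redirected to the
# real service, so usage counts do not depend solely on vendor reports.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs
import csv, datetime

SERVICES = {  # illustrative mapping of EIS identifiers to target URLs
    "jstor": "https://www.jstor.org/",
    "web_of_science": "https://www.webofscience.com/",
}

class LinkLogger(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        service = query.get("service", [""])[0]
        target = SERVICES.get(service)
        if target is None:
            self.send_error(404, "Unknown service")
            return
        # Append one row per click to a local log file.
        with open("eis_clicks.csv", "a", newline="") as log:
            csv.writer(log).writerow(
                [datetime.datetime.now().isoformat(), service])
        self.send_response(302)
        self.send_header("Location", target)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), LinkLogger).serve_forever()
```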

SCONUL data

Three interviewees mentioned problems presented by collecting data to support the SCONUL framework. In particular, comments were made about the new section on EIS. For example:

  • Difficult to separate out individual titles, e.g. unique electronic journal titles.

Due to vendor arrangements and the inclusion of some electronic titles as "bundles", it is often difficult to identify the titles held:

  • Difficult to even count number of e-journals held.

Some freely available electronic titles are stored in the same way as those purchased, and this can cause problems when calculating numbers.

Practical support for evaluation

MIS data issues

All of the issues raised by practitioners during the course of the eVALUEd research will feed into the development of tools to support evaluation of electronic services. Some of the issues relating specifically to MIS data that emerged from respondents' comments in the survey and in interviews are listed below:

  • a mechanism should be sought to encourage vendors/suppliers to supply data promptly and to adopt a consistent approach;

  • a nationally agreed set of electronic information service performance indicators (perhaps deriving from EQUINOX, 2000) would be helpful; monitoring and collection of data would need to be done by a national agency such as SCONUL;

  • any toolkit must draw on statistical data which are already available or which are "latent" and can easily be collected;

  • electronic information services are expensive, and there is a need to justify costs. Any toolkit must be streamlined to highlight cost-benefit and transparent enough to show that the evaluation exercise itself is cost-effective.

Support required for evaluation

A total of 81 institutions (72 percent) said they would like support with their evaluation of electronic information services, while 17 (15 percent) said they would not and 14 (13 percent) did not respond. Some who did not want support suggested that they currently lacked the time to conduct evaluation.

Importantly, 95 percent (106 institutions) said that they would use a free evaluation toolkit if one were available, and 88 percent (99 institutions) said they would welcome training opportunities in the evaluation of electronic information services. This supports the premise on which the eVALUEd project has been built.

Available resources

Because there had been no mention of EQUINOX (2000) measures in responses to the survey, interviewees were asked specifically if they had heard of these and of the Association of Research Libraries' (2000) e-metrics project in the USA. They were also asked if they had used the measures. The responses confirm little awareness and even less use of these initiatives (Table I).

Practical evaluation examples

Additional information to that provided in the questionnaire was sought on the types of activity undertaken and on which evaluation had been successful. The phrase "good practice" implies a judgement about quality, so a question asking for examples of good practice enabled LIS staff to apply the judgement of "goodness" as they saw fit. The responses overlapped to some extent with earlier responses on the evaluation conducted.

Types of evaluation considered to be of benefit include:

  • evaluation conducted by research students;

  • some useful tips on administering questionnaires;

  • methods of obtaining feedback;

  • monitoring student use;

  • checking service availability and reliability;

  • use and development of MIS data; and

  • conducting a usability study.

Some useful tips and advice were offered, for example on improving response rates. Many found that offering both online and printed versions of questionnaires yielded a better response than either method alone.

Evaluation help: content

Some of the main aspects which interviewees would like to see included in a toolkit are outlined below:

  • usage;

  • benchmarking;

  • impact analysis;

  • cost-benefit analysis;

  • general evaluation;

  • performance indicators.

One project interviewee commented: "Anything that saves time and effort and provides a consistent evaluation framework would be welcome."

Developing the toolkit

The eVALUEd project will be developing an online toolkit and piloting it among selected institutions shortly. This will attempt to cover a comprehensive set of evaluation issues. Case studies will also be undertaken in a small number of institutions whose evaluation practice could usefully be disseminated to the rest of the sector.

The research team will continue to monitor relevant literature and to disseminate findings across the profession. Interviews with "experts" in the wider evaluation field are continuing as a complement to those conducted with library and information professionals. These will continue to inform the project's direction.

Note

  1. Further information about the eVALUEd project can be found on the Web site at: http://www.cie.uce.ac.uk/evalued/

References

Ashcroft, L. (2000), "Win-win-win: can the evaluation and promotion of electronic journals bring benefits to library suppliers, information professionals and users?", Library Management, Vol. 21 No. 9, pp. 466-71.

Association of Research Libraries (2000), E-metrics Project, available at: www.arl.org/stats/newmeas/emetrics/contract00%2D01.html (accessed 18 October 2002).

COUNTER (2001-), Project COUNTER (Counting Online Usage of Networked Electronic Resources), available at: www.projectcounter.org/index.html (accessed 16 October 2002).

Eaton, J. (2002), "Measuring user statistics", Cilip Update, Vol. 1 No. 6, pp. 44-45.

EQUINOX (2000), Library Performance Measurement and Quality Management System Performance Indicators for Electronic Library Services, available at: http://equinox.dcu.ie/reports/pilist.html (accessed 16 October 2002).

Wolf, M. (2001), "Electronic journals: use, evaluation and policy", Information Services and Use, Vol. 21 No. 3-4, pp. 249-61.

Rebecca Hartland-Fox (rebecca.hartland-fox@uce.ac.uk) is an eVALUEd Research Assistant, Centre for Information Research, University of Central England, Birmingham, UK.

Stella Thebridge (stella.thebridge@uce.ac.uk) is an eVALUEd Research Fellow, Centre for Information Research, University of Central England, Birmingham, UK.
