What is impact assessment and why is it important?

David Streatfield (Information Management Associates, Twickenham, UK)
Sharon Markless (King's College, London, UK)

Performance Measurement and Metrics

ISSN: 1467-8047

Publication date: 3 July 2009

Abstract

Purpose

The purpose of this paper is to offer a definition of impact assessment and to discuss some of the implications of this and other definitions. A particular approach to impact assessment is introduced, as developed for use in a variety of library and information service settings, and the principles underpinning this approach are described.

Design/methodology/approach

This approach has been adapted by the Bill and Melinda Gates Foundation's Global Libraries Initiative (GL) when providing impact planning and assessment support to grantees through its "IPA Road Map". The approach has also been adopted by the International Federation of Library Associations (IFLA) in devising its impact assessment strategy for evaluation of its future Free Access to Information and Freedom of Expression (FAIFE) work.

Findings

The importance of impact assessment in a variety of settings is outlined: from school libraries to university researcher support and from public libraries to electronic information services.

Research limitations/implications

Although this paper draws on and quotes from the IPA Road Map developed by the GL, all the comments and opinions expressed are those of the authors and should not be interpreted as representing an official GL viewpoint.

Originality/value

Some “unofficial” observations are offered on the relationships between impact assessment, advocacy and service sustainability, particularly in relation to major service development programmes such as the GL.

Citation

Streatfield, D. and Markless, S. (2009), "What is impact assessment and why is it important?", Performance Measurement and Metrics, Vol. 10 No. 2, pp. 134-141. https://doi.org/10.1108/14678040911005473

Publisher: Emerald Group Publishing Limited

Copyright © 2009, Emerald Group Publishing Limited


Defining impact

In preparing our book on Evaluating the Impact of your Library (Markless and Streatfield, 2006a), we borrowed a definition of impact from the educational evaluation literature (Fitz‐Gibbon, 1996), as:

… any effect of the service [or of an event or initiative] on an individual or group.

This definition acknowledges that the impact can be positive or negative and may be intended or accidental. When using this definition, measuring impact is about identifying and evaluating change.

These broad precepts were adopted by the Bill and Melinda Gates Foundation in relation to the Global Libraries Initiative (GL) work in various countries, some aspects of which will be described in more detail in later papers in this seminar. The GL Impact Planning and Assessment (IPA) Road Map (Global Libraries Initiative, 2008)[2], which is intended to help grantees to engage with impact assessment (and which we helped to write), again emphasises change, but in more specific ways:

The essential element of impact is change: the ways in which individuals, groups, communities or organizations are changed through your country grant program; the results of the program. We may therefore define impact as: any effect of your country grant program on an individual, group or community.

The GL definition repeats the caveats about negative impacts and about intention but adds a dimension about the breadth and depth of change. An impact:

  • may be wide ranging, affecting many stakeholders, from library staff to library users and from local government officials to local community groups; or

  • may be more specific, directly affecting only one group of stakeholders; and

  • can occur on levels from the superficial to the life‐changing.

For a large‐scale programme such as the GL, it is important to strike a balance between the different levels and types of impact, as well as between short and longer term effects. As the GL IPA Road Map says:

Too much long term, far reaching impact will be difficult to achieve and monitor; too much short term, limited impact will not allow you to use the full potential of your country grant program.

The main levels at which a major programme can have an impact are summarised in Figure 1.

Clearly, some of these areas of impact would require long‐term attention, but collecting evidence of impact about others could be accommodated within an annual planning cycle.

Evaluation or assessment: implications of the definitions

Working in a UK and European context, we have readily adopted the nomenclature of impact evaluation, which we have interpreted as largely dependent on qualitative research evidence about effectiveness and as broadly complementary to "traditional" performance measurement, seen as quantitative and largely focused on monitoring service efficiency. In this broad context, assessment is primarily a term used in gauging the educational performance of students.

In strong contrast, the GL Program reflects wide (but not universal) US research usage in distinguishing between impact planning and assessment and impact evaluation. In this view, IPA is seen as the entire process of gathering both qualitative impact evidence and performance metrics and applying them through advocacy to secure sustainability of services. The term impact evaluation is reserved to describe systematic causation or attribution studies (using a rigorous approach to collecting evidence that shows whether and how an intervention is directly responsible for particular changes or benefits). Causation studies seek to answer such questions as: "How much better off are beneficiaries as a result of the intervention?"; "Does the intervention have a different impact on different groups?"; "Did the intervention cause the impact?"; and "What would have happened if the intervention had not taken place?"

Although this distinction is important (and reflects the wider impact evaluation concerns of the Bill and Melinda Gates Foundation) the IPA Road Map stresses that:

GL does not expect grantees to undertake attribution studies. In the context of Global Libraries, IPA concentrates on planning to make a difference through the country grant program; on assessing implementation (what has been put in place); and on gathering evidence of benefits that have accrued or changes that have been made since the country grant program began. We can reasonably assume that the country grant program has contributed to the changes identified and has “added value” but there may be other influences involved so we do not seek to prove a causal relationship.

An approach to impact assessment

Over the past ten years we have developed an approach to impact assessment[1] that is specifically designed to help managers of libraries and information services to get to grips with the impact of their services. This work emerged from the recognition that, although most library service managers are comfortable with collecting traditional performance data to monitor the efficiency of their services, many find it hard to step back and address the deeper effects of their services on users and their communities. The book referred to earlier sought to distil our experience of running impact evaluation workshops and consultancies for library and information service managers of all kinds (including public, health, education, government, law and business LIS managers). In essence, this approach encourages LIS managers to:

  • carefully articulate the impact objectives of that part of the service under review (so that you know what you are trying to achieve);

  • consider which impact indicators will be most useful to tell whether that element of the service is succeeding or not;

  • decide what evidence will be collected to illuminate the chosen impact indicators, and how to collect it; and

  • decide how to use that evidence to enhance services and secure service sustainability.

In working through this process, we encourage participants to:

  • maintain a clear focus on what is manageable and achievable within available resources;

  • base choices about where to focus this work on what is known from the research evidence about where libraries can make a difference;

  • align library themes for impact assessment with wider organisational/national objectives and priorities;

  • approach impact objectives by asking what will tell you that change has occurred;

  • recognise that most impact evidence is qualitative in character, calling for rigorous application of social science research methods based on observation, asking questions and inferring change from review of the products of people's activity (e.g. student assignments);

  • see the need for baseline impact evidence to enable judgements to be made about progress over time; and

  • recognise that impact assessment timetables may not readily fit into the library annual planning cycle.

The impact indicators generated through this type of process can be as broad or as narrow as the vision of the participants. For example, strategic impact objectives for a library service might range from greater community involvement in local government decision making to individuals using information to make better business decisions.

The main principles underlying our approach to impact assessment are that:

  • staff should be empowered to understand and conduct impact assessment work;

  • key decisions about the scope and focus of this work should be influenced by those directly involved, including service users where feasible;

  • impact indicators should be appropriate for the service and not imposed by an outside agency for political purposes;

  • flexibility is needed in conducting impact assessment but based on rigorous application of appropriate evidence‐collection methods; and

  • given the relative novelty of impact assessment for most library service managers, impact assessment should be viewed as a process of managing organisational change.

Applying this approach globally

Again, these principles were adhered to when this broad approach was adopted by the GL in developing the IPA Road Map to support grantees. The IPA journey for grantees is presented in four stages as shown in Figure 2.

Grantees are encouraged to identify the most relevant impact areas for their own countries: themes where the country grant program can potentially have a large impact on individuals, organizations and communities. The main suggested areas in which public access to information in libraries can make a difference are:

  • education;

  • health;

  • culture and leisure;

  • economic development;

  • communication; and

  • e‐government.

A matrix is provided in the IPA Road Map covering each of these impact areas in greater detail and showing outcomes for libraries and librarians, short‐term impacts on communities and users, and longer term impacts on the community and society. This matrix is designed to offer ideas to grantees when holding local discussions and making decisions to create their own impact framework. Grantees are encouraged to ask themselves which of these elements are most relevant to their community, government and other stakeholders and what their objectives should be in each of these areas of activity. They are also advised to avoid spending unnecessary resources on assessing elements that will not add much to the picture.

As well as presenting short and longer‐term impacts on users and their communities, the matrix gives examples of how implementing a country grant program can build capacity within libraries themselves. The library can be enhanced to provide equal and unhindered access to information, be a centre for the community, and give access to relevant local content; librarians can develop knowledge and expertise to support each of the areas of activity.

When grantees have decided on the locally‐relevant impacts that they want to achieve through their GL Program, they are encouraged to formulate impact indicators to provide information on program effectiveness in meeting impact goals. Help is then offered for the other stages in the impact assessment process.

In addition to their own impact evidence, grantees also have to collect required performance metrics intended to help monitor progress of the GL program as a whole. In some cases the requisite methodology is also specified by GL. Further performance metrics are recommended but not mandatory.

Why does the Bill and Melinda Gates Foundation give so much attention to impact planning and assessment? The answer provided for grantees is that:

The Foundation sees the inclusion of Impact Planning and Assessment as a commitment to program management that, from the beginning, incorporates assessment and adjustment into planning and implementation. As a grantee, you will design a country grant program to best fit your local environment, targeting audience needs and local stakeholder/government priorities. In so doing, you will propose activities, tools and metrics that allow you to plan, measure, learn and sustain your programs (GL IPA Road Map, 2008)[2].

Our general approach to impact assessment was also adopted recently by the International Federation of Library Associations (IFLA) in devising its impact assessment strategy for evaluation of its future Free Access to Information and Freedom of Expression (FAIFE) work. The only essential difference was that the planners chose to focus their impact indicators on the effects of the programme training on the library staff who took part rather than on the users of their services. This choice of focus was dictated by the limited resources available for evaluation work.

From global to local: why impact assessment matters

Impact assessment of LIS services is important whether the work is undertaken on a global, national or local scale. The GL identifies three main reasons for collecting impact evidence:

  1. to show whether projects are being conducted effectively, in order to learn from and improve project activities;

  2. to show whether the program is making a difference to people, groups, organizations or communities; and

  3. to use that evidence of impact to advocate for continued support and/or funding from relevant stakeholders (GL IPA Road Map, 2008)[2].

Similar principles apply at a national or local level, but other elements may also come into play. Confining ourselves to projects with which we have been directly involved:

  • For IFLA, there is an additional incentive of being able to meet the changing impact evidence requirements of potential project funders.

  • The School Library Self‐Evaluation process and materials commissioned by the then Department for Education and Skills (England) give school library managers a basis for comparing their library with others, as well as a framework for developing the chosen aspects of their service (Markless and Streatfield, 2004; Streatfield and Markless, 2004).

  • For the 22 university library teams participating in the Impact Initiative, the incentive was primarily to gather evidence to inform the development of their services at a time of rapid change in teaching and learning, information literacy provision and support, and e‐information provision (Markless and Streatfield, 2005[3], 2006b). Their work also provided a model and evidence‐gathering materials for other universities (see http://vamp.diglib.shrivenham.cranfield.ac.uk/impact/impact‐initiative).

  • The main issue for health libraries is to meet organisational priorities requiring that services should have a positive impact on patient care (see Weightman and Williamson, 2005).

  • As for public access to ICTs, there is still much to do. The recent literature review by the Center for Information and Society found that: … there is limited conclusive evidence on downstream impacts of public access to ICTs. The evidence that does exist suggests that the public access ICT model is not living up to the expectations placed on it. This is not necessarily because public access has had no impacts, but because its impact is particularly difficult to identify and measure. As a model, public access to ICTs has experienced success and failure, leading to both reinforcement of the belief that the model should be expanded and strengthened; as well as claims that public access ICTs are ultimately ineffective or even counter‐productive from the development perspective (Sey and Fellows, 2009).

Impact assessment, advocacy and service sustainability

There is clearly an important series of relationships between impact assessment, advocacy and service sustainability, especially in relation to major service development programmes. The GL makes this explicit:

The impact planning and assessment process can assist grantees, program officers, and other strategists and policy makers with impact planning and assessment [and] with this evidence, advocate for continued development of libraries, computer use, and financial sustainability.

Our experience suggests that there are three significant potential pitfalls in using impact assessment evidence to make the advocacy case and secure sustainable services. First, we have noted a tendency among national organisations to conflate impact assessment and advocacy, even to the extent of commissioning the collection of impact evidence to support the advocacy case.

There is also a tendency to “cherry pick” only the most positive evidence in making the advocacy case. In our view, impact assessment should be seen as an independent process that may or may not give rise to evidence of success and should lead to a “warts and all” approach to the presentation of findings for advocacy purposes. Most politicians at local or national level can readily identify the “good news only” approach to presenting evidence and any such approach will rapidly undermine the case being presented.

Finally, although we confidently assert that rigorous impact assessment combined with lucid advocacy will lead to service sustainability, there is comparatively little evidence to suggest that this is how key decisions are usually made. Evidence‐based strategic decision making may be highly desirable, but are our societies ready for this? Or are we still at the stage of development characterised by the Irish dramatist George Bernard Shaw, when he said that “Reformers have the idea that change can be achieved by brute sanity”?

Figure 1 The main levels at which a major programme can have an impact

Figure 2 The four stages of the IPA journey for grantees

Notes

1. Although "impact evaluation" is our preferred term (as rehearsed earlier in this paper), we have used "impact assessment" throughout in the interests of clarity.

2. This Road Map is part of the GL Toolkit (a closed web site) provided to help grantees in relation to the Global Libraries Initiative.

3. The whole of this special issue of LIRN is devoted to the Impact Implementation Programme.

Corresponding author

David Streatfield can be contacted at: streatfield@blueyonder.co.uk

References

Fitz‐Gibbon, C.T. (1996), Monitoring Education: Indicators, Quality and Effectiveness, Cassell, London.

Global Libraries Initiative (GL) (2008), IPA Road Map, Bill & Melinda Gates Foundation, Seattle, WA.

Markless, S. and Streatfield, D.R. (2004), Improve your Library: A Self‐evaluation Process for Secondary School Libraries and Learning Resource Centres, 2 vols, Department for Education and Skills, London, available at: www.teachernet.gov.uk/schoollibraries (accessed 9 August 2009).

Markless, S. and Streatfield, D.R. (2005), "Facilitating the impact implementation programme", Library and Information Research, Vol. 29 No. 91, Spring.

Markless, S. and Streatfield, D.R. (2006a), Evaluating the Impact of your Library, Facet Publishing, London.

Markless, S. and Streatfield, D.R. (2006b), "Gathering and applying evidence of the impact of UK university libraries on student learning and research: a facilitated action research approach", International Journal of Information Management, Vol. 26 No. 1, pp. 3-15.

Sey, A. and Fellows, M. (2009), "Literature review on the impact of public access to information and communication technologies", CIS Working Paper No. 6, Center for Information and Society, University of Washington, Seattle, WA.

Streatfield, D.R. and Markless, S. (2004), Improve your Library: A Self‐evaluation Process for Primary Schools, Department for Education and Skills, London, available at: www.teachernet.gov.uk/schoollibraries (accessed 9 August 2009).

Weightman, A.L. and Williamson, J. (2005), "The value and impact of information provided through library services for patient care: a systematic review", Health Information and Libraries Journal, Vol. 22 No. 1, pp. 4-25.