Advances in Librarianship: Volume 35
Table of contents (16 chapters)

Assessment and evaluation have become increasingly important in the nonprofit sector. Although initially used mostly in educational contexts to measure student learning, these practices have migrated to other contexts, such as measuring overall organizational and institutional success and the impact of projects, programs, and operational changes. This growing emphasis is due in part to increasingly stringent requirements imposed by government agencies, foundations, and other funding sources seeking to ensure that their investments result in significant impacts. In addition, the current economic climate and retrenchments in nonprofit agencies, including colleges, universities, and public libraries, have raised the need for assessment and outcomes evaluation to a critical level.
This chapter examines the challenges of developing and implementing a new national evaluation approach in a complex library funding program. The approach shifts a prior outcome-based evaluation legacy using logic models to one relying on nonlinear logic mapping. The new approach is explored by studying the Measuring Success initiative, launched in March 2011 for the largest funded library services program in the United States, the Institute of Museum and Library Services formula-based Grants to States program. The chapter explores the relative benefits of nonlinear logic maps and emphasizes the importance of scaling evaluation from individual projects toward clusters of similar library services and activities. The introduction of this new evaluation approach required a new conceptual frame, drawing on diffusion, strategic planning, and other current evaluation theories. The new approach can be widely generalized to many library services, although its focus is on a uniform interorganizational social network embedded in service delivery. The chapter offers a new evaluation perspective for library service professionals by moving from narrow methodological concerns involving measurement to broader administrative issues, including diffusion of library use, effective integration of systematic data into program planning and management, and strengthening multi-stakeholder communication.
The chapter describes the Outcome-Based Evaluation (OBE) Initiative of the New York State Library (NYSL) from its start in 2003. Through extensive training, online support, and integration into statewide processes and grant projects, the initiative has brought OBE to New York State's library community with the overall goals of measuring impact and leveraging funding. NYSL's OBE activities and lessons learned are especially helpful to those interested in developing a similar initiative or aspects of it. The activities and findings of the initiative are reviewed including implementation of the ten-stage OBE Training Plan that was the project's foundation. Logic models and outcomes were used to plan and evaluate most of the initiative.
The OBE Initiative has been a success on many levels. Training and support have been effective in teaching library staff how to implement OBE at regional and local levels, and the approach has been widely accepted by libraries. NYSL has also integrated OBE techniques into several statewide processes and grant projects. Through OBE, libraries are able to determine the impact of their programs and services. Outcome data leads to improved planning and better decision making. Users ultimately receive higher quality library services, resulting in a more literate community and workforce. OBE can also support advocacy efforts, leading to increased funding for services. While many in the library community now use OBE, very few have developed a statewide initiative, which gives the chapter its originality and value. Each of the three authors has carried out multiple aspects of the project.
This chapter documents the evolution of the application of evaluation methods to public library services for children and teens in the United States. It describes the development of age-specific output measures and the subsequent requirement by funding agencies for outcome evaluations that measure changes in skills, attitudes, behavior, knowledge, or status as a result of an individual's participation in a service or program. Some early outcomes research studies are cited, and California's initiative to implement statewide outcome evaluation of its Summer Reading Program is presented as a case study. Training and education are suggested as ways to counter the major challenges for wider implementation of outcome evaluation of youth services programs in public libraries.
This case study demonstrates the positive changes that evolved from a series of assessment activities. It shows that even smaller libraries can conduct assessment, with the support of colleagues and the library administration. Librarians can take a proactive role rather than waiting for a mandate from college administration. Two years of LibQUAL+® survey results (2005 and 2008) were analyzed in depth using statistical correlation analyses. Following this, respondents’ comments were categorized by dimension and analyzed to detect correlations. The information collected was then used to track trends and highlight the strengths and weaknesses of the library through the eyes of its users. A comparison of the survey results showed an increase in perceived service in all dimensions. However, user expectations rose even faster, especially with regard to e-resources, equipment, and study space. Positive results about staff expertise and service attitudes demonstrated that users valued the personal attention and capabilities of the library staff. The study shows that a user survey is only the first step in an assessment process. Assessment can be effective only if follow-up actions are taken to address negative feedback and the actions are then communicated to all stakeholders. While assessment has become a necessity for many libraries, small- and medium-sized libraries often shy away from it due to limited resources. The Richard Stockton College Library undertook assessment in areas in which it could expect achievable results. Another outcome came in the form of additional resources, which narrowed the gap between library services and users' needs.
As any library strives to improve services and make them increasingly relevant, planning for change has become routine. During 2011, the University of Arizona Libraries undertook extensive assessments in order to develop and improve services in support of research and grant activities. A project explored ways for the library to become more effective at increasing research and grant support to faculty, researchers, and graduate students in a scalable way, and to help the campus increase achievements in research, scholarship, and creative works. The project defined the library's role in research and grant activities and explored ways for the library to be involved at optimal points in these cycles. This chapter discusses the process developed for assessing what new research and grant support services the library might want to develop. This involved interviewing peer university libraries and surveying faculty and graduate students at the University of Arizona about their research and grant needs. The chapter also describes how results were analyzed to identify potential new library services. The project team recommended new services, which were presented to the library for inclusion in its Strategic Plan. The methodology presented in this chapter can be used by any type of library for developing new services to include in its strategic plan.
This chapter examines how selected accrediting bodies and academic librarians define collection strength and its relationship to student achievement. Standards adopted by accreditation bodies and library associations, such as the Association of Research Libraries, are reviewed to determine those most commonly used to assess library collections. Librarians’ efforts to define and demonstrate the adequacy of library resources are also examined in light of increased focus on institutional accountability and requirements to provide planned and documented evidence of student success. Also reviewed are the challenges faced by academic librarians as they shift from traditional collection-centered philosophies and practices to client-centered collection development, which uses circulation analysis, citation analysis, interlibrary loans, and student satisfaction surveys to determine collection use and relevance. The findings from a review of standards and existing library literature indicate that student use of library collections depends on faculty perceptions of the library and whether faculty require students to use library resources and services for their research papers. The chapter concludes that multiple collection-related factors, including marketing strategies and student awareness of collections and library services, influence the academic success of students, not just the size and importance of library collections per se. The significance of the chapter lies in its identification of halting and difficult adjustments in measuring both collection “adequacy” and student achievements.
Accreditation agencies, both institutional and professional (such as the American Library Association), have asked educators to demonstrate student learning outcomes for every academic program they assess, and to use the data gathered for continuous improvement of programs. This chapter reports on the development of an electronic portfolio (ePortfolio) structure for accomplishing an assessment process within a school of library and information science. From the student side, the portfolio prompts them to select work that they feel is their best effort for each program outcome, such as “assist and educate users.” From the faculty side, all items for a given outcome can be downloaded and assessed quantitatively and qualitatively so as to arrive at an understanding of how well the program as a whole is doing, with sufficient detail to guide specific improvement decisions. During design, researchers employed a sequential qualitative feedback system to pose tasks (usability testing) and gather commentaries (through interviews) from students, while faculty debated the efficacy of this approach and its place within the school's curricular structure. The local end product was a usable portfolio system implemented within a course management system (Oncourse/Sakai). The generalizable outcome is an understanding of key elements necessary for ePortfolios to function as a program-level assessment system: a place for students to select and store artifacts, a way for faculty to access and review the artifacts, simple aggregations of scoring and qualitative information, and a feedback loop of results into program design for improved student learning.
This chapter compares faculty self-assessment of teaching with student opinion of instruction in an online environment, in order to determine the level of agreement between faculty self-assessment and student assessment in areas of overall program strength and directions for individual and whole-group professional development. A faculty self-assessment of teaching inventory based on established guidelines was administered to participating faculty in the Master of Library Science program at East Carolina University, and scores were compared to students’ ratings of instruction for one academic year. Scores were corrected for bias and tabulated, and Pearson correlation and t-scores were calculated. The method used produced an effective benchmarking and diagnostic tool and indicated directions for instructional improvement. Because the study was for the express purpose of internal, formative evaluation, model data tabulations are presented as examples only; data from the actual study are not presented. Limitations of the study are that items on student evaluation of teaching surveys may not always lend themselves to concept mapping, and that data were collected for only one academic year in a single program. The chapter contributes a method that is replicable and scalable, demonstrates that data are relatively easy to acquire, and shows that procedures are simple to implement, requiring only basic statistical tests and measures for analysis. Results can be interpreted and understood without extensive knowledge of quantitative methods. There are few studies that compare students’ evaluations of teaching with faculty self-evaluations, and none that do so specifically for library and information science education programs.
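The basic statistical tests the abstract above mentions can be sketched briefly. The following is a minimal, hypothetical illustration of comparing faculty self-assessment scores with student ratings using a Pearson correlation and a paired t statistic; the item scores are invented for illustration and are not data from the study.

```python
# Hypothetical sketch: Pearson correlation and a paired t statistic
# between faculty self-assessment scores and student ratings on the
# same instructional items. Scores below are illustrative only.
from statistics import mean, stdev
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / ((len(x) - 1) * stdev(x) * stdev(y))

def paired_t(x, y):
    """t statistic for paired samples, computed from difference scores."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

faculty = [4.2, 3.8, 4.5, 3.9, 4.1]   # self-assessment, per item
students = [4.0, 3.5, 4.4, 3.6, 4.2]  # student ratings, same items

r = pearson_r(faculty, students)       # agreement in pattern across items
t = paired_t(faculty, students)        # systematic difference in level
```

A high r with a nonzero t would indicate that faculty and students agree on relative strengths and weaknesses while differing systematically in overall level, which is the kind of diagnostic reading the chapter describes.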
- DOI
- 10.1108/S0065-2830(2012)35
- Publication date
- Book series
- Advances in Librarianship
- Editors
- Series copyright holder
- Emerald Publishing Limited
- ISBN
- 978-1-78190-060-4
- eISBN
- 978-1-78190-061-1
- Book series ISSN
- 0065-2830