AN INTERVIEW WITH JOHN CARLO BERTOT

Library Hi Tech News

ISSN: 0741-9058

Article publication date: 1 April 2001


Citation

Calvert, P. (2001), "AN INTERVIEW WITH JOHN CARLO BERTOT", Library Hi Tech News, Vol. 18 No. 4. https://doi.org/10.1108/lhtn.2001.23918dae.006

Publisher: Emerald Group Publishing Limited

Copyright © 2001, MCB UP Limited



John Carlo Bertot, PhD, is Associate Professor, School of Information Studies, Florida State University (jcbertot@lis.fsu.edu). He is joint author of Statistics and Performance Measures for Public Library Networked Services (American Library Association, 2001), reviewed in this issue of Library Hi Tech News. He was kind enough to agree to an interview with Philip Calvert, Co-Editor of Library Hi Tech News.

Calvert. Let's start with a corny question. What sparked your interest in the topic of measures for library networked services?

Bertot. My interest in the topic began back in 1993. This Internet "thing" was beginning to become more dynamic, with the real possibility of interactive services through a new invention called the World Wide Web and a browser application called Mosaic. At that point, I realized the potential for library services in a networked environment.

Simultaneously, I had been using a number of library service measures (e.g. circulation, walk-ins), output measures, and others. It dawned on me that all of those measures were simply inadequate in a networked environment. Moreover, it seemed unlikely that some of the network services measures would have a counterpart in the print world.

Putting the two together, it was fairly obvious that, for libraries to "prove their worth" in an increasingly network-based service environment, they would need measures that captured the use and uses of network-based services.

Calvert. You have been researching this complex and rapidly changing topic for quite a few years. Is it getting any easier?

Bertot. I wish! Actually, it seems to be going in the opposite direction, for a variety of reasons.

We all know that technology changes rapidly. This means that what we may want to measure, can measure, have the tools to measure, and can comprehend is constantly, and rapidly, changing. As such, there is almost no time between measure development, field-testing, and use in the field. Once in the field, technology changes again, beginning the entire cycle once more.

The methods necessary to capture networked service usage are also evolving and under constant change. This means that researchers are in a continual state of experimentation in terms of being able to capture, with relative accuracy and meaning, the use and uses of library networked services. Many traditional research methodologies are simply ill-suited to this particular area of research.

Finally, people want more out of network measures than they have ever wanted from traditional measures. For example, a key phrase we hear constantly is "what are the outcomes of the use?" Library administrators (and others) want to know what difference using an online database made in the life of a user/patron. The same goes for public access workstations and any number of network-based services. And yet, we never traditionally asked users what difference reading a book made in their lives, or what it meant to have access to CD-ROM-based databases. Nor did we track the use of library services by patron, as many libraries now track users of their Web sites through Web log file analysis. There is substantially more pressure to get at this kind of information today. Of course, I am not addressing the privacy and confidentiality issues that these measures may raise.
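The Web log file analysis Bertot mentions can be illustrated with a short sketch. This is an editorial illustration only, not part of the interview: it counts requests per page from Apache-style Common Log Format lines, using hypothetical sample data.

```python
from collections import Counter

# Hypothetical sample lines in Apache Common Log Format, used only to
# illustrate the kind of usage analysis Bertot describes.
SAMPLE_LOG = """\
192.0.2.1 - - [01/Apr/2001:10:00:00 +0000] "GET /catalog HTTP/1.0" 200 1043
192.0.2.2 - - [01/Apr/2001:10:01:12 +0000] "GET /databases HTTP/1.0" 200 2310
192.0.2.1 - - [01/Apr/2001:10:02:30 +0000] "GET /catalog HTTP/1.0" 200 1043
"""

def hits_per_page(log_text):
    """Count requests per requested path from Common Log Format lines."""
    counts = Counter()
    for line in log_text.splitlines():
        parts = line.split('"')  # the quoted request is the second field
        if len(parts) < 2:
            continue
        request = parts[1].split()  # e.g. ['GET', '/catalog', 'HTTP/1.0']
        if len(request) >= 2:
            counts[request[1]] += 1
    return counts

print(hits_per_page(SAMPLE_LOG))  # Counter({'/catalog': 2, '/databases': 1})
```

Such per-page (or per-session) counts are exactly the sort of usage data that raise the privacy and confidentiality issues Bertot notes, since logs can tie requests back to individual patrons.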

Calvert. And now to the manual. It comes as something of a shock to discover how dependent libraries will be upon database vendors for many of the statistics they want to collect. Have you had much reaction to this from library managers in the "real world"?

Bertot. This is true. Some of the most important data are no longer in the control of libraries and/or librarians. There are two distinct sides to this issue: the library side, which purchases database services, and the vendor side, which produces databases, bundles them, or some combination of the two. On the library side, there is no small dismay at the idea of not controlling data about an increasingly large method of service provision. Slowly, libraries are working a number of reporting requirements into their contracts with vendors. On the vendor side, there is a keen awareness that at least some baseline reporting is necessary. Unfortunately, vendors differ in their statistics, their reporting approaches, and, frankly, their attitude toward the need to report usage to libraries.

A number of collaborative efforts are working to change the current environment of database statistics: the ICOLC (International Coalition of Library Consortia), our research projects, the US National Commission on Libraries and Information Science, and others, including a variety of European initiatives. We are all working with several vendors to create standard statistics, definitions, reporting requirements, and reporting formats. This work continues, and I am more optimistic than ever that we will make substantial progress in the next year.

There are, however, a variety of issues that still require resolution and are worth mentioning:

Agreement in the library community on a core set of statistics. Vendors do want to provide useful and meaningful data back to libraries; they do not, though, want to create unique reports for every single library that subscribes to their services, which requires substantial effort and resources.

Librarians need training in, and awareness of, what is already available. For example, some vendors already provide a substantial amount of reporting, along with ways to access such reports on demand. Conversations with some vendors indicate that such services are used by less than 1 percent of their subscribing libraries!

Vendors, while improving, need to understand libraries' real need to justify their database expenditures and be more responsive to that need.

Calvert. In the manual you predicted that many of the recommended measures may have a "shelf-life" of three to five years at most. Is it worth trying to standardize measures if that's as long as they will stay relevant?

Bertot. Yes. Though comparative and longitudinal data may be short-lived, there is a need for standard definitions and methods even over brief periods. Libraries want at least two things from their data: comparisons with themselves over time, and comparisons with others over time.

Internal and external benchmarking, if they are to work, require standardized definitions and methods.

Calvert. Have you had much reaction to your point that often we can't gather precise measures in the networked environment, and that estimates will have to do instead?

Bertot. The answer to this depends on the audience. Library professionals in the "trenches" want useful and meaningful data that help them solve the real issues and problems they face in their particular situations. These individuals are willing to live with estimates of their services, as they are in a "good enough" circumstance in which perfecting measures to a ±1 percent error is not necessarily possible, given time and other finite resources.

Then there are the academic researchers. These individuals want reliability, validity, proven methods, etc.

The two are not necessarily out of sync and can inform each other. However, even though I am an academic researcher, I am willing to experiment with new methods, measurement techniques, and approaches to get a better sense of the networked environment. Without experimentation, we will never get better at this. I am unwilling to sacrifice exploration for the sake of perfection. Having said that, we need to work harder to develop more accurate, valid, and reliable measures and techniques. That is always something I consider in the various research activities in which I engage. It does, though, require a balance between stringent academic research requirements, on the one hand, and practical, useful data for decision making, on the other.

Calvert. How much cooperation has there been between the different projects developing measures for the electronic library environment?

Bertot. I mentioned some above. There is an increasing amount of cooperation among the groups. Differences and egos do exist, frankly, but all see the importance of collaboration where possible. A key area of collaboration (as discussed above) is vendor database statistics. Other collaboration exists through various standards organizations, professional societies, research groups, and working groups.

Calvert. Can we develop measures that tell us about the customer's experience of networked services? To put it another way, can we measure service quality of electronic library services?

Bertot. I think that I answered some of this above. The short answer is "yes". However, we first need to develop baseline measures of what "quality" is in the networked environment. In truth, many of the measures with which we are currently dealing are "input" measures: they answer questions like "how much?" or "how many?" We need to move beyond that and tie these measures to institutional or patron outcomes. This is much more difficult, as a number of factors comprise outcomes, not just, say, the database a patron searched to find certain information. Our current E-metrics project with the Association of Research Libraries (ARL) will address this in a more direct manner beginning in the summer/fall of 2001. Check back with me later and I will let you know how successful we were!
