The Critical Assessment of Research: Traditional and New Methods of Evaluation

Milena Dobreva (School of Creative Technologies, University of Portsmouth and Department of Computer and Information Sciences, University of Strathclyde, Glasgow, UK)

Library Review

ISSN: 0024-2535

Article publication date: 6 September 2011

Citation

Dobreva, M. (2011), "The Critical Assessment of Research: Traditional and New Methods of Evaluation", Library Review, Vol. 60 No. 8, pp. 723-735. https://doi.org/10.1108/00242531111166755

Publisher: Emerald Group Publishing Limited

Copyright © 2011, Emerald Group Publishing Limited


It has been said that “science gives us knowledge but takes away meaning”. This book is about a very specific issue related to both knowledge and meaning – how can those who are not professional researchers not only understand but also trust research outputs?

The book is written with the intention of helping general readers to understand and evaluate research outputs regardless of the particular topic or domain. Understanding research, as the book persuasively demonstrates through a series of well thought through case studies, is far from a purely intellectual exercise for the curious. This is a very timely theme, with the impact and value of research being scrutinized at both governmental and academic levels. In addition to such wider societal concerns, the authors argue that the misunderstanding of research – and indeed cases of compromised research outputs – could have a serious influence on individual human lives, ranging from financial losses to potentially dangerous medical treatments. Hence, the personal skills needed to critically assess research outcomes become an important part of the information literacy skill set. Although understanding research is not a generally expressed information need, it could be triggered by life circumstances that result in a significant change in information behaviour (Moore, 2000).

After an introductory chapter which provides a general overview of issues related to understanding research, the chapter entitled “The gold standards” looks at the three types of measure that engender trust in research outputs. Peer review, publisher reputation and author credentials are considered here, as well as how the mass media treats these gold standards and what their limitations are. While important, they are not sufficient – as demonstrated with modern and historical examples of “efforts to corrupt and suppress the research process” (p. 15).

The discerning reader has to be alert and should attempt to answer a wider range of questions, including: What is the motivation behind the research? What are its funding sources? What are the wider context and goal(s) of the research? The authors provide a compact collection of case studies related to sponsorship and funding, research paradigms and their change, and dissemination.

The issues of sponsorship and funding are considered in the context of hormone replacement therapy; the scandal around the Enron Corporation's financial reporting, which was misleading due to corrupt auditing; and The Bell Curve: Intelligence and Class Structure in American Life, a study by R. Herrnstein and C. Murray published in 1994, which illustrates how funding can be interconnected with political and social interests.

Research paradigms and the process of introducing new paradigms are illustrated by case studies on intelligence testing, theories on the causes of ulcers, and artistic canons. Within the dissemination domain, the cases selected address research on homosexuality and feminist research, pharmaceutical research and gray literature.

The concluding chapter suggests sources to consult beyond the three gold standards: newspapers and magazines; Web 2.0 resources (blogs, wikis and social networking sites); and scholarly materials.

A general conclusion after reading the book could be that there is no universal checklist which would help to identify the trustworthiness of particular research outputs – on the contrary, establishing the value of research is a rather tedious process of questioning and cross‐checking a number of sources. It would have been beneficial for the reader had the book included more on the evaluation of those research outputs which appear in open access journals or local publications, which may not meet all the gold standards but are nevertheless part of the body of research outputs. It would also be interesting to know more about the role of citation indices, and the pitfalls of their use, as one of the gold standards.

Another issue which will have to be addressed in the future is how to treat the trustworthiness of crowdsourcing[1] when it is used to generate research outcomes.

Alan Bailin and Ann Grafstein are both associate professors of library services at Hofstra University. They deserve credit for their impressive work in addressing this issue in a well‐structured and clear manner, and for bringing together a set of interesting case studies. The book is aimed at the general reader but is also very useful for researchers who want to understand more about the “end‐users” of their research outputs.

Notes

See the recent Workshop on Crowdsourcing and Human Computation: Systems, Studies and Platforms, 8 May 2011, Vancouver, Canada, available at: http://crowdresearch.org/chi2011‐workshop/

Further Reading

Moore, N. (2000), The Information Needs of Visually Impaired People, p. 77, available at: www.leeds.ac.uk/disability‐studies/archiveuk/moore%20nick/rnib%20report.pdf.
