The purpose of this paper is to examine how the use of the citation-based journal impact factor for evaluative purposes affects the behaviour of authors and editors. It critically examines a number of claims regarding the manipulability of this indicator, on the basis of an empirical analysis of the publication and referencing practices of authors and journal editors.
The paper describes mechanisms that may affect the numerical values of journal impact factors. It also analyses general, “macro” patterns in large samples of journals in order to obtain indications of the extent to which such mechanisms are actually applied on a large scale. Finally, it presents case studies of particular science journals in order to illustrate what their effects may be in individual cases.
The paper shows that the commonly used journal impact factor can, to some extent, be manipulated relatively easily. It discusses several types of strategic editorial behaviour and presents cases in which journal impact factors were – intentionally or otherwise – affected by particular editorial strategies. These findings lead to the conclusion that one must be very careful in interpreting and using journal impact factors, and that authors, editors and policy makers must be aware of their potential manipulability. The findings also show that some mechanisms as yet occur rather infrequently, while for others it is most difficult, if not impossible, to assess empirically how often they are actually applied. If their frequency of occurrence increases, one must conclude that the impact of impact factors is decreasing.
The paper systematically describes a number of claims about the manipulability of journal impact factors that are often based on “informal” or even anecdotal evidence, and illustrates how these claims can be examined further through rigorous empirical research on large data samples.
Copyright © 2008, Emerald Group Publishing Limited