Reflections about Garfield’s algorithm

Laura Sinay (UNIRIO, Rio de Janeiro, Brazil and University of the Sunshine Coast, Sunshine Coast, Australia)
Maria Cristina Fogliatti de Sinay (PPGA, Unigranrio, Rio de Janeiro, Brazil)
Rodney William (Bill) Carter (University of the Sunshine Coast, Sunshine Coast, Australia)
Aurea Martins (Unigranrio, Rio de Janeiro, Brazil)

RAUSP Management Journal

ISSN: 2531-0488

Article publication date: 30 September 2019

Issue publication date: 9 December 2019

Abstract

Purpose

The purpose of this paper is to critically analyze the influence of the algorithm used by scholarly search engines (Garfield’s algorithm) and to propose metrics to improve it so that science can develop on a more democratic basis.

Design/methodology/approach

This paper used a snowball approach to collect data that allowed identifying the history and the logic behind Garfield’s algorithm. It then examines the foundations of the algorithm and of the databases used by major scholarly search engines. It concludes by proposing new metrics to surpass current restraints and to democratize the scientific discourse.

Findings

This paper finds that the studied algorithm currently biases the scientific discourse toward a narrow perspective, whereas it should take researchers’ diverse characteristics into consideration. It proposes substituting the h-index with the number of times the scholar’s most cited work has been cited. Finally, it proposes that works in languages other than English should be included.

Research limitations/implications

The broad comprehension of any phenomenon should be based on multiple perspectives; therefore, the inclusion of diverse metrics will extend the scientific discourse.

Practical implications

The improvement of the existing algorithm will increase the chances of contact among different cultures, which stimulates rapid progress in the development of knowledge.

Originality/value

The value of this paper resides in demonstrating that the algorithm used by scholarly search engines biases the development of science. If updated as proposed here, science will become unbiased and bias aware.

Citation

Sinay, L., Sinay, M.C.F.d., Carter, R.W.(B). and Martins, A. (2019), "Reflections about Garfield’s algorithm", RAUSP Management Journal, Vol. 54 No. 4, pp. 548-558. https://doi.org/10.1108/RAUSP-05-2019-0079

Publisher

Emerald Publishing Limited

Copyright © 2019, Laura Sinay, Maria Cristina Fogliatti de Sinay, Rodney William (Bill) Carter and Aurea Martins.

License

Published in RAUSP Management Journal. Published by Emerald Publishing Limited. This article is published under the Creative Commons Attribution (CC BY 4.0) licence. Anyone may reproduce, distribute, translate and create derivative works of this article (for both commercial and non-commercial purposes), subject to full attribution to the original publication and authors. The full terms of this licence may be seen at http://creativecommons.org/licences/by/4.0/legalcode


1. Introduction

In times when the influence of fake news prevails over the influence of facts, science becomes, more than ever, the most reliable source of knowledge. Yet, to be truly trustworthy, science needs to be, among other things and at the same time, unbiased and bias aware (Denzin & Lincoln, 1994). That is, while scholars need to be loyal to their findings and cannot skew results toward a particular interest (unbiased), they need to be aware that the position (geographic, financial, political and so on) from which they develop an investigation influences the research questions they ask. Consequently, multiple perspectives need to be taken into consideration for the comprehensive understanding of any phenomenon (bias aware) (Sinay, 2008). Nevertheless, there is evidence to suggest that science is not being built on multiple perspectives; instead, it is mostly influenced by male scholars who are primarily affiliated with highly developed countries where English is the official language (Analytics, 2017a). While gender, political, economic and linguistic constraints partly explain the limited contribution of scholars with profiles different from the above, our research looks into the influence of the algorithm currently used by scholarly search engines, such as Web of Science, Google Scholar and Scopus, and into how to improve it so that science develops on a more democratic base.

The initial hypothesis is that, under the conditions included in this algorithm, which we here call the algorithm of science, the works of scholars with the above-described profile tend to appear at the top of scholarly searches. As the works presented first are more likely to be read and cited than those presented at the end of the result list, the circular consequence is that scholars who work in wealthier environments are likely to be given excessive visibility by search engines. Yet, if the conditions of the algorithm are modified, then the order of the result list is likely to change. This can diversify the profile of the authors whose works appear in result lists and, hence, can help break the hegemony of the scientific discourse. In this context, this paper focuses on summarizing the history and the logic behind the algorithm of science and on presenting and discussing conditions that should be included in it so as to surpass current restraints and to democratize the scientific discourse.

2. Theoretical foundation

In 1955, Dr Eugene Garfield, PhD in structural linguistics from the University of Pennsylvania and founder of the Institute for Scientific Information (ISI) in 1960, delivered a speech to the American [USA] Association for the Advancement of Science, in which he started by talking about “the boring art of documenting,” which includes “anything and everything involved in the creation and use of documents,” from “writing and publishing the paper, analysing it, indexing it, storing it, copying it, retrieving it, and using and evaluating the data in it.” He explained that, while it is an art that has been described and discussed since the early Greeks, it had advanced little in its capacity to retrieve documents quickly and efficiently. Garfield understood this as a problem because documenting shapes scientific intelligence, which allows estimating “whether a country's scientists and laboratories will make a significant contribution in case of a war emergency.” Therefore, because of its importance, the art of documenting needed a better name and to be studied in a scientific manner. He finished his speech by renaming the “boring art of documenting” the scientific citation index.

Garfield’s proposal for the first scientific citation index was published in Science in 1955 (Garfield, 1955; Garfield & Hayne, 1955). The primary idea was to create an “up-to-date tool to facilitate the dissemination and retrieval of scientific literature” (Garfield, 2007) and to identify authorship (i.e. who influenced whom in the scientific world; Garfield, 1956). The scientific citation index he proposed was, fundamentally, a methodological set of steps used to organize the existing literature: a code, as he called it in his first works, or an algorithm, as he called it in his later years.

Garfield’s tool was introduced to the public as the Web of Knowledge (later renamed the Web of Science) in 1964 by the Institute for Scientific Information (Analytics, 2018a). The scientific citation index was first developed as a record of what each academic published, where, and how often the papers were cited (Garfield, 2007; Monastersky, 2005). It was a useful tool for advancing science by building on previous work and opening studies to falsification. Yet, as time passed, the same algorithm used for the scientific citation index started to be used for other purposes. It rapidly affected library purchasing policies for journals and authors’ decisions on where to target their articles for publication (Garfield, 2007; Monastersky, 2005). It became an instrument for measuring scientific productivity (Garfield, 1979, 2007; Hall, 2010b) and a determinant of research funding and tenure decisions (Adam, 2002; Baneyx, 2008; Garfield, 1979; Hall, 2010b). Moreover, it orders the results of scholarly search engines, regardless of the rigor of the review process or of scientific merit, granting prominence to the schools of thought and individuals that have the greatest output. Being applied to so many tasks, the result is that Garfield’s algorithm of scientific citation indexes started to direct the evolution of science itself (Adam, 2002; American Society for Cell Biology [ASCB], 2012; Hicks, Wouters, Waltman, Rijcke & Rafols, 2015; Monastersky, 2005). This is why, in this work, Garfield’s algorithm is referred to as the algorithm of science.

3. Conditions that rule the algorithm of science

Today, there are about 145 academic search engines and databases in use (Wikipedia, 2018d). As they were inspired by Garfield’s work, they are based on the same four core assumptions and indicators (Hall, 2010b; Ortega, 2014). These are described in this section.

The first assumption refers to the entries of the databases used by academic search engines and their reliance on source (journal) impact factors. These are the primary filter for the ranking systems of scholarly searches. This postulation was based on Bradford's law (Bradford, 1934; Brookes, 1985), which states that a “small percentage of journals account for a large percentage of the articles published in a specific field of science” (Garfield, 1965). Based on this assumption, scholarly search engines such as Web of Science, for example, do not include papers published in journals with moderate to low impact factors in their databases (Analytics, 2018c).

The second assumption is that the number of papers a scholar publishes is the best indicator of the scholar’s productivity. The third assumption is that the number of times a scholar has been cited is the most appropriate indicator of the scholar’s success.

The fourth assumption is based on Merton’s technical norms of science (Merton, 1942), according to which “race, nationality, religion, class, and personal qualities are as such irrelevant” to the development of science (Merton, 1942). This rationale has been incorporated into the algorithm of science through the null consideration of personal characteristics. That is, as the conclusions of research are supposed to be the same regardless of the scholar’s profile, individualities, such as gender, are deemed unimportant and are not taken into consideration.

These four assumptions relate to three of the most important metrics of Garfield’s algorithm, illustrated in the sketch that follows the list:

  1. productivity: the total number of papers within the database in use;

  2. impact: number of times papers are cited; and

  3. h-index: combines “productivity (number of documents) and impact (number of citations) in one metric” (Analytics, 2018b).
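To make these metrics concrete, the following is a minimal sketch of how the three can be computed from a single scholar’s per-paper citation counts. The numbers are hypothetical and this is our own illustration, not the actual implementation of any search engine:

```python
# A minimal sketch (ours, not any search engine's actual code) of the three
# metrics, computed from one scholar's per-paper citation counts.

def productivity(citations):
    """Productivity: total number of papers within the database."""
    return len(citations)

def impact(citations):
    """Impact: total number of times the papers are cited."""
    return sum(citations)

def h_index(citations):
    """h-index: largest h such that h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
    return h

papers = [50, 30, 20, 4, 3, 1]  # hypothetical citation counts
print(productivity(papers), impact(papers), h_index(papers))  # -> 6 108 4
```

Note how the h-index couples the two other metrics: a scholar with few papers caps his/her h-index at the paper count, regardless of how heavily those papers are cited.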

4. Critical analysis of the assumptions that rule the algorithm of science

There are three main criticisms of the logic behind the databases of most scholarly search engines (Assumption 1). First, because only papers published in high-impact journals are included in the databases, individual articles, regardless of their merit, are not presented in scholarly searches and, therefore, are considered less important (given less weight) by default. Relatedly, articles in lowly ranked journals, despite high levels of innovative thought and original contribution, are unlikely to gain the level of prominence they merit because of the retrieval system. Even Nature, one of the highest-ranked scientific journals (Scholar, 2018), has criticized this parameter (Monastersky, 2005), because “a typical paper in a journal with a high impact factor may not, in fact, be cited much more frequently than the average paper in a lower-ranking journal” (Westland, 2004). This is partly because of the democratization of scientific knowledge made possible by free search engines such as Google Scholar. Works that are freely accessible are currently more likely to be cited than those that require payment for downloading, especially by scholars affiliated with institutes that cannot afford access payments. That is, a contribution’s value and standing may be determined not by merit but by its accessibility, which is in turn determined by the wealth of individual academics or their institutions.

Second, despite the tremendous competition, the high rejection rates and the existence of thousands of academic journals, a great number of scholars prefer to submit their work to high-impact journals, so as to be included in the databases and have their works presented in scholarly searches. While “the journal does not help the article; it is the other way round” (Adam, 2002), this is affecting what kind of research is conducted, shifting research toward topics and issues that are of interest to high-impact journals: “for example, it is easy to catch attention when one describes a previously unknown gene or protein related to a disease, even if the analysis is done only superficially […] [Yet] Follow-up studies, to uncover the true functions of the molecules or sometimes to challenge the initial analysis, are typically more difficult to be published in journals of top impact” (Monastersky, 2005).

The third criticism relates to the fact that the most highly cited journals only accept and publish papers written in English (Scholar, 2018), which obviously benefits those with better English skills (Drubin & Kellogg, 2017). While English is the language of science (Kaplan, 2001), only about 250 million people in the world speak English as their first language (Wikipedia, 2018b). Non-native speakers, who struggle to publish in English, frequently complain “that manuscript reviewers often focus on criticizing their English, rather than looking beyond the language to evaluate the scientific results and logic of a manuscript” (Drubin & Kellogg, 2017). In this context, scholars tend to prefer publishing in the languages they master. By doing so, their works are left out of the results of scholarly search engines, unless the search is done with Google Scholar in the same language in which the article was written. Another reason for scholars to publish in their own language is to influence local policies (ASCB, 2012), which should be considered a productivity indicator but is not, because measuring this sort of impact and productivity is difficult and expensive. Counting papers and citations is much easier.

The fourth criticism refers to the second assumption, which says that productivity can be measured by the number of published papers. This limits the measurement of a scholar’s productivity to the number of papers he/she has published, thus ignoring the importance of lecturing, social service (ASCB, 2012; Boyer, 1990) and helping with the administrative tasks of the institute where the scholar works. It also ignores that the development of knowledge needs reflection, hence time (Adam, 2002). Under the influence of this rule, scholars have accumulated surprisingly high numbers of publications, reaching as high as 3,000 scholarly works. Although 3,000 publications is not the rule, many have published well above 2,000 (Table I).

Around 2,000 scholarly works within a lifespan of, say, 40 years is the equivalent of publishing one work per week (with no holidays), which makes one wonder whether there was enough time for experiments, fieldwork or reflection. While these high numbers of publications are important for ranking systems – and therefore for grants and tenure – they are likely to diminish the pace of the development of knowledge, as scholars are likely to spend more time “trying to publish their work rather than moving on to the next set of experiments” (Monastersky, 2005) or, in the case of the social sciences, to the next fieldwork. In this context, it is worth remembering that Isaac Newton wrote 8 papers (Wikipedia, 2018c) and Einstein 20 (Wikipedia, 2018a), and they made history.

The fifth criticism refers to the third assumption of the algorithm of science, which established that the more a paper or a scholar is cited, the more important he/she is (Garfield, 1970); hence, his/her works should be presented first in search results. The first problem with this assumption is that a paper or a scholar might be frequently cited in a negative way, that is, not because the work is good but because the findings are erroneous. Second, there is an audience issue: an important matter might be studied by only a few scholars, or a mediocre study may refer to a matter studied by many. Third, “the practice of citing one’s work can meaningfully influence a number of metrics, including total citation counts, citation speed, the ratio of external to internal cites, Diffusion scores and Hirsch’s h-indices” (Carley, Porter & Youtie, 2013); this is especially the case for works authored by many scholars. While it may not seem possible, some papers have as many as 3,000 authors (Analytics, 2017b). If each one of these 3,000 scholars writes just one paper and puts all 3,000 names on it, then in a short time, they will all have 3,000 scholarly works; and if each of these papers cites the whole joint corpus, each paper will be cited 3,000 times, so each scholar will have been cited nine million times (3,000 papers × 3,000 citations each). This, of course, disadvantages scholars who work in more manageable groups of three or four. Consequently, the numbers of papers and of citations do not reflect the success of the scholar but the structure in which he/she works.
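A few lines of arithmetic make the scale of this distortion explicit, using the hypothetical numbers from the paragraph above (real citation databases may discount some self-citations, which this toy calculation deliberately ignores):

```python
# Toy arithmetic for the hyper-authorship scenario described above
# (hypothetical numbers from the text; self-citation discounting ignored).

n = 3_000                 # scholars, each writing one paper with all 3,000 names on it
papers_per_scholar = n    # every scholar is an author on all 3,000 papers
cites_per_paper = n       # each paper is cited once by each of the 3,000 papers

citations_per_scholar = papers_per_scholar * cites_per_paper
print(f"{citations_per_scholar:,}")  # -> 9,000,000
```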

The main critique of the assumption that citation reflects quality refers to the Matthew effect (Bol, De Vaan & Van De Rijt, 2018; Piezunka, Lee, Haynes & Bothner, 2017). With the algorithm now in use, the scholar who publishes more papers gets more ‘points’; hence, his/her work appears at the top of search results. If the scholar has written, say, 1,000 papers, then his/her works will fill the first pages of search results, while the works of a scholar who has written, say, ‘only’ a dozen papers will appear after the avalanche of papers written by one or a few ‘very productive’ scholars. In settings such as these, it is likely that whoever has spoken more will be cited more than the scholar who actually had time for experiments, fieldwork and reflection. In fact, the works of the ‘less productive’ scholar are likely not to be read at all, despite the fact that his/her works might be of higher quality than papers written on production lines. Citation, then, does not reflect the quality of a work or of a scholar (Merton, 1979) but the mechanics of a system.

The sixth criticism refers to the fourth assumption of the algorithm of science, which established the bias-free rationale of science. This rationale has “been exposed to extensive criticism from both conservatives and radicals alike” (Hull, 1990) and has been denied by many (Brightman, 1939; Ihde, 2002), because personal attributes define the problems that are studied, the methods and technologies that are applied and, in the case of the social sciences, the moral values from which the observer considers the research (Bauman, 1998; Ihde, 2002). For the sake of this explanation, say there are three perspectives to be analyzed, namely, A, B and C, on a cultural change regarding female genital mutilation at two moments: a pre-change state at time t0 and a post-change state at t0+1:

  • A represents a female scholar from a low human development country.

  • B represents a female scholar from a medium human development country.

  • C represents a male scholar from a high human development country.

The female scholar from a low human development country (perspective A) is likely to be directly affected by this change, as well as her daughters, sisters, family and friends. Hence, it is likely this will be a subject of frequent conversations and worries. In this context, such change is likely to be greatly noted by this scholar.

The life of the female scholar from a medium human development country (perspective B) is not likely to be directly affected by this change, but because this scholar is also a woman, she is probably aware of the change and empathic toward the women whose lives will be directly affected. While it is likely that she is aware of the changes this law would bring, it is less likely that she understands the full complexity of the situation.

The life of the male scholar from a high human development country (perspective C), however, is not likely to be affected at all by this sort of issue. Hence, he might not even be aware of such a policy change.

As represented in the illustration of Figure 1: “from perspective A, only eight elements and sixteen links can be seen; from perspective B, 15 elements and 40 links can be seen; and, from perspective C, eight elements and 20 links can be seen. Also, from perspectives A and B, the changes can be identified, although it is more evident from perspective A. From perspective C, the change is hidden and not observable. It is masked by other expressions” (Sinay, 2008). Following this logic, it is unlikely that a male scholar from a high human development country would be capable of seeing, and of deeply understanding, the dynamics involved in female genital mutilation, for example. As the issue is not seen, questions are not asked, and knowledge for solving this sort of problem is not developed by male scholars of high human development countries (Bauman, 1998; Hall, 2010a).

A good analogy for this discussion is the blind spots of a car, which are the areas that cannot be seen by the driver in his/her usual position. Because of them, and to avoid accidents, drivers need to be aware that they cannot see everything around them, not even with the mirrors, and they tend to rapidly learn to change their position (turning their heads, for example) to gain perspective. The same is true for science. While scholars need to be loyal to their findings and not alter results to suit the status quo, they need to be aware that, within other cultural systems, the research question could have been asked in a different way and, also, that materials and methods may be culturally determined. In other words, what observers see depends on where they stand (Hall, 2010a, 2010b; Sinay, 2008), and what researchers study depends on what they see. Consequently, science is always biased.

5. Improving the algorithm of science

Before we advance to discussing how to improve the algorithm of science, we need to be clear about the exact goal of the improved algorithm that we want to propose: do we want scholars with different profiles to have equal chances of having their work recognized, or do we want reality to be studied from different perspectives? Each of these goals would need a different algorithm.

To be fair and give scholars equivalent chances of having their work recognized would involve estimating the number of scholars with different characteristics (e.g. the number of males and females, the number of researchers per country, etc.); using this information, we could calculate a weighting system. Just to illustrate, according to UNESCO (UNESCO, 2018), the USA has 1,390,406 researchers, Uruguay 1,748 and Lesotho 11 (Table II). So, if the objective were to give equal opportunities to all, then for every researcher from Lesotho, we would have 158 from Uruguay and 126,400 from the USA.
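A minimal sketch of such a weighting system, using the UNESCO counts just quoted (the normalization against the smallest country is our own illustrative choice, not a method proposed in the literature):

```python
# Sketch of an "equal chances" weighting: each country's researcher pool is
# expressed relative to the smallest pool, so one scholar from Lesotho
# balances 158 from Uruguay and 126,400 from the USA.

researchers = {"USA": 1_390_406, "Uruguay": 1_748, "Lesotho": 11}

smallest = min(researchers.values())
for country, n in researchers.items():
    print(f"per Lesotho researcher: {n // smallest:,} from {country}")
# A per-scholar ranking boost would then be the inverse ratio, smallest / n.
```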

Within this discussion, another point that we need to consider is who is paying for the development of science. While 72 per cent of the budget used in the USA for the development of science comes from business, business contributes only about 5 per cent of the budget in Uruguay (UNESCO, 2018). Therefore, while industry heavily influences research topics in the USA, it has minimal impact on the studies developed in Uruguay. Giving voice to researchers with multiple perspectives thus also means giving voice to multiple sponsors, which, by default, have great influence on research topics.

In this context, the changes proposed to the algorithm of science should aim not at giving equal chances to researchers but at building science on multiple perspectives. That is, the algorithm should not consider the number of men and women in science or the number of citizens or researchers per country, but should ensure that the results presented by scholarly search engines alternate the works of women and men and, also, of scholars affiliated with different countries, with different levels of development, who speak different languages.
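As a sketch of what such alternation could look like as a re-ranking step, consider round-robin interleaving across author profiles. The Work record and the profile labels below are hypothetical; a real engine would combine gender, country development level and language into richer metadata:

```python
# Sketch: re-rank a citation-ordered result list by round-robin interleaving
# across author profiles, so no single profile dominates the first page.

from itertools import chain, zip_longest
from typing import NamedTuple

class Work(NamedTuple):
    title: str
    profile: str  # e.g. a label combining gender, country development, language

def interleave_by_profile(results):
    """Round-robin across profiles, keeping each profile's internal ranking."""
    buckets = {}
    for work in results:          # results arrive in citation-ranked order
        buckets.setdefault(work.profile, []).append(work)
    rounds = zip_longest(*buckets.values())
    return [w for w in chain.from_iterable(rounds) if w is not None]

ranked = [Work("p1", "male/high-HDI"), Work("p2", "male/high-HDI"),
          Work("p3", "female/low-HDI")]
print([w.title for w in interleave_by_profile(ranked)])  # -> ['p1', 'p3', 'p2']
```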

6. Conclusions

This work focused on the influence of the algorithm currently used by scholarly search engines, with the aim of proposing improvements so that a more reliable algorithm can help break the hegemony of science. To do so, this research started by exploring the history behind Garfield’s algorithm, used by scholarly search engines. This was done with the objective of understanding the socio-cultural background against which this algorithm was developed.

The second step of this work involved identifying the most important parameters used by the algorithm and logically discussing their relevance. This allowed us to conclude that, while defensible in the past, the four main assumptions used by the algorithm are misplaced and, more importantly, significantly bias the development of science toward the perception of male scholars who are primarily affiliated with highly developed countries where English is the official language.

Because of the rules incorporated into the existing algorithm, science has been evolving with limited influence from different cultures. Yet contact with different cultures is one of the quickest paths to cultural evolution (Goodenough, 2003). Therefore, the improvement of the existing algorithm will increase the chances of contact among different cultures, which is likely to stimulate rapid progress in the development of knowledge.

For science to progress based on plural understandings of the world, first, the databases used by scholarly search engines need to significantly expand their entries so as to include journals in different languages, at least in some of the most spoken languages, such as Mandarin and Spanish. This is already being done by Google Scholar. Automatic translators, such as Google Translate, are well developed and can efficiently translate works, mitigating (or, in some cases, eliminating) linguistic barriers.

The algorithm of science needs to incorporate the understanding that research questions, topics, materials and methods are always culturally biased. Hence, researchers’ characteristics, such as gender, the languages they speak and the level of development of their country of affiliation, need to be taken into consideration so that the result lists of scholarly searches incorporate similar proportions of scholars with different profiles.

The productivity and impact logics, while defensible in the past, are now creating more noise than knowledge; hence, they need to be substituted. If we use Einstein’s and Newton’s scientific production as the standard, then we can redefine productivity as one article per year, which is even more than they published. Success could then be measured by the number of times the scholar’s most cited work has been cited, excluding, of course, self-citation.
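Under that proposal, success reduces to a single number per scholar, as the following sketch shows. The records are hypothetical, and detecting self-citations in practice is itself a non-trivial problem (cf. Carley, Porter & Youtie, 2013):

```python
# Sketch of the proposed success metric: citations of the scholar's single
# most cited work, net of self-citations (hypothetical data).

def success(works):
    """works: iterable of (citations, self_citations) pairs, one per paper."""
    return max((total - own for total, own in works), default=0)

scholar = [(120, 15), (300, 40), (45, 2)]
print(success(scholar))  # -> 260, the most cited work net of self-citation
```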

As research topics are related to scholars’ perspectives, it is possible to conclude that science is developing based on the interests of the most privileged people. This (at least partially) explains why science is making significant advances on issues that are important only for this group, such as travelling further into space, while much simpler problems (from a scientific perspective), such as how to distribute food so as to eliminate famine, remain unanswered.

Yet one of the key issues relating to scientific bias might not even be the questions that remain unanswered and the problems that continue unsolved. In our opinion, the most significant problem seems to be that people are losing trust in science. The consequences of this lack of trust can be tremendous. Vaccination, or the lack of it, is probably the best (and the most worrying) example of the impact of the general public losing faith in science. It has brought measles and other illnesses back to developed countries and is already costing lives. Climate change, and the lack of responses to it, is another example.

If science is to improve the well-being of all, as we believe it should, then a new approach needs to be adopted. This fundamentally involves exploring issues that can actually make a difference, and this will only happen when those who today lack voice start to be heard. We understand that gender, political, economic and linguistic constraints play an important role in this discussion. However, we know that women from less developed countries where English is not the official language are doing science. In fact, three of the authors of this work are women from less developed countries where English is not the official language.

The main problem, as this work has demonstrated, is caused by the algorithm of science. This is a comforting conclusion, as the algorithm can easily be adjusted. Change the algorithm, and the voices of science will automatically expand.

Figures

Figure 1. The impact of perception on research topics

Table I. Examples of scholars who have published more than one thousand scholarly works

No. of publications No. of citations Scholar Link to scholars’ profiles
3,000 358,716 Solomon H Snyder Retrieved from https://scholar.google.com/citations?hl=en&user=gm9yzgEAAAAJ
3,000 334,396 Braunwald E. Retrieved from https://scholar.google.com/citations?hl=en&user=yQoYhjwAAAAJ
2,982 273,580 Robert Langer Retrieved from https://scholar.google.com/citations?hl=en&user=5HX--AYAAAAJ
2,420 267,022 JoAnn E. Manson Retrieved from https://scholar.google.com/citations?hl=en&user=QK07bYEAAAAJ
2,313 235,750 Gordon Guyatt Retrieved from https://scholar.google.com/citations?hl=en&user=VKGc654AAAAJ
2,172 291,952 Graham Colditz Retrieved from https://scholar.google.com.au/citations?user=M5_mEHQAAAAJ&hl=en
1,963 308,262 Michael Graetzel Retrieved from https://scholar.google.com/citations?hl=en&user=B0h47WAAAAAJ
1,814 200,750 Richard A. Flavell Retrieved from https://scholar.google.com/citations?hl=en&user=IPbxgZkAAAAJ
1,598 326,983 Shizuo Akira Retrieved from https://scholar.google.com/citations?hl=en&user=0TG2laoAAAAJ
1,567 333,912 Ronald C Kessler Retrieved from https://scholar.google.com/citations?hl=en&user=EicYvbwAAAAJ
Notes: Number of publications and of citations as per Google Scholar on October 17, 2018. Note that Google Scholar retrieves a maximum of 3,000 works.

Source: Google Scholar

Table II. Number of researchers and budget for R&D

Uruguay USA Lesotho
Total population 3,469,551.00a 326,766,748.00b 2,233c
Researchers per million 504.00 4,255.00 5
Total number of researchers 1,748.65 (K) 1,390,392.51 (M) 10
Business 11,046.00 (K) 340,728.00 (M) Not informed
Government 82,487.50 (K) 54,106.00 (M) 91,785
Universities 143,328.20 (K) 62,354.00 (M) 599,295
Private non-profit 2,968.20 (K) 19,272.00 (M) Not informed
Total budget 239,829.90 (K) 476,460.00 (M) Unknown
Total budget from non-business 228,783.90 (K) 135,732.00 (M) Unknown
% budget from non-business 95% 28% Unknown

Source: Data regarding researchers per million and the budget for R&D from UNESCO (UNESCO, 2018). Data about the population per country from Google, search done on November 7, 2018; (a) Retrieved from www.google.com.au/search?rlz=1C1GGRV_enAU808AU809&ei=U1TiW6SsH8GsrQGkvZfABw&q=population+of+uruguay&oq=population+of+uruguay&gs_l=psy-ab.3.0l4j0i22i30k1l6.54442.56088.0.57384.7.7.0.0.0.0.158.604.0j4.4.0…0.0…1c.1.64.psy-ab.3.4.603…0i131i67k1j0i67k1j0i10k1.0.iMiZ5uO-BNk; (b) Retrieved from www.google.com.au/search?hl=en-AU&rlz=1C1GGRV_enAU808AU809&ei=i1TiW5rMJM_VsAGnvIaoAw&q=population+of+usa&oq=population+of+usa&gs_l=psy-ab.3.0i131i67k1j0l7j0i10k1j0.18254.18541.0.18733.3.3.0.0.0.0.172.335.0j2.2.0…0.0…1c.1.64.psy-ab.1.2.335…0i67k1.0.Xh-rFjWbBUQ; (c) Retrieved from www.google.com.au/search?q=population+of+lesotho&rlz=1C1GGRV_enAU808AU809&oq=pop&aqs=chrome.2.69i57j69i59j35i39l2j0l2.3045j0j7&sourceid=chrome&ie=UTF-8

References

Adam, D. (2002). News feature: The counting house. Nature, 415, 726-729.

American Society for Cell Biology [ASCB]. (2012). San Francisco declaration on research assessment. Retrieved from https://sfdora.org/read

Analytics, C. (2017a). Clarivate analytics names the world’s most impactful scientific researchers with the release of the 2017 highly cited researchers list. Retrieved from https://clarivate.com/blog/news/clarivate-analytics-names-worlds-impactful-scientific-researchers-release-2017-highly-cited-researchers-list/

Analytics, C. (2017b). Look up to the brightest stars: Introducing 2017’s highly cited researchers. Retrieved from https://hcr.clarivate.com/wp-content/uploads/2017/11/2017-Highly-Cited-Researchers-Report-1.pdf

Analytics, C. (2018a). The concept of citation indexing: A unique and innovative tool for navigating the research literature. Retrieved from https://clarivate.com/essays/concept-citation-indexing/

Analytics, C. (2018b). In cites benchmarking and analytics: Understanding the metrics. Retrieved from https://clarivate.libguides.com/incites_ba/understanding-indicators

Analytics, C. (2018c). Web of science databases: Make web of science your own. Retrieved from https://clarivate.com/products/web-of-science/databases/

Baneyx, A. (2008). “Publish or Perish” as citation metrics used to analyze scientific output in the humanities: International case studies in economics, geography, social sciences, philosophy, and history. Archivum Immunologiae et Therapiae Experimentalis, 56, 363-371.

Bauman, Z. (1998). Globalization: The human consequences, New York, NY: Columbia University Press.

Bol, T., De Vaan, M. and Van De Rijt, A. (2018). The Matthew effect in science funding. Proceedings of the National Academy of Sciences, 115, 4887-4890.

Boyer, E. L. (1990). Scholarship reconsidered: Priorities of the professoriate, Princeton, NJ: Princeton University Press.

Bradford, S. C. (1934). Sources of information on specific subjects. Engineering, 137, 85-86.

Brightman, R. (1939). The social function of science. Nature, 143, 262-263.

Brookes, B. C. (1985). “Sources of information on specific subjects” by SC Bradford. Journal of Information Science, 10, 173-175.

Carley, S., Porter, A. L. and Youtie, J. (2013). Toward a more precise definition of self-citation. Scientometrics, 94, 777-780.

Denzin, N. K. and Lincoln, Y. S. (1994). The SAGE handbook of qualitative research, Thousand Oaks, CA: Sage Publications.

Drubin, D. G. and Kellogg, D. R. (2017). English as the universal language of science: Opportunities and challenges. Molecular Biology of the Cell, 23, 1399. http://dx.doi.org/10.1091/mbc.E12-02-0108

Garfield, E. (1955). Citation indexes to science: A new dimension in documentation through association of ideas. Science, 122, 108-111.

Garfield, E. (1956). Citation indexes: New paths to scientific knowledge. The Chemical Bulletin, 43, 11-12.

Garfield, E. (1965). Science citation index – answers to frequently asked questions. Revue Internationale de la Documentation, 32, 112-116.

Garfield, E. (1970). Citation indexing for studying science. Nature, 227, 669-671.

Garfield, E. (1979). Is citation analysis a legitimate evaluation tool?. Scientometrics, 1, 359-375.

Garfield, E. and Hayne, R. L. (1955). Needed - A national science intelligence and documentation center. “Symposium on Storage and Retrieval of Scientific Information” of the Annual Meeting of the American Association for the Advancement of Science in Atlanta, 24, 17.

Garfield, E. (2007). The evolution of the science citation index. International Microbiology, 10, 65-69.

Goodenough, W. H. (2003). In pursuit of culture. Annual Review of Anthropology, 32, 1-12.

Hall, C. M. (2010a). Academic capitalism, academic responsibility and tourism academics: Or, the silence of the lambs?. Tourism Recreation Research, 35, 298-301.

Hall, C. M. (2010b). A citation analysis of tourism recreation research. Tourism Recreation Research, 35, 305-309.

Hicks, D., Wouters, P., Waltman, L., Rijcke, S. D. and Rafols, I. (2015). Bibliometrics: The Leiden manifesto for research metrics. Nature News, 520, 429-431.

Hull, D. L. (1990). Particularism in science. Criticism, 32, 343-359.

Ihde, D. (2002). How could we ever believe science is not political?. Technology in Society, 24, 179-189.

Kaplan, R. B. (2001). English – the accidental language of science?. In U. Ammon (Ed.) The dominance of English as a language of science: Effects on other languages and language communities (chap. 1). New York, NY: Mouton de Gruyter, 3-26.

Merton, R. K. (1942). A note on science and democracy. Journal of Legal and Political Sociology, 1, 115-126.

Merton, R. K. (1979). Foreword. In E. Garfield and R. K. Merton (Eds) Citation indexing: Its theory and application in science, technology, and humanities (pp. 5-9). Philadelphia, PA: ISI Press.

Monastersky, R. (2005). The number that’s devouring science. Chronicle of Higher Education, 52, 14. Retrieved from https://www3.nd.edu/∼pkamat/citations/chronicle.pdf

Ortega, J. L. (2014). Academic search engines: A quantitative outlook, Amsterdam, The Netherlands: Elsevier.

Piezunka, H., Lee, W., Haynes, R. and Bothner, M. S. (2017). The Matthew effect as an unjust competitive advantage: Implications for competition near status boundaries. Journal of Management Inquiry, 27, 378-381.

Scholar, G. (2018). Top publications. Retrieved from https://scholar.google.com.au/citations?view_op=top_venues&hl=en&vq=en

Sinay, L. (2008). Modelling and forecasting cultural and environmental changes (PhD). The University of Queensland, Brisbane, Australia.

UNESCO (2018). How much does your country invest in R&D?. Retrieved from http://uis.unesco.org/apps/visualisations/research-and-development-spending/

Westland, J. C. (2004). The IS core XII: Authority, dogma, and positive science in information systems research. Communications of the Association for Information Systems, 13, 12.

Wikipedia (2018a). Albert Einstein. Retrieved from https://en.wikipedia.org/wiki/Albert_Einstein

Wikipedia (2018b). English-speaking world. Retrieved from https://en.wikipedia.org/wiki/English-speaking_world

Wikipedia (2018c). Isaac Newton. Retrieved from https://en.wikipedia.org/wiki/Isaac_Newton

Wikipedia (2018d). List of academic databases and search engines. Retrieved from https://en.wikipedia.org/wiki/List_of_academic_databases_and_search_engines

Acknowledgements

Author contributions: Sinay, L. conceptualized this work and was responsible for data curation, formal analysis, investigation and writing the initial draft. Sinay, C. and Carter, B. contributed equally to defining the methodological approach and to reviewing and editing. Martins, A. helped with the final editing.

Corresponding author

Maria Cristina Fogliatti de Sinay can be contacted at: cristinasinay@gmail.com
