Search results

1 – 10 of 927
Article
Publication date: 14 January 2014

Ronald Rousseau

Downloads: 500

Abstract

Purpose

The purpose of this paper is to extend the h-index framework to the case that articles are counted fractionally.

Design/methodology/approach

Three restrictions of the standard h-index are explained: as a natural number, the standard h-index is a rather coarse indicator; if a scientist has published relatively few publications, the h-index is completely determined by the number of publications; and the standard h-index cannot be applied when publications are counted fractionally or when magnitude values smaller than one occur.
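The first two restrictions follow directly from the definition. A minimal sketch of the standard h-index (the largest h such that h of an author's papers each have at least h citations), in Python:

```python
def h_index(citations):
    """Standard h-index: the largest h such that h papers
    have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(ranked, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# The result is always a natural number (hence coarse), and for a
# small oeuvre it is capped by the number of publications:
print(h_index([100, 50, 3]))  # 3 papers, so h can be at most 3
print(h_index([100, 50, 2]))
```

Note how [100, 50, 3] and [100, 50, 2] land on h = 3 and h = 2 respectively, regardless of the large citation counts of the first two papers.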

Findings

We recall solutions we proposed in earlier publications regarding the first two problems (the use of the interpolated h-index and of the pseudo h-index) and add a new proposal to solve the third problem. The relation between the recently introduced window/field-normalized h-type index (hwf-index) and the interpolated h-index is described. A real-world example proves the feasibility of this proposal.
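For intuition, one way to obtain a non-integer refinement (a hedged sketch of linear interpolation, not necessarily Rousseau's exact formulation) is to take the point where the piecewise-linear descending citation curve crosses the diagonal y = x:

```python
def interpolated_h(citations):
    """Sketch: linearly interpolate the descending citation curve
    between ranks h and h+1 and return where it crosses y = x."""
    ranked = sorted(citations, reverse=True)
    ranked.append(0)  # sentinel beyond the last paper
    h = 0
    while h < len(ranked) - 1 and ranked[h] >= h + 1:
        h += 1
    if h == 0:
        return 0.0
    c_h, c_next = ranked[h - 1], ranked[h]
    # intersection of the segment (h, c_h)-(h+1, c_next) with y = x;
    # the denominator is at least 1 since c_h >= c_next
    return ((h + 1) * c_h - h * c_next) / (1 + c_h - c_next)

# Two authors with the same integer h = 2 are now distinguished:
print(interpolated_h([5, 4, 2]))  # 8/3
print(interpolated_h([9, 2, 2]))  # 2.0
```

Because the interpolated value can take non-integer values between h and h + 1, it addresses the coarseness restriction while agreeing with the standard h-index whenever the citation curve meets the diagonal exactly at an integer rank.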

Research limitations/implications

Some colleagues have argued that the h-index and its variants have fatal flaws and hence should never be used. Not everyone, however, agrees with this opinion.

Originality/value

Assuming that the h-index still has some value, this paper introduces a refinement of the interpolated h-index, called the generalized interpolated h-index. In this way the h-index framework is extended to incorporate, for instance, the case that fractional counting for publications and citations is applied.

Details

Aslib Journal of Information Management, vol. 66 no. 1
Type: Research Article
ISSN: 2050-3806

Keywords

Article
Publication date: 14 September 2021

Fayaz Ahmad Loan, Nahida Nasreen and Bisma Bashir

Abstract

Purpose

The study's main purpose is to scrutinize Google Scholar profiles and find the answer to the question, “Do authors play fair or manipulate Google Scholar Bibliometric Indicators like h-index and i10-index?”

Design/methodology/approach

The authors scrutinized the Google Scholar profiles of the top 50 library and information science researchers claiming authorship of 21,022 publications. The bibliographic information of all 21,022 publications, such as authorship and subject details, was verified to identify accuracy, discrepancies and manipulation in the authorship claims. The actual and fabricated entries of all the authors, along with their citations, were recorded in Microsoft Office Excel 2007 for further analysis and interpretation using simple arithmetic calculations.

Findings

The results show that the h-index of authors obtained from Google Scholar should not be accepted at face value, as variations exist in the publication count and citations, which ultimately affect the h-index and i10-index. The results reveal that the majority of the authors have variations in publication count (58%), citations (58%), h-index (42%) and i10-index (54%). The magnitude of variation in the number of publications, citations, h-index and i10-index is very high, especially for the top-ranked authors.

Research limitations/implications

The scope of the study is strictly restricted to the faculty members of library and information science and cannot be generalized across disciplines. Further, the scope of the study is limited to Google Scholar and caution needs to be taken to extend results to other databases like Web of Science and Scopus.

Practical implications

The study has practical implications for authors, publishers and academic institutions. Authors must stop unethical research practices; publishers must adopt techniques to overcome the problem; and academic institutions need to take precautions before hiring, recruiting, promoting and allocating resources to candidates on the face value of the Google Scholar h-index. Besides, Google needs to work on the weak areas of Google Scholar to improve its efficacy.

Originality/value

The study brings to light new ways of manipulating bibliometric indicators provided by Google Scholar, such as the h-index and i10-index, through false authorship claims.

Details

Library Hi Tech, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0737-8831

Keywords

Article
Publication date: 9 November 2015

Wen-Chin Hsu, Chih-Fong Tsai and Jia-Huan Li

Abstract

Purpose

Although journal rankings are important for authors, readers, publishers, and promotion and tenure committees, it has been argued that the use of different measures (e.g. the journal impact factor (JIF) and Hirsch's h-index) often leads to different journal rankings, which makes it difficult to reach an appropriate decision. A hybrid ranking method based on the Borda count approach, the Standardized Average Index (SA index), was introduced to solve this problem. The paper aims to discuss these issues.
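The SA index itself is defined in the paper; as background, the generic Borda count idea it builds on can be sketched as follows (the journal names and rankings here are hypothetical, purely for illustration):

```python
def borda_aggregate(rankings):
    """Generic Borda count: each input ranking awards n-1 points
    to its top item, n-2 to the next, and so on; items are then
    re-ranked by total points."""
    n = len(rankings[0])
    scores = {item: 0 for item in rankings[0]}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical journals ranked differently by JIF and by h-index:
by_jif = ["J1", "J2", "J3"]
by_h = ["J2", "J3", "J1"]
print(borda_aggregate([by_jif, by_h]))  # ['J2', 'J1', 'J3']
```

The aggregation rewards journals that rank well on both measures, which is the property the SA index exploits to reconcile JIF-based and h-index-based rankings.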

Design/methodology/approach

Citations received by the articles published in 85 Health Care Sciences and Services (HCSS) journals in the period of 2009-2013 were analyzed with the use of the JIF, the h-index, and the SA index.

Findings

The SA index exhibits a high correlation with the JIF and the h-index (γ > 0.9, p < 0.01) and yields results with higher accuracy than the h-index. The new, comprehensive citation impact analysis of the 85 HCSS journals shows that the SA index can help researchers find journals with both high JIFs and high h-indices more easily, thereby providing a useful reference for paper submissions and research directions.

Originality/value

The contribution of this study is the application of the Borda count approach to combine the HCSS journal rankings produced by the two widely accepted indices of the JIF and the h-index. The new HCSS journal rankings can be used by publishers, journal editors, researchers, policymakers, librarians, and practitioners as a reference for journal selection and the establishment of decisions and professional judgment.

Details

Online Information Review, vol. 39 no. 7
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 3 July 2017

Lisa S. Panisch, Thomas E. Smith, Tyler Edison Carter and Philip J. Osteen

Abstract

Purpose

The purpose of this paper is to analyze the role of gender and faculty rank to determine their contribution to individual variance in research productivity for doctoral social work faculty in Israel.

Design/methodology/approach

H-index scores were used to assess research productivity. Quantitative comparisons of the h-index scores were performed for a sample (n=92) of social work faculty from Israeli universities with social work doctoral programs. Average h-index differences were assessed between genders at each tenure-track faculty rank and between faculty ranks for each gender.

Findings

Scholarly impact varied as a function of faculty rank. There was little indication of variance due to gender or to the interaction of gender and rank. The mean h-index of male faculty was higher than that of women at the ranks of lecturer and full professor; women had a higher mean h-index than men at the ranks of senior lecturer and associate professor. H-index means varied most at the full professor level.

Originality/value

Results were congruent with previous studies demonstrating that male faculty in the social sciences have higher overall h-index scores than women. However, this study was unique in its finding that this gap was reversed for Israeli social work faculty at the senior lecturer and associate professor ranks. Further research is needed to examine the differences in publication patterns of social work faculty in different countries.

Details

Journal of Applied Research in Higher Education, vol. 9 no. 3
Type: Research Article
ISSN: 2050-7003

Keywords

Article
Publication date: 3 August 2012

Mu‐Hsuan Huang

Downloads: 1654

Abstract

Purpose

The purpose of this study is to evaluate the scientific performance of universities by extending the application of the h‐index from the individual to the institutional level. A ranking of the world's top universities based on their h‐index scores was produced. The geographic distribution of the highly ranked universities by continent and by country was also analysed.

Design/methodology/approach

This study uses bibliometric analysis to rank the universities. In order to calculate each university's h‐index, the numbers of papers and citations for each university were gathered from the Web of Science, including the Science Citation Index and the Social Science Citation Index. Authority control dealing with variations in university names ensured the accuracy of each university's count of published journal papers and the subsequent citation statistics.

Findings

It was found that a high correlation exists between the h‐index ranking generated in this study and that produced by Shanghai Jiao Tong University. The results confirm the validity of the h‐index in the assessment of research performance at the university level.

Originality/value

The h‐index has been used to evaluate research performance at the institutional level in several recent studies; however, these studies evaluated institutions' performance only in certain disciplines or in a single country. This paper measures the research performance of universities all over the world, and the applicability of the h‐index at the institutional level was validated by calculating the correlation between the h‐index ranking and the ranking by Shanghai Jiao Tong University.

Details

Online Information Review, vol. 36 no. 4
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 27 November 2009

Richard J.C. Brown

Downloads: 1056

Abstract

Purpose

The purpose of this conceptual paper is to present a simple, novel method for excluding self‐citation from h‐index values – the b‐index.

Design/methodology/approach

The work described assumes that relative self‐citation rate is constant across an author's publications and that the citation profile of a set of papers follows a Zipfian distribution, and from this derives a simple mathematical expression for excluding self‐citation from h‐index values.

Findings

It is shown that this new index is simply equal to the integer value of the author's external citation rate (non‐self‐citations) to the power three quarters, multiplied by their h‐index. This value, called the b‐index, does not require an extensive analysis of the self‐citation rates of individual papers to produce, and appropriately shows the biggest numerical decreases, as compared to the corresponding h‐index, for very high self‐citers.
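Taking the abstract's description literally, the b-index can be sketched as follows (where external_rate is the fraction of an author's citations that are not self-citations, the quantity the abstract calls the external citation rate):

```python
import math

def b_index(h, external_rate):
    """b-index as described in the abstract: the integer part of
    the h-index multiplied by the external citation rate (the
    fraction of citations that are not self-citations) raised to
    the power three quarters."""
    return math.floor(h * external_rate ** 0.75)

# With no self-citations the h-index is unchanged; a heavy
# self-citer (only 40% external citations) takes a large cut:
print(b_index(20, 1.0))  # 20
print(b_index(20, 0.4))  # 10
```

The power of three quarters dampens the correction for modest self-citation rates while, as the abstract notes, producing the biggest numerical decreases for very high self-citers.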

Practical implications

The method presented allows the user to assess quickly and simply the effects of self‐citation on an author's h‐index.

Originality/value

This paper provides a simple and novel method for excluding self‐citation from the h‐index and should be of interest to those interested in bibliometrics and databases of scientific literature.

Details

Online Information Review, vol. 33 no. 6
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 7 September 2010

Michael Norris and Charles Oppenheim

Downloads: 1811

Abstract

Purpose

This review aims to show, broadly, how the h‐index has become a subject of widespread debate and how it has spawned many variants and diverse applications since it was first introduced in 2005, and to outline some of the issues in its use.

Design/methodology/approach

The review drew on a range of material from some 1,990 sources published since 2005. From these sources, a number of themes were identified and discussed, ranging from the h‐index's advantages to which citation database might be selected for its calculation.

Findings

The analysis shows how the h‐index has quickly established itself as a major subject of interest in the field of bibliometrics. Study of the index ranges from its mathematical underpinning to the range of variants perceived to address the index's shortcomings. The review illustrates how widely the index has been applied, but also how much care must be taken in its application.

Originality/value

The use of bibliometric indicators to measure research performance continues, with the h‐index as its latest addition. The use of the h‐index, its variants and the many applications to which it has been put are still at the exploratory stage. The review shows the breadth and diversity of this research and the need for further studies to verify the validity of the h‐index.

Details

Journal of Documentation, vol. 66 no. 5
Type: Research Article
ISSN: 0022-0418

Keywords

Article
Publication date: 11 April 2008

Péter Jacsó

Downloads: 944

Abstract

Purpose

This paper aims to provide a general overview, to be followed by a series of papers focusing on the analysis of pros and cons of the three largest, cited‐reference‐enhanced, multidisciplinary databases (Google Scholar, Scopus, and Web of Science) for determining the h‐index.

Design/methodology/approach

The paper focuses on the analysis of pros and cons of the three largest, cited‐reference‐enhanced, multidisciplinary databases (Google Scholar, Scopus and Web of Science).

Findings

The h‐index, developed by Jorge E. Hirsch to quantify the scientific output of researchers, immediately received well‐deserved attention in academia. The theoretical part of his idea was widely embraced, and even enhanced, by several researchers. Many of them also recommended derivative metrics based on Hirsch's idea to compensate for potential distortion factors, such as high self‐citation rates. The practical aspects of determining the h‐index also need scrutiny, because some content and software characteristics of reference‐enhanced databases can strongly influence h‐index values.

Originality/value

The paper focuses on the analysis of pros and cons of the three largest, cited‐reference‐enhanced, multidisciplinary databases.

Details

Online Information Review, vol. 32 no. 2
Type: Research Article
ISSN: 1468-4527

Keywords

Article
Publication date: 3 October 2019

Prem Vrat

Abstract

Purpose

The purpose of this paper is to reveal the limitations of the h-index in assessing research performance through citation analysis and to suggest two new indexes, called the prime index (P-index) and the value added index (V-index), which are simpler to compute than the g-index and more informative. For more serious research performance evaluation, an analytic hierarchy process (AHP) methodology is proposed.

Design/methodology/approach

The methodology adopted is to compare existing indexes for citation-based research assessment and identify their limitations, particularly those of the h-index, which is most commonly employed. The paper sets out the advantages of the g-index over the h-index and then proposes the P-index, which is simpler to compute than the g-index yet richer in information content. Another index, the V-index, is proposed on a philosophy similar to that of the P-index by considering the total number of citations per author. For serious evaluation of a finite set of candidates for awards/recognitions, a seven-criteria-based AHP is proposed. All new approaches are illustrated with raw data drawn from the Google Scholar-powered website H-POP.
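As background on the AHP step (a generic sketch, not the paper's seven-criteria model), criterion weights can be approximated from a pairwise-comparison matrix by the row geometric mean:

```python
from math import prod

def ahp_priorities(pairwise):
    """Approximate AHP criterion weights: take the geometric mean
    of each row of the pairwise-comparison matrix, then normalize
    so the weights sum to 1."""
    n = len(pairwise)
    geo = [prod(row) ** (1 / n) for row in pairwise]
    total = sum(geo)
    return [g / total for g in geo]

# Hypothetical two-criteria case: the first criterion is judged
# three times as important as the second (reciprocal matrix):
print(ahp_priorities([[1, 3], [1 / 3, 1]]))
```

For this reciprocal matrix the weights come out (up to floating-point rounding) as 0.75 and 0.25; with seven criteria, as in the paper, the same procedure applies to a 7 x 7 matrix of pairwise judgments.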

Findings

This paper demonstrates the over-hype about the use of the h-index over the g-index. It shows that the newly proposed P-index is much simpler to compute than the g-index yet performs better. The V-index is a quick way to ascertain the value added by a research scientist in multiple-authored research papers. The P-index gives values 3-4 percent higher than the g-index, and it is a holistic index, as it uses the complete citation data. AHP is a very powerful multi-criteria approach; it also shows the g-index to be a more important factor, whereas the h-index is the least important although most frequently used. It is hoped that the findings of this paper will help in rectifying the misplaced emphasis on the h-index alone.

Research limitations/implications

The research focus has been to suggest new, faster and better methods of research assessment. However, a detailed comparison of all existing approaches with the new ones will call for testing them over a large number of data sets. A limitation is that the approaches were tested on only 5 academics to illustrate the AHP and 20 researchers to compare the new indexes with some of the existing ones; not all existing indexes are covered.

Practical implications

The outcomes of this research may have major practical applications for the research assessment of academics/researchers and may rectify the imbalance in assessment by reducing the over-hype on the h-index. For more serious evaluation of the research performance of academics, the seven-criteria AHP approach will be more comprehensive and holistic than a single-criterion citation metric. One hopes that the findings of this paper will receive much attention/debate.

Social implications

Research assessment based on proposed approaches is likely to lead to greater satisfaction among those evaluated and higher confidence in the evaluation criteria.

Originality/value

P- and V-indexes are original. Application of AHP for multi-criteria assessment of research through citation analysis is also a new idea.

Details

Journal of Advances in Management Research, vol. 17 no. 1
Type: Research Article
ISSN: 0972-7981

Keywords

Article
Publication date: 1 June 2015

Mousa Yaminfirooz and Hemmat Gholinia

Abstract

Purpose

This paper aims to evaluate some of the known scientific indexes by using virtual data and proposes a new index, named the multiple h-index (mh-index), to overcome the limitations of these variants.

Design/methodology/approach

Citation reports for 40 researchers in Babol, Iran, were extracted from the Web of Science and entered in a checklist together with their scientific lifetimes and the published ages of their papers. Statistical analyses, especially exploratory factor analysis (EFA) and structural correlations, were performed in SPSS 19.

Findings

EFA revealed three factors with eigenvalues greater than 1 that together explained over 96 per cent of the variance in the studied indexes, including the mh-index. Factors 1, 2 and 3 explained 44.38, 28.19 and 23.48 per cent of the variance in the correlation coefficient matrix, respectively. The m-index (with a coefficient of 90 per cent) in Factor 1, the a-index (with a coefficient of 91 per cent) in Factor 2 and the h- and h2-indexes (with coefficients of 93 per cent) in Factor 3 had the highest factor loadings. Correlation coefficients and related comparative diagrams showed that the mh-index is more accurate than the other nine variants in differentiating the scientific impact of researchers with the same h-index.

Originality/value

As the studied variants could not overcome all the limitations of the h-index, the scientific community needs an index that accurately evaluates an individual researcher's scientific output. As the mh-index has some advantages over the other studied variants, it can be an appropriate alternative to them.

Details

The Electronic Library, vol. 33 no. 3
Type: Research Article
ISSN: 0264-0473

Keywords
