Search results

1 – 10 of 476
Article
Publication date: 1 June 2015

Mousa Yaminfirooz and Hemmat Gholinia

Abstract

Purpose

This paper aims to evaluate some of the known scientific indexes by using virtual data and proposes a new index, named multiple h-index (mh-index), for removing the limits of these variants.

Design/methodology/approach

The citation report for 40 researchers in Babol, Iran, was extracted from the Web of Science and entered into a checklist together with their scientific lifetimes and the published ages of their papers. Statistical analyses, in particular exploratory factor analysis (EFA) and structural correlations, were performed in SPSS 19.

Findings

EFA revealed three factors with eigenvalues greater than 1 and explained variance of over 96 per cent in the studied indexes, including the mh-index. Factors 1, 2 and 3 explained 44.38, 28.19 and 23.48 per cent of the variance in the correlation coefficient matrix, respectively. The m-index (with coefficient of 90 per cent) in Factor 1, a-index (with coefficient of 91 per cent) in Factor 2 and h- and h2-indexes (with coefficients of 93 per cent) in Factor 3 had the highest factor loadings. Correlation coefficients and related comparative diagrams showed that the mh-index is more accurate than the other nine variants in differentiating the scientific impact of researchers with the same h-index.
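The "eigenvalues greater than 1" retention rule used in the EFA above (the Kaiser criterion) can be sketched generically; the random data below merely stand in for the real checklist data, and this is not the authors' SPSS procedure:

```python
import numpy as np

def kaiser_factors(data):
    """Kaiser criterion: number of factors to retain is the count of
    eigenvalues of the correlation matrix greater than 1. Also returns
    the per cent of variance each eigenvalue explains."""
    corr = np.corrcoef(data, rowvar=False)        # variables in columns
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, descending
    n_keep = int((eig > 1).sum())
    return n_keep, eig / eig.sum() * 100

# Hypothetical data: 200 observations of 5 indexes
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))
k, variance = kaiser_factors(data)
```

Because the correlation matrix has unit diagonal, the eigenvalues always sum to the number of variables, so the explained-variance shares sum to 100 per cent.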

Originality/value

As the studied variants could not overcome all the limitations of the h-index, the scientific community needs an index that accurately evaluates an individual researcher's scientific output. As the mh-index has some advantages over the other studied variants, it can be an appropriate alternative to them.

Details

The Electronic Library, vol. 33 no. 3
Type: Research Article
ISSN: 0264-0473

Article
Publication date: 3 October 2019

Prem Vrat

Abstract

Purpose

The purpose of this paper is to reveal the limitations of the h-index in assessing research performance through citation analysis and to suggest two new indexes, the prime index (P-index) and the value-added index (V-index), which are simpler to compute than the g-index and more informative. For more serious research performance evaluation, an analytic hierarchy process (AHP) methodology is proposed.

Design/methodology/approach

The methodology adopted is to compare existing indexes for citation-based research assessment and identify their limitations, particularly those of the h-index, which is the most commonly employed. The paper sets out the advantages of the g-index over the h-index and then proposes the P-index, which is simpler to compute than the g-index yet richer in information content. The V-index is proposed on a similar philosophy by considering the total number of citations per author. For serious evaluation of a finite set of candidates for awards/recognitions, a seven-criteria-based AHP is proposed. All new approaches are illustrated with raw data drawn from the Google Scholar-powered website H-POP.
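Since the comparison leans on the g-index, the standard (Egghe) definition may help: the largest g such that the g most-cited papers together have at least g² citations. A minimal generic sketch, not the paper's own computation:

```python
def g_index(citations):
    """Largest g such that the g most-cited papers together
    have at least g*g citations (Egghe's g-index)."""
    cites = sorted(citations, reverse=True)
    total, g = 0, 0
    for rank, c in enumerate(cites, start=1):
        total += c  # cumulative citations of the top `rank` papers
        if total >= rank * rank:
            g = rank
    return g
```

For citation counts [10, 8, 5, 4, 3] the g-index is 5 (30 cumulative citations ≥ 25), while the h-index of the same list is only 4, illustrating why g is always at least h.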

Findings

This paper demonstrates the over-hype around the use of the h-index relative to the g-index. It shows that the newly proposed P-index is much simpler to compute than the g-index yet better: it gives a value 3–4 percent higher than g and is a holistic index, as it uses the complete citation data. The V-index is a quick way to ascertain the value added by a research scientist in multiple-authored research papers. AHP is a very powerful multi-criteria approach; it too shows the g-index to be a more important factor, whereas the h-index is the least important yet most frequently used. It is hoped that the findings of this paper will help rectify the misplaced emphasis on the h-index alone.

Research limitations/implications

The research focus has been to suggest new, faster and better methods of research assessment. However, a detailed comparison of all existing approaches with the new ones will require testing over a large number of data sets. A limitation is that the approaches were tested on only 5 academics (for illustrating AHP) and 20 researchers (for comparing the new indexes with some existing ones). Not all existing indexes are covered.

Practical implications

The outcomes of this research may have major practical applications for research assessment of academics/researchers and rectify the imbalance in assessment by reducing over-hype on h-index. For more serious evaluation of research performance of academics, the seven-criteria AHP approach will be more comprehensive and holistic in comparison with a single criterion citation metric. One hopes that the findings of this paper will receive much attention/debate.
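The multi-criteria AHP evaluation mentioned above derives criterion weights from pairwise comparisons. A minimal sketch of Saaty's eigenvector method follows; the 3×3 matrix is hypothetical and not the paper's seven criteria:

```python
import numpy as np

def ahp_weights(pairwise):
    """Criterion weights from an AHP pairwise-comparison matrix:
    the normalized principal eigenvector (Saaty's method)."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)       # eigenvector sign is arbitrary
    return w / w.sum()          # normalize to sum to 1

# Hypothetical reciprocal matrix: criterion 1 is 3x as important as 2
# and 5x as important as 3; criterion 2 is 2x as important as 3.
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = ahp_weights(A)
```

For a full AHP one would also check the consistency ratio of the matrix before trusting the weights; that step is omitted here for brevity.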

Social implications

Research assessment based on proposed approaches is likely to lead to greater satisfaction among those evaluated and higher confidence in the evaluation criteria.

Originality/value

P- and V-indexes are original. Application of AHP for multi-criteria assessment of research through citation analysis is also a new idea.

Details

Journal of Advances in Management Research, vol. 17 no. 1
Type: Research Article
ISSN: 0972-7981

Article
Publication date: 3 August 2012

Mu‐Hsuan Huang

Abstract

Purpose

The purpose of this study is to evaluate the scientific performance of universities by extending the application of the h‐index from the individual to the institutional level. A ranking of the world's top universities based on their h‐index scores was produced. The geographic distribution of the highly ranked universities by continent and by country was also analysed.

Design/methodology/approach

This study uses bibliometric analysis to rank the universities. In order to calculate their h-index, the numbers of papers and citations for each university were gathered from the Web of Science, including the Science Citation Index and the Social Science Citation Index. Authority control, dealing with variations in university names, ensured the accuracy of each university's count of published journal papers and the subsequent citation statistics.

Findings

It was found that a high correlation exists between the h‐index ranking generated in this study and that produced by Shanghai Jiao Tong University. The results confirm the validity of the h‐index in the assessment of research performance at the university level.

Originality/value

The h-index has been used to evaluate research performance at the institutional level in several recent studies; however, these studies evaluated institutional performance only in certain disciplines or in a single country. This paper measures the research performance of universities all over the world, and the applicability of the h-index at the institutional level is validated by calculating the correlation between the h-index ranking and the ranking produced by Shanghai Jiao Tong University.

Details

Online Information Review, vol. 36 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 12 September 2016

Cameron Stewart Barnes

Abstract

Purpose

The purpose of this paper is to show how bibliometrics would benefit from a stronger programme of construct validity.

Design/methodology/approach

The value of the construct validity concept is demonstrated by applying this approach to the evaluation of the h-index, a widely used metric.

Findings

The paper demonstrates that the h-index comprehensively fails any test of construct validity. In simple terms, the metric does not measure what it purports to measure. This conclusion suggests that the current popularity of the h-index as a topic for bibliometric research represents wasted effort, which might have been avoided if researchers had adopted the approach suggested in this paper.

Research limitations/implications

This study is based on the analysis of a single bibliometric concept.

Practical implications

The conclusion that the h-index fails any test in terms of construct validity implies that the widespread use of this metric within the higher education sector as a management tool represents poor practice, and almost certainly results in the misallocation of resources.

Social implications

This paper suggests that the current enthusiasm for the h-index within the higher education sector is misplaced. The implication is that universities, grant funding bodies and faculty administrators should abandon the use of the h-index as a management tool. Such a change would have a significant effect on current hiring, promotion and tenure practices within the sector, as well as current attitudes towards the measurement of academic performance.

Originality/value

The originality of the paper lies in the systematic application of the concept of construct validity to bibliometric enquiry.

Details

Journal of Documentation, vol. 72 no. 5
Type: Research Article
ISSN: 0022-0418

Article
Publication date: 8 August 2008

Péter Jacsó

Abstract

Purpose

The purpose of this paper is to discuss the pros and cons of computing the h‐index using Scopus.

Design/methodology/approach

The paper looks at the content features and the software capabilities of Scopus from the perspective of computing a reasonable h‐index for scholars.

Findings

Although there are limitations in the content, and even in the mostly excellent, swift, powerful and innovative software of Scopus, it can produce a much more reliable and reproducible h‐index – at least for relatively junior researchers – than Google Scholar.

Originality/value

The paper adds insight into computing the h‐index using Scopus.

Details

Online Information Review, vol. 32 no. 4
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 9 November 2015

Wen-Chin Hsu, Chih-Fong Tsai and Jia-Huan Li

Abstract

Purpose

Although journal rankings are important for authors, readers, publishers, and promotion and tenure committees, it has been argued that the use of different measures (e.g. the journal impact factor (JIF) and Hirsch's h-index) often leads to different journal rankings, which makes it difficult to reach an appropriate decision. A hybrid ranking method based on the Borda count approach, the Standardized Average Index (SA index), is introduced to solve this problem. The paper aims to discuss these issues.

Design/methodology/approach

Citations received by the articles published in 85 Health Care Sciences and Services (HCSS) journals in the period of 2009-2013 were analyzed with the use of the JIF, the h-index, and the SA index.

Findings

The SA index exhibits a high correlation with the JIF and the h-index (γ > 0.9, p < 0.01) and yields results with higher accuracy than the h-index. The new, comprehensive citation impact analysis of the 85 HCSS journals shows that the SA index can help researchers find journals with both high JIFs and high h-indices more easily, providing a reference for paper submissions and research directions.

Originality/value

The contribution of this study is the application of the Borda count approach to combine the HCSS journal rankings produced by the two widely accepted indices of the JIF and the h-index. The new HCSS journal rankings can be used by publishers, journal editors, researchers, policymakers, librarians, and practitioners as a reference for journal selection and the establishment of decisions and professional judgment.
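The rank aggregation described above can be illustrated with a textbook Borda count over two rankings. This is a generic sketch, not necessarily the authors' exact SA index formula, and the journal names are hypothetical:

```python
def borda_combine(rankings):
    """Combine several rankings of the same items via Borda count.

    Each ranking is a list ordered best-first; an item at position i
    in a ranking of n items scores n - i points. The combined ranking
    orders items by total score, highest first."""
    n = len(rankings[0])
    scores = {}
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] = scores.get(item, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

jif_rank = ["J1", "J2", "J3"]   # hypothetical ranking by JIF
h_rank   = ["J2", "J1", "J3"]   # hypothetical ranking by h-index
combined = borda_combine([jif_rank, h_rank])
```

A journal that scores well on both input rankings rises in the combined list, which is the behaviour the abstract attributes to the SA index.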

Details

Online Information Review, vol. 39 no. 7
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 21 June 2011

Péter Jacsó

Abstract

Purpose

The h‐index has been used to evaluate research productivity and impact (as manifested by the number of publications and the number of citations received) at many levels of aggregations for various targets. The purpose of this paper is to examine the bibliometric characteristics of the largest multidisciplinary databases that are the most widely used for measuring research productivity and impact.

Design/methodology/approach

The paper presents preliminary findings about the Scopus database. It is to be complemented and contrasted by the bibliometric profile of the Web of Science (WoS) database.

Findings

The test results showed that 18.7 million Scopus records had one or more cited references, representing 42 per cent of the entire database content. The ratio of cited-reference-enhanced records kept slightly increasing year by year from 1996 to 2009.

Scopus classifies the journals and other serial sources into 27 broad subject areas by assigning its journals to 21 science disciplines, four social science disciplines, a single Arts and Humanities category, and/or a multidisciplinary category. The distribution of records among the broad subject areas can be searched in Scopus using the four-character codes of the subject areas. A journal or a single primary document may be assigned to more than one subject area. However, Scopus overdoes this, and it significantly distorts the h-index for the broad subject areas.

The h-index of the pre-1996 subset, with records for 21,066,019 documents published before 1996, is 1,451, i.e. there are records for 1,451 documents in that subset that were cited more than 1,450 times. The total number of citations received by these 1,451 papers (the h-core, i.e. the set of items that contribute to the h-index) is 4,416,488, producing an average citation rate of 3,044 citations per item in the h-core of the pre-1996 subset of the entire Scopus database. For the subset of records for the 23,455,354 documents published after 1995, the h-index is 1,339, so the total number of citations must be at least 1,792,921 (1,339²). In reality the total number of citations received by these papers is 3,903,157, yielding a citation rate of 2,915 citations per document in the h-core. For the entire Scopus database of 44.5 million records the h-index is 1,757.
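The h-index logic these figures rely on (the largest h such that h documents each received at least h citations) can be sketched as:

```python
def h_index(citations):
    """Largest h such that h items each have at least h citations."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:   # the paper at this rank still has enough citations
            h = rank
        else:
            break       # counts are sorted, so no later rank can qualify
    return h
```

Applied to the pre-1996 Scopus subset above, this yields 1,451: there are 1,451 records each cited at least 1,451 times (i.e. more than 1,450 times).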

Originality/value

Knowing the bibliometric features of databases, their own h‐index and related metrics versus those of the alternative tools can be very useful for computing a variety of research performance indicators. However, we need to learn much more about our tools in our rush to metricise everything before we can rest assured that our gauges gauge correctly or at least with transparent limitations. Learning the bibliometric profile of the tools used to measure the research performance of researchers, departments, universities and journals can help in making better informed decisions, and discovering the limitations of the measuring tools.

Details

Online Information Review, vol. 35 no. 3
Type: Research Article
ISSN: 1468-4527

Article
Publication date: 27 October 2020

James C. Ryan

Abstract

Purpose

The purpose of this paper is to shed light on the use of bibliometric indicators as a people analytics tool for examining research performance outcome differences in faculty mobility and turnover.

Design/methodology/approach

Employing bibliometric information from research databases, the publication counts, citation counts, h-index and the newly developed individual annualized h-index (hIa-index) for a sample of university faculty (N = 684) are examined. Information relating to turnover decisions from a human resource (HR) information system and bibliometric data from a research database are combined to explore research performance differences across cohorts of retained, resigned and terminated faculty over a five-year period at a single university.

Findings

Analysis of variance (ANOVA) results indicate that the traditional bibliometric indicators (h-index, publication count and citation count) are limited in their ability to identify performance differences between employment-status cohorts. The results do show promise for the newly developed hIa-index, which is found to be significantly lower for terminated faculty (p < 0.001) compared to both retained and resigned faculty. Multinomial logistic regression analysis also confirms the hIa metric as a predictor of terminated employment status.

Research limitations/implications

First, the results imply that the hIa-index, which controls for career length and elements of co-authorship, is a superior bibliometric indicator for comparisons of research performance.
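Harzing's published definition of the individual annualized h-index gives a sense of what the hIa metric computes (the paper may differ in detail): divide each paper's citations by its author count, take the h-index of those normalized counts, and divide by career length in years. A sketch under that assumption:

```python
def hia_index(papers, career_years):
    """hIa sketch per Harzing's definition: h-index of author-normalized
    citation counts, divided by academic age in years.

    `papers` is a list of (citations, number_of_authors) tuples."""
    norm = sorted((c / a for c, a in papers), reverse=True)
    h_norm = 0
    for rank, c in enumerate(norm, start=1):
        if c >= rank:
            h_norm = rank   # counts are sorted, so later ranks cannot qualify
    return h_norm / career_years

# Hypothetical record: (citations, authors) over a 10-year career
papers = [(40, 2), (30, 3), (12, 1), (9, 3), (4, 4)]
score = hia_index(papers, career_years=10)
```

Because both co-authorship and career length appear in the denominator, two researchers with identical raw h-indexes can receive quite different hIa scores, which is the discriminating behaviour the abstract reports.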

Practical implications

Results suggest that the hIa metric may serve as a useful tool for the examination of employment decisions for universities. It also highlights the potential usefulness of bibliometric indicators for people analytics and the examination of employment decisions, performance management and faculty turnover in research-intensive higher education contexts.

Originality/value

This empirical paper is entirely unique. No research has previously examined the issue of turnover in a university setting using the bibliometric measures employed here. This is a first example of the potential use of hIa bibliometric index as an HR analytics tool for the examination of HR decisions such as employee turnover in the university context.

Details

Personnel Review, vol. 50 no. 5
Type: Research Article
ISSN: 0048-3486

Article
Publication date: 5 January 2018

Tehmina Amjad, Ali Daud and Naif Radi Aljohani

Abstract

Purpose

This study reviews the methods found in the literature for the ranking of authors, identifies their pros and cons, and discusses and compares them. The purpose of this paper is to identify the challenges and future directions in the ranking of academic objects, especially authors, for future researchers.

Design/methodology/approach

This study reviews the methods found in the literature for the ranking of authors, classifies them into subcategories by studying and analyzing their way of achieving the objectives, discusses and compares them. The data sets used in the literature and the evaluation measures applicable in the domain are also presented.

Findings

The survey identifies the challenges involved in the field of ranking of authors and future directions.

Originality/value

To the best of our knowledge, this is the first survey that studies the author-ranking problem in detail and classifies the methods according to their key functionalities, features and the way each achieves its objective, as required by the problem.

Details

Library Hi Tech, vol. 36 no. 1
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 27 September 2011

Péter Jacsó

Abstract

Purpose

The purpose of this paper is to discuss the new version of the Web of Science (WoS) software.

Design/methodology/approach

This paper discusses the new version of the Web of Science (WoS) software.

Findings

The new version of the Web of Science (WoS) software released in mid‐2011 eliminated the 100,000‐record limit in the search results. This, in turn, makes it possible to study the bibliometric profile of the entire WoS database (which consists of 50 million unique records), and/or any subset licensed by a library. In addition the maximum record set for the automatic production of the informative citation report was doubled from 5,000 to 10,000 records. These are important developments for getting a realistic picture of WoS, and gauging the most widely used gauge. It also helps in comparing WoS with the Scopus database using traceable and reproducible quantitative measures, including the h‐index and its variants, the citation rate of the documents making up the h‐core (the set of records that contribute to the h‐index), and computing additional bibliometric indicators that can be used as proxies in evaluating the research performance of individuals, research groups, educational and research institutions as well as serial publications for the broadest subject areas and time span – although with some limitations and reservations.

Originality/value

This paper, which attempts to describe some of the bibliometric traits of WoS in three different configurations (in terms of the composition and time span of the components licensed), complements the one published in a previous issue of Online Information Review profiling the Scopus database.

Details

Online Information Review, vol. 35 no. 5
Type: Research Article
ISSN: 1468-4527
