Search results

1 – 10 of over 86,000
Book part
Publication date: 5 February 2016

Craig Tutterow and James A. Evans

Abstract

University rankings and metrics have become an increasingly prominent basis of student decisions, generalized university reputation, and the resources universities attract. We review the history of metrics in higher education and scholarship about the influence of ranking on the position and strategic behavior of universities and students. Most quantitative analyses on this topic estimate the influence of change in university rank on performance. These studies consistently identify a small, short-lived influence of rank shift on selectivity (e.g., one rank position corresponds to ≤1% more student applicants), comparable to ranking effects documented in other domains. This understates the larger system-level impact of metrification on universities, students, and the professions that surround them. We explore one system-level transformation likely influenced by the rise of rankings. Recent years have witnessed the rise of enrollment management and independent educational consultation. We illustrate a plausible pathway from ranking to this transformation: In an effort to improve rankings, universities solicit more applications from students to reduce their acceptance rate. Lower acceptance rates lead to more uncertainty for students about acceptance, leading them to apply to more schools, which decreases the probability that accepted students will attend. This leads to greater uncertainty about enrollment for students and universities and generates demand for new services to manage it. Because these and other system-level transformations are not as cleanly measured as rank position and performance, they have not received the same treatment or modeling attention in higher education scholarship, despite their importance for understanding and influencing education policy.

Details

The University Under Pressure
Type: Book
ISBN: 978-1-78560-831-5

Keywords

Book part
Publication date: 5 February 2016

Catherine Paradeise and Ghislaine Filliatreau

Abstract

Much has been analyzed regarding the origins and the impact of rankings and metrics on policies, behaviors, and missions of universities. Surprisingly, little attention has been allocated to describing and analyzing the emergence of metrics as a new action field. This industry, fueled by the “new public management” policy perspectives that operate at the backstage of the contemporary pervasive “regime of excellence,” still remains a black box worth exploring in depth. This paper intends to fill this gap. It first sets the stage for this new action field by stressing the differences between the policy fields of higher education in the United States and Europe, as a way to understand the specificities of the use of metrics and rankings on both continents. The second part describes the actors of the field, the productive organizations they build, the skills they combine, the products they put on the market, and their shared norms and audiences.

Details

The University Under Pressure
Type: Book
ISBN: 978-1-78560-831-5

Keywords

Article
Publication date: 18 November 2022

İrfan Ayhan and Ali Özdemir

Abstract

Purpose

The purpose of this research is to determine the competitive advantages of higher education institutions (HEIs) and create a new methodology to rank universities according to the competitive advantages.

Design/methodology/approach

The research determines the competitive advantages of HEIs by analysing expert opinions gathered through a semi-structured interview form, matches codes and themes to performance indicators using Saldaña's two-cycle coding methods, evaluates content validity through Lawshe's method and derives the item weights for the ranking with the analytic hierarchy process (AHP). The simple additive weighting (SAW) and Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) methods were used to rank universities.
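The TOPSIS step described above can be sketched in code. This is a minimal illustration, not the authors' implementation: the three-university, two-criterion decision matrix in the usage below is invented, and in the paper the criterion weights would come from AHP rather than being fixed by hand.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives with TOPSIS.
    matrix:  rows = alternatives, columns = criteria (raw scores)
    weights: criterion weights summing to 1 (e.g., derived via AHP)
    benefit: per criterion, True if higher is better, False if lower is better
    Returns a closeness score in [0, 1] per alternative (higher = better).
    """
    m, n = len(matrix), len(matrix[0])
    # Vector-normalise each column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal (best) and anti-ideal (worst) value per criterion.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores

# Hypothetical data: criterion 0 = publications per staff (benefit),
# criterion 1 = student-staff ratio (cost, lower is better).
scores = topsis([[10, 20], [8, 15], [6, 30]], [0.6, 0.4], [True, False])
```

The alternative closest to the ideal point and farthest from the anti-ideal point gets the highest closeness score; ranking the scores in descending order yields the university ranking.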

Findings

Seven dimensions stand out with regard to what should be considered while ranking HEIs: research and publication, education, management, infrastructure, financial resources, human resources, and social and economic contribution. Under these 7 dimensions, 69 indicators were determined.

Practical implications

The research provides a scientific reference point where HEIs can compare themselves with other HEIs regarding where they are in the sector, especially in terms of competitive advantages.

Originality/value

Although there are many different ranking methods that rank universities in the national and international literature, almost all these methods are largely based on the outputs of the university such as the number of publications, the number of patents, the number of projects, etc. A framework which ranks universities by considering different aspects of the institution, such as management, human resources and financial resources, has not been developed yet. In this respect, this research aims to fill this gap in the literature.

Details

The TQM Journal, vol. 35 no. 8
Type: Research Article
ISSN: 1754-2731

Keywords

Article
Publication date: 9 September 2021

Yuan George Shan, Junru Zhang, Manzurul Alam and Phil Hancock

Abstract

Purpose

This study aims to investigate the relationship between university rankings and sustainability reporting among Australia and New Zealand universities. Even though sustainability reporting is an established area of investigation, prior research has paid inadequate attention to the nexus of university ranking and sustainability reporting.

Design/methodology/approach

This study covers 46 Australian and New Zealand universities and uses a data set of sustainability reports and disclosures drawn from four reporting channels, including university websites and university archives, between 2005 and 2018. Ordinary least squares regression was used, together with Pearson and Spearman's rank correlations, to investigate the likelihood of multicollinearity, and the paper also calculated variance inflation factor (VIF) values. Finally, this study uses the generalized method of moments approach to test for endogeneity.
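As a rough illustration of the multicollinearity check described above: with exactly two predictors, the variance inflation factor reduces to 1/(1 − r²), where r is the Pearson correlation between the predictors. The sketch below uses made-up numbers, not the authors' data set.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def vif_two_predictors(x1, x2):
    # With two predictors, the R-squared of regressing one on the
    # other equals r**2, so VIF = 1 / (1 - r**2) for both predictors.
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)
```

By the usual rule of thumb, VIF values above 10 (some authors use 5) flag severe multicollinearity; in the general multi-predictor case each VIF comes from regressing one predictor on all the others.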

Findings

The findings suggest that sustainability reporting is significantly and positively associated with university ranking and confirm that the four reporting channels play a vital role when communicating with university stakeholders. Further, this paper documents that sustainability reporting through websites, in addition to the annual report and a separate environment report, has a positive impact on university ranking systems.

Originality/value

This paper contributes to extant knowledge on the link between university rankings and university sustainability reporting, which is considered a vital communication vehicle for meeting stakeholder expectations relevant to university rankings.

Details

Meditari Accountancy Research, vol. 30 no. 6
Type: Research Article
ISSN: 2049-372X

Keywords

Article
Publication date: 20 August 2018

Corren G. McCoy, Michael L. Nelson and Michele C. Weigle

Abstract

Purpose

The purpose of this study is to present an alternative to university ranking lists published in U.S. News & World Report, Times Higher Education, Academic Ranking of World Universities and Money Magazine. A strategy is proposed to mine a collection of university data obtained from Twitter and publicly available online academic sources to compute social media metrics that approximate typical academic rankings of US universities.

Design/methodology/approach

The Twitter application programming interface (API) is used to rank 264 universities using two easily collected measurements. The University Twitter Engagement (UTE) score is the total number of primary and secondary followers affiliated with the university. The authors mine other public data sources related to endowment funds, athletic expenditures and student enrollment to compute a ranking based on the endowment, expenditures and enrollment (EEE) score.

Findings

In rank-to-rank comparisons, the authors observed a significant, positive rank correlation (τ = 0.6018) between UTE and an aggregate reputation ranking, which indicates UTE could be a viable proxy for ranking atypical institutions normally excluded from traditional lists.
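The τ reported above is Kendall's rank correlation, which compares every pair of items across the two rankings. A self-contained sketch of how such a rank-to-rank comparison is computed (the rank lists in the test are hypothetical, not the paper's 264-university data):

```python
from itertools import combinations

def kendall_tau(rank_a, rank_b):
    """Kendall's tau for two rankings of the same items, assuming no ties.
    rank_a[i] and rank_b[i] are item i's positions in the two rankings.
    Returns a value in [-1, 1]: 1 = identical order, -1 = reversed order.
    """
    concordant = discordant = 0
    for i, j in combinations(range(len(rank_a)), 2):
        # A pair is concordant if both rankings order items i and j the same way.
        s = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    pairs = len(rank_a) * (len(rank_a) - 1) / 2
    return (concordant - discordant) / pairs
```

A τ of 0.6018, as reported for UTE versus the aggregate reputation ranking, means concordant pairs outnumber discordant ones by about 60 percentage points of all pairs.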

Originality/value

The UTE and EEE metrics offer distinct advantages because they can be calculated on-demand rather than relying on an annual publication and they promote diversity in the ranking lists, as any university with a Twitter account can be ranked by UTE and any university with online information about enrollment, expenditures and endowment can be given an EEE rank. The authors also propose a unique approach for discovering official university accounts by mining and correlating the profile information of Twitter friends.

Details

Information Discovery and Delivery, vol. 46 no. 3
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 26 February 2018

Sheeja N.K., Susan Mathew K. and Surendran Cherukodan

Abstract

Purpose

This study aims to examine whether there exists a relation between scholarly output and institutional ranking based on the National Institutional Ranking Framework (NIRF) of India. This paper also aims to analyze and compare the parameters of NIRF with those of leading world university rankings.

Design/methodology/approach

The data for the study were collected through Web content analysis. The major parts of data were collected from the official websites of NIRF, Times Higher Education World University Rankings and QS World University rankings.

Findings

The study found that the parameters fixed for the assessment of Indian institutions under NIRF are on par with those of other world university ranking agencies. Scholarly output of a university is one of the major parameters of university ranking schemes. Indian universities that scored high for research productivity ranked at the top of NIRF. These universities also figured in world university rankings. Universities from South India excel in NIRF, and there is a close relationship between scholarly productivity and institutional ranking.

Originality/value

Correlation between h-index and scholarly productivity has been dealt with in several studies. This paper is the first attempt to find the relationship between scholarly productivity and ranking of universities in India based on NIRF.

Details

Global Knowledge, Memory and Communication, vol. 67 no. 3
Type: Research Article
ISSN: 0024-2535

Keywords

Article
Publication date: 2 August 2013

Teerasak Markpin, Nongyao Premkamolnetr, Santi Ittiritmeechai, Chatree Wongkaew, Wutthisit Yochai, Preeyanuch Ratchatahirun, Janjit Lamchaturapatr, Kwannate Sombatsompop, Worsak Kanok‐Nukulchai, Lee Inn Beng and Narongrit Sombatsompop

Abstract

Purpose

The purpose of this paper is to study the effects of the choice of database and data retrieval methods on the research performance of a number of selected Asian universities from 33 countries using two different indicators (publication volume and citation count) and three subject fields (energy, environment and materials) during the period 2005‐2009.

Design/methodology/approach

To determine the effect of the choice of database, Scopus and Web of Science databases were queried to retrieve the publications and citations of the top ten Asian universities in three subject fields. In ascertaining the effect of data retrieval methods, the authors proposed a new data retrieval method called Keyword‐based Data Retrieval (KDR), which uses relevant keywords identified by independent experts to retrieve publications and their citations of the top 30 Asian universities in the Environment field from the entire Scopus database. The results were then compared with those retrieved using the Conventional Data Retrieval (CDR) method.

Findings

The Asian university ranking order is strongly affected by the choice of database, indicator and data retrieval method used. The KDR method yields many more publications and citation counts than the CDR method, provides a better understanding of the university ranking results, and retrieves publications and citations in source titles outside those classified by the database. Moreover, the publications found by the KDR method have a multidisciplinary research focus.

Originality/value

The paper concludes that KDR is a more suitable methodology to retrieve data for measuring university research performance, particularly in an environment where universities are increasingly engaging in multidisciplinary research.

Article
Publication date: 11 May 2015

Jasmina Berbegal-Mirabent and D. Enrique Ribeiro-Soriano

Abstract

Purpose

The purpose of this paper is to examine the role of university ranking systems as instruments of university quality assessment. Some controversy surrounds the methodology used to compile such instruments. Accordingly, different compilers have adopted different methods to produce these rankings. This study examines to what extent this diversity in methodology is now converging in the context of Spanish university rankings.

Design/methodology/approach

To conduct this research, a two-step approach was adopted. First, the indicators used in four Spanish rankings were examined. Second, empirical analysis was used to identify differences between university rankings.

Findings

Results reveal that, despite the vast number and variety of indicators, there is a positive, significant relationship between rankings. Spanish university rankings thus show some degree of convergence.
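The positive, significant relationship between rankings reported here is typically measured with Spearman's rank correlation. For rankings without ties it has a simple closed form; the sketch below uses hypothetical rank lists, not the Spanish ranking data.

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation for two tie-free rankings of the same items.
    Uses the closed form rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1)),
    where d is the per-item difference in rank positions.
    """
    n = len(rank_a)
    d2 = sum((a - b) ** 2 for a, b in zip(rank_a, rank_b))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

Values near 1 indicate that two ranking compilers order the universities almost identically, which is the convergence the study reports despite the compilers' differing indicator sets.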

Social implications

Because rankings influence behavior and shape institutional decision making, a better understanding of how these assessment tools are devised is essential. Research on these ranking systems therefore offers an important contribution to improving the quality of higher education institutions.

Originality/value

This paper presents the results of a comprehensive survey of Spanish university rankings. It offers a new perspective of the state of the art of the Spanish university ranking system. The paper also presents a set of managerial implications for improving these benchmarking tools.

Details

Journal of Service Theory and Practice, vol. 25 no. 3
Type: Research Article
ISSN: 2055-6225

Keywords

Open Access
Article
Publication date: 16 June 2022

Núria Bautista-Puig, Enrique Orduña-Malea and Carmen Perez-Esparrells

Abstract

Purpose

This study aims to analyse and evaluate the methodology followed by the Times Higher Education Impact Rankings (THE-IR), as well as the coverage obtained and the data offered by this ranking, to determine if its methodology reflects the degree of sustainability of universities, and whether their results are accurate enough to be used as a data source for research and strategic decision-making.

Design/methodology/approach

A summative content analysis of the THE-IR methodology was conducted, paying special attention to the macro-structure (university score) and micro-structure (sustainable development goals [SDG] score) levels of the research-related metrics. Then, the data published by THE-IR in the 2019, 2020 and 2021 edition was collected via web scraping. After that, all the data was statistically analysed to find out performance rates, SDGs’ success rates and geographic distributions. Finally, a pairwise comparison of the THE-IR against the Times Higher Education World University Rankings (THE-WUR) was conducted to calculate overlap measures.

Findings

Severe inconsistencies in the THE-IR methodology have been found, offering a distorted view of sustainability in higher education institutions and allowing universities with different strategic approaches (interested, strategic, committed and outperformer universities) to participate in the ranking. The observed growing number of universities from developing countries and the absence of world-class universities reflect an opportunity for less-esteemed institutions, which might have a chance to gain reputation based on their efforts towards sustainability, but from a flawed ranking that should be avoided for decision-making.

Practical implications

University managers can take the THE-IR's validity into account when making informed decisions. University ranking researchers and practitioners can access a detailed analysis of the THE-IR to determine its properties as a ranking and use raw data from the THE-IR in other studies or reports. Policy makers can use the main findings of this work to avoid misinterpretations when developing public policies related to the evaluation of the contribution of universities to the SDGs. These results can also help the ranking publisher to improve some of the inconsistencies found in this study.

Social implications

Given the global audience of the THE-IR, this work contributes to minimising the distorted vision that the THE-IR projects about sustainability in higher education institutions, and alerts governments, higher education bodies and policy makers to take precautions when making decisions based on this ranking.

Originality/value

To the best of the authors’ knowledge, this contribution is the first providing an analysis of the THE-IR’s methodology. The faults in the methodology, the coverage at the country-level and the overlap between THE-IR and THE-WUR have unveiled the existence of specific strategies in the participation of universities, of interest both for experts in university rankings and SDGs.

Details

International Journal of Sustainability in Higher Education, vol. 23 no. 8
Type: Research Article
ISSN: 1467-6370

Keywords

Article
Publication date: 12 October 2018

Güleda Doğan and Umut Al

Abstract

Purpose

The purpose of this paper is to analyze the similarity of intra-indicators used in research-focused international university rankings (Academic Ranking of World Universities (ARWU), NTU, University Ranking by Academic Performance (URAP), Quacquarelli Symonds (QS) and Round University Ranking (RUR)) over years, and show the effect of similar indicators on overall rankings for 2015. The research questions addressed in this study in accordance with these purposes are as follows: At what level are the intra-indicators used in international university rankings similar? Is it possible to group intra-indicators according to their similarities? What is the effect of similar intra-indicators on overall rankings?

Design/methodology/approach

Indicator-based scores of all universities in five research-focused international university rankings, for all years in which they were ranked, form the data set of this study for the first and second research questions. The authors used multidimensional scaling (MDS) and a cosine similarity measure to analyze the similarity of indicators and to answer these two research questions. Indicator-based scores and overall ranking scores for 2015 are used as data, and the Spearman correlation test is applied to answer the third research question.
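The cosine similarity measure applied to pairs of indicator score vectors can be sketched as follows; the vectors in the test are illustrative, not the rankings' actual indicator data.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length score vectors.
    1 means the indicators assign proportional scores across universities
    (maximally similar); 0 means the score vectors are orthogonal.
    """
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)
```

Computing this for every pair of indicators yields a similarity matrix, which can then be converted to distances and fed into MDS to visualize which indicators cluster together.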

Findings

Results of the analyses show that the intra-indicators used in ARWU, NTU and URAP are highly similar and that they can be grouped according to their similarities. The authors also examined the effect of similar indicators on 2015 overall ranking lists for these three rankings. NTU and URAP are affected least from the omitted similar indicators, which means it is possible for these two rankings to create very similar overall ranking lists to the existing overall ranking using fewer indicators.

Research limitations/implications

CWTS, Mapping Scientific Excellence, Nature Index, and SCImago Institutions Rankings (until 2015) are not included in the scope of this paper, since they do not create overall ranking lists. Likewise, Times Higher Education, CWUR and US are not included because they do not present indicator-based scores. Required data were not accessible for QS for 2010 and 2011. Moreover, although QS ranks more than 700 universities, only the first 400 universities in the 2012–2015 rankings could be analyzed. Although QS's and RUR's data were analyzed in this study, it was statistically not possible to reach any conclusion for these two rankings.

Practical implications

The results of this study may be considered mainly by ranking bodies, policy- and decision-makers. The ranking bodies may use the results to review the indicators they use, to decide on which indicators to use in their rankings, and to question if it is necessary to continue overall rankings. Policy- and decision-makers may also benefit from the results of this study by thinking of giving up using overall ranking results as an important input in their decisions and policies.

Originality/value

This study is the first to use MDS and a cosine similarity measure for revealing the similarity of indicators. Ranking data are skewed, which requires nonparametric statistical analysis; therefore, MDS was used. The study covers all ranking years and all universities in the ranking lists, and differs from similar studies in the literature, which analyze data for shorter time intervals and only top-ranked universities. Based on the literature review, it can be said that the similarity of intra-indicators for URAP, NTU and RUR is analyzed for the first time in this study.