Search results

1–10 of over 35,000
Book part
Publication date: 5 February 2016

Craig Tutterow and James A. Evans

University rankings and metrics have become an increasingly prominent basis of student decisions, generalized university reputation, and the resources universities…

Abstract

University rankings and metrics have become an increasingly prominent basis of student decisions, generalized university reputation, and the resources universities attract. We review the history of metrics in higher education and scholarship about the influence of ranking on the position and strategic behavior of universities and students. Most quantitative analyses on this topic estimate the influence of change in university rank on performance. These studies consistently identify a small, short-lived influence of rank shift on selectivity (e.g., one rank position corresponds to ≤1% more student applicants), comparable to ranking effects documented in other domains. This understates the larger system-level impact of metrification on universities, students, and the professions that surround them. We explore one system-level transformation likely influenced by the rise of rankings. Recent years have witnessed the rise of enrollment management and independent educational consultation. We illustrate a plausible pathway from ranking to this transformation: In an effort to improve rankings, universities solicit more applications from students to reduce their acceptance rate. Lower acceptance rates lead to more uncertainty for students about acceptance, leading them to apply to more schools, which decreases the probability that accepted students will attend. This leads to greater uncertainty about enrollment for students and universities and generates demand for new services to manage it. Because these and other system-level transformations are not as cleanly measured as rank position and performance, they have not received the same treatment or modeling attention in higher education scholarship, despite their importance for understanding and influencing education policy.

Details

The University Under Pressure
Type: Book
ISBN: 978-1-78560-831-5

Keywords

Book part
Publication date: 5 February 2016

Catherine Paradeise and Ghislaine Filliatreau

Much has been analyzed regarding the origins and the impact of rankings and metrics on policies, behaviors, and missions of universities. Surprisingly, little attention…

Abstract

Much has been analyzed regarding the origins and the impact of rankings and metrics on policies, behaviors, and missions of universities. Surprisingly, little attention has been devoted to describing and analyzing the emergence of metrics as a new action field. This industry, fueled by the “new public management” policy perspectives that operate backstage of the contemporary pervasive “regime of excellence,” still remains a black box worth exploring in depth. This paper intends to fill this gap. It first sets the stage for this new action field by stressing the differences between the policy fields of higher education in the United States and Europe, as a way to understand the specificities of the use of metrics and rankings on both continents. The second part describes the actors in the field: the organizations they build, the skills they combine, the products they bring to market, and their shared norms and audiences.

Details

The University Under Pressure
Type: Book
ISBN: 978-1-78560-831-5

Keywords

Abstract

Details

Social Sciences: A Dying Fire
Type: Book
ISBN: 978-1-80117-041-3

Article
Publication date: 9 September 2021

Yuan George Shan, Junru Zhang, Manzurul Alam and Phil Hancock

This study aims to investigate the relationship between university rankings and sustainability reporting among Australian and New Zealand universities. Even though…

Abstract

Purpose

This study aims to investigate the relationship between university rankings and sustainability reporting among Australian and New Zealand universities. Even though sustainability reporting is an established area of investigation, prior research has paid inadequate attention to the nexus of university ranking and sustainability reporting.

Design/methodology/approach

This study covers 46 Australian and New Zealand universities and uses a data set that includes sustainability reports and disclosures collected between 2005 and 2018 from four reporting channels, including university websites and university archives. Ordinary least squares regression was used, with Pearson and Spearman’s rank correlations to assess the likelihood of multi-collinearity, and the paper also calculated variance inflation factor values. Finally, this study uses the generalized method of moments approach to test for endogeneity.
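As an illustrative sketch of the multicollinearity check described above (this is not the authors' code; the data and variable shapes are hypothetical), the variance inflation factor for each regressor can be computed from the R² obtained by regressing that column on the remaining columns:

```python
import numpy as np

def vif(X):
    """Variance inflation factors for the columns of X (n_samples x n_features).

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining columns (with an intercept). A VIF above ~10 is a
    common rule-of-thumb warning sign for multicollinearity.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    out = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # design matrix with intercept
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return out
```

Two independent columns yield VIFs near 1, while a column that is nearly a linear combination of the others yields a very large VIF, which is the pattern such a diagnostic is meant to flag before interpreting OLS coefficients.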

Findings

The findings suggest that sustainability reporting is significantly and positively associated with university ranking and confirm that the four reporting channels play a vital role when communicating with university stakeholders. Further, this paper documents that sustainability reporting through websites, in addition to the annual report and a separate environment report, has a positive impact on university ranking systems.

Originality/value

This paper contributes to extant knowledge on the link between university rankings and university sustainability reporting, which is considered a vital communication vehicle for meeting stakeholder expectations relevant to university rankings.

Details

Meditari Accountancy Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 2049-372X

Keywords

Article
Publication date: 23 September 2020

José C. Dextre-Chacón, Santiago Tejedor and Luis M. Romero-Rodriguez

This study evaluates the correlations between the universities' type of ownership (public, private associative and private corporate), institutional seniority (<20, 20–45…

Abstract

Purpose

This study evaluates the correlations between the universities' type of ownership (public, private associative and private corporate), institutional seniority (<20, 20–45 and >45 years) and their presence and position in national and international university rankings.

Design/methodology/approach

The study considers 90 Peruvian universities licensed by SUNEDU (the public agency for the accreditation of universities in Peru), assessing their presence in 20 university rankings (yes/no) and their position (by tertiles) in two world rankings, Webometrics and SIR Iberoamericano. Four universities participated in 10 or more rankings and only 16 (18%) in six or more.

Findings

The private corporate universities were the youngest (p < 0.01). No association was found between ownership type and either presence in rankings or positioning (p > 0.05), except in one ranking with lower participation by public institutions. Long-established universities had higher participation and better positioning in rankings than those with less seniority (p < 0.01). Among licensed Peruvian universities, presence and better positioning in university rankings thus depend on institutional seniority and not on the type of ownership.

Originality/value

This research highlights the lack of equity in several international rankings used to evaluate the quality of universities, in that most of them give priority to aspects related to institutional seniority and size, while the results of younger and smaller institutions are not put into perspective.

Details

Journal of Applied Research in Higher Education, vol. 13 no. 4
Type: Research Article
ISSN: 2050-7003

Keywords

Article
Publication date: 20 August 2018

Corren G. McCoy, Michael L. Nelson and Michele C. Weigle

The purpose of this study is to present an alternative to university ranking lists published in U.S. News & World Report, Times Higher Education, Academic Ranking of World…

Abstract

Purpose

The purpose of this study is to present an alternative to university ranking lists published in U.S. News & World Report, Times Higher Education, Academic Ranking of World Universities and Money Magazine. A strategy is proposed to mine a collection of university data obtained from Twitter and publicly available online academic sources to compute social media metrics that approximate typical academic rankings of US universities.

Design/methodology/approach

The Twitter application programming interface (API) is used to rank 264 universities using two easily collected measurements. The University Twitter Engagement (UTE) score is the total number of primary and secondary followers affiliated with the university. The authors mine other public data sources related to endowment funds, athletic expenditures and student enrollment to compute a ranking based on the endowment, expenditures and enrollment (EEE) score.
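A minimal sketch of the UTE definition as described above (hypothetical in-memory data; the original study collected follower relationships via the Twitter API):

```python
def ute_score(primary_followers, follower_graph):
    """University Twitter Engagement (UTE) as described: the number of
    primary followers (accounts following the university's official
    accounts) plus secondary followers (followers of those accounts).

    `follower_graph` maps an account name to the set of accounts that
    follow it. Each distinct account is counted once in this sketch.
    """
    primary = set(primary_followers)
    secondary = set()
    for account in primary:
        secondary |= follower_graph.get(account, set())
    return len(primary | secondary)
```

The deduplication of accounts appearing as both primary and secondary followers is an assumption of this sketch, not a detail stated in the abstract.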

Findings

In rank-to-rank comparisons, the authors observed a significant, positive rank correlation (τ = 0.6018) between UTE and an aggregate reputation ranking, which indicates UTE could be a viable proxy for ranking atypical institutions normally excluded from traditional lists.

Originality/value

The UTE and EEE metrics offer distinct advantages because they can be calculated on-demand rather than relying on an annual publication and they promote diversity in the ranking lists, as any university with a Twitter account can be ranked by UTE and any university with online information about enrollment, expenditures and endowment can be given an EEE rank. The authors also propose a unique approach for discovering official university accounts by mining and correlating the profile information of Twitter friends.

Details

Information Discovery and Delivery, vol. 46 no. 3
Type: Research Article
ISSN: 2398-6247

Keywords

Article
Publication date: 26 February 2018

Sheeja N.K., Susan Mathew K. and Surendran Cherukodan

This study aims to examine if there exists a relation between scholarly output and institutional ranking based on National Institutional Ranking Framework (NIRF) of India…

Abstract

Purpose

This study aims to examine if there exists a relation between scholarly output and institutional ranking based on the National Institutional Ranking Framework (NIRF) of India. This paper also aims to analyze and compare the parameters of NIRF with those of leading world university rankings.

Design/methodology/approach

The data for the study were collected through Web content analysis. The major part of the data was collected from the official websites of NIRF, Times Higher Education World University Rankings and QS World University Rankings.

Findings

The study found that the parameters fixed for the assessment of Indian institutions under NIRF are on par with those of other world university ranking agencies. Scholarly output of a university is one of the major parameters of university ranking schemes. Indian universities that scored high for research productivity came top in NIRF; these universities also figured in world university rankings. Universities from South India excel in NIRF, and there is a close relationship between scholarly productivity and institutional ranking.

Originality/value

Correlation between h-index and scholarly productivity has been dealt with in several studies. This paper is the first attempt to find the relationship between scholarly productivity and ranking of universities in India based on NIRF.

Details

Global Knowledge, Memory and Communication, vol. 67 no. 3
Type: Research Article
ISSN: 0024-2535

Keywords

Article
Publication date: 16 October 2017

Richard Croucher, Paul Gooderham and Marian Rizov

The purpose of this paper is to test Shattock’s legacy reputation thesis that non-leading universities in the UK face insuperable resource barriers to entering the leading group.

Abstract

Purpose

The purpose of this paper is to test Shattock’s legacy reputation thesis that non-leading universities in the UK face insuperable resource barriers to entering the leading group.

Design/methodology/approach

Employing regression analysis, the authors examine whether prioritizing research performance is a viable strategy for non-leading UK universities aiming to improve their organizational effectiveness. The dependent variable, organizational effectiveness, is measured by the annual Guardian rankings of universities. The main independent variable, research performance, is measured using “research power” (“RP”). RP is derived from the UK Research Excellence Framework.

Findings

For 2008-2014, the authors find that changes in research performance impacted university rankings. However, the authors also find that changes to the rankings are largely confined to non-leading universities and have not led to these institutions breaking into the group of leading universities. Therefore, Shattock’s thesis is supported.

Practical implications

Failing to maintain research performance can have significant negative consequences for the rankings of non-leading universities.

Originality/value

This is the first study that examines the relationship between the research performance of universities in the UK and a measure of their overall organizational effectiveness.

Details

Journal of Organizational Effectiveness: People and Performance, vol. 5 no. 1
Type: Research Article
ISSN: 2051-6614

Keywords

Article
Publication date: 2 August 2013

Teerasak Markpin, Nongyao Premkamolnetr, Santi Ittiritmeechai, Chatree Wongkaew, Wutthisit Yochai, Preeyanuch Ratchatahirun, Janjit Lamchaturapatr, Kwannate Sombatsompop, Worsak Kanok‐Nukulchai, Lee Inn Beng and Narongrit Sombatsompop

The purpose of this paper is to study the effects of the choice of database and data retrieval methods on the research performance of a number of selected Asian…

Abstract

Purpose

The purpose of this paper is to study the effects of the choice of database and data retrieval methods on the research performance of a number of selected Asian universities from 33 countries using two different indicators (publication volume and citation count) and three subject fields (energy, environment and materials) during the period 2005‐2009.

Design/methodology/approach

To determine the effect of the choice of database, the Scopus and Web of Science databases were queried to retrieve the publications and citations of the top ten Asian universities in three subject fields. To ascertain the effect of data retrieval methods, the authors proposed a new data retrieval method called Keyword‐based Data Retrieval (KDR), which uses relevant keywords identified by independent experts to retrieve the publications and citations of the top 30 Asian universities in the Environment field from the entire Scopus database. The results were then compared with those retrieved using the Conventional Data Retrieval (CDR) method.
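A hedged sketch of the contrast between the two retrieval strategies as described (the record fields and keyword lists here are hypothetical; the original work queried the Scopus database):

```python
def cdr_retrieve(records, university, subject_category):
    """Conventional Data Retrieval (CDR) sketch: keep records affiliated
    with the university whose source title is classified under the
    target subject field by the database."""
    return [r for r in records
            if university in r["affiliations"]
            and subject_category in r["source_categories"]]

def kdr_retrieve(records, university, expert_keywords):
    """Keyword-based Data Retrieval (KDR) sketch: keep affiliated records
    whose title or abstract mentions any expert-chosen keyword, searching
    the whole database rather than only field-classified sources."""
    keywords = [k.lower() for k in expert_keywords]
    return [r for r in records
            if university in r["affiliations"]
            and any(k in (r["title"] + " " + r["abstract"]).lower()
                    for k in keywords)]
```

Because KDR is not restricted to sources the database classifies under the field, it can pick up relevant multidisciplinary publications that CDR misses, which is consistent with the larger counts the paper reports for KDR.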

Findings

The Asian university ranking order is strongly affected by the choice of database, indicator and data retrieval method used. The KDR method yields many more publications and citation counts than the CDR method, provides a better understanding of the university ranking results, and retrieves publications and citations in source titles outside those classified by the database. Moreover, the publications found by the KDR method have a multidisciplinary research focus.

Originality/value

The paper concludes that KDR is a more suitable methodology to retrieve data for measuring university research performance, particularly in an environment where universities are increasingly engaging in multidisciplinary research.

Content available
Article
Publication date: 4 February 2020

Maruša Hauptman Komotar

This paper aims to investigate how global university rankings interact with quality and quality assurance in higher education along the two lines of investigation, that…

Abstract

Purpose

This paper aims to investigate how global university rankings interact with quality and quality assurance in higher education along the two lines of investigation, that is, from the perspective of their relationship with the concept of quality (assurance) and the development of quality assurance policies in higher education, with particular emphasis on accreditation as the prevalent quality assurance approach.

Design/methodology/approach

The paper firstly conceptualises quality and quality assurance in higher education and critically examines the methodological construction of the four selected world university rankings and their references to “quality”. On this basis, it answers the two “how” questions: How is the concept of quality (assurance) in higher education perceived by world university rankings and how do they interact with quality assurance and accreditation policies in higher education? Answers are provided through the analysis of different documentary sources, such as academic literature, glossaries, international studies, institutional strategies and other documents, with particular focus on official websites of international ranking systems and individual higher education institutions, media announcements, and so on.

Findings

The paper argues that, given their quantitative orientation, it is quite problematic to perceive world university rankings as a means of assessing or assuring institutional quality. Like (international) accreditations, they may foster vertical differentiation of higher education systems and institutions. Because of their predominant accountability purpose, they cannot encourage improvements in the quality of higher education institutions.

Practical implications

Research results are beneficial to different higher education stakeholders (e.g. policymakers, institutional leadership, academics and students), as they offer them a comprehensive view on rankings’ ability to assess, assure or improve the quality in higher education.

Originality/value

The existing research focuses principally either on interactions of global university rankings with the concept of quality or with processes of quality assurance in higher education. The comprehensive and detailed analysis of their relationship with both concepts thus adds value to the prevailing scholarly debates.

Details

Quality Assurance in Education, vol. 28 no. 1
Type: Research Article
ISSN: 0968-4883

Keywords
