Search results

1 – 10 of over 23000
Article
Publication date: 7 March 2025

Prem Vrat

Abstract

Purpose

The QS World University Rankings framework is a globally renowned system that ranks institutions both worldwide and regionally. This research aims to examine an alternative methodology for obtaining world rankings. Although the QS ranking is very popular, the research presents a case for re-examining the methodology it uses.

Design/methodology/approach

The QS ranking framework uses a simple additive weighting (SAW) approach to obtain a total weighted score for each candidate institution, and institutions are ranked in descending order of this score. An illustrative example of four institutions from the QS World University Rankings 2025 is taken, and the results are compared with the ranks obtained using the SAW methodology implicitly employed by QS. In this research, a multi-criteria decision-making (MCDM) method, TOPSIS, is employed to rank the institutions and compare the outcome with the QS ranking.
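
The contrast between the two scoring schemes can be illustrated with a short, hedged sketch. The code below is not the paper's computation: the indicator scores and weights are hypothetical placeholders and all indicators are treated as benefit criteria; it only shows how a SAW total weighted score and a TOPSIS closeness coefficient are obtained from the same decision matrix and can order alternatives differently.

```python
# Hypothetical sketch: SAW vs TOPSIS ranking of four institutions.
# The scores and weights below are illustrative placeholders, not the
# actual QS 2025 indicator data used in the paper.
import numpy as np

scores = np.array([            # rows: institutions, columns: indicators
    [62.0, 55.0, 70.0],
    [60.0, 58.0, 66.0],
    [57.0, 50.0, 61.0],
    [52.0, 48.0, 55.0],
])
weights = np.array([0.5, 0.3, 0.2])     # assumed to sum to 1

# Simple additive weighting (SAW): total weighted score per institution.
saw = scores @ weights

# TOPSIS: closeness to the ideal solution on the weighted, vector-normalised
# decision matrix (all indicators treated as benefit criteria).
v = (scores / np.linalg.norm(scores, axis=0)) * weights
ideal, anti_ideal = v.max(axis=0), v.min(axis=0)
d_plus = np.linalg.norm(v - ideal, axis=1)
d_minus = np.linalg.norm(v - anti_ideal, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("SAW ranks   :", (-saw).argsort().argsort() + 1)       # 1 = best
print("TOPSIS ranks:", (-closeness).argsort().argsort() + 1)
```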

Findings

In the QS World University Rankings 2025, RWTH scores 59.9 with a rank of 99 and the University of Copenhagen, Denmark, scores 59.6 with a rank of 100; IIT Bombay scores 56.6 and IIT Delhi 52.1. Under TOPSIS, these two ranks are interchanged, with TOPSIS scores of 0.6350 for the University of Copenhagen and 0.4617 for RWTH, while the ranks of IIT Bombay and IIT Delhi remain unchanged.

Research limitations/implications

This paper adopted a small dataset of four universities/institutions to test the intuitively appealing alternative methodology and derive meaningful inferences. However, it does not comment on the basic structure of the QS ranking system, which gives large weight to survey-based academic and employer reputation measures.

Originality/value

This paper suggested an alternative but well-known MCDM technique (TOPSIS) for ranking world universities rather than the SAW technique implicitly employed by QS.

Details

Journal of Advances in Management Research, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 0972-7981

Book part
Publication date: 5 February 2016

Catherine Paradeise and Ghislaine Filliatreau

Abstract

Much has been analyzed regarding the origins and the impact of rankings and metrics on the policies, behaviors, and missions of universities. Surprisingly, little attention has been devoted to describing and analyzing the emergence of metrics as a new action field. This industry, fueled by the “new public management” policy perspectives that operate backstage of the contemporary, pervasive “regime of excellence,” remains a black box worth exploring in depth. This paper intends to fill this gap. It first sets the stage for this new action field by stressing the differences between the policy fields of higher education in the United States and Europe, as a way to understand the specificities of the use of metrics and rankings on both continents. The second part describes the actors of the field, the productive organizations they build, the skills they combine, the products they put on the market, and their shared norms and audiences.

Details

The University Under Pressure
Type: Book
ISBN: 978-1-78560-831-5

Article
Publication date: 12 October 2018

Güleda Doğan and Umut Al

Abstract

Purpose

The purpose of this paper is to analyze the similarity of intra-indicators used in research-focused international university rankings (Academic Ranking of World Universities (ARWU), NTU, University Ranking by Academic Performance (URAP), Quacquarelli Symonds (QS) and Round University Ranking (RUR)) over years, and show the effect of similar indicators on overall rankings for 2015. The research questions addressed in this study in accordance with these purposes are as follows: At what level are the intra-indicators used in international university rankings similar? Is it possible to group intra-indicators according to their similarities? What is the effect of similar intra-indicators on overall rankings?

Design/methodology/approach

The indicator-based scores of all universities in five research-focused international university rankings, for all years in which they were ranked, form the data set for the first and second research questions. The authors used multidimensional scaling (MDS) and a cosine similarity measure to analyze the similarity of indicators and answer these two questions. Indicator-based scores and overall ranking scores for 2015 are used as data, and the Spearman correlation test is applied to answer the third research question.
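
As an illustration of the similarity analysis described above, the sketch below computes pairwise cosine similarities between indicator score columns and embeds the resulting dissimilarities with MDS; a Spearman test of one indicator against an overall score is added for the third question. The score matrix is random placeholder data, not the rankings' actual indicator-based scores.

```python
# Illustrative sketch only: random placeholder scores stand in for the
# indicator-based scores collected from the five rankings.
import numpy as np
from scipy.stats import spearmanr
from sklearn.manifold import MDS
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
# rows: universities, columns: indicators (e.g. publications, citations, ...)
indicator_scores = rng.uniform(0, 100, size=(400, 6))

# Similarity between indicators, so work on the transposed matrix.
similarity = cosine_similarity(indicator_scores.T)        # 6 x 6
dissimilarity = 1.0 - similarity

# Two-dimensional MDS map of the indicators; nearby points are similar indicators.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissimilarity)
print(coords)

# Spearman correlation between one indicator and a (placeholder) overall score.
overall_score = indicator_scores.mean(axis=1)
print(spearmanr(indicator_scores[:, 0], overall_score))
```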

Findings

Results of the analyses show that the intra-indicators used in ARWU, NTU and URAP are highly similar and can be grouped according to their similarities. The authors also examined the effect of similar indicators on the 2015 overall ranking lists for these three rankings. NTU and URAP are least affected by omitting similar indicators, which means these two rankings could produce overall ranking lists very similar to the existing ones using fewer indicators.

Research limitations/implications

CWTS, Mapping Scientific Excellence, Nature Index and SCImago Institutions Rankings (until 2015) are not included in the scope of this paper, since they do not create overall ranking lists. Likewise, Times Higher Education, CWUR and US are not included because they do not present indicator-based scores. The required data were not accessible for QS for 2010 and 2011. Moreover, although QS ranks more than 700 universities, only the first 400 universities in the 2012–2015 rankings could be analyzed. Although QS’s and RUR’s data were analyzed in this study, it was statistically not possible to reach any conclusion for these two rankings.

Practical implications

The results of this study may be considered mainly by ranking bodies and policy- and decision-makers. Ranking bodies may use the results to review the indicators they use, decide which indicators to include in their rankings and question whether it is necessary to continue producing overall rankings. Policy- and decision-makers may also benefit by reconsidering the use of overall ranking results as an important input to their decisions and policies.

Originality/value

This study is the first to use MDS and a cosine similarity measure to reveal the similarity of indicators. Ranking data are skewed, which requires nonparametric statistical analysis; therefore, MDS was used. The study covers all ranking years and all universities in the ranking lists, unlike similar studies in the literature that analyze shorter time intervals and only top-ranked universities. Based on the literature review, the similarity of intra-indicators for URAP, NTU and RUR is analyzed for the first time in this study.

Article
Publication date: 21 January 2025

Abroon Qazi and M.K.S. Al-Mhdawi

Abstract

Purpose

This study aims to address a gap in traditional university ranking methodologies by investigating the interrelations among key indicators featured in the QS rankings, within the broader context of benchmarking in higher education.

Design/methodology/approach

Utilizing the 2024 QS ranking data and a Bayesian Belief Network (BBN) model, this research explores the interconnected relationships among indicators such as “academic reputation,” “employer reputation,” “faculty-to-student ratio,” “sustainability” and others to predict university rankings.
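
The idea of a discrete Bayesian network over ranking indicators can be sketched in a simplified form: scores are binned into bands and a conditional probability table linking an indicator band to the overall-score band is estimated from data. The snippet below uses synthetic placeholder data and a single parent node, so it is a stand-in for, not a reproduction of, the BBN developed in the study.

```python
# Simplified stand-in for one piece of a discrete BBN: estimate
# P(overall-score band | academic-reputation band) from banded data.
# The data are synthetic placeholders, not the 2024 QS ranking data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
academic_reputation = rng.uniform(0, 100, 500)
overall_score = 0.7 * academic_reputation + rng.normal(0, 10, 500)

bands = ["low", "medium", "high"]
df = pd.DataFrame({
    "academic_reputation": pd.cut(academic_reputation, bins=3, labels=bands),
    "overall_score": pd.cut(overall_score, bins=3, labels=bands),
})

# One conditional probability table of the network (rows sum to 1).
cpt = pd.crosstab(df["academic_reputation"], df["overall_score"], normalize="index")
print(cpt.round(2))
```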

Findings

The developed model achieves 80% predictive accuracy and shows that strong performance in “employment outcomes,” “academic reputation” and “employer reputation” contributes to higher overall scores. In contrast, weaker performance in “academic reputation” and “sustainability” is associated with lower scores. Among these factors, “academic reputation” is the most informative indicator for predicting the overall score.

Originality/value

This research contributes to the literature by emphasizing the interconnections among ranking criteria and advocating for network-based models for benchmarking in higher education. In particular, it underscores the importance of “sustainability” in forecasting rankings, aligning well with the broader theme of predicting university performance and societal impact. This study offers valuable insights for researchers and policymakers, promoting a comprehensive approach that considers the interdependencies among criteria to enhance educational quality and address societal change within the framework of benchmarking in university rankings.

Details

Benchmarking: An International Journal, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1463-5771

Article
Publication date: 26 February 2018

Sheeja N.K., Susan Mathew K. and Surendran Cherukodan

Abstract

Purpose

This study aims to examine whether there is a relationship between scholarly output and institutional ranking based on the National Institutional Ranking Framework (NIRF) of India. This paper also aims to analyze and compare the parameters of NIRF with those of leading world university rankings.

Design/methodology/approach

The data for the study were collected through Web content analysis. Most of the data were collected from the official websites of NIRF, the Times Higher Education World University Rankings and the QS World University Rankings.

Findings

The study found that the parameters fixed for the assessment of Indian institutions under NIRF are on par with those of other world university ranking agencies. Scholarly output is one of the major parameters of university ranking schemes. Indian universities that scored high on research productivity came out on top in NIRF, and these universities also figured in world university rankings. Universities from South India excel in NIRF, and there is a close relationship between scholarly productivity and institutional ranking.

Originality/value

Correlation between h-index and scholarly productivity has been dealt with in several studies. This paper is the first attempt to find the relationship between scholarly productivity and ranking of universities in India based on NIRF.

Details

Global Knowledge, Memory and Communication, vol. 67 no. 3
Type: Research Article
ISSN: 0024-2535

Article
Publication date: 1 January 2021

Muhammad Sajid Qureshi, Ali Daud, Malik Khizar Hayat and Muhammad Tanvir Afzal

Abstract

Purpose

Academic rankings face various issues, including the use of data sources that are not publicly verifiable, subjective parameters, a narrow focus on research productivity, regional biases and so forth. This research work is intended to enhance the credibility of the ranking process by using objective indicators based on publicly verifiable data sources.

Design/methodology/approach

The proposed ranking methodology – OpenRank – derives its objective indicators from two well-known, publicly verifiable data repositories: ArnetMiner and DBpedia.
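
As an illustration of drawing an indicator from a publicly verifiable source, the sketch below queries DBpedia's public SPARQL endpoint for universities' student numbers. The chosen property (dbo:numberOfStudents) and the indicator itself are assumptions for illustration; the abstract does not specify which OpenRank indicators come from which repository, and ArnetMiner access is not shown.

```python
# Hedged sketch: fetch one publicly verifiable attribute from DBpedia.
# dbo:numberOfStudents is an assumed example property, not necessarily
# an indicator used by OpenRank.
import requests

query = """
SELECT ?university ?students WHERE {
  ?university a dbo:University ;
              dbo:numberOfStudents ?students .
} LIMIT 10
"""

response = requests.get(
    "https://dbpedia.org/sparql",
    params={"query": query, "format": "application/sparql-results+json"},
    timeout=30,
)
for row in response.json()["results"]["bindings"]:
    print(row["university"]["value"], row["students"]["value"])
```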

Findings

The resultant academic ranking reflects common tendencies of the international academic rankings published by the Shanghai Ranking Consultancy (SRC), Quacquarelli Symonds (QS) and Times Higher Education (THE). Evaluation of the proposed methodology supports its effectiveness and quick reproducibility at a low data-collection cost.

Research limitations/implications

Implementation of the OpenRank methodology faced the issue of quality-data availability. In the future, the accuracy of academic rankings could be improved further by employing more relevant public data sources, such as the Microsoft Academic Graph, the millions of graduates' profiles available in LinkedIn repositories and the bibliographic data maintained by the Association for Computing Machinery, Scopus and so forth.

Practical implications

The suggested use of open data sources would offer new dimensions for evaluating the academic performance of higher education institutions (HEIs) and provide a more comprehensive understanding of the catalyst factors in higher education.

Social implications

The research work highlights the need for a purpose-built, publicly verifiable electronic data source for the performance evaluation of global HEIs. The availability of such a global database would help in better academic planning, monitoring and analysis. More transparent, reliable and less controversial academic rankings could be generated from such a data source.

Originality/value

We suggest a solution for improving the HEI ranking process by making the following contributions: (1) enhancing the credibility of the ranking results by employing only objective performance indicators extracted from publicly verifiable data sources, (2) developing an academic ranking methodology based on these objective indicators using two well-known data repositories, DBpedia and ArnetMiner, and (3) demonstrating the effectiveness of the proposed ranking methodology on real data sources.

Details

Library Hi Tech, vol. 41 no. 2
Type: Research Article
ISSN: 0737-8831

Article
Publication date: 9 September 2021

Yuan George Shan, Junru Zhang, Manzurul Alam and Phil Hancock

Abstract

Purpose

This study aims to investigate the relationship between university rankings and sustainability reporting among Australian and New Zealand universities. Even though sustainability reporting is an established area of investigation, prior research has paid inadequate attention to the nexus between university rankings and sustainability reporting.

Design/methodology/approach

This study covers 46 Australian and New Zealand universities and uses a data set of sustainability reports and disclosures from four reporting channels, including university websites and university archives, between 2005 and 2018. Ordinary least squares regression was used; Pearson and Spearman’s rank correlations were examined to investigate the likelihood of multicollinearity, and variance inflation factor values were calculated. Finally, the study uses the generalized method of moments approach to test for endogeneity.
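
A minimal sketch of the diagnostics named above (OLS, Pearson and Spearman correlations, and variance inflation factors) is shown below on randomly generated placeholder data; the generalized method of moments endogeneity test is omitted, and the variable names are illustrative rather than the study's actual measures.

```python
# Illustrative sketch with placeholder data, not the 46-university data set.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import pearsonr, spearmanr
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "sustainability_disclosure": rng.uniform(0, 1, 46),   # assumed score
    "university_size": rng.uniform(1, 10, 46),            # assumed control
})
df["ranking_score"] = 50 + 30 * df["sustainability_disclosure"] + rng.normal(0, 5, 46)

# OLS regression of ranking score on disclosure and a control variable.
X = sm.add_constant(df[["sustainability_disclosure", "university_size"]])
ols = sm.OLS(df["ranking_score"], X).fit()
print(ols.summary())

# Pearson and Spearman correlations between disclosure and ranking score.
print(pearsonr(df["sustainability_disclosure"], df["ranking_score"]))
print(spearmanr(df["sustainability_disclosure"], df["ranking_score"]))

# Variance inflation factors for the predictors (constant excluded).
vif = {col: variance_inflation_factor(X.values, i)
       for i, col in enumerate(X.columns) if col != "const"}
print(vif)
```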

Findings

The findings suggest that sustainability reporting is significantly and positively associated with university ranking and confirm that the four reporting channels play a vital role in communicating with university stakeholders. Further, this paper documents that sustainability reporting through websites, in addition to the annual report and a separate environment report, has a positive impact on university ranking systems.

Originality/value

This paper contributes to extant knowledge on the link between university rankings and university sustainability reporting, which is considered a vital communication vehicle for meeting stakeholder expectations in relation to university rankings.

Details

Meditari Accountancy Research, vol. 30 no. 6
Type: Research Article
ISSN: 2049-372X

Article
Publication date: 8 January 2025

Seema Gupta, S. Sushil and Khushboo Gulati

Abstract

Purpose

The study intends, first, to evaluate the performance of Indian institutions ranked in the National Institutional Ranking Framework 2019. Second, it compares the performance of Indian institutions with international rankings. Third, it spotlights a model for predicting the criteria that will improve these institutions' national and international rankings.

Design/methodology/approach

First, cluster analysis was undertaken to group the sample of 100 institutions into three groups. Second, discriminant analysis was performed to uncover the criteria that cause significant variation in the rankings. Third, a comparative study was conducted on the international ranking parameters to explore the factors responsible for the institutions' lower ranks in global rankings.
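
The two-step design described above can be sketched as k-means clustering into three groups followed by a linear discriminant analysis of the group labels; larger absolute discriminant coefficients point to criteria that separate the groups. The NIRF criterion scores below are random placeholders, and the criterion abbreviations are used only for illustration.

```python
# Illustrative sketch with random placeholder scores, not the NIRF 2019 data.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
criteria = ["TLR", "RP", "GO", "OI", "PR"]   # NIRF criterion abbreviations
X = pd.DataFrame(rng.uniform(0, 100, size=(100, 5)), columns=criteria)

# Step 1: group the 100 institutions into three clusters.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Step 2: discriminant analysis to see which criteria separate the clusters.
lda = LinearDiscriminantAnalysis().fit(X, groups)
print(pd.DataFrame(lda.coef_, columns=criteria).round(3))
```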

Findings

The results reveal that most institutions are low-performing for “research and professional practice” and “peer perception” criteria. Meanwhile, the performance of top-ranked institutions is unsatisfactory on the “outreach and inclusivity” criterion. The study also finds that the national ranking of Indian institutions is significantly affected by the “research and professional practice” and “peer perception” scores, which also results in their low rank in the Times Higher Education (THE) ranking.

Research limitations/implications

This study can serve as an exemplary model for any developing nation seeking to upgrade its higher education institutions' (HEIs') performance in international ranking tables.

Practical implications

The government can develop policies to improve low-performing universities and initiate policy changes in weak areas to build a globally distinctive image for Indian universities. Policymakers can recognise institutions showcasing excellent research or teaching performance and encourage them accordingly to become the best research or teaching universities, since not every university can be the best in both teaching and research.

Social implications

Policymakers can also focus on developing research collaborations with other countries and with industry for research universities, and on engaging professional staff from leading industries for teaching collaborations at the rest. The HPIs should focus on creating a global image for themselves, while MPIs and LPIs should be encouraged to raise their national rankings.

Originality/value

The study is a novel attempt to present the current state of Indian institutions based on all the national ranking criteria. It further compares the performance of the sample NIRF institutions against international ranking criteria to suggest policy changes for improving their performance at the global level.

Details

International Journal of Productivity and Performance Management, vol. ahead-of-print no. ahead-of-print
Type: Research Article
ISSN: 1741-0401

Article
Publication date: 4 February 2020

Maruša Hauptman Komotar

Abstract

Purpose

This paper aims to investigate how global university rankings interact with quality and quality assurance in higher education along two lines of investigation: from the perspective of their relationship with the concept of quality (assurance), and from the perspective of the development of quality assurance policies in higher education, with particular emphasis on accreditation as the prevalent quality assurance approach.

Design/methodology/approach

The paper firstly conceptualises quality and quality assurance in higher education and critically examines the methodological construction of the four selected world university rankings and their references to “quality”. On this basis, it answers the two “how” questions: How is the concept of quality (assurance) in higher education perceived by world university rankings and how do they interact with quality assurance and accreditation policies in higher education? Answers are provided through the analysis of different documentary sources, such as academic literature, glossaries, international studies, institutional strategies and other documents, with particular focus on official websites of international ranking systems and individual higher education institutions, media announcements, and so on.

Findings

The paper argues that, given their quantitative orientation, it is quite problematic to perceive world university rankings as a means of assessing or assuring institutional quality. Like (international) accreditations, they may foster vertical differentiation of higher education systems and institutions. Because of their predominantly accountability-oriented purpose, they cannot encourage improvements in the quality of higher education institutions.

Practical implications

Research results are beneficial to different higher education stakeholders (e.g. policymakers, institutional leadership, academics and students), as they offer a comprehensive view of rankings’ ability to assess, assure or improve quality in higher education.

Originality/value

The existing research focuses principally on the interactions of global university rankings either with the concept of quality or with the processes of quality assurance in higher education. The comprehensive and detailed analysis of their relationship with both concepts thus adds value to the prevailing scholarly debates.

Details

Quality Assurance in Education, vol. 28 no. 1
Type: Research Article
ISSN: 0968-4883

Article
Publication date: 21 August 2024

Elizane Maria Siqueira Wilhelm, Celso Bilynkievycz dos Santos and Luiz Alberto Pilatti

Abstract

Purpose

The purpose of this study is to analyze the integration of sustainable practices in the strategies and operations of world-class higher education institutions (HEIs) under the theoretical guidance of Max Weber's instrumental and value rationalities.

Design/methodology/approach

The results of the Quacquarelli Symonds World University Rankings, Times Higher Education World University Rankings, THE Impact Rankings and GreenMetric World University Ranking from 2019 to 2022 were paired, and the correlations between them were verified. Institutions appearing simultaneously in all four rankings in at least one of the years were also classified. A quantitative and qualitative methodology was used to explore how elite HEIs integrate sustainable practices into their operations and strategies, under the theoretical guidance of Max Weber's instrumental and value rationalities. Furthermore, multivariate regression models with supervised data-mining techniques were applied, using the SMOReg algorithm on 368 instances with multiple attributes, to predict the numerical value of sustainability in the rankings. Coefficients were assigned to variables to determine their relative importance in predicting the rankings.
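
A hedged sketch of the prediction step is given below. SMOReg is Weka's support-vector regression implementation; scikit-learn's linear-kernel SVR is used here as an analogous stand-in, fitted to synthetic placeholder data rather than the study's 368-instance data set. The coefficients of the fitted linear model play the role of the attribute weights mentioned above.

```python
# Analogous stand-in for SMOReg (Weka): linear support-vector regression.
# Features and target are synthetic placeholders for the paired ranking scores.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.uniform(0, 100, size=(368, 4))   # e.g. QS, THE, Impact, GreenMetric scores
y = 0.4 * X[:, 3] + 0.2 * X[:, 0] + rng.normal(0, 5, 368)   # "sustainability value"

model = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
model.fit(X, y)

# Coefficients of the linear SVR indicate each attribute's relative importance.
print(model.named_steps["svr"].coef_)
```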

Findings

The results of this study suggest that although many HEIs demonstrate a commitment to sustainability, this rarely translates into improvements in traditional rankings, indicating a disconnect between sustainable practices and global academic recognition.

Research limitations/implications

The research has limitations, including the analysis being restricted to data from specific rankings between 2019 and 2022, which may limit generalization to future editions or rankings. The predictive models used selected data and, therefore, cannot cover the full complexity of metrics from other rankings. Furthermore, internal factors of HEIs were not considered, and the correlations identified do not imply direct causality. The limited sample and potential methodological biases, together with the heterogeneity of the rankings, restrict the generalization of the results. These limitations should be considered in future studies.

Practical implications

The theoretical contributions of this study include an in-depth understanding of the intersection between academic excellence and environmental and social responsibility. From a management perspective, guidance is provided on integrating sustainability into HEI strategies to enhance visibility and classification in global rankings, while maintaining academic integrity and commitment to sustainability.

Social implications

This highlights the importance of reassessing academic rankings criteria to include sustainability assessments, thereby encouraging institutions to adopt practices that genuinely contribute to global sustainable development.

Originality/value

The originality lies in the predictive analysis between these rankings, examining the link between the level of sustainability of an HEI and its classification as a World Class University. Furthermore, it combines theories of rationality with the analysis of sustainability integration in elite HEIs, introducing new analytical perspectives that can influence future educational policies and institutional practices.

Details

International Journal of Sustainability in Higher Education, vol. 26 no. 3
Type: Research Article
ISSN: 1467-6370
