By Scott Jaschik. Quacquarelli Symonds, one of the major groups conducting international rankings of universities, has banned universities from recruiting people to take part in the peer review surveys used to evaluate institutions. QS accepts academic volunteers as participants in its rankings reviews and, until now, has permitted universities to recruit such volunteers, provided the institutions do not suggest how they should evaluate the universities. The move by QS follows news that the president of University College Cork sent a letter to all faculty members urging each of them to ask three people they know at other universities -- people who would understand the university and its need to move up in the rankings -- to participate in the QS process. Read more...
A new report entitled “Global university rankings and their impact II” was published by the European University Association (EUA) and launched in a special session of the EUA Annual Conference on 12 April.
PART II: Methodological changes and new developments in rankings since 2011
1. The SRC ARWU rankings
The SRC ARWU World University Ranking (SRC ARWU) is the most consolidated of the popular university-based global rankings. There have been no changes in its core methodology since 2010.
2. National Taiwan University Ranking: performance ranking of scientific papers for world universities
The NTU Ranking aims to be a ranking of scientific papers, i.e. it deliberately uses publication and citation indicators only, so the underlying data is comparatively reliable. However, as no field normalisation is used, the results are skewed towards the life sciences and the natural sciences. The original ranking also strongly favours large universities. The “reference ranking” option converts the indicators into relative ones, but it displays only the overall score, not the scores per academic staff member for the individual indicators.
3. Times Higher Education
THE’s descriptions of its methodology customarily refer only to the methodological tools used, without always providing enough information about how the scores are actually calculated from raw data (Berlin Principle 6). Overall there were several – albeit discreet – changes in the methodology of the THE World University Ranking in 2010 and 2011, but none since then. Most of them represent improvements, such as the slight decrease (from 34.5% to 33%) in the total weight of the reputation indicators, which thus account for one third of the overall score. The reputation indicators in the THE World University Ranking and the 2012 THE Reputation Survey are discussed in more detail in the next section.
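To make the role of indicator weights concrete, here is a minimal sketch of how a composite score is formed from weighted indicator scores. The indicator names and values are hypothetical; the weights only illustrate a split in which the reputation components sum to one third of the total, and they do not reproduce THE’s actual calculation, which also normalises raw data before weighting.

```python
# Illustrative sketch: forming a composite ranking score as a weighted sum of
# indicator scores. Names, weights and values are hypothetical placeholders,
# NOT the actual THE methodology.

indicator_weights = {
    "teaching_reputation": 0.15,      # reputation components together: 0.33
    "research_reputation": 0.18,
    "citations": 0.30,
    "other_teaching_research": 0.27,
    "international_outlook": 0.075,
    "industry_income": 0.025,
}

# Hypothetical indicator scores for one institution, each already scaled 0-100.
scores = {
    "teaching_reputation": 72.0,
    "research_reputation": 65.0,
    "citations": 88.0,
    "other_teaching_research": 70.0,
    "international_outlook": 55.0,
    "industry_income": 40.0,
}

overall = sum(indicator_weights[k] * scores[k] for k in indicator_weights)
reputation_weight = (indicator_weights["teaching_reputation"]
                     + indicator_weights["research_reputation"])

print(f"Overall score: {overall:.1f}")                     # weighted sum
print(f"Reputation share of total weight: {reputation_weight:.2f}")
```

The point of the sketch is simply that small shifts in the weights (such as 34.5% versus 33% for reputation) change every institution’s overall score, which is why weight changes matter even when the underlying data does not change.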
4. Thomson Reuters’ Global Institutional Profiles Project
The Thomson Reuters Global Institutional Profiles Project (GPP) is proprietary to Thomson Reuters. Its aim is to create portraits of globally significant institutions in terms of their reputation, scholarly outputs, funding levels, academic staff characteristics and other information, in one comprehensive database (Thomson Reuters, 2012a). GPP is not a ranking as such; however, one of the parameters used is the ranking position of institutions, and these positions are taken from the THE rankings.
5. Quacquarelli Symonds rankings
Comparisons between universities on a subject basis (QS, 2012f) can be much more useful to users than global university league tables that try to encapsulate entire institutions in a single score. Furthermore, comparisons made within a single subject lessen the field bias caused by the different publishing cultures and citation practices of different fields of research. In 2012 the QS subject rankings covered 29 of the 52 subject areas defined. These rankings are strongly based on reputation surveys. The methodology used is not sufficiently transparent for users to repeat the calculations, and various mathematical adjustments are made before the final score is reached. In relation to the academic reputation survey, QS admits that a university may occasionally be nominated as excellent and ranked in a subject in which it “neither operates programmes nor research” (QS, 2011b, p.11). In an attempt to address this, QS specifies thresholds and conducts a final screening to ensure that listed institutions are, indeed, active in the subject concerned. This demonstrates that academics risk nominating universities on the basis of their previous reputation or their reputation in other areas, rather than on their own real knowledge of the institution. While the measures taken may help to eliminate inappropriate choices, they cannot prevent academics from nominating universities which have programmes, but no particular capacity or strength, in a given subject.
6. CWTS Leiden Ranking
The identification of bias in the MNCS indicators, given their unusual sensitivity to publications with extremely high citation levels, and the introduction of indicator stability intervals to detect high citation scores that may result from a few such publications (rather than from a university’s entire publication output) are both positive developments. Yet they are also a warning that new indicators always introduce fresh biases, so that rankings are constantly liable to distortion. Only time will tell whether the new indicator – the proportion of top 10% publications (PPtop 10%) – which currently seems the most reliable, will remain the best in the long term or will create fresh problems. However, the inclusion of both full counting and proportional counting methods does enable users to select further options as they see fit.
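As an illustration of the counting options mentioned above, the sketch below computes a PPtop 10%-style share under full and fractional (proportional) counting for an invented set of publications. The top-10% flags and collaboration counts are assumptions made for the example only; the real CWTS Leiden computation derives its citation thresholds per field and publication year from the full database.

```python
# Toy illustration of a PP(top 10%) indicator under two counting methods.
# Each publication is a tuple: (is_in_top_10_percent, n_collaborating_institutions).
# All values are invented for illustration.

publications = [
    (True, 1),   # single-institution paper among the 10% most cited
    (False, 1),
    (True, 4),   # highly cited paper shared with 3 other institutions
    (False, 2),
    (False, 1),
]

# Full counting: every publication counts as 1 for the university.
full_total = len(publications)
full_top = sum(1 for top, _ in publications if top)
pp_top10_full = full_top / full_total

# Fractional (proportional) counting: a paper shared with n institutions
# counts as 1/n, reducing the influence of large collaborations.
frac_total = sum(1 / n for _, n in publications)
frac_top = sum(1 / n for top, n in publications if top)
pp_top10_frac = frac_top / frac_total

print(f"PP(top 10%), full counting:       {pp_top10_full:.1%}")   # 40.0%
print(f"PP(top 10%), fractional counting: {pp_top10_frac:.1%}")   # 33.3%
```

As the toy numbers show, fractional counting shrinks the contribution of the multi-institution paper, which is why the two counting methods can reorder universities with very different collaboration profiles.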
7. Webometrics Ranking of World Universities
The increased coverage of Webometrics, which now includes over 20,000 higher education institutions, allows nearly all higher education institutions worldwide to compare themselves with others. Apart from the “excellence” indicator, which is based on SCImago bibliometric data, all indicators used by Webometrics are based on web analysis and are considerably less direct proxies than the indicators used by academic rankings. Webometrics thus continues to focus on providing a rough indication of how an institution performs compared to others.
8. U-Map
According to the report on U-Map in Estonia (Kaiser et al., 2011), the resulting U-Map profiles largely match the expectations of higher education institutions and the Ministry of Education, while the most interesting differences and diversity are observable in the “knowledge exchange” and “international orientation” profiles. However, the Estonian report concedes that, because U-Map is designed as a European transparency tool, it is not fully compatible with all national institutional needs. Both Estonia and Portugal acknowledge that U-Map has raised awareness among institutions of their own profile.
9. U-Multirank
Judging from the experience of the feasibility study, and given the intention to integrate the already tested U-Map classification tool, U-Multirank, if it meets its objectives, will be substantially different from existing global rankings. The implementation phase was launched in January 2013 with the financial support of the European Commission, and the first rankings are expected in early 2014.
10. U21 Rankings of National Higher Education Systems
While the development of a systems-level ranking is an interesting new approach, as indicated in Part I there are many open questions. For example, the weights of the individual indicators in the overall ranking have not been provided, and the description of indicator weights is confusing, so it is very hard to determine which indicators have the greatest and least impact on the overall score. The required calculations have therefore been performed and the weight of each indicator added in the course of preparing the present report. It has been assumed that the two “connectivity” indicators are equal in weight, since nothing is said about them either in the overall report (Williams et al., 2012) or on the U21 website.
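For readers unfamiliar with this kind of reconstruction, a minimal sketch follows: it derives per-indicator weights from module-level weights, splitting the “connectivity” module equally between its two indicators as assumed above. All module weights, indicator names and within-module shares are hypothetical placeholders, not the figures from the U21 report.

```python
# Sketch of deriving each indicator's weight in an overall score from its
# module weight and its share within the module. All numbers are hypothetical
# placeholders; only the equal split of the two "connectivity" indicators
# mirrors the assumption described in the text above.

modules = {
    # module: (module weight, {indicator: share within the module})
    "connectivity": (0.10, {"international_students": 0.5,        # equal split
                            "international_coauthorship": 0.5}),  # assumed
    "output":       (0.40, {"publications": 0.4,
                            "citations": 0.3,
                            "other_output_indicators": 0.3}),
}

overall_weights = {
    indicator: module_weight * share
    for module_weight, shares in modules.values()
    for indicator, share in shares.items()
}

for indicator, w in sorted(overall_weights.items(), key=lambda kv: -kv[1]):
    print(f"{indicator}: {w:.1%} of the overall score")
```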
11. SCImago Rankings
The tools offered by SCImago are useful and available free of charge. One key feature of SCImago is that it covers more than 3,000 institutions, thus allowing a large group of institutions to compare themselves with others. Users nevertheless have to take into account that SCImago does not distinguish between universities and other research organisations. SCImago tools make it possible to compare institutions or countries overall, across 27 subject areas and numerous narrower subject categories, and by country or region. The journal rankings are important when choosing a journal for publication. SCImago also has its limitations: only bibliometric data is used, and most indicators are absolute numbers, which means that the rankings favour large institutions.
12. University Ranking by Academic Performance
The greater inclusiveness of URAP compared to the most popular global university rankings is of interest. Its results should be reliable because its content is drawn solely from international bibliometric databases. At the same time, and despite the words “academic performance” in its name, URAP uses indicators concerned exclusively with research. No indicators related to teaching are included; the focus is therefore once more on research-oriented institutions. Furthermore, its six ranking indicators are absolute values and therefore size-dependent. As a result, URAP is strongly biased towards large universities.
13. EUMIDA
The development of EUMIDA corresponds to the growing need for policy makers to have more extensive Europe-wide, comparable data collection. EUMIDA can therefore be seen as a positive development. In principle, the aggregation of results into an index is a ranking.
14. AHELO
EUA has been closely involved in monitoring the progress of this feasibility study, along with its partner associations in the US and Canada. The joint concerns of the three associations were raised in a letter sent to the OECD in July 2012 on behalf of the university communities in all three regions.
15. IREG ranking audit
The success of the audits will no doubt depend greatly on the qualifications of the audit team members, their willingness to explore ranking methodologies in depth, and their ability to access the websites of the ranking organisations and, specifically, details of the methodology applied. Experience to date, as explained in the first EUA report, has shown that there are frequent gaps in the published methodologies, most notably in the explanation of how indicator values are calculated from the raw data. As a result, those wishing to repeat the calculation to verify a published result in the ranking table have been unable to do so. There are also cases in which the methodological content posted in more than one section of a ranking provider’s website is inconsistent. While such variations are usually attributable to content relating to ranking tables from different years, the precise years concerned are not clearly specified. Other rankings refer to the “normalisation” of data without stating what kind of “normalisation” is meant. The term can denote many different things, ranging from the field normalisation of bibliometric indicators, to the “normalisation” of indicators to make them relative rather than size-dependent, to “normalisation” involving the division of a university’s result by that of the “best” university to make the former “dimensionless”. It is to be hoped that the IREG audit will be thorough, take these concerns into account and lead to substantial improvements in ranking methodologies and in the quality of the information provided. More will be known about how this works in practice only when the first audit results are available.
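Because these three readings of “normalisation” are easy to conflate, the sketch below contrasts them on invented numbers. It illustrates the general techniques only; the baselines, counts and scores are assumptions and do not reproduce any particular ranking’s formula.

```python
# Three different operations that ranking providers may all call "normalisation",
# illustrated on invented numbers. None of this reproduces any specific ranking.

# 1. Field normalisation of a bibliometric indicator: divide a paper's citation
#    count by the world average for its field and publication year.
citations = 24
world_average_for_field_and_year = 8.0                              # hypothetical baseline
field_normalised = citations / world_average_for_field_and_year     # -> 3.0

# 2. Size normalisation: turn an absolute count into a relative indicator
#    so that large and small institutions become comparable.
publications = 4200
academic_staff = 1500
publications_per_staff = publications / academic_staff              # -> 2.8

# 3. "Dimensionless" scaling: divide every institution's result by the best
#    result so that the leader scores 100 and the rest are expressed relative to it.
raw_scores = {"University A": 560.0, "University B": 420.0, "University C": 280.0}
best = max(raw_scores.values())
scaled = {name: 100 * value / best for name, value in raw_scores.items()}

print(f"field-normalised citation impact: {field_normalised:.1f}")
print(f"publications per staff member:    {publications_per_staff:.2f}")
print(f"scaled scores: {scaled}")
```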
Contents of PART II: Methodological changes and new developments in rankings since 2011
1. The SRC ARWU rankings
ARWU Ranking Lab and Global Research University Profiles (GRUP)
Macedonian University Rankings
Greater China Ranking
2. National Taiwan University Ranking: performance ranking of scientific papers for world universities
3. Times Higher Education
Times Higher Education World University Ranking
THE academic reputation surveys and THE World Reputation Ranking
THE 100 under 50 ranking
4. Thomson Reuters’ Global Institutional Profiles Project
5. Quacquarelli Symonds rankings
QS World University Ranking
Additional league table information
The QS classification
QS Stars
QS World University Rankings by subject
QS Best Student Cities Ranking
QS top-50-under-50 Ranking
6. CWTS Leiden Ranking
7. Webometrics Ranking of World Universities
8. U-Map
9. U-Multirank
10. U21 Rankings of National Higher Education Systems
11. SCImago Rankings
SCImago Institutional Rankings
Other SCImago rankings and visualisations
12. University Ranking by Academic Performance
13. EUMIDA
14. AHELO
15. IREG ranking audit
Call for Participation in the Rankings in Institutional Strategies and Processes (RISP) project
The European University Association (EUA) has issued a call to participate in an online survey in the context of the Rankings in Institutional Strategies and Processes (RISP) project.
Through its recently launched project entitled “Rankings in Institutional Strategies and Processes” (RISP), the European University Association (EUA) – together with its partners the Dublin Institute of Technology, the French Rectors’ Conference and the Latvian Academic Information Centre – aims to analyse the impact of rankings on institutional decision-making. This is the first pan-European study of the influence of rankings on European universities.
An increasing number of university rankings are published every year, and there is a growing consensus that rankings have become part of the higher education landscape. All higher education institutions are invited to complete the survey by 17 June 2013.