Formation Continue du Supérieur
12 septembre 2012

University rankings and system benchmarking reach similar results

By Benoît Millot. University International Rankings, or UIRs, have become a reality. And despite their shortcomings and the rise of resistance against them, they are likely to stick around.
Critics of UIRs target methodological weaknesses such as bias in favour of research, use of composite indicators, reliability of peers’ subjective opinions and so on. But they also point out the perverse effect of UIRs on the decisions of tertiary education institutions and of national authorities in charge of tertiary education – racing to develop world-class universities at the expense of national tertiary education systems.
In reaction to these caveats, analysts have convincingly argued that instead of focusing on individual universities, it would be more useful to put the spotlight on entire tertiary education systems. Simultaneously, there should be a shift from ranking to benchmarking. This twofold shift would allow countries to assess the health of their higher education systems and to design reforms encompassing all types of tertiary education institutions rather than focusing on a few centres of excellence.
Efforts are currently under way from various quarters to develop reliable International System Benchmarking (ISB) instruments, and the first comprehensive one of its kind has recently been released.
International rankings: Key results from the main leagues
In 2010 and 2011, the two years on which this analysis is based, the two UIRs most widely referred to by the academic community, analysts and decision-makers were arguably the Academic Ranking of World Universities (ARWU), launched by Shanghai Jiao Tong University, and the ranking operated by Quacquarelli Symonds (QS), formerly produced under the auspices of Times Higher Education. (THE now produces its own ranking in partnership with Thomson Reuters, while QS continues to run its ranking independently.)
In 2010 and 2011, the top 500 universities in each ranking were concentrated in 50 countries in the QS league and in 39 countries in the more exclusive ARWU league. All but two of the countries hosting top universities in the ARWU league are also present in the QS league – a first (and strong) hint that the two rankings yield close results. A subset of 37 countries appears in both leagues.
In order to compare countries, we cannot be satisfied with the sheer number of top universities – this number needs to be weighted to control for country size. One possibility would be to use each country's total population, but this is not fully satisfactory because it ignores differences in age structure across countries. Instead, we use the number of people of tertiary age as a weight. The ratio of the number of top-500 universities to the tertiary-age population gives us what could be labelled the 'density of top universities'.
Density gives an idea of the number of top universities available per one million people of tertiary age. Using the ARWU data for illustration, it is clear that the number of top universities and their density follow almost opposite tracks: the countries with the largest numbers of top universities are generally not those with the highest density. Of the 25 countries with the highest density of top universities, 23 are found in both the QS and ARWU rankings – an observation that confirms the first hint mentioned above regarding the convergence of the two leagues.
Secondly, the rankings of countries by density of top universities are very closely correlated for QS and ARWU. With the exception of Ireland, which is number one in the QS league and only number 13 in the ARWU league, most countries have a similar position in the two rankings, and the values of the density ratios in the two leagues are also very close for each individual country. Hence, despite their different approaches, the two leagues yield highly comparable results.
Moreover, all but one of the 37 countries (India) are either high-income or upper-middle-income. This observation substantiates the assertion that the UIRs' methodology puts a premium on well-resourced universities. On the other hand, even within this group of fewer than 40 countries that harbour the world's top universities, there is a huge gap between those leading the flock and those lagging behind: in Finland, two world-class universities serve 100,000 tertiary-age people, while in India two world-class universities cater to 100 million potential clients.
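To make the arithmetic behind this gap concrete, the sketch below computes the density ratio from the Finland and India figures cited above; the function name and the round numbers are illustrative only, not part of either ranking's methodology.

```python
# Minimal sketch of the 'density of top universities' ratio described above.
# The round figures are the illustrative Finland and India examples from the text.

def density_per_million(top_universities, tertiary_age_population):
    """Number of top-500 universities per one million people of tertiary age."""
    return top_universities / (tertiary_age_population / 1_000_000)

print(density_per_million(2, 100_000))      # Finland example: 20.0 per million
print(density_per_million(2, 100_000_000))  # India example: 0.02 per million
```

The two illustrative figures differ by three orders of magnitude, which is the gap the article points to.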
Indeed, there is a strong, positive correlation between the density of top universities and gross domestic product per capita. Despite significant differences in the way they are constructed, the QS and ARWU rankings share some common features, in particular the size of the universe they cover (the top 500 universities) and their reliance on a range of indicators spanning several areas of academic life. This is not the case for a relative newcomer to the field, the Webometrics ranking, which considers a universe of more than 12,000 institutions worldwide and relies on measures of institutions' visibility on the internet.
Given such a disparity in methodology, one would expect widely different rankings. Surprisingly, a comparison of the top 500 universities in the three leagues shows strikingly similar results, and the correlations between the three rankings are significant and positive. There are therefore strong indications that three of the major and most popular UIRs converge, both in terms of the set of countries hosting 'world-class' universities and in terms of the rankings of countries within this set.
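The article reports these correlations without specifying the measure used; purely as an illustration, a rank correlation of this kind could be computed as follows, with hypothetical ranks standing in for actual league positions.

```python
# Illustrative check of how closely two country rankings agree, using a Spearman
# rank correlation. The ranks below are hypothetical placeholders.
from scipy.stats import spearmanr

ranks_league_a = [1, 2, 3, 4, 5, 6]  # e.g. density-based ranks in one league
ranks_league_b = [2, 1, 3, 4, 6, 5]  # ranks of the same countries in another league

rho, p_value = spearmanr(ranks_league_a, ranks_league_b)
print(f"Spearman rho = {rho:.2f}, p-value = {p_value:.3f}")
```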
International system benchmarking
The purpose and focus of ISBs are quite distinct from those of UIRs, as mentioned above. The former target country systems and aim to assess their performance against set criteria, while the latter focus on individual institutions. Although the need for ISB instruments was identified long ago, few practical attempts have been made to implement them, largely because of statistical challenges.
Following the policy brief prepared for the Lisbon Council covering 17 countries, the work undertaken by the OECD, and the World Bank’s benchmarking of universities in the Middle East and North Africa, the first genuinely comprehensive ISB – the U21 Ranking of National Higher Education Systems – was developed by the Melbourne Institute in 2012. U21 is based on four sets of indicators: resources, environment, connectivity and output. Five straightforward indicators, linked to the financial resources allocated to tertiary education, are used to assess the performance in the first area (resources).
The main novelty of U21 lies in its use of indicators designed to characterise the environment, particularly the subset related to the 'qualitative measure of the policy and regulatory environment'. These represent significant progress because they respond to the widespread view that governance issues are a major constraint on the development and improvement of tertiary education systems. Connectivity, the third area considered by U21, is measured by two highly relevant indicators: (i) the proportion of international students in tertiary education, and (ii) the proportion of articles co-authored with international collaborators. Output, the fourth area, is measured by a basket of nine indicators spanning a whole range of criteria, from research products to enrolment rates and graduate unemployment rates. The latter indicator responds to the growing concern about the employability of the graduates produced by tertiary education systems.
Rankings are provided separately for each of the four areas mentioned above. Finally, an overall composite indicator is constructed by combining the four sets of indicators.
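As an illustration of how such a composite indicator can be assembled, the sketch below combines four area scores into a weighted average. The equal weights and the scores are assumptions made for the example; the article does not state the weighting scheme actually used by U21.

```python
# Illustrative sketch of a composite indicator built from the four U21 areas.
# The equal weights and the area scores are assumptions for illustration only.

AREA_WEIGHTS = {"resources": 0.25, "environment": 0.25,
                "connectivity": 0.25, "output": 0.25}

def composite_score(area_scores):
    """Weighted average of normalised (0-100) area scores."""
    return sum(AREA_WEIGHTS[area] * score for area, score in area_scores.items())

print(composite_score({"resources": 80, "environment": 70,
                       "connectivity": 60, "output": 90}))  # -> 75.0
```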
Comparing UIRs and ISBs
Comparing the outcomes of the UIR and ISB instruments is made possible by the fact that we have translated the results of the university-based UIR indicators into country-wide terms, making them analogous to the indicators of the ISB. The comparison is presented here in two steps: (i) how do the sets of countries compare, regardless of individual rankings? and (ii) how do the rankings themselves compare?
While the countries covered by the three UIRs (QS, ARWU and Webometrics) emerge from the university rankings themselves, those considered by U21 are a deliberate choice, based on a predetermined criterion.
U21 selected a set of 48 countries, using data from the National Science Foundation (NSF) ranking of research output. It is therefore no surprise to find a strong overlap between these 48 U21 countries and the 39 and 50 countries hosting top universities according to ARWU and QS respectively, or indeed with the 37 countries found in both UIRs.
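The overlap itself is a simple set comparison; the sketch below illustrates the idea with hypothetical country labels standing in for the actual QS, ARWU and U21 lists.

```python
# Illustrative set comparison of country lists; the labels are hypothetical
# placeholders, not the actual QS, ARWU or U21 country sets.
qs = {"A", "B", "C", "D", "E"}
arwu = {"A", "B", "C", "D"}
u21 = {"A", "B", "C", "F"}

in_both_uirs = qs & arwu                 # countries present in both UIR leagues
overlap_with_u21 = in_both_uirs & u21    # ...and also covered by the ISB
u21_only = u21 - (qs | arwu)             # ISB countries with no UIR presence

print(sorted(in_both_uirs), sorted(overlap_with_u21), sorted(u21_only))
```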
The main differences between the group of UIR countries (especially the more inclusive QS list) and the U21 group are: (i) the lesser representation of developing countries in the U21 list, and (ii) the stronger presence of Eastern European countries in the U21 list. These differences aside, there is strong convergence between the UIRs and U21. However, the decisive test is not the aggregate number of countries represented in both lists, but the countries' ranks. The correlations in rankings are highly significant, which shows that the two kinds of instrument yield similar results.
However, differences are also to be noted, especially in the positions of the countries ranked near the top – while Finland is ranked in the top four in all three leagues, Ireland is ranked first by QS but lags at 13 and 16 in the ARWU and U21 lists, respectively. Even more strikingly, while the United States leads the pack in the U21 list, it is relegated to ranks 17 and 22 in the ARWU and QS leagues, respectively. Still, the rankings are quite stable for most other countries, and the superposition of the three lists shows remarkable homogeneity.
QS and ARWU produce very close results, which are also confirmed by the Webometrics league, despite the differences in methodology among these three UIRs. In all three rankings, the density of top-500 ('world-class') universities is closely related to the wealth of countries. Comparing these results with those obtained by the U21 ranking – the first comprehensive ISB – yields strikingly similar findings, even though the focus and objectives of the ISB are clearly different from those of the UIRs. Hosting world-class universities appears to be associated with a country's position in system-wide rankings. Both kinds of instruments analysed in this note suggest that being a rich country helps both to boost the supply of high-quality universities and to maintain a well-performing system of tertiary education.
Part of the explanation for this finding lies in a bias common to the two instruments – an overemphasis on research and on well-resourced systems. Nevertheless, these results reflect both the choices made by universities themselves and by tertiary education decision-makers at the national level, and the fact that money can buy quality. From a methodological point of view, it can be concluded that the empirical implementation of the concepts that radically differentiate the two instruments ends up – so far – with very similar outcomes.
Undoubtedly, as data availability increases, both rankings and benchmarking will improve, and their respective outcomes will become more and more complementary.
* Benoît Millot is a former lead education economist with the World Bank and is currently a consultant with the same institution. This article does not represent the views of the World Bank and is the sole responsibility of the author.