Formation Continue du Supérieur
16 October 2011

Why do we bother so much with rankings?

By Mari Elken in Higher Education News. A couple of days ago, the new Times Higher Education ranking was published. The yearly launches of the various rankings appear to have become newsworthy events that always attract a great deal of attention, with the Times ranking calling itself "the global authority on higher education performance". Indeed. Why do we follow them so much, and on that note, why am I writing this to start with?
In national contexts, each launch of the various rankings is followed by dissatisfied gasps from those who have lost some of their position, and the contentment of those who have survived yet another round or even improved their position (and the joy of those finally making it onto the distinguished list).
While there seems to be widespread agreement that the popular rankings are not an adequate measure of overall university quality, everyone still appears to check the rankings once they are published. Almost like bad television that no one admits to watching, yet which everyone seems to know the content of and which keeps pulling in surprisingly high viewer ratings. So if they are so flawed, why do we bother so much with rankings?
Of course, we can turn this into an argument about global competition, the need for more transparency and information, and accountability. But given the small number of universities actually covered by most of the top rankings, the various methodological issues reported in research, and the skewedness of measurement towards the available indicators, what do they really tell us about higher education as such?
Thus far, one of the consequences seems to have been a general understanding that the US higher education system is leading the world. Indeed, a large share of the top universities in most rankings come from the US. The first basic question, of course, is whether a few elite institutions represent a whole system. But even in terms of whole systems, the newest Times ranking in fact also did a "value for money" analysis, in which the UK and Switzerland came out as the leaders, with the US in 16th place. One might almost wonder why they only now came up with this "value for money" analysis, given that it has been such a focus in all other debates.
There seems to be quite widespread agreement in the research and academic community that rankings only show certain aspects of quality, depending on methodological choices and, in most cases, the availability of indicators (which obviously cannot measure everything). This does not have to be a problem as such, and even has value, as long as one is clear that this is all the rankings really do. However, the way they are promoted makes this far from clear. The promotional video for the new Times ranking explicitly argues that it can in fact measure "university greatness". With a jolly melody and a light, popular-media-inspired presentation, it will surely appeal to plenty of people. Want to know the ten greatest institutions? Simple.
Well, we argue that the rankings do not measure it all. So what do I do? I check my own institution's position as soon as a new ranking is published. In part because I know it can have consequences for certain decisions further on, in part out of sheer curiosity. There is an intrinsic appeal in seeing things neatly categorised, in some sort of hierarchy, even when we know that this can never represent the messy reality and that rankings can at best show a small slice of it.
One should not underestimate the symbolic value of these rankings in practice. With the multitude of rankings around at this point, surely everyone can find one in which their particular institution does well. What about rankings guiding policy decisions (they have arguably led to mergers in some countries, for example)? While care has to be taken in assuming causal links without solid empirical evidence, if rankings, with all their faults and flaws, are even to some degree informing policy decisions about higher education as a whole, there might be a problem.
What is the solution: better rankings? Perhaps. It has now been steadily argued that rankings are here to stay, and various suggestions for improvement have emerged. While it is difficult to argue with the claim that they are indeed here to stay (there are too many vested interests in keeping this wagon going), the question is nevertheless what we should use the rankings for, and how. It thus becomes extremely important to be clear about the specific purpose of each ranking and its use in practice. To what extent should they guide decisions on institutional strategy? On national policy?
If they are used semi-consciously as indicators of overall success, without reflection on their actual content and implications, universities may simply end up well trained in jumping through the hoops of specific performance indicators, while the real focus on quality and improved performance (and what sort of performance are we talking about anyway?) gets forgotten along the way. Yes, I know I am not saying anything scandalously new, but we have to keep repeating it again and again, just in case.