Formation Continue du Supérieur
16 October 2011

Despite ranking changes, questions persist

By Richard Holmes*. The international university ranking scene is starting to look like the heavyweight boxing division. Titles are proliferating and there is no longer an undisputed champion of the world. Caltech has just been crowned top university by Times Higher Education and Thomson Reuters, their data collectors and analysts, while QS have put Cambridge in first place. Over at Webometrics, MIT holds the number one spot. But Harvard has the consolation of remaining the top university in the Scimago and HEEACT rankings as well as the Academic Ranking of World Universities, ARWU, published by Shanghai Jiao Tong University.
Does this discredit the entire idea of rankings? Not necessarily. We all have different ideas of what a university is about and there is no reason why university rankings should be unanimous about what makes a great or even a good university. The Shanghai rankers are concerned with the natural sciences, with research and with distinctions among the world's research elite. ARWU is driven ultimately by the need to emulate the West and beat it at its own game. It also measures output rather than input. The QS rankings emphasise reputation rather than bibliometrics and are unique in including an assessment of graduate quality by employers.
The Times Higher Education (THE) World University Rankings 2011-12 are now unashamedly focused on the world's elite and have little to say about teaching quality. But unlike the Shanghai rankings, they do make an attempt to measure expertise in the arts and humanities and to give due weight to the social sciences. Last year's THE rankings were greeted with astonishment when they showed mediocre universities getting unbelievably high scores, in many cases mainly because of an apparently outstanding performance for research impact. Alexandria University was the most obvious case, but it was not the only one.
Thomson Reuters have gone to considerable lengths to ensure that similar anomalies did not occur this year. In addition, they have tweaked the relative weighting given to the various indicators and introduced several methodological refinements. One of these is extending normalisation by field to yet more indicators, not just citations. Dozens of universities have signed up for the first time, the University of Texas at Austin and the Hebrew University of Jerusalem being the best known. On top of all this, there is the unpredictable effect of exchange rate fluctuations on those indicators that involve university income.
The result of all this is that the 2011 rankings are very different from those of 2010. Any attempt to compare performance over the two rankings is pointless. That includes rather strained attempts to claim that Irish universities are collapsing because academics are voting them down in the reputational surveys in response to budget cuts. There are so many changes that it is extremely difficult to determine exactly what contributed to the rise or fall of any particular university. It is, however, noticeable that changes in the research and teaching indicators often go in the same direction, suggesting that fluctuations in the academic survey, which features in both sets of indicators, may have been at least partly responsible. These fluctuations might be a by-product of the introduction of a logarithm in calculating the scores for the teaching and research surveys.
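To see why a logarithm could magnify movements lower down the table, consider a minimal sketch in which raw survey counts are rescaled either linearly or logarithmically. The vote counts and the scaling formula below are invented purely for illustration; they are not THE's or Thomson Reuters' actual method.

```python
import math

# Hypothetical raw reputation-survey "votes" for four universities.
# These numbers and the scaling below are illustrative only, not the
# actual THE / Thomson Reuters methodology.
raw_votes = {"A": 1000, "B": 100, "C": 11, "D": 10}

top = max(raw_votes.values())

# Linear rescaling to a 0-100 scale.
linear = {u: 100 * v / top for u, v in raw_votes.items()}

# Logarithmic rescaling to the same 0-100 scale.
log_scaled = {u: 100 * math.log(v + 1) / math.log(top + 1)
              for u, v in raw_votes.items()}

for u in raw_votes:
    print(u, round(linear[u], 1), round(log_scaled[u], 1))

# Under the linear scale, the one-vote gap between C and D is worth 0.1
# points; under the log scale it is worth about 1.3 points. Small
# fluctuations in raw survey responses therefore move scores, and ranks,
# much more near the bottom of the distribution.
```

If THE's new survey calculation compresses scores in anything like this way, modest year-on-year changes in survey responses could translate into the conspicuous rises and falls seen in the table.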
The rankings that came out on 6 October did not have the obvious absurdities of last year. Alexandria, Bilkent and Hong Kong Baptist University are way down, although they probably did not go down far enough. They still have improbably high scores for the citations indicator, but perhaps not outrageously so. There have been some remarkable changes since last year, however. Some universities have, despite the presence of new competitors, risen dramatically. Many of them are in Europe, although there are also a few American state universities, such as Penn State, the University of Minnesota and the University of California at Davis.
Dutch universities seem to have done particularly well and Irish ones badly, along with two of the French grandes écoles. This could lead to public criticism of the sort that undermined the rankings produced by THE and QS until 2009. Once we venture outside the top 100 or so there are quite a few oddities that will be regarded as suspicious by those familiar with local higher education.
Alexandria is in 330th place, but not Cairo University. Bogazici University in Istanbul is there and so is Istanbul Technical University, but where is the University of Istanbul? Sharif University of Technology in Tehran is in 346th place, but what about the University of Tehran? The Indian Institute of Technology (IIT) Bombay is 302nd but none of the other IITs or the Indian institutes of management or science can be found. Thailand's Mahidol is in the rankings, but not Chulalongkorn.
I expect many observers will be baffled by the appearance of the National Taiwan Ocean University, Plymouth University, the National University of Ireland, Maynooth, the University of Crete, the University of Iceland, Georgia Health Sciences University and the University of Medicine and Dentistry of New Jersey among the world's best 400 universities. Creighton University, Nebraska, in 247th place, is a worthy institution. A Jesuit-run school, it offers great value for money, according to US News, and is among the top master's colleges in the US. But master's colleges offer few or no doctoral programmes, so one wonders how it could do so well in a ranking that is supposedly concerned with evaluating the world's elite research universities.
Last year, THE claimed that only vested interests that had suffered from the new ranking methodology had any reason to complain. It looks as though there will be more complaints this year from another set of institutions. These rankings continue to emphasise citations and to measure them by only one indicator, which accounts for 30% of the total weighting, down a bit from last year. Research impact is assessed only by the number of citations per paper normalised by year and by field.
In principle, this seems fair enough. A few citations would be much more of an achievement in applied maths or philosophy than in medicine where even routine papers are cited frequently and often within months of publication. It seems only fair that academic authors should be assessed against the standards prevailing in their discipline. But we can ask whether all disciplines are equal. Does education really make the same cognitive demands as physics? Has sociolinguistics been of as much benefit to society as oncology or engineering? The implication of THE's choice of method is that the answer is in the affirmative, but not everyone will agree.
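As a rough illustration of the principle, the sketch below averages citations per paper against a field-and-year baseline. The baseline figures and papers are invented, and the calculation stands in for, rather than reproduces, the normalisation Thomson Reuters actually performs.

```python
# Illustrative sketch of field- and year-normalised citation impact.
# The baselines and papers are invented; this is not Thomson Reuters'
# actual data or formula.

# Hypothetical world-average citations for a paper in a given field and year.
BASELINES = {
    ("medicine", 2009): 12.0,
    ("philosophy", 2009): 1.5,
}

# A university's papers: (field, publication year, citations received).
papers = [
    ("medicine", 2009, 24),   # twice the average for its field and year
    ("philosophy", 2009, 3),  # also twice the average, with far fewer citations
]

def normalised_impact(papers, baselines):
    """Mean of each paper's citations divided by its field/year baseline."""
    ratios = [cites / baselines[(field, year)] for field, year, cites in papers]
    return sum(ratios) / len(ratios)

print(normalised_impact(papers, BASELINES))  # 2.0 - both papers count equally
```

On a measure of this kind, three citations in philosophy weigh exactly as much as 24 in medicine, which is precisely the equivalence between disciplines that can be questioned.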
Another problem with normalisation is that the finer the distinctions that are made, the smaller the absolute numbers involved and the greater the probability that small fluctuations in data can have disproportionate effects. It appears that THE has overcome this problem to some extent with a few methodological fixes, but this does not solve it entirely. THE and Thomson Reuters have some hard decisions in front of them. To make further methodological changes accompanied by huge falls and rises could discredit the rankings as much as such changes sullied the public perception of the THES-QS rankings in their early days.
But even if they keep exactly the same methodology, there is still potential for further instability: if universities direct their research funding and publication efforts to the fields that yield greater benefits in the rankings, if currency fluctuations lead to wild swings in the income-based indicators, or if streamlining or mergers in response to financial problems affect the indicators scaled by numbers of staff.
Above all, THE and Thomson Reuters will have to deal with continued scepticism about the weighting given to citations, their refusal to consider alternative ways of assessing research impact and the potential for gaming this indicator.
* Richard Holmes teaches at Universiti Teknologi in Malaysia, and is the author of the blog University Ranking Watch.