Richard Holmes

It is more than eight years since Shanghai Jiao Tong University produced its first Academic Ranking of World Universities. Since then international university rankings have multiplied. There are now two main competitors producing general rankings that include indicators other than research, Quacquarelli Symonds (QS) and Times Higher Education.
There are also web-based rankings, Webometrics and 4ICU, and research-based rankings from Taiwan, Turkey and Australia, the last of which seems to have disappeared. Then we have rankings from Russia and France. Nor should we forget the European U-Multirank project, which has just moved out of the pilot stage, or regional rankings for Asia and Latin America or the various disciplinary sub-rankings or the rankings of business schools. There are now quite a few things that we have learned about ranking universities.
Measuring research is the easy bit
There are several ways of measuring research. You can count total publications, publications per faculty member, total citations, citations per faculty member, citations per paper, the h-index, international collaboration, money spent or reputation. All of these can be normalised in several different ways.
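The effect of normalisation is easy to demonstrate. The sketch below uses invented figures for two hypothetical universities: the larger one wins on raw totals, the smaller one wins once output is divided by faculty size.

```python
# Invented figures, for illustration only: how normalisation changes a ranking.
data = {
    "Big U":   {"publications": 12000, "faculty": 4000},  # 3 papers per faculty member
    "Small U": {"publications": 4000,  "faculty": 800},   # 5 papers per faculty member
}

# Rank by raw total publications: the big institution comes out on top.
by_total = max(data, key=lambda u: data[u]["publications"])

# Rank by publications per faculty member: the small institution wins.
by_per_faculty = max(data, key=lambda u: data[u]["publications"] / data[u]["faculty"])

print(by_total)        # Big U
print(by_per_faculty)  # Small U
```

The same raw counts, normalised differently, crown different champions, which is one reason the various research rankings disagree.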
The result is that ranking is beginning to look like heavyweight boxing with no undisputed champion in sight. Cambridge is top of the QS rankings mainly because it has a good reputation for research, Harvard is first in the Shanghai rankings because it produces more of just about everything and Caltech leads in the new Times Higher Education World University Rankings because of an emphasis on quality rather than quantity.
Nobody has figured out how to measure teaching
QS has an indicator that measures student-faculty ratio but this is, as they admit, a very crude instrument. For one thing, it includes academics who only do research and may never see the inside of a lecture hall. Times Higher Education has a cluster of indicators concerning teaching, but they only claim that these have something to do with the learning environment.
If anyone does seriously try to measure teaching quality, the best bet might be some sort of survey of student satisfaction, which has apparently been done successfully by the U-Multirank pilot project and could perhaps go global.
In any case, for better students and better schools, teaching is largely irrelevant. Recruiters do not head for Harvard, Oxford and the grandes écoles because they have heard about the enthusiasm with which lecturers jump through outcomes-based education hoops. They go there because that is where the smart people are and smart people are smart before they go to university.
Getting there first is important
The Academic Ranking of World Universities published by Shanghai Jiao Tong University is not noticeably better than the Performance Ranking of Scientific Papers for World Universities produced by the Higher Education Evaluation and Accreditation Council of Taiwan. But it still gets a great deal more publicity. A very good research-based ranking has been produced by the Middle East Technical University in Ankara, but hardly anybody knows about it: the niche has already been occupied.
Brand names matter
If anyone but a magazine with the word ‘Times’ in its name and an association with Thomson Reuters had produced a ranking with Alexandria University in the top 200 in the world, or for that matter even put it first in Egypt, they would have been laughed out of existence. The QS rankings have flourished partly because they are linked to a successful graduate recruitment enterprise.
Beware of methodology
The QS rankings are well known for a fistful of methodological changes that have sent universities zooming up and down the tables. Although the methodology has officially stabilised, there have still been unannounced changes. In 2010, something happened to the curve for citations per faculty (a mathematician could explain exactly what) that boosted the scores for high fliers except, of course, for the universities in joint first place, but lowered those for the less favoured ones. One result of this was a boost for Cambridge, no doubt to everyone’s astonishment. Between 2010 and 2011, Times Higher Education made so many changes that talking about improvements over the year was quite pointless.
Weighting is not everything
Weighting is very important, though. It is increasingly common for rankings to have an interactive feature that allows readers to change the weightings and, in effect, to construct their own rankings. It is instructive to fiddle around with the indicators and see just how much difference changing the weighting can make.
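The kind of fiddling described above can be sketched in a few lines. The universities, indicators and scores below are invented; the point is only that swapping the weights on two indicators is enough to reverse the finishing order.

```python
# Invented scores for two hypothetical universities on two indicators.
scores = {
    "Univ A": {"reputation": 95, "citations": 70},
    "Univ B": {"reputation": 80, "citations": 90},
}

def overall(weights):
    """Return universities sorted best-first by weighted sum of indicator scores."""
    return sorted(
        scores,
        key=lambda u: -sum(w * scores[u][k] for k, w in weights.items()),
    )

# Reputation-heavy weighting puts Univ A on top...
print(overall({"reputation": 0.7, "citations": 0.3}))  # ['Univ A', 'Univ B']

# ...while a citations-heavy weighting puts Univ B first.
print(overall({"reputation": 0.3, "citations": 0.7}))  # ['Univ B', 'Univ A']
```

This is exactly what the interactive tables let readers do, and it shows why no single published weighting deserves to be treated as definitive.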
The missing indicator
In the final analysis, the quality of a university is largely dependent on the average intelligence of its students, which is why the most keenly scrutinised section of US News’ Best Colleges is the ACT-SAT scores. International rankings have barely begun to tackle this question. I doubt if anyone is very interested in the score on QS’s employer survey or even the Paris Mines ranking, which counts the number of top bosses among graduates. It would probably be quite technically feasible to work out the relative selectivity of universities, but there are likely to be insurmountable political problems.
What next?
There will surely be more international rankings of one sort or another. It is unlikely, though, that any will ever achieve the dominant role that US News has achieved. We can expect more sophistication with increasingly complex statistical analysis, more regional rankings and more disciplinary rankings, perhaps also more silly rankings like a global version of American Best Universities for Squirrels.
But it is unlikely that there will ever be agreement on what makes a good or a great university.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch.