Sponsored by the European Commission, the prototype was developed by a consortium led by the Center for Higher Education Policy Studies at the University of Twente and Germany’s Center for Higher Education Development. It uses metrics in five areas – teaching and learning, research, knowledge transfer, international orientation, and regional engagement – to allow users, whether students, universities, or employers, to use their own weightings to rank universities. This build-your-own approach was born of frustration with the poor showing of countries on the European continent in established global rankings.
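The build-your-own mechanism described above amounts to a user-weighted scoring model. The sketch below is a minimal illustration, not U-Multirank's actual methodology: the five dimension names come from the article, but the institutions, scores, and weights are invented for the example.

```python
# Hypothetical 0-100 scores on the five U-Multirank dimensions:
# teaching, research, knowledge transfer, international orientation,
# regional engagement. All numbers are invented for illustration.
universities = {
    "University A": [80, 95, 60, 70, 50],
    "University B": [90, 70, 75, 60, 85],
    "University C": [65, 85, 80, 90, 55],
}

def rank(weights):
    """Rank institutions by a user-chosen weighted average of dimension scores."""
    total = sum(weights)
    scores = {name: sum(w * s for w, s in zip(weights, vals)) / total
              for name, vals in universities.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A research-focused user and an employer who values regional engagement
# get different orderings from the same underlying data:
print(rank([1, 5, 1, 1, 1]))  # research-heavy weighting
print(rank([2, 1, 1, 1, 5]))  # regional-engagement-heavy weighting
```

With the invented numbers above, the research-heavy weighting puts University A first, while the regional weighting promotes University B, which is the essence of the "direct democracy" approach the article describes: the same data yields different rankings for different users.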
To give this new system its due, it has the great advantage of being consumer-oriented, focusing on a range of measures that are certainly plausibly related to university excellence. Given longstanding complaints about existing rankings, which certainly have many flaws, there is something appealing about U-Multirank’s approach. It accepts the inevitability of comparisons between institutions, while creating a transparent, multifaceted, user-controlled model that could be called the rankings version of direct democracy.
And yet. Rankings foes often argue that judging different universities is inherently problematic because so many unique factors distinguish them. But most academics in, say, the physics department of a major research university can instantly recite a list of the top departments in the nation — or the world. University presidents, too, have a keen sense of where they stand in the pecking order of institutions with which they compete. So while an employer might want to rank universities based on their graduation rates — an important measure, to be sure — shouldn’t some measure of overall academic excellence also play a role? Similarly, a university might legitimately boast of its wonderful record of regional engagement – yet it might be quite undistinguished in a broader sense.
Here’s another way to look at this. The London Symphony Orchestra is surely one of the best in the world – ranked number four by Gramophone magazine. Yet a critic opposed to the hopeless subjectivity of such assessments might propose an alternative methodology. Under this approach, consumers could be asked to rank the orchestra’s quality based on their views of the relative importance of numerous factors: the prowess of its percussion section, the comfort of its concert-hall seats, the affordability of its tickets, and so forth. But would any of those granular characteristics, while certainly worth knowing about, really tell the broader world something meaningful about its excellence?
I don’t mean to condemn the U-Multirank experiment, which I believe has promise – particularly if its creators can find a way over time to add some kind of AHELO-like measure of student-learning outcomes. A proliferation of thoughtfully constructed ranking systems is healthy, it seems to me. But a radically democratic approach to university assessment comes at a price. Consumers may not be the best judges of overall university quality. That’s why the application of thoughtful judgment by rankers has value, particularly as rankings gradually become more sophisticated and robust. If homemade rankings become the preferred method of university assessment, useful and comparable information about overall institutional quality risks being lost in a childish world of prizes for all.
See also on our blog: Classement mondial des universités: La Commission européenne confie le projet au consortium Cherpa. Following a call for tenders launched in November 2008 by the European Commission, a consortium of seven European entities was entrusted on June 2, 2009 with developing and testing the feasibility of a multidimensional ranking of universities on a global scale.