Formation Continue du Supérieur
17 June 2012

U-Multirank – The better approach for worldwide ranking?

This guest entry was written by Frank Ziegele, director of CHE (Centre for Higher Education), a German think tank for the higher education sector, and professor for higher education management at the University of Applied Sciences Osnabrueck. He was one of the leaders of the feasibility study project on “U-Multirank”. In this post, he gives his insights about the project and a recent book on the topic, edited by Frank Ziegele and Frans van Vught.
The impact of rankings – and their risks
Recently, India’s University Grants Commission announced that foreign universities entering into agreements with their Indian counterparts to offer twinning programmes will have to be among the global top 500 in the Times Higher Education or Shanghai rankings. The Commission says that the underlying objective is to ensure that only quality institutions are permitted to offer twinning programmes, in order to protect the interests of students.
The assessment of quality based on research league tables – could this be too much impact?
This is further proof that global rankings are gaining influence, step by step, far beyond their original purpose. It is also proof of how dangerous this development is: can rankings really protect students’ interests when they say almost nothing about teaching and provide insufficient information at the field level? Keep in mind that there is no university in the world where all fields perform equally well or badly; students need to know what to expect in the field they want to study. And I have not even mentioned the methodological problems of these rankings yet: field and language biases of bibliometric databases, measurement of reputation instead of performance, and outcomes sensitive to arbitrary weightings of indicators, to name just a few.
U-Multirank as an alternative, elaborate ranking system
Considering all these problems, I am deeply convinced that we need an alternative to the existing global rankings, and I am glad that I had the chance to be part of a challenging project to develop one: the feasibility study for a new ranking system called “U-Multirank”, initiated by the European Commission. A new book is out now giving an overview of the results of this study: Van Vught, Frans & Ziegele, Frank (Eds.) (2012): Multidimensional Ranking: The Design and Development of U-Multirank. Dordrecht: Springer.
The book presents an analysis of existing rankings and their effects, the design principles, methods and instruments for a new ranking, the proposed set of indicators, and the lessons learnt from a pilot test with 150 universities around the world. It also develops initial ideas for the long-term implementation of a new and unique ranking system.
The unique features of U-Multirank
Why is U-Multirank different from “orthodox” rankings? Let me briefly explain how the system is meant to function. The core of the ranking is an interactive web tool, making it a user-driven instrument. The idea is a “democratisation” of rankings: users can adapt the ranking to their (very) own preferences. The user first gets an overview of the features of higher education institutions, shown by a number of descriptive indicators (such as degrees awarded, the shares of bachelor’s and master’s students, the share of part-time or mature students, expenditure on research, bachelor’s graduates from the region, income structures, and international students). This “mapping” identifies a set of comparable institutions; it distinguishes apples from oranges.
Next, the ranking is carried out within the apples and within the oranges, again based on the user’s own selection of performance indicators. The mapping deals with horizontal diversity, and the ranking shows vertical differences. The ranking indicators cover five main dimensions: teaching and learning, research, knowledge transfer, regional engagement and international orientation. Each indicator is shown separately, so performance on each indicator becomes transparent in a multi-dimensional approach. There is neither theoretical nor empirical justification for assigning specific weights to individual indicators and aggregating them into composite indicators.
The data are available at both the institutional and the field level; this multi-level approach offers stakeholders access to the level of information they are interested in. And last but not least, U-Multirank does not produce a league table, because positions in such a table exaggerate differences. There is the story of the university that dropped 15 places in the Shanghai ranking just because its Nobel Prize winner became older. Therefore, we distinguish three to five groups: within the groups the differences are small, but between the groups we find significant differences in performance levels.
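To make this “map first, then rank into groups” idea more concrete, here is a minimal sketch in Python. It is not U-Multirank’s actual code or methodology; the institutions, indicators and grouping thresholds are invented purely for illustration. The sketch first filters institutions into a comparable set (the mapping step), then assigns them to performance groups on one user-chosen indicator instead of computing a weighted composite score.

from statistics import mean, stdev

# Hypothetical data: each institution has a profile (for the mapping step)
# and separate scores on several indicators (for the ranking step).
institutions = [
    {"name": "Uni A", "profile": "research-intensive",
     "scores": {"teaching": 0.72, "research": 0.91, "knowledge_transfer": 0.60}},
    {"name": "Uni B", "profile": "research-intensive",
     "scores": {"teaching": 0.80, "research": 0.55, "knowledge_transfer": 0.75}},
    {"name": "Uni C", "profile": "regional",
     "scores": {"teaching": 0.85, "research": 0.30, "knowledge_transfer": 0.90}},
]

def comparable(insts, profile):
    """Mapping step: keep only institutions of the same type (apples with apples)."""
    return [i for i in insts if i["profile"] == profile]

def group(insts, indicator):
    """Ranking step: assign top/middle/bottom groups for one indicator,
    instead of aggregating all indicators into a single weighted score."""
    values = [i["scores"][indicator] for i in insts]
    hi = mean(values) + 0.5 * stdev(values)
    lo = mean(values) - 0.5 * stdev(values)
    return {
        i["name"]: "top" if i["scores"][indicator] >= hi
        else "bottom" if i["scores"][indicator] <= lo
        else "middle"
        for i in insts
    }

# A user interested in research-intensive institutions and the "research" indicator:
peers = comparable(institutions, "research-intensive")
print(group(peers, "research"))   # e.g. {'Uni A': 'top', 'Uni B': 'bottom'}

The point of the sketch is the design choice: the user picks the peer group and the indicator, and the output is a set of broad groups per indicator rather than a single ordered league table.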
A challenging project
Of course, this idea is controversial. In debates following the publication of the results of the feasibility study, I often heard objections such as “this is a nice information system but not a ranking”, “people still want to know who is number one”, “universities will refuse to collect all the data you need”, “this is too complex for lay users” and “this system is too soft because everyone performs well somewhere, so every university will claim to be the best”.
We can deal with these concerns on the basis of empirical evidence, because a ranking system with similar features already exists: the CHE Ranking for German, Dutch, Austrian and Swiss higher education institutions, which follows exactly this approach of field-based, multi-dimensional and group-oriented ranking. In Germany, 95% of the faculties take part in the ranking, people perceive it as a real ranking, and most of the orthodox league tables have disappeared from the German market. The associated web tool is used by more than 200,000 people per month, with up to 17 million clicks per year. U-Multirank-type systems are accepted and used, including by lay users!
Indeed, the burden of data collection for the universities is high, but they are becoming more and more professional at it (and elaborate methods of data verification, which have also been tested for U-Multirank, deal with the danger of manipulation). The system still reveals poor performance clearly, so it is not a soft one. Of course universities will pick their good results for advertising, but this is beneficial for the public perception of the higher education sector as a whole. So I would argue against all the objections mentioned: U-Multirank is a challenging project, but it is worthwhile and will definitely change the scene of worldwide rankings (we can already see classic rankings adapting to our ideas, for instance by introducing field-based information). It has the potential to promote diversity in the missions and profiles of universities, and for this reason it helps to establish an understanding of multiple excellence instead of supporting the mere reputation race for the world-class research university.
Implementation phase to start soon
The three major challenges I see are long-term funding of the system, worldwide comparability of some of the data, and the openness of some regions to our model (especially China and the US). It will be the task of the implementation phase to cope with these issues. The European Commission has issued a call for tenders for this implementation and plans to start by September. Make up your mind about multi-dimensional, user-driven ranking by looking at the U-Multirank website (www.u-multirank.eu) and the aforementioned Springer book.
