The project is funded by the Commission and has been a high-profile instrument in the context of the European Higher Education Area. The Commissioner for Education, Culture, Multilingualism and Youth, Androulla Vassiliou, argued that U-Multirank is “a modern and sophisticated ranking, capturing the full diversity of higher education”. Of course, the Commission is known for its big words, and “the full diversity” nevertheless has to be understood within the limits of measurable indicators. The general feedback has been somewhat more cautious, if optimistic. For instance, while the European Students’ Union (ESU) has been involved in the development of U-Multirank, its recent press release on the occasion of the kick-off was still somewhat modest and expressed not assurance but hope that U-Multirank can avoid the pitfalls of previous rankings. Nevertheless, the potential of such an instrument to provide information to students was highlighted.
Furthermore, earlier last year the Times Higher Education reported that in the UK a House of Lords committee had raised a number of concerns about the instrument and whether it can deliver on its promise. The committee was “not convinced that it would add value if it simply resulted in an additional European rankings system alongside the existing international ranking systems”, and it highlighted that success was dependent on institutional engagement. It is to be expected that the THE article reports extensively on criticisms of its (potential) competitors, and specifically emphasises that the committee also noted the success of THE’s own ranking; still, the point that success depends on institutions seeing the added value is something to be taken seriously.
Frank Ziegele from CHE, who is the co-leader of U-Multirank, has earlier written on the Hedda blog about the unique features of U-Multirank, which include five measurement dimensions (teaching and learning, research, knowledge transfer, regional engagement and international orientation) and a focus on users and stakeholders. The ranking would not result in a definitive Top 100 list, but rather give users the opportunity to create their own groupings of institutions based on selected indicators. The indicators were developed in cooperation with stakeholders, including the institutions themselves. Frans van Vught, the project leader of U-Multirank, argued that this assured the “criterion of ‘relevance’ in the process of indicator selection”.
However, there are a number of league tables and rankings already, and while the methodology and approach are indeed different, the question is whether users will see this difference and its added value in an already saturated market of rankings. In addition, the successful institutions have a vested interest in being able to show that they are “the best” or in the “top 10”. This most definitely has broader appeal than being in the top group among comparable institutions in the area of research. At the same time, the number of institutions in Europe is quite large, and as the top 10 and top 100 can only fit so many institutions, the rest should by all means be interested in knowing who their competitors are and how they are doing relative to comparable institutions. However, making this message clear to all of these institutions and getting them on board is where the U-Multirank team has its work cut out for the future.
The project is funded by the Commission, but is carried out by a consortium, including the Centre for Higher Education (CHE) in Germany and the Center for Higher Education Policy Studies (CHEPS) in the Netherlands. The partners further include the Centre for Science and Technology Studies at Leiden University (CWTS), academic publisher Elsevier, the Bertelsmann Foundation and the software firm Folge 3, in addition to various stakeholder organisations.