Formation Continue du Supérieur
10 August 2012

World's Best Universities: About the Rankings

By Robert Morse. These rankings show how American institutions compare with other institutions of higher learning. U.S.News & World Report is proud to publish its fourth annual version of the World's Best Universities rankings. These new 2011 rankings are based on data from the QS World University Rankings, which were produced in association with QS Quacquarelli Symonds. QS Quacquarelli Symonds, one of the world's leading networks for careers and education, has been publishing international rankings since 2004.
These rankings have gained increasing influence among academics worldwide and have a growing effect on prospective students and government policymakers. The rankings themselves are the same as those QS publishes on its website. The new 2011 rankings once again include the top 400 universities worldwide. New this year are rankings of the top 100 Latin American universities and the top 100 Asian universities. Also, for the first time, there are global rankings in 24 subject areas:
Arts and humanities: English language and literature; geography and area studies; history; linguistics; modern languages; and philosophy.
Engineering and technology: chemical engineering; civil engineering; computer science; electrical engineering; and mechanical, aeronautical, and manufacturing engineering.
Life sciences: biological sciences; psychology
Natural sciences: chemistry; earth and marine sciences; environmental sciences; mathematics; metallurgy and materials; and physics and astronomy.
Social sciences: accounting and finance; economics and econometrics; politics and international studies; sociology; and statistics and operational research. [See the methodologies used in the World's Best rankings.]
The 2011 U.S. News World's Best Universities rankings enable our readers to more fully understand how American institutions are performing when compared with other institutions of higher learning. The bottom line is that U.S.-based universities perform very well: Eighty-five of the Top 400 universities worldwide, or 21 percent, are in the United States. The United Kingdom comes in second place with 43 universities, or 11 percent of the worldwide total. Germany was third with 36 universities, or 9 percent; Australia was fourth with 21 universities, or 5 percent; and France was fifth with 18 schools, or 5 percent.
Canada was in sixth place with 17 universities or 4 percent; Japan came in seventh with 16 universities, or 4 percent; Netherlands finished eighth with 12 universities, or 3 percent; South Korea was in ninth place with 10 schools, or 3 percent; and China and Italy were tied at 10th place with 9 schools, at 2 percent each. These top 11 countries accounted for 69 percent of the top 400, or 276 schools. In total, there are schools from 45 different countries represented in the top 400. [See which U.S. universities performed the best in the World's Best rankings.]
The world is rapidly changing. More students and faculty are eager to explore the higher education options that exist outside their countries. Universities worldwide are competing for the best and brightest students, the most highly recognized research faculty, and coveted research dollars. Countries at all levels of economic development are trying to build world-class universities to serve as economic and academic catalysts. And more universities are seeking world-class status to become players on the global academic stage. In other words, the world of higher education is becoming increasingly "flat."
The major research universities in the United States are aware of these global trends and have been expanding and competing internationally for several years. In fact, American higher education's large research-doctoral-granting university model is now being copied by universities and higher education systems in many other countries. The new World's Best Universities top 400 rankings help put these global trends in context. When U.S. News started publishing Best Colleges rankings more than 25 years ago, no one predicted the influence these lists would acquire as both a consumer tool and a force for accountability in American higher education. What began with little fanfare has spawned college rankings in countries around the world. Global institutional ranking systems like the one we are publishing here are variations on the original idea of our national rankings.
With these variations come differences in methodology. First, none of the data used in the Best Colleges and Best Graduate Schools rankings are used to compute any of the World's Best Universities rankings. As noted earlier, the international rankings are based on the QS World University Rankings, which are produced in association with QS, which does all the data collection and calculations for the rankings. We publish the same World's Best Universities rankings that QS does. Additionally, the methodology used to compute the World's Best Universities rankings differs in most key areas from what we use in the U.S. News Best Colleges and U.S. News Best Graduate Schools. It's true that both the Best Colleges and the World's Best Universities rankings use peer surveys. However, the survey process used to calculate peer assessment and recruiter reviews in the World's Best Universities rankings is conducted very differently.
Because of limitations in the availability of cross-country comparative data, the world ranking system relies heavily on research performance measured through citations per faculty member. The U.S. News rankings do not use citation analysis. The U.S. News Best Colleges and Best Graduate Schools rankings rely heavily on student and school-specific data—such as scores on admission tests, graduation rates, retention rates, and financial resources—that are not part of the World's Best Universities rankings because such student and school-specific data can't be compared internationally.
About our partner:
Founded in 1990, today QS Quacquarelli Symonds is the leading information and events company specializing in the higher education sector, worldwide. Through exclusive events, publications, research, and interactive Web tools, QS links undergraduate, graduate, M.B.A., and executive communities around the world with recruiters and education providers. QS's websites include: www.topuniversities.com, www.topgradschool.com, www.topmba.com, and www.qs.com. QS operates globally from offices in London, Paris, New York, Singapore, Stuttgart, Beijing, Shanghai, Sydney, Washington, D.C., Boston, and Johannesburg.
If you are interested in detailed methodologies and frequently asked questions about the U.S. News Education rankings, click on the links below. We have provided many in-depth articles that explain how and why we do each of the rankings.
About the Best Colleges Rankings/Methodologies

About the Best Graduate Schools Rankings/Methodologies

About the Best High Schools Rankings/Methodologies

About the Top Online Education Programs Rankings/Methodologies

About the World's Best Universities Rankings/Methodologies
7 August 2012

Poor ranking for Uruguay’s main university; Brazil and Chile top of Latam list

Uruguay’s government-financed national university came in at position 79 in the QS academic quality international ranking of the top 100 Latin American universities. Brazil, Chile, Mexico, Argentina and Colombia were far better ranked than Uruguay’s Universidad de la Republica, Udelar, which has caused deep concern among government officials.
Quacquarelli Symonds, one of the most prestigious international companies in the assessment of universities’ academic level and achievements, has been producing its annual ranking at a global level since 2004, and in 2011 it launched a special edition dedicated to Latin America.
Brazilian and Chilean universities lead the pack, followed by Mexico, Colombia and Argentina. Top of the list is the University of Sao Paulo, followed by the Catholic University of Chile; Campinas State University, Brazil; the University of Chile; Mexico’s National Autonomous University; the University of the Andes, Colombia; the Monterrey Technology Institute, Mexico; the Federal University of Rio de Janeiro; the University of Concepción, Chile; the University of Santiago de Chile; and the University of Buenos Aires.
The QS ranking takes into consideration six basic indicators, with different weights, to compile the listing: academic reputation from a global survey, which accounts for 40%; employer reputation from a global survey, 10%; citations per faculty from SciVerse Scopus, 20%; faculty/student ratio, 20%; proportion of international students, 5%; and proportion of international faculty, 5%.
While the Uruguayan university achieved 46.1 points, the first-ranked University of Sao Paulo had 100 points, according to QS. However, Udelar climbed 15 positions from 2011, when it was ranked 94th.
Of the six indicators, Udelar is best positioned in citations per faculty, with 96.7 points, equivalent to position 12. It ranks lowest in the faculty/student ratio, with only 3.9 points, placing it 151st.
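To make the weighting concrete, here is a minimal Python sketch of how a QS-style composite score could be assembled from the six indicators above. It is an illustration only, not QS’s actual procedure: any normalisation QS applies before weighting is omitted, and apart from the two Udelar figures quoted in this article the indicator values are invented.

```python
# Illustrative only: a weighted composite built from the six indicator weights
# quoted in the article. Indicator scores are assumed to be on a 0-100 scale.

QS_LATAM_WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "citations_per_faculty": 0.20,
    "faculty_student_ratio": 0.20,
    "international_students": 0.05,
    "international_faculty": 0.05,
}

def composite_score(indicator_scores):
    """Weighted sum of the six indicator scores."""
    return sum(QS_LATAM_WEIGHTS[name] * value
               for name, value in indicator_scores.items())

# Hypothetical profile: only the two starred values come from the article
# (Udelar's citations per faculty and faculty/student ratio); the rest are invented.
example_profile = {
    "academic_reputation": 45.0,
    "employer_reputation": 35.0,
    "citations_per_faculty": 96.7,   # * reported for Udelar
    "faculty_student_ratio": 3.9,    # * reported for Udelar
    "international_students": 20.0,
    "international_faculty": 15.0,
}

print(f"Composite score: {composite_score(example_profile):.1f}")
```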
Earlier this year Scimago Journal & Country Rank, which includes the journal and country scientific indicators developed from the information contained in the Scopus database, assessed 1,401 universities in 43 different countries; Udelar figured in position 70 overall, but 32nd when ranked only among Latin American universities.
One of the indicators considered for this evaluation was the number of scientific papers published between 2006 and 2010. Udelar figures with 2,664 papers, whereas Sao Paulo University recorded 44,610 and the University of Buenos Aires 10,555.
None of Uruguay’s private universities or tertiary centres figured in the QS ranking.
4 August 2012

Brazil tops 2012 Latin America rankings

By María Elena Hurtado. Sixty-five of the 250 universities in the 2012 QS ranking of Latin America, published late last month, are Brazilian, with the University of São Paulo taking the top spot.
Brazil, Mexico, Colombia, Chile and Argentina make up 80% of the universities from the 19-country ranking.
Brazilian universities make up nine of the top 10 in the region for most research papers per faculty member. The top nine universities with the most academics with a PhD are also Brazilian.
“The dominance of Brazil reflects a focus on higher education as the key to unlocking its huge economic potential. The boom in research of the country’s universities follows major investments in research and development,” Danny Byrne, editor of http://TopUniversities.com, which publishes the rankings, told University World News.
However, if the top 200 universities in the QS University Ranking: Latin America were measured according to their countries’ populations, Brazil would fall behind Chile, Uruguay, Costa Rica, Panama, Argentina, Colombia and Peru.

28 July 2012

QS defends paid-for gold star addition to rankings

By David Jobbins. Quacquarelli Symonds Limited, publisher of the QS World University Rankings, has defended the use of quality marks granted to universities that have paid to go through an audit process.
Universities apply to be audited and pay for a process that judges them across 51 criteria that can lead to the award of up to five QS 'stars' that are visible against the institution’s entry in the ranking. In contrast with the rankings, which draw on a small amount of globally available, largely public data, the QS stars system examines criteria such as facilities, access, engagement and innovation.
The stars appear seamlessly alongside the listing for each university on the World University Rankings, despite protestations from QS that the two are totally separate operations.

22 July 2012

Are rankings useful?

With its criticism of the CHE university ranking, the professional association of sociologists has sparked a debate. Does the performance comparison do harm, or is it useful? A pro and contra.
Are rankings useful? DIE ZEIT asks itself this question again and again – and answers it in the affirmative. Once a year the new ranking of the Centre for Higher Education Development (CHE) appears as part of the ZEIT-Studienführer. Now the German Sociological Association has recommended that the sociology departments at German universities no longer take part in the ranking. The sociologist Stephan Lessenich explains why. CHE director Frank Ziegele responds to him.
No!

Competition is the guiding socio-political principle of the present day. Even where it is the public sector that produces goods and services, this principle has increasingly taken hold in recent years. One of the driving forces behind the staging of competition in the education system is the university ranking regularly carried out by the Centre for Higher Education Development (CHE), which just as regularly gains public attention through its publication in DIE ZEIT and the ZEIT-Studienführer. The German Sociological Association (DGS) has now recommended that the sociology departments at German universities no longer take part in the data collection for this ranking. Why?
In the DGS's view, the CHE ranking cannot fulfil its self-declared purpose of providing reliable guidance for prospective students. There are two main reasons for this: first, factors essential to the quality of a degree programme – from staff-student ratios, through thematic specialisations and the actual efforts to link research and teaching, to the functioning of examination offices – do not enter into the assessment of the places of study; second, the student survey on which this assessment chiefly rests has considerable methodological weaknesses, above all the unresolved selectivity of the respondents.
The result is statements about the supposed quality of sociological education that are as inadequate as they are misleading – and complex social structures (which is exactly what local teaching and learning contexts are) are reduced to a simple, seemingly unambiguous common denominator. Three dots, so runs the simple traffic-light logic of the Studienführer, are all a student needs: green, yellow, blue – and the choice of where to study is done.
Of doubtful value as guidance for students

Sociology no longer wants to take part in a data-collection practice that it must reject on professional grounds and that is of decidedly doubtful value as guidance for students. For the professional association, however, a science-policy argument weighs just as heavily as a reason for the initiative it has now taken: while prospective students search the CHE ranking in vain for robust indications of quality to inform their educational decisions, its actual addressees are the education-policy decision-makers at the level of university leaderships and ministerial bureaucracies. What could be more natural for these actors than to accept the friendly offer from Gütersloh and take the at-a-glance rank-group positioning of 'their' departments at face value – that is, to reward or to sanction a 'good' or a 'bad' showing as they see fit?
Anyone familiar with the pressure of performance indicators that prevails at universities knows what rankings can be good for. Anyone who nevertheless openly professes to be not merely a student information system but a veritable system for evaluating universities knows what they are doing. The CHE likes to claim that its practice of rating and ranking does nothing more, and nothing other, than make transparent the differences that already exist between the individual university 'locations': the ranking as a mirror of reality.
The ranking constructs performance differences

From a sociological perspective, such a seemingly innocent service-provider attitude is to be doubted. Much suggests, on the contrary, that the CHE ranking itself constructs that difference in quality in the first place and thus actually helps to produce the performance differences it claims merely to record. What is declared here, on a questionable empirical basis, to be a place of 'good' or 'bad' education may well, in the long run, actually become one – mediated by ranking-induced structural decisions in higher education policy and the media-steered formation of students' preferences. In the end, what emerges is a higher education landscape split between 'mass' and 'class', 'province' and 'excellence', whose development is readily attributed to the invisible hand of quality competition and the many choices of rational individuals – and yet bears all the hallmarks of a self-fulfilling prophecy.
In form and content, in its surface as well as its underlying logic, the CHE ranking ties in with the prevailing mode of knowledge of the present day and feeds it into the education system: every social field a site of competition for positions, every institution a competitor for scarce resources, every actor a sender and receiver of market signals.
It is the task of sociology to observe this mode of knowledge scientifically – not, however, to reproduce it institutionally. That may bother some people; for the discipline, such unsettling practice is, in the strict sense, a matter of course.

4 July 2012

Power and responsibility – The growing influence of global rankings

By Richard Holmes. A few years ago I remember a dean at a Malaysian university urging faculty to look out for potential external examiners. There was one condition. They had to be at universities in “the Times” top 200. The dean, of course, was referring to the then Times Higher Education Supplement-Quacquarelli Symonds World University Rankings.
Time has passed and the THE-QS rankings have since split into two rather different league tables. More global rankings have appeared, along with a succession of spin-offs: regional, reputational, subject and young-university rankings. Rankings have become very big business and they are acquiring a prominent role in the policies of university administrators and national governments.
Times Higher Education used to be proud of the attention its rankings received. The THE ranking archives from 2004-09 contain this introduction:
“The publication of the world rankings has become one of the key annual events in the international higher education calendar. Since their first appearance in 2004, the global university league tables have been recognised as the most authoritative source of broad comparative performance information on universities across the world. “They are now regularly used by undergraduate and postgraduate students to help select degree courses, by academics to inform career decisions, by research teams to identify new collaborative partners and by university managers to benchmark their performance and set strategic priorities.
“As nations across the globe focus on the establishment of world-class universities as essential elements of economic policy, the rankings are increasingly employed as a tool for governments to set national policy.”
Arbiters of excellence
Rankings have indeed become arbiters of excellence. They are cited endlessly in advertisements, prospectuses and promotional literature. They influence government strategy in some countries and getting into the top 50, 100 or 200 is often a target of national policy, sometimes attracting as much attention as grabbing medals at the Olympics or getting into the World Cup quarter-finals.
There have even been proposals to use rankings as an instrument of immigration policy, presumably to ensure that only smart people are added to the workforce. In 2010, politicians in Denmark suggested using graduation from one of the top 20 universities as a criterion for immigration to the country. The Netherlands has gone even further. Take a look at this page from the Dutch government’s London embassy website:
To be considered a ‘highly skilled migrant’ you need:
“A masters degree or doctorate from a recognised Dutch institution of higher education listed in the Central Register of Higher Education Study Programmes (CROHO) or a masters degree or doctorate from a non-Dutch institution of higher education which is ranked in the top 150 establishments in either the Times Higher Education 2007 list or the Academic Ranking of World Universities 2007 issued by Jiao Ton Shanghai University [sic] in 2007.
“The certificate or diploma must also be approved by the Netherlands Organisation for International Cooperation in Higher Education (NUFFIC). To obtain this approval, you need to send your document(s) to: NUFFIC, Postbus 29777, 2502 LT Den Haag, The Netherlands.”
In another document those who meet the above criteria are described as “highly educated persons”.
Admission to The Netherlands under this scheme is not automatic. There are additional points for speaking English or Dutch, being between 21 and 40 or graduating from a university that has signed up for the Bologna declaration. So no job and a poor masters in humanities from the university in 149th place in the 2007 THES-QS world university rankings (City University of Hong Kong)?
I suspect you would have problems getting a job in Hong Kong – but you could still be eligible to be a highly skilled migrant to The Netherlands, provided you spoke English and were in your twenties or thirties.
City University of Hong Kong graduates are fortunate. If the Dutch government had picked the 2006 rankings as the benchmark, the university would not have been on the list. And too bad for those with outstanding doctorates in physics, engineering or philosophy from Tel Aviv university. In 2007 their university would not have been on the list, having fallen from 147th place in 2006 to 151st in 2007. Also, perhaps someone should tell The Netherlands government about what one has to do to turn a bachelor of arts degree into a master of arts from Oxford or Cambridge.
Recently, Russia indicated that it will make a placing in the major rankings a condition for recognition of foreign degrees, and India has stated that local universities can only enter into agreements with those universities in the Shanghai rankings or THE rankings – to be precise only those in the top 500 of those rankings. There is something odd about this. THE prints the top 200 universities and has another 200 on an iPhone app. So where are the other 100 coming from? Or was it just a journalistic misunderstanding?
Choice of rankings is disturbing
To some extent, all this appears to be another example of the pointless bureaucratisation of modern universities, where the ability to write proposals or list learning outcomes is more highly valued than actual research or teaching.
Most academics, left to their own devices, could surely judge the suitability of potential collaborators, external examiners or contributors to journals just as well as the THE or Shanghai or QS rankers. As for using rankings to select immigrants, if the idea is to pick smart people, then the Wonderlic test would probably be just as good. After all, it worked very well for the US National Football League.
More disturbing perhaps is the choice of rankings. Few people would argue with using the Shanghai ARWU rankings to evaluate universities. Their reliability and methodological stability make them an obvious choice. But the THE rankings are only two years old and underwent drastic methodological changes between the first two editions. Is India proposing to consider the 2010 or the 2011 rankings? If there are more changes in methods in years to come, what will happen to an agreement negotiated with a university that is the top 500 one year but not next?
Phil Baty, head of the THE ranking, has just published an article in University World News accepting that rankings are inherently crude and that they should be used with care. This is most welcome and it is certainly an improvement on those previous pronouncements. Let us hope that the THE rankings do become more transparent, starting with breaking up the clusters of indicators and reducing dependence on Thomson Reuters and its normalised citation indicator. Another dangerous thing about the Indian government proposals is that Thomson Reuters and ISI are the source for two of the Shanghai indicators, publications and highly cited researchers, and they collect and analyse data for THE. The idea of a single organisation shaping higher education practices and government policy around the world, even deciding who can live in prosperous countries, is not an attractive one.
How to respond
So what can be done? The International Rankings Expert Group has been getting ready to audit rankings, but so far there seems to be no sign of anyone actually being audited. Regulation does not seem to be the answer, then. Perhaps what we need is healthy competition between rankings, and constant criticism. It would help, perhaps, if governments, universities and the media paid some attention to other rankings such as Scimago, HEEACT from Taiwan, and URAP from the Middle East Technical University, not just to the big two or the big three. These could be used to assess the output and quality of research, since they appear to be at least as good as the Shanghai Rankings, although they are not as broadly based as other world rankings.
But above all, Phil Baty’s admission that there are aspects of academic life where rankings are of little value is very helpful. For things like collaboration and recognition, common sense and disciplinary knowledge and values should be just as valid, maybe more so.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Rankings Watch blog.
29 June 2012

Rankings – ‘Multi-dimensional’, ‘user-driven’ are the magic words

By Frank Ziegele and Gero Federkeil. In a recent article in University World News, Phil Baty, editor of the Times Higher Education World University Rankings, warned that rankings needed to be handled with care. If we consider the impact international rankings have today, we can only agree with Baty’s notion that “authority brings responsibility”.
In more and more countries – Baty cited examples – a good league table position in the major global rankings plays a decisive role in policies of cooperation of universities with foreign institutions, as well as with regard to the recognition of foreign degrees and the portability of loans and scholarships. These are clear signs of a dangerous overuse of rankings. No ranking has been introduced for these purposes and – hopefully – most producers of rankings would reject this role. But we want to argue that ranking providers should not only object to misuses: it is more important to design rankings in a way that makes misuse difficult and guides users to apply rankings in an appropriate and meaningful way.
The ‘composite indicator’ problem
One of the major mistakes of rankings is the use of a ‘composite indicator’. A more or less broad variety of indicators is weighted and aggregated into an overall score for the whole university. One number is thus intended to measure the complex performance of a university!
If rankings provide information in this way, they seduce users into making decisions based on that one number. This is surely an oversimplification of quality in higher education. Rankings can provide some quantitative information on particular aspects of the performance of universities – teaching and learning, research, international orientation and others. To do this, they have to focus on a limited number of selected dimensions and indicators, which means no ranking is able to reflect the full complexity of universities. Some global rankings, which focus on reputation, measure nothing more than the strength of the universities’ global brand, which might not correlate to their performance. Yet their results are actually influencing that very reputation.
Other specialised rankings, for example Webometrics rankings, only measure the success of university policies in attaining web presence, but not their teaching or research performance. Despite this, the user is lured into believing s/he will be able to identify the best universities in the world with these kinds of rankings.
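The sensitivity to weighting that underlies this criticism can be shown with a small sketch: the same set of indicator scores yields different league-table orders under two equally defensible weighting schemes. All institutions, indicators and numbers below are invented for illustration.

```python
# Invented data: three institutions scored 0-100 on three indicators.
institutions = {
    "University A": {"research": 90, "teaching": 55, "international": 60},
    "University B": {"research": 65, "teaching": 85, "international": 70},
    "University C": {"research": 75, "teaching": 70, "international": 85},
}

def league_table(weights):
    """Order institutions by a weighted composite of their indicator scores."""
    scores = {
        name: sum(weights[ind] * value for ind, value in inds.items())
        for name, inds in institutions.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

research_heavy = {"research": 0.6, "teaching": 0.2, "international": 0.2}
teaching_heavy = {"research": 0.2, "teaching": 0.6, "international": 0.2}

print("Research-heavy weights:", league_table(research_heavy))  # University A first
print("Teaching-heavy weights:", league_table(teaching_heavy))  # University B first
```

With research-heavy weights University A comes out on top; with teaching-heavy weights it drops to last place, although nothing about the institutions has changed – only the weighting.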
U-Multirank
How can we change this? The magic words are ‘multi-dimensional’ and ‘user-driven’ ranking.
The U-Multirank project, initiated by the European Commission, developed and tested the feasibility of such a system. Different stakeholders and users have different ideas about what constitutes a high quality university and hence have different preferences and priorities with regard to the relevance of indicators. There are neither theoretical nor empirical arguments to assign a particular pre-defined weight to an indicator.
U-Multirank takes these points seriously by leaving the decision about the relevance of indicators to the users of the ranking. It presents a separate ranking list for every single indicator and suggests using an interactive internet tool, which allows people to choose the indicators that are most relevant to them.
Moreover, the set of indicators is not restricted to bibliometric research performance, but also includes dimensions such as teaching and learning, knowledge transfer, regional engagement and international orientation. This multi-dimensional approach is able to make the different institutional profiles and the particular strengths and weaknesses of universities transparent.
In combination with its grouping approach (building three to five performance groups instead of calculating a pseudo-exact league table), U-Multirank avoids the lure of oversimplification inherent in the attempts to crown the ‘best’ university in the world. The provision of more differentiated and, admittedly, more complicated information decreases the pressure to change the methodology just to come up with a different list than in the year before. Since a major quality criterion for rankings is the stability of their methodology, this point further increases the value of the multi-dimensional approach.
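As a rough illustration of this user-driven, multi-dimensional idea (a sketch, not U-Multirank’s actual implementation), the snippet below returns a separate ranked list for each indicator a user selects, with no aggregation into a composite score. The five dimensions follow the article; the institutions and scores are invented.

```python
from typing import Dict, List, Tuple

# Invented institutional scores on the five U-Multirank dimensions named above.
data: Dict[str, Dict[str, float]] = {
    "University A": {"teaching": 72, "research": 88, "knowledge_transfer": 41,
                     "regional_engagement": 65, "international_orientation": 58},
    "University B": {"teaching": 81, "research": 62, "knowledge_transfer": 77,
                     "regional_engagement": 49, "international_orientation": 70},
    "University C": {"teaching": 64, "research": 79, "knowledge_transfer": 55,
                     "regional_engagement": 83, "international_orientation": 66},
}

def per_indicator_rankings(selected: List[str]) -> Dict[str, List[Tuple[str, float]]]:
    """One ranked list per user-selected indicator; nothing is aggregated across them."""
    return {
        indicator: sorted(
            ((name, scores[indicator]) for name, scores in data.items()),
            key=lambda item: item[1], reverse=True,
        )
        for indicator in selected
    }

# A user who cares only about teaching and international orientation:
for indicator, ranking in per_indicator_rankings(
        ["teaching", "international_orientation"]).items():
    print(indicator, ranking)
```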
The development of the U-Multirank model and the response to it from within higher education and among stakeholders has already stimulated a number of changes in the traditional global rankings. Some now also work on field-based rankings and some have started to include interactive elements to allow for user-driven elements. However, they still stick to league tables and composite indicators instead of providing a really multi-dimensional and user-driven ranking. Let’s start the democratisation of rankings by leaving the choice completely to the user.
U-Multirank also looks for a broader and stakeholder-oriented approach to generating ranking data: the idea, which was tested in the feasibility study, is to combine international (bibliometric and patent) databases with the outcomes of institutional, student and alumni surveys. This allows the comparison of, for instance, facts about study programmes (as U-Multirank provides a field-based ranking) with student satisfaction surveys, leading to a differentiated picture of performance. If you only know the student-staff ratio, you can’t say whether a high ratio means high quality in small groups or just a lack of demand due to bad quality. As soon as you can correlate the ratios with the students’ judgment of their contact with teachers, you will have a better impression of performance.
We have heard the objections against U-Multirank – “is this still a ranking?” or “will users understand this?” or “people still want to know who is number one!”
We would answer: as U-Multirank still shows vertical diversity by measuring performance, it is a ranking system. To make it understandable despite the complexity, the user-friendliness of the web portal will be of major importance. And, last but not least, we believe in intelligent users.
The next phase of the European Commission’s project has to demonstrate that all this can be implemented as a stable system.
* Professor Frank Ziegele is managing director and Gero Federkeil is manager in charge of rankings at the Centre for Higher Education Development in Germany.
27 June 2012

Rankings don't tell the whole story – Handle them with care

By Phil Baty. In Russia, Prime Minister Dmitry Medvedev recently signed an order awarding official recognition to degrees from 210 leading universities from 25 countries – determined in large part by their presence in the top global university rankings.

The thousands set to benefit from study-abroad scholarships under Russia’s five-billion rouble (US$152 million) Global Education programme will also have to attend a top-ranked university.
A similar scholarship project in Brazil, the £1.3 billion (US$2 billion) Science without Borders programme for 100,000 students, also draws heavily on the Times Higher Education and other rankings to select the host institutions.
And in India this month, the government’s University Grants Commission set out new rules to ensure that only 500 universities ranked by two global rankings, including Times Higher Education, are allowed to run joint degree or twinning courses with Indian partners.
Such high-level official endorsement is, of course, gratifying, and since 2009, when we joined forces with Thomson Reuters, we have worked hard to listen to critics of global rankings and consulted widely to develop a new, more balanced, comprehensive and rigorous ranking system.
We argue that Times Higher Education’s global rankings are the only ones in the world to examine all core missions of the modern global research university – research, teaching, knowledge transfer and international activity.
They are the only rankings to fully reflect the unique subject mix of each and every institution across the full range of performance indicators and to take proper account of excellence in the arts, humanities and social sciences, so badly neglected by other rankings, we believe. And they are the only global rankings to employ a rigorous, invitation-only survey of experienced, expert academics – with no volunteers and certainly no nominations from universities themselves.
Authority brings responsibility
But we are aware that such authority brings with it great responsibility. A reputation for integrity must be earned and maintained through open and honest discussion about both the uses and the abuses of global rankings. All global university ranking tables are inherently crude, as they reduce universities and all their diverse missions and strengths to a single, composite score. Anyone who adheres too rigidly to rankings tables risks missing the many pockets of excellence in narrower subject areas not captured by institutionwide rankings, or in areas of university performance – such as community engagement – that are simply not captured well by any ranking.
One of the great strengths of global higher education is its extraordinarily rich diversity and this can never be captured by any global ranking, which judges all institutions against a single set of criteria. In this context, a new declaration from a consortium of Latin American university rectors must be welcomed. The declaration, agreed at a two-day conference at the National Autonomous University of Mexico, titled “Latin American Universities and the International Rankings: Impact, scope and limits”, noted with concern that “a large proportion of decision-makers and the public view these classification systems as offering an exhaustive and objective measure of the quality of the institutions”.
No university ranking can ever be exhaustive or objective. The meeting, which drew together rectors and senior officials from 65 universities in 14 Latin American countries, issued a call to policy-makers to “avoid using the results of the rankings as elements in evaluating the institution’s performance, in designing higher education policy, in determining the amount of finance for institutions and in implementing incentives and rewards for institutions and academic personnel”.
I would – to a large extent – agree. Responsibly and transparently compiled rankings can, of course, have a very useful role in allowing institutions to benchmark their performance and to help them plan their strategic direction. They can inform student choices and help faculty make career decisions.
They can help governments to better understand some of the modern policy challenges of mass higher education in the knowledge economy, and to compare the performance of their very best research-led institutions to those of rival nations.
And yes, they can play a role in helping governments to select potential partners for their home institutions and determine where to invest their scholarships.
But they can only play a helpful role if those of us who rank are honest about what rankings do not – and can never – capture, as much as what they can, and as long as we encourage users to dig deeper than the composite scores that can mask real excellence in specific fields or areas of performance.
Times Higher Education is working hard to expand the range of data that it releases, and to allow more disaggregation of the ranking results and more nuanced analysis.
Rankings can be a valuable tool for global higher education – but only if handled with care.
* Phil Baty is editor of the Times Higher Education World University Rankings.
18 June 2012

Please rank responsibly

By Phil Baty. Phil Baty reports on a declaration on world university league tables from a consortium of Latin American university rectors agreed in Mexico City.
It was a rare spectacle: a senior administrator of a leading international university, speaking at a conference of peers, issued a public “thank you” to those who compile university rankings. The rankers – me included – more typically face criticism of the power and influence we wield.
But Chen Hong, director of the office of overseas promotion at China's Tsinghua University, told the World 100 Reputation Network conference in Washington in May: “We should thank those organisations who publish these indicators. At least we can find something for comparison and benchmark our own performance.”
Reflecting the approach that my magazine, Times Higher Education, has taken to disaggregate the overall composite ranking scores in our publications, she explained: “What is useful for us is the detailed indicators within those rankings. We can find out comparable data, benchmarking various universities and use them for planning.”
Indeed, there is growing evidence that global rankings – controversial as they are – can offer real utility. But those of us who rank must also be outspoken about the abuses, not just the uses, of our output.
There is no doubt that global rankings can be misused.
It was reported recently, for example, that a $165 million Russian Global Education programme would see up to 2,000 Russian students each year offered “very generous” funding to attend institutions around the world – but that qualification for the generous scholarships will be dependent on the students attending an institution in the top 300 of the Times Higher Education World University Rankings. Brazil’s hugely ambitious Science Without Borders scholarship programme to send 100,000 Brazilian students overseas similarly links the scholarships to THE-ranked institutions.
While such schemes offer a welcome endorsement of the rigor of THE’s rankings data (provided by Thomson Reuters) and its ranking methodology, speaking as the (rather flattered) editor of the THE rankings I'd still suggest that they are ill-advised.
Global university ranking tables are inherently crude, as they reduce universities to a single composite score. Such rigid adherence to the rankings tables risks missing the many pockets of excellence in narrower subject areas not captured by institutionwide rankings, or in areas of university performance, such as knowledge transfer, that are simply not captured well by any ranking.
One of the great strengths of global higher education is its extraordinarily rich diversity, which can never be captured by the THE World University Rankings, which deliberately seek only to compare those research-intensive institutions competing in a global marketplace and which include less than 1 percent of the world’s higher education institutions.
In this context, a new declaration from a consortium of Latin American university rectors agreed in Mexico City last week must be welcomed as a sensible and helpful contribution to the rankings debate. The declaration, agreed at a two-day conference at the National Autonomous University of Mexico, titled Latin American Universities and the International Rankings: Impact, Scope and Limits, noted with concern that “a large proportion of decision makers and the public view these classification systems as offering an exhaustive and objective measure of the quality of the institutions.”
The rectors’ concern is of course well-placed – no ranking can ever be objective, as they all reflect the subjective decisions of their creators as to which indicators to use, and what weighting to give them. Those of us who rank need to work with governments and policy makers to make sure that they are as aware of what rankings do not – and can never – capture, as much as what they can, and to encourage them to dig deeper than the composite scores that can mask real excellence in specific fields or areas of performance. That is why I was delighted to be in Mexico City last week to join the debate.
The meeting, which drew together rectors and senior officials from 65 universities in 14 Latin American countries, issued a call to policy makers to “avoid using the results of the rankings as elements in evaluating the institution’s performance, in designing higher education policy, in determining the amount of finance for institutions and in implementing incentives and rewards for institutions and academic personnel.”
I would – to a large extent – agree. Responsibly and transparently compiled rankings like THE’s can of course have a very useful role in allowing institutions, like Tsinghua and many, many others, to benchmark their performance and to help them plan their strategic direction. They can help governments to better understand some of the modern policy challenges of mass higher education in the knowledge economy, and to compare the performance of their very best research-led institutions to those of rival nations. The rankings can help industry to identify potential investment opportunities and help faculty members make career and collaboration decisions.
But they should inform decisions – never drive decisions.
The Mexico declaration said: “We understand the importance of comparisons and measurements at an international level, but we cannot sacrifice our fundamental responsibilities in order to implement superficial strategies designed to improve our standings in the rankings.”
Some institutional leaders are not as sensible as those in Latin America.
Speaking at the same Washington conference where Chen Hong gave thanks to the rankers, Pauline van der Meer Mohr, president of the executive board at Erasmus University, Rotterdam, confirmed frankly that proposals for a merger between her institution and Dutch counterparts the University of Leiden and the Delft University of Technology were “all about the rankings.”
The three Dutch institutions calculated, she explained, that merged as one, they would make the top 25 of world rankings, while separately they languish lower down the leagues. “Why would you do it if it doesn't do anything for the rankings?” she asked.
But the merger did not take place. It was dropped because of a mix of political unease, fierce alumni loyalty to the existing “brands,” and an “angry” response from research staff. Researchers at all three institutions, van der Meer Mohr admitted, had asked: “You are not going to merge universities just to play the rankings game?” To do so, they had concluded, would be “ridiculous.”
I believe that those Dutch academics were quite right.
• This article first appeared in Inside Higher Ed.
17 June 2012

U-Multirank – The better approach for worldwide ranking?

This guest entry was written by Frank Ziegele. Frank Ziegele is the director of CHE (Centre for Higher Education Development), a German think tank for the higher education sector. In addition, he is also professor of higher education management at the University of Applied Sciences Osnabrueck. He was one of the leaders of the feasibility study project on “U-Multirank”. In this post, he gives his insights about the project and a recent book on the topic, edited by Frank Ziegele and Frans van Vught.
The impact of rankings – and their risks
Recently India’s University Grants Commission announced that foreign universities entering into agreements with their Indian counterparts to offer twinning programmes will have to be among the global top 500 in the Times Higher Education or Shanghai ranking. The Commission says that the underlying objective is to ensure that only quality institutions are permitted to offer twinning programmes, in order to protect the interests of students.
The assessment of quality based on research league tables – could this be too much impact?
This is further proof that global rankings are, step by step, gaining influence far beyond their initial purpose – and also proof of how dangerous this development is. Could rankings help to protect students’ interests when they say almost nothing about teaching and provide insufficient information at the field level? Please keep in mind that there is no university in the world where all fields perform equally well or badly. Students have to know what they can expect in the field they want to study. And I haven’t said anything about the methodological problems of these rankings yet: field and language biases of bibliometric databases, measurement of reputation instead of performance, outcomes sensitive to arbitrary weightings of indicators, to mention just a few.
U-Multirank as an alternative, elaborate ranking system
Considering all these problems, I am deeply convinced that we need an alternative to the existing global rankings, and I am glad that I had the chance to be part of a challenging project to develop an alternative: the feasibility study for a new ranking system called “U-Multirank”, initiated by the European Commission. A new book is out now giving an overview on the results of this study: Van Vught, Frans & Ziegele, Frank, Eds. (2012): Multidimensional Ranking. The Design and Development of U-Multirank. Dordrecht, Springer.
The book presents an analysis of existing rankings and their effects, design principles, methods and instruments for a new ranking, the proposal for a set of indicators and the lessons learnt from a pilot test with 150 universities all over the world. It also develops first ideas for a long-term implementation of a new and unique ranking system.
The unique features of U-Multirank
Why is U-Multirank different from “orthodox” rankings? Let me briefly explain how this system is meant to function. The “core” of the ranking is an interactive web tool, making it a user-driven instrument. The idea is a “democratisation” of rankings by allowing users to adapt the ranking to their (very) own preferences. The user first gets an overview of the features of higher education institutions, shown by a number of descriptive indicators (such as degrees awarded, shares of bachelor/master students, share of part-time or mature students, expenditure on research, bachelors from the region, income structures, international students). This “mapping” leads to the identification of a set of comparable institutions – it distinguishes apples from oranges.
Next, the ranking is made within the apples and within the oranges, again based on the user’s self-selection of a number of performance indicators. The mapping deals with horizontal diversity and the ranking shows vertical differences. The ranking indicators cover five main dimensions: teaching and learning, research, knowledge transfer, regional engagement and international orientation. Each indicator is shown separately, so the performance on each indicator becomes transparent in a multi-dimensional approach. There is neither theoretical nor empirical justification for assigning specific weights to individual indicators and aggregating them into composite indicators.
The data are available at both the institutional and the field level; this multi-level approach offers stakeholders access to the level of information they are interested in. And last but not least, U-Multirank does not provide a league table, because positions in such a table exaggerate differences. There is the story of the university that dropped 15 places in the Shanghai ranking just because its Nobel Prize winner became older. Therefore, we distinguish three to five groups: within the groups the differences are small, but between the groups we find significant differences in performance levels.
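The grouping idea can be sketched in a few lines of Python. This is a simplified illustration under invented data, not the actual grouping procedure (which is more sophisticated than the equal-sized split used here): institutions are sorted on one indicator and then placed into a small number of performance groups rather than given exact league-table positions.

```python
def rank_groups(scores, n_groups=5):
    """Assign each institution to one of n_groups performance groups (1 = top)."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    group_size = -(-len(ordered) // n_groups)  # ceiling division
    return {name: idx // group_size + 1 for idx, name in enumerate(ordered)}

# Hypothetical citation-impact scores for ten institutions (invented numbers).
citation_scores = {f"University {c}": s for c, s in zip(
    "ABCDEFGHIJ", [97, 94, 93, 80, 78, 77, 60, 58, 42, 40])}

print(rank_groups(citation_scores, n_groups=5))
# Institutions within the same group are presented as performing at a similar
# level; only differences between groups are treated as meaningful.
```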
A challenging project
Of course, this idea is the subject of controversy. In debates after the publication of the results of the feasibility study, I often heard objections such as “this is a nice information system but not a ranking”, “people still want to know who is number one”, “universities will refuse to collect all the data you need”, “this is too complex for lay users” and “this system is too soft because everyone performs well somewhere, so every university will claim to be the best”.
We can deal with these concerns on the basis of empirical evidence, because a ranking system with similar features already exists: the CHE Ranking for German, Dutch, Austrian and Swiss higher education institutions, which follows exactly this approach of field-based, multi-dimensional and group-oriented ranking. In Germany 95% of the faculties take part in the ranking, people perceive it as a real ranking, and most of the orthodox league tables have disappeared from the German market. The associated web tool is used by more than 200,000 people per month, with up to 17 million clicks per year. U-Multirank-type systems are accepted and being used, also by lay users!
Indeed, the burden of data collection for the universities is high, but they are becoming more and more professional (and elaborate methods of data verification, which have also been tested for U-Multirank, deal with the danger of manipulation). And the system still reveals bad performance clearly, so it is not a soft one. Of course universities will pick their good results for advertising; this is beneficial for the public perception of the HE sector as a whole. So I would argue against all the objections mentioned: U-Multirank is a challenging project, but it is worthwhile and will definitely change the scene of worldwide rankings (we could already see classic rankings adapting to our ideas, for instance by also introducing field-based information). It has the potential to promote diversity in the missions and profiles of universities, and for this reason it helps foster an understanding of multiple excellence instead of supporting the mere reputation race for the world-class research university.
Implementation phase to start soon
The three major challenges I see are long-term funding of the system, worldwide comparability of some of the data, and the openness of some regions to our model (especially China and the US). It will be the task of the implementation phase to cope with these issues. The European Commission has issued a call for tender for this implementation and plans to start by September. Make up your mind about multi-dimensional and user-driven ranking by looking at the U-Multirank website (www.u-multirank.eu) and the aforementioned Springer book.
