Formation Continue du Supérieur
6 February 2012

Eight years of ranking: What have we learned?

By Richard Holmes. It is more than eight years since Shanghai Jiao Tong University produced its first Academic Ranking of World Universities. Since then international university rankings have multiplied. There are now two main competitors producing general rankings that include indicators other than research, Quacquarelli Symonds (QS) and Times Higher Education.
There are also web-based rankings, Webometrics and IC4U, and research-based rankings from Taiwan, Turkey and Australia, the last of which seems to have disappeared. Then we have rankings from Russia and France. Nor should we forget the European U-Multirank project, which has just moved out of the pilot stage, or regional rankings for Asia and Latin America or the various disciplinary sub-rankings or the rankings of business schools. There are now quite a few things that we have learned about ranking universities.
Measuring research is the easy bit
There are several ways of measuring research. You can count total publications, publications per faculty, total citations per faculty, citations per paper, h-index, international collaboration, money spent, reputation. All of these can be normalised in several different ways.
The result is that ranking is beginning to look like heavyweight boxing with no undisputed champion in sight. Cambridge is top of the QS rankings mainly because it has a good reputation for research, Harvard is first in the Shanghai rankings because it produces more of just about everything and Caltech leads in the new Times Higher Education World University Rankings because of an emphasis on quality rather than quantity.
Nobody has figured out how to measure teaching
QS has an indicator that measures student faculty ratio but this is, as they admit, a very crude instrument. For one thing, it includes academics who only do research and may never see the inside of a lecture hall. Times Higher Education has a cluster of indicators concerning teaching, but they only claim that these have something to do with the learning environment.
If anyone does try to seriously measure teaching quality, the best bet might be to use some sort of survey of student satisfaction, as has apparently been done successfully by the U-Multirank pilot project, or perhaps http://ratemyprofessors.com could go global.
In any case, for better students and better schools, teaching is largely irrelevant. Recruiters do not head for Harvard, Oxford and the grandes écoles because they have heard about the enthusiasm with which lecturers jump through outcomes-based education hoops. They go there because that is where the smart people are, and smart people are smart before they go to university.
Getting there first is important
The Academic Ranking of World Universities published by Shanghai Jiao Tong University is not noticeably better than the Performance Ranking of World Scientific Papers produced by the Higher Education Evaluation and Accreditation Council of Taiwan. But it still gets a great deal more publicity. A very good research-based ranking has been produced by the Middle East Technical University in Ankara, but hardly anybody knows about it: the niche has already been occupied.
Brand names matter
If anyone else but a magazine with the word ‘Times’ in it and an association with Thomson Reuters had produced a ranking with Alexandria University in the top 200 in the world, or for that matter even put it first in Egypt, they would have been laughed out of existence. The QS rankings have flourished partly because they are linked to a successful graduate recruitment enterprise.
Beware of methodology
The QS rankings are well known for a fistful of methodological changes that have sent universities zooming up and down the tables. Although the methodology has officially stabilised, there have still been unannounced changes. In 2010, something happened to the curve for citations per faculty (a mathematician could explain exactly what) that boosted the scores for high fliers except, of course, for the universities in joint first place, but lowered those for the less favoured ones. One result of this was a boost for Cambridge, no doubt to everyone’s astonishment. Between 2010 and 2011, Times Higher Education made so many changes that talking about improvements over the year was quite pointless.
Weighting is not everything
Weighting is very important, though. It is increasingly common for rankings to have an interactive feature that allows readers to change the weightings and, in effect, to construct their own rankings. It is instructive to fiddle around with the indicators and see just how much difference changing the weighting can make.
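As a toy illustration of the point (invented institutions, indicators and scores, not data from any actual ranking), the sketch below shows how the same indicator scores can produce different orderings under two weighting schemes:

```python
# Toy illustration: how changing indicator weights reorders the same score table.
# Institutions, indicators and scores are invented for the example.
scores = {
    # institution: (research, reputation, citations) on a 0-100 scale
    "University A": (95, 70, 60),
    "University B": (75, 90, 80),
    "University C": (85, 80, 85),
}

def rank(weights):
    """Return institutions sorted by their weighted overall score."""
    totals = {name: sum(w * s for w, s in zip(weights, vals))
              for name, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

print(rank((0.6, 0.3, 0.1)))  # research-heavy weighting: A, C, B
print(rank((0.1, 0.3, 0.6)))  # citation-heavy weighting: C, B, A
```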
The missing indicator
In the final analysis, the quality of a university is largely dependent on the average intelligence of its students, which is why the most keenly scrutinised section of US News' Best Colleges is the ACT-SAT scores. International rankings have barely begun to tackle this question. I doubt if anyone is very interested in the score on QS's employer survey or even the Paris Mines ranking, which counts the number of top bosses. It would probably be quite technically feasible to work out the relative selectivity of universities, but there are likely to be insurmountable political problems.
What next?
There will surely be more international rankings of one sort or another. It is unlikely, though, that any will ever achieve the dominant role that US News has achieved. We can expect more sophistication with increasingly complex statistical analysis, more regional rankings and more disciplinary rankings, perhaps also more silly rankings like a global version of American Best Universities for Squirrels.
But it is unlikely that there will ever be agreement on what makes a good or a great university.
* Richard Holmes is a lecturer at Universiti Teknologi MARA in Malaysia and author of the University Ranking Watch.

31 January 2012

France on fast track: The 2012 Campus Admission Tour promises instant admission to 200 students in 25 top French institutions

By Sangeeth Sebastian. Ever dreamed of enrolling at an international university in less than half an hour? Drop in at the French Embassy, Shanti Path, this weekend, and you could be the lucky one.
The two-day Campus France Admission Tour 2012 opens on Saturday with the promise to offer admission to the brightest students in just 20 minutes - and the good news is that knowledge of French is not mandatory to make the grade.
Campus France is a public policy initiative of the French government to promote higher education abroad, and the Admission Tour 2012 will see as many as 25 top French institutions hitting the road (from Delhi they'll go to Bangalore and Mumbai) to spot the right candidates and make admission offers to them.
Candidates will be selected on the basis of their academic merit and in-depth interviews by the visiting university officials. 'We have plenty of options for every student, from engineering to management, language studies to hospitality,' says Bedojyoti Bhattacharjee, National Coordinator, Campus France India.
Though in the public eye France is famous for its haute fashion and gastronomy, the admission tour will focus on the strengths of French institutions in such specialist areas as nanotechnology, aeronautics, embedded systems, water management and pure sciences.
'France has achievements to show in a number of little-known but important areas and we want to promote them,' says Renaud Viley, Deputy Attaché for University Cooperation at the Embassy of France in India.
'The French institutions sending their top officials to India are keen on recruiting the best Indian students to enhance their intellectual capital,' Viley adds.
To make the invitation sweeter, the attaché points out that most of the management, business and technical programmes in French institutions of higher learning are conducted in English.
Students can also avail themselves of a number of higher education scholarships offered by the French government via the embassy. 'We gave away 362 scholarships in 2011,' says Viley. 'This year too we will disburse around the same number of scholarships.'
Campus France expects around 300 students from the Capital to attend the recruitment drive. And similar numbers in Mumbai and Bangalore before the delegation heads back to France on February 13.
'We will consider our mission to be a success if we manage to recruit at least 200 students from India by the end of this tour,' Viley says. Watch this space to see if they have done it.
    The CampusFrance Admission Tour 2012 road show will take place on February 4-5 (Saturday & Sunday) at the Embassy of France, Shanti Path, New Delhi. To download the event PDF, go to www.inde.campusfrance.org
7 January 2012

Bibliometrics and the Leiden Ranking

By Graham Wheeler. The field of ‘Bibliometrics’ – derived from the Greek ‘biblion’ (meaning book) and the Latin ‘metricus’ (relating to measure) – is defined as the statistical analysis of a body of literature. Although the name for the field has been around for several decades, it is only recently, with well-managed computer databases and clever citation maps (see Figure 1), that researchers have been able to measure properly the impact of the academic literature that universities publish.
In December 2011, the Centre for Science and Technology Studies (CWTS) at Leiden University published figures for the 2011/2012 Leiden Ranking. The Leiden Ranking is a scoring system designed to measure the impact of academic scientific research undertaken in the world’s 500 most research-intensive universities. In addition, the Leiden Ranking also looks at the collaborative research published by several institutions and considers the networks that are formed between different universities. With numerous output measures and a vast amount of bibliometric data, this new ranking system helps to paint a more accurate picture of which universities are really making an impact on the world in terms of research output.
Methodology and Data

The researchers at CWTS used bibliometric data from over 2.5 million journal articles, letters and reviews published between 2005 and 2009. The publications studied included both English and non-English language material, with separate analyses undertaken on English-only and all-language publications. Only publications in the sciences and social sciences were included in the research; papers from the arts and humanities were excluded since, according to the lead researchers, the bibliometric indicators obtained from the data “do not have sufficient accuracy”.
The primary indicators used to measure the impact of a university’s research include the number of publications (P), the Mean Citation Score (MCS – the average number of citations for a publication from that university), the Mean Normalized Citation Score (MNCS – the MCS adjusted for field differences, publication year and document type) and the Proportion Top 10% Publications (PPtop 10% – the proportion of publications from that university that, compared with similar publications, belong to the top 10% most frequently cited). As an example, if Princeton University had an MNCS score of 3, then on average, publications from Princeton are being cited 3 times more often than the world average.
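As a rough sketch of how an MNCS-style figure could be computed (the citation counts and field baselines below are invented; the actual CWTS normalisation also accounts for publication year and document type):

```python
# Rough sketch of a Mean Normalized Citation Score (MNCS)-style calculation.
# Each publication's citations are divided by the world-average citations assumed
# for its field; the normalised values are then averaged over the university's
# publications. All numbers below are invented for illustration.
publications = [
    # (field, citations received)
    ("oncology", 40),
    ("oncology", 10),
    ("mathematics", 6),
    ("mathematics", 2),
]
world_average = {"oncology": 20.0, "mathematics": 4.0}  # assumed field baselines

normalised = [cites / world_average[field] for field, cites in publications]
mncs = sum(normalised) / len(normalised)
print(f"MNCS = {mncs:.2f}")  # 1.0 would mean impact exactly at the world average
```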
In terms of collaborative networks, the main indicators of interest were deemed to be the proportion of publications that were collaborative works (PPcollab), the proportion of collaborative publications co-authored between two or more countries (PPint collab), the mean geographical collaboration distance (MGCD) and the proportion of collaborative publications that have a geographical distance of over 1000km between two of the universities (PP>1000km). CWTS undertook analyses where full counting and also fractional counting of collaborative publications were considered. For a hypothetical publication written by 3 scientists at ETH Zurich and 1 scientist at McGill University, under full counting both ETH’s and McGill’s publication counts (P) would increase by one. Under fractional counting, ETH’s P-number would increase by 0.75 and McGill’s P-number would increase by 0.25.
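A minimal sketch of the two counting schemes, using the hypothetical ETH Zurich/McGill paper described above (three authors at ETH, one at McGill):

```python
# Minimal sketch of full vs. fractional counting for one collaborative publication,
# following the hypothetical ETH Zurich / McGill example above (3 + 1 authors).
from collections import Counter

authors_by_university = Counter({"ETH Zurich": 3, "McGill University": 1})
total_authors = sum(authors_by_university.values())

full_counts = {u: 1.0 for u in authors_by_university}  # each university gets +1
fractional_counts = {u: n / total_authors               # +0.75 and +0.25
                     for u, n in authors_by_university.items()}

print("full counting:      ", full_counts)
print("fractional counting:", fractional_counts)
```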
Results

An article published on the Leiden rankings by the Times Higher Education Supplement highlighted several interesting results obtained from the data. However, I wish to consider the much broader picture of what is implied by the data via graphical illustrations and some non-parametric statistical testing. Here I consider all publications (regardless of language) and assume fractional counting for collaborative papers. I have also added a new categorical variable to the data indicating the region that each university’s country belongs to.
Evident disparities are observed between countries and between geographic regions. Africa’s only entries are four universities in South Africa: Cape Town, Pretoria, Stellenbosch and Witwatersrand. All of these institutions produced fewer than 5,000 research publications between 2005 and 2009 in the sciences; furthermore, the University of Cape Town had the highest PPtop 10% value of the African entrants, with a score of 10%. As for the universities in South America (countries include Brazil (light blue), Chile (yellow) and Argentina (black)), all universities score between 4% and 6% for PPtop 10%, regardless of P. The University of Sao Paulo has a PPtop 10% of 5%, which is very similar to the other universities considered here, yet published over 17,300 papers in the 5-year period considered.
Perhaps the most interesting results come from the large clusters in the plots marked “Asia”, “Europe” and “North America”. The black dots in the “North America” plot indicate universities in the United States, the purple ones universities in Canada; the highest point on the y-axis corresponds to the Massachusetts Institute of Technology (MIT) and the right-most point on the x-axis is Harvard University. The “Europe” plot shows Swiss and UK institutions performing highest; the two blue points are (from left to right) the École Polytechnique Fédérale de Lausanne (EPFL) and ETH Zurich. The grey points represent UK institutions and the highest points on the y-axis include the London School of Hygiene & Tropical Medicine, Durham, Imperial College London, Cambridge and Oxford (the latter two having almost identical PPtop 10% and P values). Whilst we can identify high fliers with respect to this criterion, are there genuine differences on average between one nation’s universities and another’s?
To answer this question we may perform a statistical test called the Mann-Whitney U Test, which tests whether one of two samples of independent observations tends to have larger values than the other. This test is non-parametric, meaning that we do not need to make distributional assumptions about the data we are using; we only assume that the observations are independent of one another and that the distribution of the bibliometric indicator of interest is continuous.
Using this statistical test to compare the null hypothesis (that a bibliometric indicator is distributed identically in country A and country B) against the alternative hypothesis (that there is a difference in the median of the distribution between country A and country B’s bibliometric indicator), we find that in some cases there is evidence for genuine median differences, and in other cases, not so much evidence.
Table 1 shows the p-values (not to be confused with the number of publications, P) obtained from applying the Mann-Whitney U test to compare the PPtop 10% scores of four nations: China, Germany, the United Kingdom and the United States. The p-values in the table each represent the probability of obtaining a difference in medians of PPtop 10% scores between two nations at least as great as that observed, assuming that the null hypothesis is in fact true. So the smaller the p-value, the more in favour we are of rejecting the null hypothesis. As we can see, the comparisons of China versus the United Kingdom or United States, and Germany versus the United Kingdom or United States, show that there is very strong evidence against the null hypothesis; i.e. that there is a genuine difference between the medians of the distributions of PPtop 10% scores. However, when considering the universities of China versus the universities of Germany, or the universities of the United Kingdom versus those of the United States, there is little evidence against the null hypothesis. In the example of the UK versus the US, whilst we may observe several extreme high-achievers for the US, there is little evidence to suggest that the average (median) PPtop 10% score differs significantly between the two nations’ universities.
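For readers who want to try this themselves, here is a minimal sketch of such a comparison (with invented PPtop 10% values rather than the actual CWTS data) using scipy's implementation of the Mann-Whitney U test:

```python
# Sketch: Mann-Whitney U test comparing two countries' PPtop 10% scores.
# The values below are invented for illustration, not the CWTS data.
from scipy.stats import mannwhitneyu

pp_top10_country_a = [0.12, 0.09, 0.15, 0.11, 0.10, 0.13]
pp_top10_country_b = [0.07, 0.08, 0.06, 0.09, 0.07, 0.10]

stat, p_value = mannwhitneyu(pp_top10_country_a, pp_top10_country_b,
                             alternative="two-sided")
print(f"U = {stat}, p = {p_value:.4f}")
# A small p-value is evidence against the null hypothesis that the two samples
# come from identically distributed populations.
```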
With the plethora of data from CWTS, one can spend a great deal of time conducting further analyses and comparing universities on all sorts of grounds. Perhaps the nicest thing of all, regardless of the analysis one conducts, is having an academic league table based purely on objective bibliometric data, rather than tables that use heavily subjective measures or arbitrary weighting systems for certain assessment criteria.
References

The Leiden Ranking Webpage.
Data used in the Leiden Ranking research.
"UK fails to shine in citations league" - Times Higher Education Supplement, 22nd December 2011.
7 December 2011

Reworking the university rankings for a new higher education market

Now that tuition-sensitive students are buyers on the global market of higher education, a very useful element in the rankings would be price, says AnaMaria Dutceac Segesten.
Higher education, like so many other sectors, is currently undergoing change. Global trends demonstrate the increased "marketisation" of universities. This means, in some cases, that higher education institutions are seen as for-profit businesses, run on business logic.
A milder form of marketisation is the introduction of tuition fees in countries traditionally known for their open access to education (the United Kingdom and, for some specific student categories, the Netherlands, Denmark and Sweden, to mention a few). I am sure that the desire of those working for this transformation is to remedy the flaws in the current ways in which HE is structured: to make it more efficient, more responsive to the needs of society or, simply, more modern.
I discussed elsewhere the potential benefits of this era of transformation, for example through the expansion of open access practices. This moment of change is therefore to be embraced for all the opportunities it may latently hold.
At the same time, the transformation we are currently experiencing in academia may also be perceived as confusing, perplexing even, perhaps because we lack a definition of what we should aim for. The question "what is a good university?" is a legitimate one in this context, and one that many have tried their hand at answering. Among those who may claim to hold the key to the puzzle are those measuring the success of universities in the global arena through the creation of indexes and rankings.
There are several such measures circulating today, each with a specific methodology and therefore with (slightly) different results. I will not endeavor here to discuss comparatively the merits or the pitfalls of each of them. The most famous are the Times Higher Education Rankings, the Shanghai Jiao Tong University Rankings and the QS World University Rankings, which can be found on this network (while the Europeans are trying to push for their own index, U-Multirank). What these different measures share is an explosive potential that is activated every time a new ranking is made public, making them repeatedly the target of criticism. But what is the purpose of these global rankings? Do they give us a standard of a "good university" that all should attempt to follow?
According to the recent report Global University Rankings and their Impact made for the European University Association, the major benefits of the idea of measuring and comparing academic quality globally are that they foster accountability and that they scan the field for more information. At the same time, the report criticises the oversimplification of university quality and performance. In other words, the picture of the "best" university in the world that emerges from these rankings is not giving us a multifaceted and complex view.
I agree with this criticism and feel that here lies the most significant part of the problem. These rankings, as popular and widespread as they may be, give societies and universities themselves a distorted view. This slanted depiction may be the result of the specific measurement methodologies employed, in particular the preference in most cases for research productivity over teaching and learning achievements. The "best" universities are then those that produce the largest number of articles in the most prestigious publications, those that allow most freedom for research, those that offer the best facilities and most modern technologies to push forward our common knowledge.
This leaves aside, though, the other, equally significant, aspect of higher education, namely didactics. Universities must produce new knowledge, I completely agree, but they also must transmit knowledge and encourage the most suitable students to become part of this discovery process in their turn. University rankings tend to reveal a bias in favor of research over teaching, and the definition of the good university is therefore, to say the least, incomplete.
Moreover, the claim that many such ranking organisations are making is that they help students make better choices, serving as orientation guides in an increasingly complex world of opportunities. But if the rankings as a general rule lean towards the research power of universities, what is the direct use of this classification for an undergraduate student? For this person, a measure of university performance that emphasised the quality of teaching would be far more rewarding. If tuition-sensitive students are now buyers on the global market of higher education, a very useful element in the rankings would be price, spelling out which is the best and most expensive university in the world, or which is the best quality for the money.
If universities are sellers of certified knowledge then the university rankings may very well be a good guide for the prospective student who is out shopping. However, it is not where we should look for the definition of "a good university".
Dr Anamaria Dutceac Segesten, research fellow at the
Center for Modern European Studies, University of Copenhagen and co-founder of the University of Venus, a blog for Generation X females in HE.
7 December 2011

New university ranking aims for objectivity

By David Jobbins. A new university ranking seeks to use a sophisticated set of bibliometric indicators to rate scientific performance to establish the world's top 500 research universities.
The Leiden Ranking 2011/2012 aims to provide highly accurate measurements of the scientific impact of universities and of universities' involvement in scientific collaboration.
Of the top 20 universities, 18 are from the United States and two from Switzerland (École Polytechnique Fédérale de Lausanne and ETH Zurich).
The Massachusetts Institute of Technology heads the table, followed by Princeton with Harvard in third place. Cambridge is the top UK university in 31st place, with the Hong Kong University of Science and Technology the highest placed university outside the US and Europe at 58th.
The US has 127 universities in the 500, followed by Germany (39), the UK (36), China (31), Italy (25) and Japan (24).
The ranking, based on more than 25 years of bibliometric experience at the Centre for Science and Technology Studies (CWTS) at Leiden University in the Netherlands, is claimed to offer more advanced indicators of scientific impact and collaboration and uses a much more transparent methodology than other rankings.
Unlike the Times Higher Education or QS World University Rankings, it does not draw on reputational surveys or on data provided by the universities themselves, which it dismisses as subjective.
Among the improvements in the 2011-12 ranking are an impact indicator based on the proportion of top 10% publications, collaboration indicators based on geographical distances, fractional counting of collaborative publications, and the possibility of excluding non-English language publications.
"Comparing the impact of non-English language publications with the impact of publications written in English may not be considered fair," the compilers say. "Non-English language publications can be read only by a small part of the scientific community, and therefore these publications cannot be expected to receive similar numbers of citations as publications written in English."
The ranking also makes use of a statistical technique known as bootstrapping to smooth out variations in the data.
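As a rough illustration of the idea (not the actual CWTS procedure), a bootstrap resamples a university's publications with replacement many times and reports the spread of the recomputed indicator:

```python
# Rough illustration of bootstrapping a citation indicator; not the CWTS procedure.
# Resample one university's (invented) citation counts with replacement and look at
# the spread of the recomputed mean citation score.
import random

citations = [0, 1, 1, 2, 3, 5, 8, 12, 20, 40]  # invented citation counts
random.seed(0)

means = []
for _ in range(10_000):
    sample = [random.choice(citations) for _ in citations]  # resample with replacement
    means.append(sum(sample) / len(sample))

means.sort()
low, high = means[int(0.025 * len(means))], means[int(0.975 * len(means))]
print(f"mean citation score: {sum(citations)/len(citations):.1f}, "
      f"95% stability interval: [{low:.1f}, {high:.1f}]")
```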
Its reliance on bibliometrics places the Leiden Ranking closer to the Shanghai Jiao Tong University Academic Ranking of World Universities in approach. An exclusive focus on research metrics opens the ranking to the criticism of painting a partial picture of a university.
29 November 2011

Asia: How to soar up the world university rankings

Bertil Andersson, president of Nanyang Technological University in Singapore, is one of several non-Singaporean university leaders in the city-state. YOJANA SHARMA spoke to him as he headed for the QS-APPLE conference in Manila, on Singapore's attractions as a higher education hub, its willingness to import the best from the West, and whether Asian institutions might eventually overtake the West.
UWN: Why did you move from Europe to Singapore?
Andersson: I had been head of the European Science Foundation in Strasbourg and first came here as provost of Nanyang Technological University (NTU). At the time the media asked why I was leaving a prestigious job in Europe. My answer was: maybe in Europe we talk too much, in Singapore they act. Now I've been here in Singapore almost five years and that statement to a journalist, which then was partly a joke, I now believe is actually true. It's fantastically rewarding to work in the Singapore higher education system because it is doing a quantum leap. If you have ambition to do good academic research and education, you can do it here in Singapore. Unfortunately we cannot do as much in Europe.
UWN: Singapore sees building up its higher education system as a national mission. How do you fit into that mission if you are not Singaporean or even from Asia?
Andersson: Singapore is open for people coming here from all around the world, from other parts of Asia, from Europe and the United States, to contribute to building the knowledge society here. And Singapore is a small country - five million people - but it has such a huge ambition. We know that building academia in the UK, in Sweden or the United States, has been done over hundreds of years in a very systematic and slow way. But Singapore wants to do this very fast and the pace of change is very impressive. I lived in many countries before coming to Singapore: Sweden, the UK, France, Australia, Israel, and never has it been so easy to come and work as it is here, because the country is so open to international inputs and the culture is very global. We talk a lot about being international but here in Singapore we really feel that this is a global hub.
UWN: You are one of the keynote speakers at the QS-APPLE conference in Manila. What's the message you want to get across?
Andersson: I want to say how fast NTU has been able to change in the last five years. Five years ago NTU was not a research-intensive university. Today it is very high ranking in the QS rankings and we made a very big leap this year into 58th position, and we have been emulated by others with evaluation teams coming and saying NTU must be one of the fastest developing universities in the world. What I want to do in Manila is give a picture of how this happened and what we have done in a very systematic way: how we've been able to get a lot of money from the Singapore government, how we have invested that in very interesting research areas, how we've been able to recruit really top-notch scientists from all over the world, how we've been strategising, how we are setting up a new medical school, how we are changing the education for our students, how we work with industry. But I also want to say: are the international university rankings systems really fit for these fast-moving universities? I feel the rankings are very much aligned to the established universities.
UWN: What does it take to move as fast as NTU?
Andersson: You must have governmental support and I think you need determination. I always say you have to walk the talk and change really means change. Even as provost before I became president, and the previous president running NTU, we were determined to change NTU. We knew we had to be a university for the coming century, and not for the past century. And so we have really looked under every stone at the university and said, 'how can we modernise this, how can we change that'? This is what we have been doing and many people who come here are amazed that we could do so much in a short time. But I think if you are determined, and you have the resources, you can do it fast. It is also very encouraging. If you look at Europe you would say academics are so slow, they can never change. On the other hand, we have shown that academia can move fast, it's not only the private sector that can move fast. The public sector and universities can also move fast, so I'm quite proud about that.
UWN: Was there a model you could follow in this modernisation?
Andersson: I don't think we have followed any special university. What partly we have done is we have introduced the best international practice. I have been shopping from the best universities in the UK, the best universities in the US, and also from my Swedish and European experiences. I had experience from all around the world so I had the opportunity to make a synthesis of this and take the best from each system.
UWN: If you are bringing in so much talent from the West, are countries like Singapore following something that they can't do better? Is there a major indigenous strength that is not copied from the West?
Andersson: It has to do with culture. In Sweden we say we should do it our way, we know best. But in Singapore they don't say we know best, but what they do is take good examples from everywhere and learn from that and then move fast. That I think is the strength of the system. Not only copying but being inspired by others and then remolding. I'm looking at Malaysia through the window, it's very close [to Singapore]. Malaysia and other countries in the region have not come that far in terms of economy and development as Singapore. But many of these countries, like Malaysia and Vietnam, are doing quite well at the moment and I think that actually they are inspired by Singapore. I've also been invited to give talks in India because many politicians and academics in India are quite inspired by the progress of NTU and they say: 'Look, here is an Asian university that can move very fast, we also want to move fast and learn from these Asian universities in Singapore how to move fast'. I get many invitations to give such talks. I can't accept all of them but this is a trend.
UWN: How can Singapore's education system become great when it is such a small system, compared to much larger countries like China and India?
Andersson: More Singaporeans need to do higher education and go into research, but Singapore is open to recruit top talent that is willing to work here. One should never confuse the issue, that small countries cannot make an impact. The Financial Times recently had a ranking of the three smartest economies in the world. Number one is Switzerland, number two is Singapore and number three is Sweden. These are all countries (with a population) below 10 million. We can make a much bigger impact. Finland may be another country in that category. So we see that small, smart countries can have an impact far bigger than their size.
UWN: Will ASEAN (Association of South East Asian Nations) harmonisation lead to a movement of Asia's talent to Singapore?
Andersson: That has already started. We used to have from the UK a brain drain to the US. But I can see already today how this is changing. We have been looking at a bipartite movement of brains to the US. But what we are seeing today is a tripartite movement or brain circulation between the US, Europe and Asia. One of the success stories for NTU and Singapore as a whole is how we have been able to attract a lot of top researchers from all over the world. When I talk about this at home [in Sweden] they tend to think it's the guys who did not make it in Europe, who did not get tenure, and who cannot do anything else so they go to Singapore. That's absolutely wrong, it's the crème de la crème that comes to Singapore because they see the opportunities.
Maybe scientists who are very ambitious, maybe they went to Europe and the US in the 1970s and 1980s. Today people with the same mindset are going to Singapore and Hong Kong. Instead of moving west, they are moving east. The same with students, so many students want to come to NTU and many governments want to send their students to Singapore because they realise that the future will be Asia-dominant. Many leaders want to send their university students who are going to be the leaders of tomorrow so that from an early age they can build an Asia network like my generation went to the US to create a US network.
UWN: With such rapid change, will we soon see Asian universities overtake Western universities in research?
Andersson: It's only a matter of time. Asia has only been on the research map for 10, maybe 15 years, with the exception of Japan. If you build a Google or Microsoft company it can go very fast in the business world. In the academic world things happen much more slowly and it's an evolution rather than revolution.
That's what I'm saying about NTU. We have jumped up to place 55 in the rankings in just five years and that's an enormous achievement. I would predict in 10 to 15 years we will see several Asian universities in the top 20 ranks. I hope NTU will be one of them.
20 November 2011

IREG-6 - Academic Rankings and Advancement of Higher Education

IREG-6 Conference: Academic Rankings and Advancement of Higher Education - Lessons from Asia and Other Regions
Background and context

The IREG-6 Conference will be an important venue for representatives of ranking organizations, experts on quality assurance and academic excellence, as well as stakeholders and interested parties, to meet and discuss various topics concerning academic rankings and other types of assessment of the performance of higher education institutions.
On previous occasions, IREG conferences met in Warsaw, Washington, Berlin, Shanghai, Astana, and again in Berlin, demonstrating the growing interest in, and relevance of, the relatively new phenomenon of academic rankings. As indicated by the title of the meeting, and without neglecting relevant developments in other regions, the IREG-6 conference will concentrate on higher education in the Asian region. This region has made tremendous progress in expanding access to higher education and, as Richard C. Levin, President of Yale University, observes:
“The leading countries of Asia are focused on an even more challenging goal: building universities that can compete with the finest in the world. The governments of China, India, Singapore and South Korea are explicitly seeking to elevate some of their universities to this exalted status because they recognise the important role that university-based scientific research has played in driving economic growth in the United States, Europe and Japan.”
It is also a region in which university rankings have found acceptance and become important for governments, universities, students and other stakeholders. The reason seems to be that rankings, with all their limitations, are perceived as a “mirror” of performance. As in other parts of the world, Asian countries hope that a policy of concentrating funding will lead to the creation of several top-ranked institutions. In fact, there has been continuous debate over the direct and indirect effects of these policies. Reflections of these discussions will surely resonate during this meeting.
Like the previous meetings, IREG-6 will discuss new developments in university rankings with special attention to their reliability and quality enhancement. In April 2011, the IREG Observatory on Ranking and Excellence adopted the rules and procedures that will be used in assessing the quality of rankings. The purpose of an audit, conducted by independent academic teams, will be to verify whether a ranking under review was done professionally and observes good practices, providing students, their parents and employers with information allowing them to compare and assess programs offered by higher education institutions (more information can be accessed at www.ireg-observatory.org). At present, the audit process is ongoing, but it is already starting to have an impact on the development of existing ranking systems, successfully driving rankers to examine themselves against these principles.
This two-day conference will provide participants not only with a good insight into recent developments in academic rankings but also with an opportunity to interact directly with the rankers behind leading international and national rankings, researchers, university leaders, policy makers and other stakeholders from various regions, and to discuss major developments related directly and indirectly to higher education. Those who are interested are invited to take part in the event and are advised to register early due to limited availability.
Last but not least, the IREG-6 Conference and optional post-conference programs will provide a great opportunity to experience the beauty, dynamism, history, exquisite cuisine and hospitality of Taiwan and its people.
See also on the blog: IREG-5: National University Rankings on the Rise, Rapport sur les classements mondiaux d'universités et leur impact, Examining The World Bank’s Papers on Higher Education Since 1994, IREG-Ranking Audit: Purpose, Criteria and Procedure, Les classements d'universités pointés du doigt, IREG Ranking Audit Rules adopted, Conference on university rankings, IREG-5:The Academic Rankings: From Popularity to Reliability and Relevance.
16 October 2011

Why do we bother so much with rankings?

By Mari Elken in Higher Education News. A couple of days ago, the new Times Higher Education ranking was published. The yearly launches of the various rankings appear to have become newsworthy events that always attract a great deal of attention, with the Times ranking describing itself as “the global authority on higher education performance”. Indeed. Why do we follow them so much, and on this note, why am I writing this to start with?
In national contexts, the launches of the various rankings are followed by dissatisfied gasps from those who have lost some of their position, and the contentment of those who have survived yet another round or even improved their position (and the joy of those finally making it onto the distinguished list).
While there seems to be widespread agreement that the popular rankings are not an adequate measure of overall university quality, everyone still appears to check the rankings once they are published. It is almost like bad television that no one admits to watching, yet which everyone somehow knows the content of and which draws generally high viewer ratings. So if they are so flawed, why do we bother so much with rankings?
Of course we can turn the argument into one about global competition and the need for more transparency and information, and for this to be about accountability - but given the small number of universities actually ranked in most of the top rankings, the various methodological issues reported in research, and the skewness of measurement in terms of available indicators - what do they really tell us about higher education as such?
So far, it seems that one of the consequences has been a general understanding that the US higher education system leads the world. Indeed, a large share of the top universities according to most rankings come from the US. Of course the first basic question is whether a few elite institutions represent a whole system, but in terms of whole systems, the newest Times ranking in fact also did a “value for money” analysis, and there the UK and Switzerland came out as the leaders, with the US in 16th place. Well, one might almost ponder why they only now came up with this ‘value for money’ analysis, given that it has been such a focus in all other debates.
There seems to be quite widespread agreement in the research and academic community that rankings only show certain aspects of quality, depending on methodological choices and, in most cases, the availability of indicators (which obviously cannot measure everything). This does not have to be a problem as such, and even has value, as long as one is clear that this is what the rankings really are doing. However, the way they are promoted, this is not so clear. The promotional video for the new Times ranking explicitly argues that they can in fact measure “university greatness”. With a jolly melody and a light, popular-media-inspired presentation, it will surely appeal to a number of people. Wanna know the top ten greatest institutions? Simple.
Well, we argue that the rankings do not measure it all. So what do I do? I go check the ranking of my own institution once a new ranking is being published. In part because I know that this can have some consequences for certain decisions further on, in part due to sheer curiosity. There is this intrinsic appeal to see things neatly categorised, in some sort of hierarchy – even when we know that this cannot ever represent the messy reality and that they can at best show a little aspect of reality.
One cannot underestimate the symbolic value of these rankings in practice. With the multitude of rankings around at this point, surely everyone can find one where their particular institution does well. How about rankings guiding some policy decisions (for example, they have arguably led to mergers in some countries)? While care has to be shown in assuming causal links without solid empirical evidence, if rankings with all their faults and flaws are even to some degree informing policy decisions of higher education as a whole – there might be a problem.
What is the solution – better rankings? It has now been steadily argued that rankings are here to stay, so various suggestions have emerged. Perhaps. While it is difficult to argue with the suggestion that they are indeed here to stay (there are too many vested interests in keeping this wagon going now) – the question nevertheless is what we should use the rankings for and how. Thus, it becomes extremely important to be clear on the specific purposes of each ranking and their use in practice.  To what extent should they guide decisions on institutional strategies? About national policy?
If they are used semi-consciously as indicators of overall success, without reflection on their actual content and implications, it might just be that universities end up well trained in jumping through the hoops of specific performance indicators, while the real focus on quality and improvement of performance (and what sort of performance do we talk about, anyway?) gets forgotten somewhere along the way. Yes, I know I am not saying anything scandalously new – but we have to keep on repeating this again and again, just in case.
16 October 2011

Despite ranking changes, questions persist

By Richard Holmes*. The international university ranking scene is starting to look like the heavyweight boxing division. Titles are proliferating and there is no longer an undisputed champion of the world. Caltech has just been crowned top university by Times Higher Education and Thomson Reuters, their data collectors and analysts, while QS have put Cambridge in first place. Over at Webometrics, MIT holds the number one spot. But Harvard has the consolation of remaining the top university in the Scimago and HEEACT rankings as well as the Academic Ranking of World Universities, ARWU, published by Shanghai Jiao Tong University.
Does this discredit the entire idea of rankings? Not necessarily. We all have different ideas of what a university is about and there is no reason why university rankings should be unanimous about what makes a great or even a good university. The Shanghai rankers are concerned with the natural sciences, with research and with distinctions among the world's research elite. ARWU is driven ultimately by the need to emulate the West and beat it at its own game. It also measures output rather than input. The QS rankings emphasise reputation rather than bibliometrics and are unique in including an assessment of graduate quality by employers.
The Times Higher Education (THE) World University Rankings 2011-12 are now unashamedly focused on the world's elite and have little to say about teaching quality. But unlike the Shanghai rankings, they do make an attempt to measure expertise in the arts and humanities and to give due weight to the social sciences. Last year's THE rankings were greeted with astonishment when they showed mediocre universities getting unbelievably high scores, in many cases mainly because of an apparently outstanding performance for research impact. Alexandria University was the most obvious case, but it was not the only one.
Thomson Reuters have gone to considerable lengths to ensure that similar anomalies did not occur this year. In addition, they have tweaked the relative weighting given to the various indicators and introduced several methodological refinements. One of these is extending normalisation by field to yet more indicators, not just citations. Dozens of universities have signed up for the first time, the University of Texas at Austin and the Hebrew University of Jerusalem being the best known. On top of all this, there is the unpredictable effect of exchange rate fluctuations on those indicators that involve university income.
The result of all this is that the 2011 rankings are very different from those of 2010. Any attempt to compare performance over the two rankings is pointless. That includes rather strained attempts to claim that Irish universities are collapsing because academics are voting them down in the reputational surveys in response to budget cuts. There are so many changes that it is extremely difficult to determine exactly what contributed to the rise or fall of any particular university. It is, however, noticeable that changes in the research and teaching indicators often go in the same direction, suggesting that fluctuations in the academic survey, which features in both sets of indicators, may have been at least partly responsible. These fluctuations might be a by-product of the introduction of a logarithm in calculating the scores for the teaching and research surveys.
The rankings that came out on 6 October did not have the obvious absurdities of last year. Alexandria, Bilkent and Hong Kong Baptist University are way down, although they probably did not go down far enough. They still have improbably high scores for the citations indicator, but perhaps not outrageously so. There have been some remarkable changes since last year, however. Some universities have, despite the presence of new competitors, risen dramatically. Many of them are in Europe, although there are also a few American state universities, such as Penn State, the University of Minnesota and the University of California at Davis.
Dutch universities seem to have done particularly well and Irish ones badly, along with two of the French grandes écoles. This could lead to public criticism of the sort that undermined the rankings produced by THE and QS until 2009. Once we venture outside the top 100 or so there are quite a few oddities that will be regarded as suspicious by those familiar with local higher education.
Alexandria is in 330th place, but not Cairo University. Bogazici University in Istanbul is there and so is Istanbul Technical University, but where is the University of Istanbul? Sharif University of Technology in Tehran is in 346th place, but what about the University of Tehran? The Indian Institute of Technology (IIT) Bombay is 302nd but none of the other IITs or the Indian institutes of management or science can be found. Thailand's Mahidol is in the rankings, but not Chulalongkorn.
I expect many observers will be baffled by the appearance of the National Taiwan Ocean University, Plymouth University, the National University of Ireland, Maynooth, the University of Crete, the University of Iceland, Georgia Health Sciences University and the University of Medicine and Dentistry of New Jersey among the world's best 400 universities. Creighton University, Nebraska, in 247th place, is a worthy institution. A Jesuit-run school, it offers great value for money, according to US News, and is among the top masters colleges in the US. But masters colleges offer few or no doctoral programmes so one wonders how it could do so well in a ranking that is supposedly concerned with evaluating the world's elite research universities.
Last year, THE claimed that only vested interests that had suffered from the new ranking methodology had any reason to complain. It looks as though there will be more complaints this year from another set of institutions. These rankings continue to emphasise citations and to measure them by only one indicator, which accounts for 30% of the total weighting, down a bit from last year. Research impact is assessed only by the number of citations per paper normalised by year and by field.
In principle, this seems fair enough. A few citations would be much more of an achievement in applied maths or philosophy than in medicine where even routine papers are cited frequently and often within months of publication. It seems only fair that academic authors should be assessed against the standards prevailing in their discipline. But we can ask whether all disciplines are equal. Does education really make the same cognitive demands as physics? Has sociolinguistics been of as much benefit to society as oncology or engineering? The implication of THE's choice of method is that the answer is in the affirmative, but not everyone will agree.
Another problem with normalisation is that the finer the distinctions that are made, the smaller the absolute numbers involved and the greater the probability that small fluctuations in data can have disproportionate effects. It appears that THE has overcome this problem to some extent with a few methodological fixes, but this does not solve it entirely. THE and Thomson Reuters have some hard decisions in front of them. To make further methodological changes accompanied by huge falls and rises could discredit the rankings as much as such changes sullied the public perception of the THES-QS rankings in their early days.
But even if they keep exactly the same methodology, there is still potential for further instability if universities direct their research funding and publication efforts to those fields that yield greater benefits in the rankings, if currency fluctuations lead to wild swings in the income-based indicators or streamlining or mergers in response to financial problems affect the indicators scaled by numbers of staff.
Above all, THE and Thomson Reuters will have to deal with continued scepticism about the weighting given to citations, their refusal to consider alternative ways of assessing research impact and the potential for gaming this indicator.
* Richard Holmes teaches at Universiti Teknologi MARA in Malaysia, and is the author of the blog University Ranking Watch.
10 October 2011

UT reaches "bubbling under" list in THE University Rankings

Although not yet in the top 200 institutions, Estonia, the Czech Republic, Poland, Turkey, Iran and India all show promise in the "bubbling under" section just outside the top 200. This puts the University of Tartu (UT) in the top 3% of the world's best universities list for the first time. The annual rankings, which are the most sophisticated and carefully calibrated rankings ever published, provide a definitive list of the world's top 200 universities.
While the top 200 is dominated by US and British institutions, there are many countries with institutions that are ‘bubbling under’, sitting in bands just outside the elite 200. With higher scores in two or three areas they could see themselves in the top 200 in the coming years.
Estonia is one such country: the University of Tartu (placed in the 350-400 band), together with institutions from the Czech Republic and Poland, makes up the Eastern European contingent in the "bubbling under" section. These institutions appear to be the most promising candidates to achieve world-class status in the region.
The data, which are supplied by Thomson Reuters, judge universities on 13 performance indicators, making these the only world rankings to examine all core missions of a modern global university - research, teaching, knowledge transfer and international activity. They include the world’s largest academic reputation survey and an analysis of 50 million citations which are compared with the world average from the same field.
This year’s methodology has been slightly refined to ensure that universities with particular strength in the arts, humanities and social sciences are placed on a more equal footing with those with a speciality in science subjects, which in the past may have been given an artificial boost as they tend to attract more funding. Such sophisticated methodology has established the Times Higher Education World University Rankings as the most respected and cited rankings system amongst universities worldwide.
It is this levelling of the playing field that has led Oxford, with its arts bias, to move ahead of Cambridge, which is particularly known for its natural science faculties.