Formation Continue du Supérieur
23 August 2011

Matters of refinement

By Phil Baty. Phil Baty discusses the changes that will make the 2011-12 rankings even more accurate. The most important performance indicator in the Times Higher Education World University Rankings is the one that uses journal article citations to evaluate “research influence”. For the forthcoming 2011-12 rankings, our data partners Thomson Reuters looked at about 50 million citations to more than six million papers published over a five-year period.
We are satisfied that across a university the number of citations that peer-reviewed journal papers receive from other scholars provides a robust and widely accepted indication of the significance and relevance of research. Thomson Reuters, which owns the citations database used, performs sophisticated analyses to ensure the data are properly normalised to take into account the differences in publication habits and hence citation levels between different fields. This ensures that all universities are treated fairly.
So we are happy that this performance indicator receives the highest weighting of the 13 employed by the rankings (it was worth just under a third of total scores last year). But of course, it is not without controversy. Some object to the reliance on citations data in principle; others have more specific objections to how the data are analysed. The biggest concern with the indicator last year centred on the influence of exceptionally highly cited papers on the overall performance of smaller universities. Exceptionally high “research influence” scores for Alexandria University in particular caught the eye, and helped it to do well in the rankings. It was not alone.
We drew attention to such anomalies in the interests of transparency and to open a debate on potential improvements for 2011-12. I’m delighted to say that the debate was highly productive, and we can now confirm that we have been able to refine the way we examine the citations data to address these concerns. For this year’s rankings, the indicator examines citations to papers published in all indexed journals over a five-year window (2005-2009). An important change is that we have extended the period within which citations to those papers are counted by an additional year, to include 2010.
Citations usually take time to accumulate, but some exceptional papers can pick up a high volume in the year of publication. This often means that, when benchmarked against other papers from the same year and subject, they become extreme statistical outliers. At a small institution with a relatively low volume of publications, being affiliated with such papers can push up the overall research influence score disproportionately. The additional year will help to reduce the disproportionate impact of such papers.
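To make the outlier problem concrete, here is a minimal Python sketch of the kind of field- and year-normalisation described above. The baseline value, paper counts and citation figures are invented, and the calculation is a toy version rather than Thomson Reuters’ actual method; it simply shows why one exceptionally cited paper can dominate the average of a small institution while being diluted at a larger one.

```python
# Toy illustration (not Thomson Reuters' actual method) of why one
# exceptionally cited paper can dominate a small institution's score.
from statistics import mean

# Invented world baseline: expected citations per paper by (field, year).
EXPECTED = {("mathematics", 2009): 4.0}

def normalised_impact(papers):
    """Mean of citations divided by the field-and-year expectation."""
    return mean(cites / EXPECTED[(field, year)] for field, year, cites in papers)

# Small institution: 9 ordinary papers plus one extreme outlier.
small = [("mathematics", 2009, 4)] * 9 + [("mathematics", 2009, 400)]
# Larger institution: 199 ordinary papers plus the same outlier.
large = [("mathematics", 2009, 4)] * 199 + [("mathematics", 2009, 400)]

print(normalised_impact(small))  # 10.9 -> the outlier dominates the mean
print(normalised_impact(large))  # about 1.5 -> the same outlier is diluted
```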
In a further move to reduce the outlier effect, we have also raised the minimum publication threshold below which institutions are excluded from the rankings. For the 2011-12 tables, only universities that have published at least 200 research papers a year (up from 50) are included. Another area for improvement concerns how we moderate the research influence score to take into account institutional location. Last year, on the advice of our expert advisers, we sought to acknowledge excellence in research among institutions in developing nations with less-established research networks and lower innate citation rates.
To achieve this we applied a regional modification to the data. Simon Pratt, project manager of Institutional Research at Thomson Reuters, which collects and analyses the data, said: “While this was effective in identifying regionally excellent research, the approach unduly favoured those countries with developing economies and a focus on applied science. This year we have improved the regional modification to take into account the subject mix of the country. “The result is that some institutions in countries with a focus on subjects with low citation rates, such as engineering and technology, will still have their citation impact raised by the modification, but less than last year. Correspondingly, some institutions in countries focusing on highly cited subjects, such as medical and biological sciences, may find that the regional modification will lower their citation impact to a lesser extent.”
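As a rough illustration of what “taking into account the subject mix of the country” can mean, the sketch below compares a country’s citation impact with the impact its own subject mix would predict, rather than with a flat world average. The field baselines, subject mixes and impact figures are invented, and this is only one plausible reading of the approach Pratt describes, not the formula Thomson Reuters actually uses.

```python
# Hypothetical sketch of a subject-mix-aware regional modification.
# Field baselines, subject mixes and impact figures are invented; this is
# one plausible reading of the idea, not Thomson Reuters' formula.

WORLD_BASELINE = {"engineering": 3.0, "medicine": 12.0}  # citations per paper

def regional_factor(subject_mix, country_impact):
    """Boost (>1) or dampen (<1) a country's scores by comparing its
    citation impact with what its own subject mix would predict."""
    predicted = sum(share * WORLD_BASELINE[field]
                    for field, share in subject_mix.items())
    return predicted / country_impact

# Engineering-heavy country with low raw citation rates:
print(regional_factor({"engineering": 0.8, "medicine": 0.2}, 3.5))   # 1.37...
# Medicine-heavy country with high raw citation rates:
print(regional_factor({"engineering": 0.2, "medicine": 0.8}, 9.0))   # 1.13...
# A flat world average (say 7.5) would have boosted the first country far more:
print(7.5 / 3.5)                                                     # 2.14...
```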
These refinements have been made after detailed consultation and careful consideration. They mean that direct comparisons with last year must be made with caution. But they also mean that the 2011-12 World University Rankings will be the most sophisticated, and carefully calibrated, ever published. Reference: Phil Baty is editor, Times Higher Education World University Rankings.
22 August 2011

University rankings

On Thursday 15 September 2011, the CPU will hold its first seminar of the 2011-2012 year on the theme of university rankings. Conference room, Conférence des présidents d’université, 103 boulevard Saint-Michel, 75005 Paris (RER Luxembourg).
The question of national and international university rankings looms ever larger in the policy-making of universities and governments, and in the media. The purpose of this seminar is to take stock of what rankings really are. Why rankings? What explains their success? How are they constructed? What are their strengths and weaknesses? What can they be used for? The seminar will also be the occasion for a presentation of the U-Multirank project (Multi-dimensional Global Ranking of Universities), the European, multi-criteria ranking of the world’s universities. See the preliminary programme.
Preliminary programme
14:30: Opening of the seminar by Louis Vogel, president of the CPU, and presentation of the seminar’s objectives by Nadine Lavignotte and Jean-Pierre Finance, co-heads of the CPU’s Quality, Evaluation and Rankings Committee.
14:40: World university rankings: concepts, political and cultural contexts, by Jamil Salmi, in charge of higher education at the World Bank.
15:15: What are rankings really? How are they used? Presentation of the EUA report “Global University Rankings and their Impact” by Jean-Pierre Finance.
16:00: The prospects offered by the multidimensional approach: the U-Multirank project. Introduction by Jean-Richard Cytermann, president of the OST (Observatoire des Sciences et Techniques). Presentation of the U-Multirank project by Ghislaine Filliatreau, director of the OST.
16:50: Closing of the seminar by Nadine Lavignotte and Jean-Pierre Finance, co-heads of the CPU’s Quality, Evaluation and Rankings Committee.
17 August 2011

Shanghai rankings reshuffled, Middle East up

University World News. There are few changes in the upper echelons of the 2011 Academic Ranking of World Universities, published on Monday by Shanghai Jiao Tong University, with the same eight American and two British universities making the top 10. But the ranking reports "remarkable" progress by institutions in the Middle East.
The ninth global ranking from the Center for World-Class Universities places Harvard again at the top. There was some reshuffling among the next three places, with Stanford second (up from third in 2010), the Massachusetts Institute of Technology third (up from fourth) and the University of California, Berkeley fourth (down from second). The next six slots run as in 2010, with fifth place going to Cambridge, followed by the California Institute of Technology, Princeton, Columbia, the University of Chicago and Oxford. There are once again 17 American universities in the top 20, five of them Californian. The other three places go to the UK, with University College London squeezing out the University of Tokyo to come in at number 20.
Continental Europe's top institution is ETH Zurich (23) in Switzerland, followed by France's Paris-Sud (40) and Pierre and Marie Curie (41). Asia's highest-ranked institutions are Japan's University of Tokyo (21) and Kyoto University (27).
The Shanghai ranking uses six indicators: the number of alumni and staff winning Nobel Prizes and Fields Medals; number of highly cited researchers selected by Thomson Scientific; number of articles published in Nature and Science; number of articles in the Science Citation Index-Expanded and Social Sciences Citation Index; and per capita performance with respect to the size of an institution. These indicators make for little year-on-year change but the ranking's stability is respected. Still, this year three universities made it to the top 100 for the first time: Switzerland's University of Geneva (73), Australia's University of Queensland (88) and Germany's University of Frankfurt (100). Germany now has six universities in the top 100, and Switzerland and Australia four each.
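For readers who want to see how the six indicators translate into a single score, here is a small sketch of an ARWU-style composite. The weights are the ones the ranking publishes (10% alumni awards, 20% staff awards, 20% highly cited researchers, 20% Nature and Science papers, 20% indexed publications, 10% per capita performance); the indicator scores themselves are invented and assumed to be pre-scaled so that the top institution on each indicator gets 100.

```python
# Sketch of an ARWU-style composite score. Weights are the ranking's
# published ones; the indicator scores are invented and assumed to be
# pre-scaled so that the top institution on each indicator scores 100.

WEIGHTS = {"Alumni": 0.10, "Award": 0.20, "HiCi": 0.20,
           "N&S": 0.20, "PUB": 0.20, "PCP": 0.10}

def composite(scores):
    """Weighted sum of the six indicator scores."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

example = {"Alumni": 60.0, "Award": 70.0, "HiCi": 55.0,
           "N&S": 65.0, "PUB": 80.0, "PCP": 50.0}
print(round(composite(example), 1))  # 65.0 on the same 0-100 scale
```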
Also, 10 universities entered the top 500 for the first time including Malaysia's University of Malaya and Croatia's University of Zagreb. This year there are 42 countries with universities in the top 500. The number of Chinese universities rose to 35 in 2011, and three made it to the top 200 - National Taiwan University, the Chinese University of Hong Kong and Tsinghua University.
Perhaps the biggest news, though, is the "remarkable" progress of Middle East universities. King Saud University in Saudi Arabia appears for the first time in the top 300 institutions. Saudi Arabia's King Fahd University of Petroleum and Minerals, Turkey's Istanbul University and Iran's University of Teheran moved into the top 400. "Cairo University in Egypt is back in the top 500 after five years of staggering outside," said the Center in a statement. Africa now has four institutions in the top 500: Cairo, and South Africa's universities of Cape Town, the Witwatersrand and KwaZulu-Natal.

The Academic Ranking of World Universities 2011 also ranked the top 100 universities in five broad subject fields and in five selected subject fields. Harvard leads in four of the five broad fields; the top five universities in each are:
* Natural sciences and mathematics: Harvard, Berkeley, Princeton, Caltech and Cambridge.
* Engineering-technology and computer sciences: MIT, Stanford, California - Berkeley, Illinois at Urbana-Champaign, and Georgia Institute of Technology.
* Life and Agriculture Sciences: Harvard, MIT, California - San Francisco, Cambridge and Washington (Seattle).
* Clinical medicine and pharmacy: Harvard, California - San Francisco, Washington (Seattle), Johns Hopkins and Columbia.
* Social Sciences: Harvard, Chicago, MIT, Berkeley and Columbia.
The Academic Ranking of World Universities rates more than 1,000 universities worldwide but only publishes the list of the top 500. In January the Center for World-Class Universities kicked off the Global Research University Profile project, which will develop a database on around 1,200 global research universities. The data gathered, it says, will be used to design more indicators and users will be able to compare universities with a variety of indicators of their choice.
See also the blog category "Classement", in particular the following posts: The Futility of Ranking Academic Journals, Are rankings driving university elitism?, Do rankings promote trickle down knowledge?, « Hit-parade des universités: la France stagne au huitième rang du classement de Shanghai » et « Nous récoltons les fruits des efforts enclenchés dans l'enseignement supérieur », Les universités françaises à la peine dans le classement de Shanghai, New International Ranking System Has a DIY Twist, Les classements des chercheurs en question, Questions Abound as the College-Rankings Race Goes Global, International Group Announces Audit of University Rankings.
17 August 2011

The Futility of Ranking Academic Journals

The Chronicle of Higher Education. By Ian Wilhelm. The following is a guest post by Ellen Hazelkorn, vice president for research and enterprise and head of the Higher Education Policy Research Unit at the Dublin Institute of Technology. Her book Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (Palgrave Macmillan) was published in March.
Ranking academic journals is one of the more contentious aspects of research assessment, and a foundation stone for university rankings. Because people’s careers and aspirations are on the line, it was only a matter of time before someone challenged the findings. Their implications go far beyond recent events in Australia. Thomson Reuters ISI Web of Science, Elsevier’s Scopus, and Google Scholar have become dominant players in a rapidly expanding and lucrative global intelligence information business. The first has identified another opportunity, the Global Institute Profile Project: collecting institutional profile information, and then monetizing it by selling it back to the institutions for strategic planning purposes or on to third parties to underpin policy/decision-making or classification systems – similar to the way in which financial data was turned into a commodity by Bloomberg. The Times Higher Education (THE) has transformed itself from a purveyor of (objective) information about higher education into a promoter of global rankings. Along with Quacquarelli Symonds Ltd, THE organizes events around the world, marketing its deep knowledge of ranking methodologies to universities striving to be at the top of global rankings; there is even an iPhone app!
Ranking journals involves hierarchically categorizing scholarly journals according to their perceived quality. The ranking of scientific journals has been an implicit aspect of research assessment for years but has now become very explicit. Mind you, there is a critical issue about academic quality and productivity that the academy needs to respond to. Simply writing the occasional article is arguably not sufficient evidence of scholarship. In response to the perceived lack of clarity and/or reluctance by academe to provide evidence, the process has now become quite formalized in many countries. In addition to Australia, Denmark, Norway, France, Spain, U.K., and Sweden, amongst others, also assign points to different journals on the basis of citation impact or whether the influence and scope is local, national or worldwide. More recently the European Science Foundation has produced its next iteration of the European Reference Index for the Humanities. The practice benefits elite universities and their researchers who dominate such publications. Others claim the process aids visibility of newer disciplines – more likely, many others have grinned and endured it.
Quality can be a subjective measurement; just because the ranking exercise is conducted by groups of noteworthy academics, usually in private, doesn’t make it otherwise. Then there is the problem of the databases which hold only a proportion of the over 1.3 million articles published annually. The main beneficiaries are the physical, life, and medical sciences, due to their publishing habits. This means other important sources or publication formats, such as books and conference proceedings, contribution to international standards or policy reports, electronic formats or open source publications, etc., are all ignored. The Shanghai Academic Ranking of World Universities, which has become the gold-standard used by governments around the world, gives bonus points to Nature or Science – but on what basis?
Nationally relevant research also loses out; usually this criticism refers to the humanities or social sciences, but it is equally relevant to the “hard” sciences. I was reminded of this fact when I met a group of women from developing countries pursuing their PhDs. They came from Pakistan, the Philippines, and Nigeria, and were working on problems of water quality, flood control, and crop fertility – goal-oriented research of real relevance to their communities – which means the language was not English and the publication outlet was nationally oriented. Faculty I interviewed in Japan during 2008 voiced similar concerns that international journals in English were more highly regarded than Japanese journals.
There is an over-reliance on peer-review as a measure of quality and impact. But, there may be many reasons for a high-citation count: the field may be very popular or the paper seriously questioned; neither means high quality. This problem accounted for the controversially high ranking of the University of Alexandria, Egypt, in the Times Higher Education World University Rankings 2010.
While academe has questioned the trend away from curiosity driven and towards application-focused research, there is a responsibility on publicly-financed research(ers). Yet, this is not what ranking journals measures. In other words, using policy’s own objectives, ranking journals simply measures what one academic has written and the other has read rather than its impact and benefit on/for society. Where is the evidence the research is helping resolve society’s major challenges or benefit students?
Governments have adopted this practice because it appears to be a scientific method for resource allocation. But, given all the questions about its methodology, it’s unlikely they could withstand legal scrutiny. The implications are likely to be long-term. There is already evidence it is leading to distortions in research focus and research management: encouraging academics to write journal articles rather than reflective books or policy papers, discouraging intellectual risk taking, favoring particular disciplines for resource allocation, and informing hiring and firing.
Rather than quantification as a measure of quality, an E.U. Expert Group recommended a combination of qualitative and quantitative methodologies. This is because journals, their editors, and their reviewers can be extremely conservative; they act as gatekeepers and can discourage intellectual risk-taking at a time when society worldwide needs more, not fewer, critical voices.
See also on this blog: Are rankings driving university elitism?, Do rankings promote trickle down knowledge? (by Ellen Hazelkorn), « Hit-parade des universités: la France stagne au huitième rang du classement de Shanghai » et « Nous récoltons les fruits des efforts enclenchés dans l'enseignement supérieur », Les universités françaises à la peine dans le classement de Shanghai, New International Ranking System Has a DIY Twist, Les classements des chercheurs en question, Questions Abound as the College-Rankings Race Goes Global (by Ellen Hazelkorn), International Group Announces Audit of University Rankings.
16 August 2011

"University hit parade: France stagnates in eighth place in the Shanghai ranking" and "We are reaping the fruits"

Les Echos. By Isabelle Ficek: "University hit parade: France stagnates in eighth place in the Shanghai ranking". Three French universities remain in the top 100 of the 2011 Shanghai ranking, but it is now Université Paris-Sud that takes the top French spot. The Chinese league table remains largely dominated by Anglo-Saxon institutions.
Eagerly awaited, much criticised, and much feared: the 2011 edition of the Shanghai ranking of world universities was unveiled this weekend. As last year, American universities monopolise the top of the table, taking 17 of the first 20 places, with Harvard still supreme in first place, followed by Stanford, which regains the second place it ceded to Berkeley in 2010; Berkeley is now fourth, behind the Massachusetts Institute of Technology (MIT). The United Kingdom holds its own, with Cambridge and Oxford in the top 10 and University College London at 20. France, for its part, has only three institutions among the world's top 100, as in previous years. And it is treading water in the top 500, in eighth place worldwide with 21 ranked institutions, against sixth place with 22 institutions last year, when the ranking still listed Aix-Marseille-1 and Aix-Marseille-2 separately; the 2011 edition takes their forthcoming merger into account.
While France's overall results are roughly stable, they hold a few surprises, starting with its best-ranked institutions. For the first time, Université Paris-Sud (Orsay, Paris-11) takes the French top spot (40th) and dethrones Université Pierre-et-Marie-Curie (UPMC, Paris-6), which loses two places and falls to 41st. This new lead appears to be owed to the 2010 Fields Medal awarded to the mathematician Ngo Bao Chau, who completed his doctoral thesis at Orsay.
Among the ranking's criteria is the number of Fields Medals or Nobel Prizes won by alumni (10% of the score) or by researchers (20%). This also largely explains Université Paris-Dauphine's leap from the top 400 to the top 300: the other French Fields Medallist of 2010, Cédric Villani, defended his doctorate at Dauphine. ENS Ulm remains third among French institutions but gains two places.
Finally, the groupings seem to pay off: Aix-Marseille climbs into the 102-150 band of the table, whereas Aix-Marseille-1 and Aix-Marseille-2 had been in the 201-300 and 301-400 bands. The same goes for the Université de Lorraine, whose merger, despite some turbulence, is under way: it reaches the top 300, whereas Nancy-1 had only been in the top 400. On the disappointing side, the École Polytechnique and ESPCI ParisTech slip from the top 300 to the top 400.
On Monday, in a scathing response, UPMC pointed out that it nonetheless retained 7th place worldwide, and 1st place in France, in mathematics, and that it was "the only French institution ranked (between 50th and 75th place) in engineering". In passing, it defended the university model, which offers "training and progressive selection", against that of the grandes écoles with "the most heavily subsidised students in France".
Caveats and criticisms

Still, however influential it may be, this ranking must be handled with care. The choice of its six criteria, which favour research, particularly in the hard sciences, at the expense of teaching, has been criticised ever since its creation in 2003. Last June, the European University Association (EUA) warned against the various world rankings, riddled with "flaws, shortcomings and other biases". Hence the impatience of the sector to see the European U-Multirank project emerge: a mapping of higher education that should take more institutions and more criteria into account, perhaps in 2013. The full Shanghai ranking is available on the official Shanghai Ranking website.
Interview by Isabelle Ficek. Laurent Wauquiez, Minister for Higher Education and Research: "We are reaping the fruits of the efforts undertaken in higher education."
What lessons do you draw from the 2011 ranking? We are reaping the fruits of the efforts undertaken: the investment, the quality of our teachers and researchers, and the structural reforms carried out with the LRU law, with tangible progress for France this year. The country was very poorly placed in the top 200, with six institutions in 2006; we are back up to eight this year. Our institutions have also moved up: Paris-Sud gains five places and the École Normale Supérieure two. This is all the more significant given that Germany and the United Kingdom are tending to slip back or stand still. One of the real novelties also stems from our policy of bringing universities together, which provides the critical mass needed to compete.
Meaning what? We created the research and higher education clusters, the PRES, and asked Shanghai to recognise them. At my request, the ranking's authors ran a simulation and propose to take the clusters into account in future if they pursue their mergers. The results are extraordinary. Four groupings could go straight into the top 50: the Saclay institutions, those of Paris Sciences et Lettres Étoile (ENS Ulm, Dauphine...), and the PRES Sorbonne Universités [Paris-2, 4 and 6 - Ed.] and Paris Cité [Paris-3, 5, 7 and 13 - Ed.]. Toulouse-3, for its part, has gained 53 places, from 278 to 225, a good example of progress in the less-watched reaches of the ranking. The reforms have allowed us to get back into the global race. If we keep going, we will make a significant leap.
Will you speed up the mergers?
That is obviously my approach. But it is now the university presidents themselves who are seizing this momentum, like Bordeaux, which has decided to bring forward its merger timetable. We encourage them. All the projects selected under the "grand emprunt" national investment programme reward such groupings. I hope this momentum will continue, notably on the Saclay plateau.
Are we not heading towards a two-speed system, with well-ranked clusters concentrating the resources?
I will watch this question very closely. What we are trying to build is not a handful of elitist clusters for a few students surrounded by a desert; it is to lift the whole of higher education towards excellence. The first results of the Initiatives d'excellence illustrate this, with Bordeaux and Strasbourg singled out. The same is true of La Rochelle, which, through partnerships forged with its industrial fabric around maritime professions, is carving out a niche of excellence without being Saclay. It is also the case in Clermont-Ferrand, which did well in the "investissements d'avenir" programme, notably in volcanology.
Do you support the European ranking project?
From the start of the new academic year, I will look with the European Commission at how to accelerate the U-Multirank project, which is an accurate mapping of European higher education, and at how to derive from that mapping, quickly, a European ranking that will make up for some of the limits of the Shanghai ranking.
Can the State keep up its effort on universities in 2012?
Higher education and research are a priority: we are preparing tomorrow's competitiveness and jobs. They receive very particular attention from the President and the Prime Minister. In a demanding budgetary context they must, like every other area, demonstrate their efforts and better management; the whole public sphere has to make an effort. Interview by I. F., Les Echos.
See also the blog post Les universités françaises à la peine dans le classement de Shanghai (15 August, below).

15 August 2011

French universities struggle in the Shanghai ranking

leParisien.fr. In the Shanghai ranking of the world's universities, France keeps three institutions in the top 100 and continues to lose ground in the top 500.
Every year the Shanghai ranking of universities is scrutinised by the presidents of the planet's most prestigious institutions, and every year France gets a slightly clearer measure of the gap separating it from its peers. France keeps three institutions in the top 100 and continues to lose ground in the top 500. This year, once again, the top places are monopolised by American and British universities. But the detractors of this veritable hit parade point to the criteria used: only research counts, through the number of publications in the major international journals (all Anglo-Saxon) and prizes (Nobel Prizes, the Fields Medal in mathematics), and not the quality of teaching, which is hard to quantify.
Critical mass

France suffers in this ranking partly because of the size of its universities, which are too small to compete on equal terms with the Anglo-Saxon heavyweights. It suffers all the more because French research is shared between the universities and the national research organisations, and the Shanghai method splits the points earned between a university and its associated research bodies.
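As a toy illustration of this point-splitting, the sketch below assumes each paper's credit is shared equally among its listed affiliations. The equal split is an assumption made for illustration, not the published Shanghai rule, but it shows how joint signatures with research organisations reduce what a university banks per paper.

```python
# Toy illustration of point-splitting between a university and its
# associated research organisations. The equal split among listed
# affiliations is an assumption for illustration, not the published rule.

def university_share(papers):
    """Each paper is worth 1 point, divided equally among its listed
    affiliations; return the points the university keeps."""
    return sum(1.0 / affiliations for affiliations in papers)

# 100 papers, every one co-signed with a national research organisation:
print(university_share([2] * 100))   # 50.0 points credited to the university
# The same 100 papers signed by the university alone:
print(university_share([1] * 100))   # 100.0 points
```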
One of the ambitions of the Fillon government and of Valérie Pécresse, the former higher education minister, through the university reform plan, was to encourage groupings so that French universities could reach critical mass. The Chinese "investigators" in fact met representatives of these new clusters in Paris during their visit from 23 to 30 July. The delegation was received by the heads of four clusters: the Université de Lorraine, Aix-Marseille Université, "Paris Sciences et Lettres" (PSL) and the Université de Bordeaux.
Harvard, Stanford, MIT: the winning trio

As in 2010, American universities dominate, taking 17 of the first 20 places in this world ranking of 500 universities, posted online on Sunday evening by Shanghai Jiao Tong University. Harvard remains number one, with Stanford regaining the second place it ceded last year to Berkeley, which is this time fourth behind the Massachusetts Institute of Technology (MIT). Three British universities feature at the top: Cambridge (5th) and Oxford (10th) in the top 10, joined by University College London in 20th place. The University of Tokyo loses one place and ranks 21st.
Paris-Sud Orsay (Paris XI) gains five places

The first French institution appears only in 40th place, and only three French institutions still figure in the top 100: Paris-Sud Orsay (Paris XI) in 40th place (up five), Pierre-et-Marie-Curie in 41st (down two) and the École Normale Supérieure (ENS Ulm) in 69th (up two).
In the top 500, the United States remains first with 151 institutions, followed by Germany (39) and the United Kingdom (37). With 35 institutions, against 22 last year, China gains two places and ranks 4th. Japan follows (23 institutions), then Canada and Italy tied (22), while France, with 21 institutions, slips from 6th to 8th place. It was 5th in 2009.
Three universities enter the top 100 for the first time since the ranking was created in 2003: Geneva (73rd), Queensland (88th) and Frankfurt (100th). Ten universities enter the top 500 for the first time, including Malaya (Malaysia) and Zagreb (Croatia).
Editor's note from the blog. Below are the 21 French universities in the ranking. In fact, the figure of 21 is debatable: Aix-Marseille University will only come into existence on 1 January 2012 and is currently still made up of three universities. The merger planned for 1 January 2012 has no doubt helped to improve the ranking of the future single University of Aix-Marseille. The two universities, the University of Provence (Aix-Marseille 1) and the University of the Mediterranean (Aix-Marseille 2), were in the 201-300 and 301-400 bands, whereas Aix-Marseille University is propelled into the 102-150 band, a big improvement. The Aix-Marseille PRES was not visited by Shanghai in vain. With the University of Nice Sophia Antipolis still ranked, the PACA region thus places four of its six universities in the table. The same applies to the Université de Lorraine, which does not yet exist and brings together four institutions: last year only Henri Poincaré University (Nancy 1) was ranked, in the 301-400 band, whereas the University of Lorraine moves into the 201-300 band. In effect, then, 27 universities are ranked, a clear improvement on last year, all the more so since the two PRES, Aix-Marseille University and the University of Lorraine, obtained a better ranking than their constituent universities did individually, while the University of Strasbourg, itself the result of the merger of the three former Strasbourg universities, stays where it was. Mergers and PRES appear to be good ways of climbing the Shanghai ranking.
40 University of Paris Sud (Paris 11).
41 Pierre and Marie Curie University - Paris 6.
69 Ecole Normale Superieure - Paris.
102-150 Aix-Marseille University.
102-150 University of Paris Diderot (Paris 7).
102-150 University of Strasbourg.
151-200 Joseph Fourier University (Grenoble 1).
151-200 University of Paris Descartes (Paris 5).
201-300 Claude Bernard University Lyon 1.
201-300 Paul Sabatier University (Toulouse 3).
201-300 University of Lorraine.
201-300 University of Montpellier 2.
201-300 University of Paris Dauphine (Paris 9).
301-400 Ecole Polytechnique.
301-400 Industrial Physics and Chemistry Higher Educational Institution - Paris.
301-400 University of Bordeaux 1.
301-400 University of Rennes 1.
401-500 Ecole National Superieure Mines - Paris.
401-500 Ecole Normale Superieure - Lyon.
401-500 University of Nice Sophia Antipolis.

401-500 University of Versailles.
Below are the 22 French universities that appeared in last year's 2010 ranking. Note that half of the universities of the PACA region were among them: the University of Provence (Aix-Marseille 1), the University of the Mediterranean (Aix-Marseille 2) and the University of Nice Sophia Antipolis.
39 Pierre and Marie Curie University - Paris 6.
45 University of Paris Sud (Paris 11).
71 Ecole Normale Superieure - Paris.
101-150 University of Paris Diderot (Paris 7).
101-150 University of Strasbourg.
151-200 Joseph Fourier University (Grenoble 1).
151-200 University of Paris Descartes (Paris 5).
201-300 Claude Bernard University Lyon 1.
201-300 Ecole Polytechnique.
201-300 Industrial Physics and Chemistry Higher Educational Institution - Paris.
201-300 Paul Sabatier University (Toulouse 3).
201-300 University of Montpellier 2.
201-300 University of the Mediterranean (Aix-Marseille 2).

301-400 Henri Poincare University (Nancy 1).
301-400 University of Bordeaux 1.
301-400 University of Nice Sophia Antipolis.

301-400 University of Paris Dauphine (Paris 9).
301-400 University of Provence (Aix-Marseille 1).

301-400 Ecole National Superieure Mines - Paris.
301-400 Ecole Normale Superieure - Lyon.
301-400 University of Rennes 1.
301-400 University of Versailles.
See also 4th International Conference on World-Class Universities - WCU-4 and Le PRES d'Aix-Marseille visité par Shangai. See the 2010 ranking and the 2009 ranking.


14 August 2011

Are rankings driving university elitism?

University World News. By Danny Byrne (Editor of TopUniversities.com). Ellen Hazelkorn's article "Do rankings promote trickle down knowledge?" makes an interesting case for the link between international university rankings and the concentration of resources by governments in a handful of elite institutions. There is no doubt that having the best universities and attracting the best minds is a common goal of governments around the world. But governments' reasons for harbouring this goal are bound up in forces much more powerful and far-reaching than the annual exercises that compare international university performance. The motivations driving nations to increase their participation rates, and those factors motivating them to create world-class universities, are very different. The former is driven primarily by an exponential increase in demand for skilled labour in a variety of industries.
Anthony P Carnevale and Stephen J Rose of Georgetown University's Center on Education and the Workforce have released a report, entitled The Undereducated American, which argues that in the US the rise in the number of graduates has long been smaller than the increase in the number of skilled jobs. An educated mass workforce is required to keep feeding economic expansion. This trend is even more pronounced in a country like China, in which the demand created by accelerated industrial development led to a quintupling of the university participation rate in the decade following 1998. However, the motivating factor driving what Hazelkorn calls 'elitist' policies of prioritising investment in a small number of research-intensive universities is the need to innovate. This is largely separate from the drive to expand participation.
The ring-fencing of selective, research-intensive institutions is a fairly uniform policy among primarily state-funded systems. China has its C9 League, Japan its Global 30, Australia its Group of Eight, Canada its Group of Thirteen, the UK its Russell Group, and France its grandes ecoles. All but one of these groups was established long before the advent of international university rankings. In reference to Asia, the economist Richard Levin has identified the creation of world-class universities as a secondary phase that follows expansion, and he gives two primary motivations for doing so: "First, these rapidly developing nations recognise the importance of university-based scientific research in driving economic growth, especially since the end of the Second World War," Levin has said.
"Second, world-class universities provide the ideal context for educating graduates for careers in science, industry, government and civil society, who have the intellectual breadth and critical thinking skills to innovate and to lead." Governments may rightly or wrongly be hedging their bets on the assumption that a concentration of the world's brightest people is likely to drive innovation and stimulate economic growth (what Hazelkorn calls the 'trickle down' theory). Time will presumably tell, though it doesn't seem to have worked out too badly for the US. But even if it is an incorrect assumption and they would be better advised to adopt a more egalitarian funding model, this is still not an argument against university rankings.
Ranking performance might be viewed as a symptom of the success or otherwise of government higher education policy. But to make the logical leap from this to inferring that rankings themselves are the cause of elitist policies is to create a straw man and thereby ignore the far bigger global economic forces at play. QS World University Rankings® are about empowering prospective students to make informed choices, not dictating long-term government economic policy. Of course, as they have become more established and generated huge levels of interest, compilers of rankings need to take responsibility for the kinds of incentives we are creating, particularly for individual upwardly mobile institutions.
There is no getting away from the fact that if they prioritise prescriptive measures too heavily, rankings run the risk of exerting an unhelpful influence on universities' strategic planning. In September last year The Chronicle of Higher Education argued that packing a ranking full of prescriptive measures risks creating perverse incentives. I would briefly argue that in the case of the QS, the way we avoid this scenario is by being comparatively non-prescriptive. Our QS World University Rankings use six clearly defined and mutually distinct indicators, and our emphasis on academic and employer views means that we avoid dictating a rigid model to which a university must adhere in order to be successful.
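To see what "six clearly defined and mutually distinct indicators" amounts to in practice, here is a small sketch of a QS-style weighted score using the weights QS published at the time (academic reputation 40%, employer reputation 10%, faculty-student ratio 20%, citations per faculty 20%, international faculty 5%, international students 5%). The indicator values below are invented and assumed to be already normalised to a 0-100 scale; this is an illustration of the structure, not QS's internal code.

```python
# Sketch of a QS-style weighted score built from six fixed indicators.
# Weights are those QS published for 2011; indicator values are invented
# and assumed to be already normalised to a 0-100 scale.

QS_WEIGHTS = {
    "academic_reputation": 0.40,
    "employer_reputation": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_faculty": 0.05,
    "international_students": 0.05,
}

def qs_score(indicators):
    """Weighted sum of the six normalised indicator scores."""
    return sum(QS_WEIGHTS[name] * indicators[name] for name in QS_WEIGHTS)

print(round(qs_score({
    "academic_reputation": 90, "employer_reputation": 85,
    "faculty_student_ratio": 70, "citations_per_faculty": 60,
    "international_faculty": 90, "international_students": 90,
}), 1))  # 79.5 on the same 0-100 scale
```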
Insofar as our rankings make use of qualitative data, our emphasis is on outcomes, in the form of recognised research excellence and the quality of graduates being produced, rather than the model that creates these outcomes. The job of rankings is to reflect existing excellence, not dictate the form that it should take. Of course, we realise that these six indicators do not cover every aspect of university performance. Our response to the demand for greater detail has been to create new, more targeted exercises rather than cramming more indicators into a given table - which in our view simply creates confusion.
This is why QS has this year for the first time released QS World University Rankings by Subject in 26 narrow disciplines, and why in the coming months we will be launching the QS Stars rating system. This will rate an unlimited number of universities in 30 areas, and is devised to overcome the obvious limitation that rankings can only assess a small, elite portion of the global higher education system. These innovations are unlikely to have any significant effect on long-term governmental policy or the global economy. But they might help students make more informed decisions about their study destinations.
See also: "L'élitisme républicain" and University Mergers Sweep Across Europe.
10 August 2011

IREG-Ranking Audit: Purpose, Criteria and Procedure

INTRODUCTION
Academic rankings are now an entrenched phenomenon around the world and are recognized both as a source of information and as a method of quality assessment. There is also empirical evidence that rankings influence individual decisions as well as institutional and system-level policy-making. Consequently, those who produce and publish rankings are increasingly aware that they put their reputation on the line if their ranking tables are not free of material errors or are not compiled with due attention to basic deontological procedures. In this context, an important initiative was undertaken by an ad hoc expert group, the International Ranking Expert Group (IREG), which in May 2006 produced a set of guidelines, the Berlin Principles on Ranking of Higher Education Institutions. Download the IREG Ranking Audit (PDF).
In October 2009, the IREG Observatory on Academic Ranking and Excellence ("IREG Observatory") was created on the basis of IREG. One of its main activities reflects a collective understanding of the importance of assessing the quality of its members' own work, namely rankings. The new IREG Ranking Audit initiative is based on the Berlin Principles and is expected to:
* enhance the transparency about rankings;
* give users of rankings a tool to identify trustworthy rankings; and
* improve the quality of rankings.  
Users of rankings (students and their parents, university leaders, academic staff, representatives of the corporate sector, national and international policy-makers) differ greatly in their knowledge of higher education, universities and appropriate ranking methodologies. In particular, the less informed groups (such as prospective students) do not have a deep understanding of the usefulness and limitations of rankings; an audit must therefore be a valid and robust evaluation. It will provide a quality stamp that is easy to understand, and rankings that receive a positive evaluation are entitled to use the quality label "IREG approved".
I.    STRUCTURE AND PROCEDURE

1. The ranking audit is the responsibility of the Executive Committee of the IREG Observatory [hereafter "the Executive Committee"]. The decision to approve a ranking is made by the Executive Committee by a simple majority of its members. Members of the Executive Committee do not take part in decisions about their own rankings. Approval decisions are reported to the General Assembly of the IREG Observatory. A list of approved rankings will be published on the IREG website.
2. Audits will be carried out by Audit Teams consisting of three to five members, nominated by the Executive Committee. The chair of an audit team must not be formally associated with an organisation that produces rankings. At least one member of each audit team has to be a member of the Executive Committee.
3.  The Audit Team prepares a written report which is submitted directly to the Executive Committee. 
4. Rankings in the field of higher education and research that have been published at least twice within the last four years are eligible for audit. If a ranking organisation produces several rankings based on the same basic methodology, they can be audited in a single review.
5. The level of audit fee is set by the decision of the Executive Committee.
6. Rankings that pass the audit are entitled to use the label "IREG approved". The label and the audit decision are valid for three years for a first audit and five years for a follow-up audit.
PROCEDURE

1. Information on the ranking. Ranking organisations that apply for the IREG Ranking Audit will be informed about the audit procedure and the criteria.
2. Self-report. As a first step, the audited ranking organisation produces a report based on a questionnaire, covering basic information about the ranking and the criteria set for the audit (cf. II). The self-report has to be delivered within two months.
3. Interaction between the Audit Team and the ranking organisation
      a. The Audit Team will respond to the self-report within six weeks with written questions and comments; it can request additional information and/or materials.
      b. The ranking organisation has to answer the additional questions within five to six weeks.
      c. An on-site visit to the ranking organisation is possible at the invitation of the ranking organisation, preferably after the additional questions have been sent to it.
4. Audit Report
      a. Based on the self-report and the interaction between the Audit Team and the ranking organisation, the team drafts an audit report within six weeks of the completion of that interaction. The Audit Report includes:
      ·  a description of the ranking (based on information provided in the Fact Sheet, see Appendix),
      ·  an evaluation of the ranking based on the IREG audit criteria, and
      ·  a suggestion on the audit decision (yes/no).
      b. The Audit Report is sent to the ranking organisation, which can submit a statement on the report within three weeks.
      c. The Audit Report is submitted to the Executive Committee, which verifies that the report applies the criteria for the ranking audit.
5. Decision by the Executive Committee. The Executive Committee decides on the approval of the ranking on the basis of the Audit Report delivered by the Audit Team and the statement on that report submitted by the audited ranking organisation. The decision is made by a simple majority of the members of the Executive Committee.
6. Publication. The audit decision and a summary report are published on the website of the IREG Observatory. Only positive audit decisions are made public. The detailed report can be made public by agreement between the IREG Observatory and the audited ranking organisation. The audit will not produce a ranking of rankings, and the audit scores will therefore not be published.
II.    CRITERIA
PURPOSE, TARGET GROUPS, BASIC APPROACH

Rankings are only one of a number of diverse approaches to the assessment of higher education inputs, processes, and outputs (see Berlin Principles, 1). This should be communicated by rankings.
Criterion 1: The purpose of the ranking and the (main) target groups should be made explicit. The ranking has to demonstrate that it is designed with due regard to its purpose (Berlin Principles, 2). This includes a model of indicators that refers to the purpose of the ranking. 
Criterion 2: Rankings should recognize the diversity of institutions and take the different missions and goals of institutions into account. Quality measures for research-oriented institutions, for example, are quite different from those that are appropriate for institutions that provide broad access to underserved communities (Berlin Principles, 3). The ranking has to be explicit about the type/profile of institutions that are included and those that are not.
Criterion 3: Rankings should specify the linguistic, cultural, economic, and historical contexts of the educational systems being ranked. International rankings in particular should be aware of possible biases and be precise about their objectives and data (Berlin Principles, 5). International rankings should adopt indicators with sufficient comparability across relevant nations.
METHODOLOGY
Criterion 4: Rankings should choose indicators according to their relevance and validity. The choice of data should be grounded in recognition of the ability of each measure to represent quality and academic and institutional strengths, and not the availability of data. Rankings should be clear about why measures were included and what they are meant to represent (see Berlin Principles, 7).
Criterion 5: The concept of quality of a higher education institution is multidimensional and multi-perspective, and "quality lies in the eye of the beholder". Good ranking practice is to combine the different perspectives provided by different stakeholders and data sources in order to get a more complete view of each higher education institution included in the ranking. Rankings have to avoid presenting data that reflect only one particular perspective on higher education institutions (e.g. employers only, students only). If a ranking refers to only one perspective/one data source, this limitation has to be made explicit.
Criterion 6: Rankings should measure outcomes in preference to inputs whenever possible. Data on inputs and processes are relevant as they reflect the general condition of a given establishment and are more frequently available.  Measures of outcomes provide a more accurate assessment of the standing and/or quality of a given institution or program, and compilers of rankings should ensure that an appropriate balance is achieved (see Berlin Principles, 8).     
Criterion 7: Rankings have to be transparent regarding the methodology used for creating the rankings. The choice of methods used to prepare rankings should be clear and unambiguous (see Berlin Principles, 6). It should also be indicated who establishes the methodology and whether it is externally evaluated. Rankings must provide clear definitions and operationalizations for each indicator, as well as the underlying data sources and the calculation of indicators from raw data. The methodology has to be publicly available to all users of the ranking as long as the ranking results are open to the public. In particular, methods of normalizing and standardizing indicators have to be explained with regard to their impact on raw indicators.
Criterion 8: If rankings use composite indicators, the weights of the individual indicators have to be published. Changes in weights over time should be limited and have to be justified by methodological or conceptual considerations. Institutional rankings have to make clear the methods of aggregating results for a whole institution. Institutional rankings should try to control for the effects of different field structures (e.g. specialized vs. comprehensive universities) in their aggregate results (see Berlin Principles, 6).
Criterion 9: Data used in the ranking must be obtained from authorized, audited and verifiable data sources and/or collected with proper procedures for professional data collection following the rules of empirical research (see Berlin Principles, 11 and 12). Procedures of data collection have to be made transparent, in particular with regard to survey data. Information on survey data has to include: source of data, method of data collection, response rates, and structure of the samples (such as geographical and/or occupational structure).
Criterion 10: Although rankings have to adapt to changes in higher education and should try to enhance their methods, the basic methodology should be kept as stable as possible. Changes in methodology should be based on methodological arguments and not be used as a means to produce different results than in previous years. Changes in methodology should be made transparent (see Berlin Principles, 9).
PUBLICATION AND PRESENTATION OF RESULTS

Rankings should provide users with a clear understanding of all of the factors used to develop a ranking, and offer them a choice in how rankings are displayed. This way, the users of rankings would have a better understanding of the indicators that are used to rank institutions or programs (see Berlin Principles, 15).
Criterion 11: The publication of a ranking has to be made available to users throughout the year, either in print and/or in an online version of the ranking.
Criterion 12: The publication has to deliver a description of the methods and indicators used in the ranking. That information should take into account the knowledge of the main target groups of the ranking.
Criterion 13: The publication of the ranking must provide scores of each individual indicator used to calculate a composite indicator in order to allow users to verify the calculation of ranking results. Composite indicators may not refer to indicators that are not published.
Criterion 14: Rankings should allow users to have some opportunity to make their own decisions about the relevance and weights of indicators (see Berlin Principles, 15).
TRANSPARENCY, RESPONSIVENESS

Accumulated experience with the degree of confidence in, and “popularity” of, rankings demonstrates that greater transparency leads to higher credibility.
Criterion 15: Rankings should be compiled in a way that eliminates or reduces errors, and be organized and published in a way that errors and faults can be corrected (see Berlin Principles, 16). This implies that such errors should be corrected within a ranking period, at least in the online publication of the ranking.
Criterion 16: Rankings have to be responsive to the higher education institutions included in or participating in the ranking. This involves giving explanations of methods and indicators as well as explanations of the results of individual institutions.
Criterion 17: Rankings have to provide a contact address in their publication (print, online version) to which users and institutions ranked can direct questions about the methodology, feedback on errors and general comments. They have to demonstrate that they respond to questions from users.
QUALITY ASSURANCE
Criterion 18: Rankings have to apply measures of quality assurance to the ranking processes themselves. These processes should take note of the expertise that is being applied to evaluate institutions and use this knowledge to evaluate the ranking itself (see Berlin Principles, 13).
Criterion 19: Rankings have to document the internal processes of quality assurance. This documentation has to refer to processes of organising the ranking and data collection as well as to the quality of data and indicators.
Criterion 20: Rankings should apply organisational measures that enhance the credibility of rankings. These measures could include advisory or even supervisory bodies, preferably (in particular for international rankings) with some international participation (see Berlin Principles, 14).
ASSESSMENT OF CRITERIA
Criteria are assessed with numerical scores. In the audit process the score for each criterion is graded by the review teams according to the degree of fulfilment of that criterion. The audit will apply a scale from 1 to 6:
Not sufficient: 1
Marginally applied: 2
Adequate: 3
Good: 4
Strong: 5
Distinguished: 6
Criteria are divided into core criteria with a weight of 2 and regular criteria with a weight of 1 (see table). Hence the maximum score for each core criterion is 12 and for each regular criterion 6. Based on the attribution of criteria (10 core and 10 regular criteria), the total maximum score is 180. On the basis of the assessment scale described above, the threshold for a positive audit decision is 50% of the maximum total score; this means the average score on the criteria has to be “adequate”. An audit can be passed with conditions if there are deficits with regard to core criteria. Rankings assessed at between 40% and 50% can be audited with additional conditions/requirements that have to be fulfilled within one year of the audit decision.
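To make the arithmetic of this scoring scheme concrete, here is a minimal sketch in Python. Only the weights (2 for core, 1 for regular criteria), the 1-6 verbal scale, the 50% threshold and the 40-50% conditional band are taken from the description above; the function name, the grade lists passed in, and the handling of the "deficits on core criteria" clause are illustrative assumptions, not part of the IREG methodology itself.

```python
# Sketch of the IREG audit scoring arithmetic described above.
# Assumption: the per-criterion grades supplied are hypothetical examples;
# the extra rule about deficits on core criteria is noted but not modelled.

# Verbal scale used by the review team for each criterion.
SCALE = {
    1: "Not sufficient",
    2: "Marginally applied",
    3: "Adequate",
    4: "Good",
    5: "Strong",
    6: "Distinguished",
}

def audit_decision(core_scores, regular_scores):
    """Return (total, maximum, decision) for one audited ranking.

    core_scores / regular_scores: per-criterion grades on the 1-6 scale.
    Core criteria carry weight 2, regular criteria weight 1.
    """
    assert all(s in SCALE for s in core_scores + regular_scores)
    total = 2 * sum(core_scores) + sum(regular_scores)
    maximum = 2 * 6 * len(core_scores) + 6 * len(regular_scores)
    ratio = total / maximum
    if ratio >= 0.5:
        decision = "positive audit decision"
    elif ratio >= 0.4:
        decision = "audit with conditions (to be fulfilled within one year)"
    else:
        decision = "negative audit decision"
    return total, maximum, decision

# With 10 core and 10 regular criteria the maximum is 180, and an average
# grade of "Adequate" (3) on every criterion lands exactly on the 50% threshold.
total, maximum, decision = audit_decision([3] * 10, [3] * 10)
print(total, maximum, decision)  # 90 180 positive audit decision
```

As a usage check: ten core criteria at 3 contribute 60 points and ten regular criteria at 3 contribute 30, giving 90 out of 180, i.e. exactly the 50% threshold for a positive decision.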
III. Weights of audit criteria
III. APPENDIX: FACT SHEET.
Download .PDF IREG Ranking Audit.
10 August 2011

We’ll support you ever more!

Joseph Gora. University of Ardnox. Feisty raconteur and journalistic scourge of politicians left and right, Mungo MacCallum, recently described Australian Prime Minister Julia Gillard as a frame waiting for a picture. A similar observation was once made of the former British Prime Minister, the dour John Major, who was so bereft of personality that a Polaroid photograph of him failed to produce an image. This sort of representational vacuity reminds me of the reaction generated by the Times Higher Education (THE) World University Rankings.
To be sure, there was some level-headed commentary from commentators such as Steven Schwartz, Simon Marginson and The Australian newspaper's Julie Hare, but on the whole the tenor of debate has been dismal, bordering on the banal. And why would it be otherwise, given that most public comment has come from university mandarins and academic apologists who believe that the ranking system has some empirical validity. I was heartened, though, to learn that many (perhaps most?) Australian academics consider ranking mania as, at best, a bad joke, and that some institutions in Canada have refused to participate in this farcical exercise. Hope springs eternal!
It’s not simply that the methodologies adopted by the main rankers (rhyming slang, surely!) – Times Higher Education (THE), QS and Shanghai Jiao Tong University – are diverse and open to the usual interpretation. There also appears to be a significant leaning towards the Anglo-American scene, with no fewer than 18 American and British universities figuring in the top twenty of the THE ranking; the exceptions are the Swiss Federal Institute of Technology Zurich (Roger Federer must surely have something to do with this) and the unassuming but almost Anglo-American University of Toronto. The first Asian university, Hong Kong University, squeaks in at 21, followed by six other Asian institutions in the top 50 (and remember, Asia is a very big place!). The only other European universities outside the UK are the Ecole Polytechnique (39) and the Ecole Normale Superieure (42) in France, the University of Göttingen, Germany (=43), and the Karolinska Institute, Sweden (=43). Over half of the universities in the top fifty are American, with the same country holding 72 spots in the world’s top 200. In short, no African, Middle Eastern or Latin American universities are among the top 100 THE universities.
Now, if I were a Vice-Chancellor at one of the leading universities in Iran, Iraq, Syria, Kenya, Morocco, India, Peru, Mexico, Costa Rica, Thailand, Malaysia, Cambodia, Vietnam or New Zealand, I would want to know what is going on here. I would certainly be looking very closely at (and well beyond) the measures used to rank universities (namely: teaching, research, citations, industry income and international mix). I would also want to check out how Harvard got a near-perfect score for its teaching (no one gets near-perfect student feedback!) and who cites the published work of Harvard academics – the US has hundreds of higher education institutions and a lorry load of journals, which means, does it not, that self-referential US academics have more scope to get their work published and cited than, say, scholars in Bangladesh or Finland. And then there’s the small matter of Harvard’s $27.4 billion endowment, the world’s largest, which is always handy when it comes to buying up high-achieving scholars.
But hey, cashed-up institutions, cultural preferences, linguistic imperialism (the English language) and the North-South divide aside, if you’re going to have a ranking system then make sure it works for you. The fact is that in the competitive marketplace that is international higher education, these things matter. When you’re trying to flog your wares to prospective students, reputation and image are everything. This is why universities go to extraordinary lengths to clamber up the greasy pole. It’s also why there is such panic when an institution falls short of expectations. The pathetic performance of Australian universities in the latest THE ranking, headed by the University of Melbourne (36), the Australian National University (43) (17 last year) and the University of Sydney (71) (36 last year), has, for now at least, put the skids under the tertiary ‘education revolution’.
Perhaps a clue as to how our despondent universities can improve their standing on the global stage is to be found in the goings-on at the predatory University of Technology, Sydney. Not satisfied with languishing in exile, its school of finance and economics has embarked on a mission to crank up its previously modest reputation. Ranked as the top economics outfit by a US ranking system, the school has successfully recruited a number of leading academics from, guess where, the US of A. How so? Well, first, so it is reputed, by beefing up salaries compared with other Aussie universities, and then by granting them almost total autonomy in an island institute. It’s not the first time, of course, that a university has gone on the prowl in search of reputable scholars. But the way things are going, this sort of tribal head-hunting is likely to increase, especially among those universities aspiring to be king-pins.
But in order to have a more open and competitive system that truly reflects the new culture of public transparency that is the ‘My University’ website, I suggest that Australia develop a more innovative approach to its own internal system of rankings by adopting the league table system of the English Football Association. I suggest a Foster’s Universities Premier League comprising eight universities, with the rest placed in the Austar Champions League, Coles-Myer Division One and BHP Division Two. Each year two universities will be promoted and two relegated, and the university topping the Foster’s Premier League will be declared champions, with the respective vice-chancellors ensconced in sedan chairs and paraded before an assembled House of Representatives. Points will be allocated on the basis of citations in respected journals, student evaluations and research grants. The system also allows for transfers of academics from one university to another, although a strict salary cap will have to be imposed to avoid the grossly inflated salaries offered by overly ambitious universities. Just think of the income-generating possibilities! For instance, Universities Australia could establish an online gaming facility whereby bets could be placed on university performance and the proceeds used to pay for all those senior managers.
Yes, this is the way to go. I can already hear the chants on the terraces: ‘there’s only one JCU’, ‘oh Ballarat, we love you’, ‘Ade, Ade Adelaide’, ‘we are the champions’, ‘old Macquarie had a farm’... etc.
10 August 2011

The new ERA of journal ranking - The consequences of Australia’s fraught encounter with ‘quality’

Simon Cooper & Anna Poletti. Monash University. Ranking scholarly journals forms a major feature of the Excellence in Research for Australia (ERA) initiative. We argue this process is not only a flawed system of measurement, but more significantly erodes the very contexts that produce ‘quality’ research. We argue that collegiality, networks of international research, the socio-cultural role of the academic journal, as well as the way academics research in the digital era, are either ignored or negatively impacted upon by ranking exercises such as those posed by the ERA.
It has recently been announced that the Excellence in Research for Australia (ERA) initiative will remain largely unchanged in the coming year, and will remain as an instrument used by the Australian Government to determine the level of research funding available to Australian universities (Rowbotham 2010). While there has been some unease about the ERA amongst academics, many seem resigned to the process. Perhaps some have simply accepted the onset of the audit regime and have bunkered down. Others perhaps welcome the chance to operate within the competitive environment the ERA brings, having discarded (or perhaps never subscribed to) the older cultures of collegiality that, as we shall see, are hollowed out by cultures of audit. Others may simply believe that the ERA provides a relatively neutral way to measure and determine quality, thus accepting the benign, if somewhat unspecific assurances from Senator Kim Carr and Australian Research Council Chief Professor Margaret Sheil that academics who stick to what they are good at will be supported by the ERA.
The ERA represents a full-scale transformation of Australian universities into a culture of audit. While aspects of auditing have been part of the Australian context for some time, Australian universities have not faced anything like say, the UK situation where intensive and complex research assessment exercises have been occurring for over two decades. Until now that is, and a glance at the state of higher education in the UK ought to give pause. Responding to the ERA requires more than tinkering with various criteria for measuring quality. Instead we suggest the need to return to ‘basics’ and discuss how any comprehensive auditing regime threatens to alter and in all likelihood undermine the capacity for universities to produce innovative research and critical thought. To say this is not to argue that these things will no longer exist, but that they will decline as careers, research decisions, cultures of academic debate and reading are distorted by the ERA. The essential ‘dysfunctionality’ of the ERA for institutions and individual researchers is the focus of this article.
In discussing the pernicious impacts of auditing schemes we focus in particular on the journal ranking process that forms a significant part of the ERA. While the ERA will eventually rank other research activities such as conferences, publishers and so on, the specifics of this process remain uncertain, whereas journals have already been ranked and remain the focal point of discussions concerning the ERA. In what follows we explore the arbitrary nature of any attempt to ‘rank’ journals, and examine the critiques levelled at both metrics and peer review criteria. We also question the assumption that audit systems are here to stay and that the best option is to remain attentive to the ‘gaps’ in the techniques that measure academic research, redressing them where possible. Instead we explore how activities such as ranking journals are not only flawed but, more significantly, erode the very contexts that produce ‘quality’ research. We argue that collegiality, networks of international research, the socio-cultural role of the academic journal, as well as the way academics research in the digital era, are either ignored or negatively impacted upon by ranking exercises such as the ERA. As an alternative we suggest relocating the question of research quality outside of the auditing framework to a context once more governed by discourses of ‘professionalism’ and ‘scholarly autonomy’.
In 2008 the Australian Labor Party introduced the ERA, replacing the previous government’s RQF (Research Quality Framework), a scheme that relied upon a fairly labour-intensive process of peer review, the establishment of disciplinary clusters, panels of experts, extensive submission processes and the like. In an article entitled ‘A new ERA for Australian research quality assessment’ (Carr 2008), Senator Kim Carr argued that the old scheme was ‘cumbersome and resource greedy’, that it ‘lacked transparency’ and that it failed to ‘win the confidence of the university sector’. Carr claimed that the ERA would be a more streamlined process that would ‘reflect world’s best practice’. Arguing that Australia’s university researchers are ‘highly valued ... and highly respected’, Carr claimed that the ERA would enable researchers to be more recognised and have their achievements made more visible. If we took Senator Carr’s statements about the ERA at face value we would expect the following. The ERA would value Australian researchers by making their achievements ‘more visible’. The ERA would reflect ‘world’s best practice’ and reveal ‘how Australian university researchers stack up against the best in the world’. Finally, the ERA would gain the confidence of researchers by being a transparent process. All this would confer an appropriate degree of respect for what academics do.
‘Respecting Researchers’: the larger context that drives visibility

According to Carr the ERA provides a measure of respect for academic researchers because it allows their work to be visible and thus measurable on the global stage. Given that academics already work via international collaboration, and that publishers and processes of peer review already embed value, the question remains: for whom is this process of visibility intended? Arguably it is not intended for members of the academic community. Nor the university, at least in a more traditional guise, where academic merit was regulated via processes of hiring, tenure and promotion. In other words the idea of ‘respect’ and ‘value’ already has a long history via institutional processes of symbolic recognition.
Tying respect to the ERA subscribes to an altogether different understanding of value. Demanding that research be made more visible subscribes to a more general culture of auditing that has come to frame the activities of not merely universities but also schools, hospitals and other public institutions (Apple 2005; Strathern 1997). Leys defines auditing as ‘the use of business derived concepts of independent supervision to measure and evaluate performance by public agencies and public employees’ (2003, p.70); Shore and Wright (1999) have observed how auditing and benchmarking measures have been central to the constitution of neoliberal reform within the university. Neoliberalism continually expects evidence of efficient activity, and only activity that can be measured counts as activity (Olssen & Peters 2005). The measurement of research (and other forms of intellectual activity) that lies at the core of the ERA is thus not simply a process of identification or the reflection of an already-existing landscape, but rather part of a disciplinary technology specific to neoliberalism.
The ERA moves away from embedded and implicit notions of value insisting that value is now overtly measurable. ‘Outputs’ can then be placed within a competitive environment more akin to the commercial sector than a public institution. Michael Apple argues that behind the rhetoric of transparency and accuracy lies a dismissal of older understandings of value within public institutions. The result is: "a de-valuing of public goods and services… anything that is public is ‘bad’ and anything that is private is ‘good’. And anyone who works in these public institutions must be seen as inefficient and in need of the sobering facts of competition so that they work longer and harder" (2005, p.15).
Two things can be said here. First, rather than simply ‘reflect’ already existing activities, it is widely recognised that auditing regimes change the activities they seek to measure (Apple 2005; Redden 2008; Strathern 1997). Second, rather than foster ‘respect’ for those working within public institutions, auditing regimes devalue the kinds of labour that have been historically recognised as important and valuable within public institutions. Outside of critiques that link auditing to a wider culture of neo-liberalism, more specific concerns have been raised concerning the accuracy of auditing measures.
The degree to which any combination of statistical metrics, peer or expert review, or a combination of both can accurately reflect what constitutes ‘quality’ across a wide spectrum has been subject to critique (Butler 2007). With the ERA, concerns have already been raised as to the lack of transparency of the ranking process by both academics (Genoni & Haddow 2009) and administrators (Deans of Arts, Social Sciences and Humanities 2008). Though there is no universally recognised system in place for ranking academic journals, the process is generally carried out through peer review, metrics, or some combination of the two.
The ERA follows this latter approach, combining both metrics and a process of review by ‘experts in each discipline’ (Australian Research Council 2010; Carr 2008). Both metrics and peer review have been subject to widespread criticism. Peer review is often unreliable. There is evidence of a low correlation between reviewers’ evaluations of quality and later citations (Starbuck 2006, pp. 83-4). Amongst researchers there is recognition of the randomness of some editorial selections (Starbuck 2006), with the result that reviewers are flooded with articles as part of a strategy of repeated submission. Consequently, many reviewers are overburdened and have little time to check the quality, methodology or data presented within each submitted article (Hamermesh 2007). In an early study of these processes, Mahoney (1977) found that reviewers were more critical of the methods used in papers contradicting mainstream opinion.
The technical and methodological problems associated with bibliometrics have also been criticised in the light of evidence of the loss of citation data pertaining to specific articles (Moed 2002), as well as geographical and cultural bias in the ‘counting process’ (Kotiaho et al. 1999). Beyond this there are recognised methodological shortcomings with journal ranking systems. The focus on journals, as opposed to other sources of publication, ignores the multiple ways scholarly information is disseminated in the contemporary era. The long time frame that surrounds journal publication, where up to three years’ delay between submission and publication is deemed acceptable, is ill-suited to a context where ‘as the rate of societal change quickens, cycle times in academic publishing ... become crucial’ (Adler & Harzing 2009, p.75). Citation counts, central to metrical systems of ranking, do not guarantee the importance or influence of any one article. Simkin and Roychowdhury’s (2005) analysis of misprints in citations suggests that 70 to 90 per cent of papers cited are not actually being read. Moreover, there is no strong correlation between the impact factor of a journal and the quality of any article published in it (Adler & Harzing 2009; Oswald 2007; Starbuck 2006).
Neither peer review nor metrics can accurately capture how academic research is carried out and disseminated. Nor do they provide guarantees of quality. However, as Adler and Harzing observe, the privileging of any combination of these measures leads to different material outcomes: ‘Each choice leads to different outcomes, and thus the appearance – if not the reality – of arbitrariness ... whereas each system adds value within its own circumscribed domain, none constitutes an adequate basis for the important decisions universities make concerning hiring, promotion, tenure and grant making, or for the ranking of individuals and institutions’ (2009, pp.74-5).
Senator Carr’s hope that the ERA would ‘gain the trust’ of researchers is rendered problematic within a culture of audit. As Virno has observed ‘cynicism is connected with the chronic instability of forms of life and linguistic games’ (2004 p.13). The move within Australia from the RQF to the ERA, the lack of transparency as to the ranking process of journals within the ERA, the fact that there is no universal system of measurement, and that ranking bodies shuffle between the inadequate poles of metrics and peer-review, confirms the chronic instability of attempts to define and measure quality. The result can only be, at the very least, a distortion of research behaviour as academics recognise and cynically (or desperately) respond to quality measurement regimes. As we move from the RQF to the ERA with a change of government, the scope for ‘chronic instability’ is vast.
It is widely recognised that those subject to audit regimes change according to the perceived requirements of the regime, rather than the long-held understanding as to what intrinsic quality governs their work. Strathern (1997) and Power (1994) have persuasively argued that auditing regimes are not merely reflective but are transformative. Such regimes contribute to the production of different subjectivities, with different understandings and priorities. Commenting on the reconstitutive capacity of auditing measures, Cris Shore argues that ‘audit has a life of its own - a runaway character that cannot be controlled. Once introduced into a new setting or context, it actively constructs (or colonises) that environment in order to render it auditable’ (2008, p.292).
Recognising the transformative nature of auditing allows us to focus on the unintended consequences of the journal ranking process. Privileging journal ranking as an indication of quality fails to comprehend how academics work within a contemporary context, how they work as individuals and as colleagues, how they co-operate across national and disciplinary borders, and how they research within a digital culture that is well on the way to displacing paper-based academic publishing. Indeed even if all the issues pertaining to accurate measurement, inclusion and transparency were somehow to be resolved, the ERA and the journal ranking exercise would remain at odds with the aim of generating sustainable quality research. Nowhere is this clearer than with the object at the heart of the process – the journal itself.
Journal ranking and the transformation of journal publishing

Why privilege the journal as the site of academic value? Beyond the problems involved in trying to measure journal quality, the journal itself is undergoing a transformation. Journals are subject to a number of contradictory processes. On the one hand, the journal as a place for disseminating research is partially undermined by alternative ways of circulating information. Adler and Harzing (2009) argue that academic research is no longer published just within the refereed journal, but that books, book chapters, blog entries, conference papers and the like need to be taken as a whole as representative of contemporary research culture. Moreover, to place such a heavy evaluative burden on the journal, as the ERA does, fails to reflect the changed status and meaning of the journal within academic culture. Journal articles have become increasingly uncoupled from the journal as a whole. The increasing centrality of electronic publishing means that people read individual articles rather than whole issues. In an observational study at three universities in Sweden, Haglund and Olsson (2008) found that researchers increasingly (and in many cases exclusively) rely on Google and other search engines for research information, bypassing libraries and traditional sources.
Many researchers use a ‘trial and error’ method (2008, p.55) for information searching, using a selection of keywords and evaluating the results. A flattening out of informational hierarchies results, where the content of individual articles becomes more significant than the journal that houses them. Electronic hyperlinks extend this shift, so that academic reading takes place beyond the pages of a (vertically ranked) individual journal across a horizontally networked database of scholarly articles. This extends the trend identified by researchers such as Starbuck (2006), whereby there is little correlation between the quality of a journal and the citation impact of the articles it contains. Ranking journals therefore frames a mode of quality assessment around an increasingly irrelevant institutional form.
Conversely, the significance of a small number of journals has been enshrined through the auditing process. While academics know that there may be little correlation between the journal and the quality of individual articles, they also know that careers may now depend upon publishing in a journal whose value has been ‘confirmed’ by a process such as the ERA. In this sense, despite the decentring of journals via the information mode, the journal is destined to survive; some will flourish. However, this is hardly cause for celebration given the generally conservative approach to research taken by esteemed journals (Mahoney 1977), the knowledge that academics will tailor their work in order to fit in with the expectations of the journal in question (Redden 2008) and, finally, that many highly ranked journals are now products of transnational publishers, having long disappeared from the university departments that originally housed them and the community of scholars that sustained them (Cooper 2002; Hartley 2009).
This is not to dismiss the importance of the journal, but to argue that journals are socio-cultural artefacts whose most important work occurs outside of the auditing process. Ranking schemes like the ERA threaten to undermine the journal’s social and cultural importance. While journals are under threat from changes in publishing and digital modes of access and circulation, many continue to exist by reference to an (imagined and actual) community of readers and writers. The decision by a researcher to publish in a journal is often made in terms of the current topic being explored within the journal, the desire to discuss and debate a body of knowledge already in that journal, invitations or requests by the editors, or calls for papers based upon a theme of interest to the academic. In other words, journal content or collegial networks frame decisions about where to publish as much as the perceived status of the journal (Cooper 2002; Hartley 2009).
The problem with rankings is that these relations are in danger of being overlaid by an arbitrarily competitive system, so that scholars will no longer want, or be allowed (by institutional imperative), to publish in anything below a top-ranked journal, as Guy Redden (2008) has observed with respect to the UK situation. We suggest that the transformative capacity of auditing measures such as the journal ranking scheme at the heart of the ERA threatens to produce a number of perverse or dysfunctional reactions within the academic community that will undermine research quality in the long term.
The ERA and its perverse effect upon scholars and institutions

Drawing on the above, we want to focus specifically on some of the potential impacts of the journal ranking exercise, in particular the potential for mechanisms designed to measure ‘quality’ to create dysfunctional reactions and strategies within Australia’s research culture. Osterloh and Frey outline institutional and individual responses to research ranking systems, indicating that at the level of the individual, responses tend to follow the process of ‘goal displacement’, whereby ‘people maximise indicators that are easy to measure and disregard features that are hard to measure’ (2009, p.12). As others have observed, the primacy of journal rankings in measuring quality for the Humanities runs a very high risk of producing such responses (Genoni & Haddow 2009; Nkomo 2009; Redden 2008). In an article published prior to the development of the ERA, Redden drew on his experiences of the UK’s Research Assessment Exercise (RAE) system to observe that narrowly defined criteria for research excellence can result in ‘academics eschew[ing] worthwhile kinds of work they are good at in order to conform’ (2008, p.12). There is a significant risk that a large proportion of academics will choose to ‘play the game’, given the increasing managerial culture in Australian universities and the introduction of performance management practices which emphasise short-term outputs (Redden 2008).
In what follows, we attempt to flesh out the impact that the dysfunctionality introduced by the ERA will have on the research culture in the Humanities in Australia. These points are based on our observations, discussions with colleagues both nationally and internationally, and review of the literature around research management systems. It is our argument that these impacts strike at the heart of collegiality, trust, the relations between academics at different levels of experience, how we find value in other colleagues, and how individuals manage their careers; all components fundamental to research practice and culture. The ERA displaces informal relations of trust and replaces them with externally situated forms of accountability that may well lead to greater mistrust and scepticism on the part of those subject to its auditing methods. This at least has been the experience of those subject to similar regimes in the UK (Power 1994; Strathern 1997). It should be noted that the potential for dysfunctional reactions has been acknowledged by both Professor Margaret Sheil, CEO of the Australian Research Council, and Professor Graeme Turner, who headed the development of the ERA for the Humanities and Creative Arts clusters (McGilvray 2010, Rowbotham 2010). In both cases, universities have been chastised for ‘misapplying’ the audit tool which, in Sheil’s words, “codified a behaviour that was there anyway” (Rowbotham 2010).
Impact on international collaboration and innovation
One impact of the ERA journal ranking system is the further complication it produces for international research collaboration. For many, research practice is a globalised undertaking. The (limited) funds available for conference attendance, and the rise of discipline and sub-discipline based email lists and websites, mean that many are networked within an internationalised research culture in their area of specialisation. In the best-case scenarios, researchers are developing connections and relationships with scholars from a range of countries. Before the ERA, these connections would form a useful synergy with a researcher’s Australian-based work, resulting in collaborations such as joint publications, collaborative research projects, and knowledge exchange. Such projects can now be the cause of significant tension and concern; an invitation from an international colleague to contribute an article to a low-ranked (or, heaven forbid, unranked) journal, to become engaged in a collaborative research project which results in a co-edited publication (currently not counted as research activity in the ERA), or to present at a prestigious conference must be judiciously evaluated by the Australian academic for its ability to ‘count’ in the ERA. This can be determined by consulting the ERA Discipline Matrices spreadsheet. Projects such as those listed above will need to be defended at the level of the individual’s performance management as the ERA is bedded down in performance management (a process which has already begun, with the discourse of the ERA being adopted internally by Australian universities).
These unnecessary barriers restrict open and free collaboration, as Australian researchers are cordoned off within a system which evaluates their research outputs by criteria that affect only Australians. This seems even more perverse when we return to Senator Carr’s framing of the ERA process in global terms: seeing how Australian researchers ‘stack up against the rest of the world’ and representing ‘world’s best practice’. Instead, the structural provinciality built into a purely Australian set of rankings cuts across global research networks. In all likelihood, scholars will feel compelled to produce work that can be published in highly ranked journals. The result of this is a new form of dysfunctionality: the distortion of research and its transfer. Redden argues that: "Because of the valorising of certain kinds of output (single-authored work in prestigious form likely to impress an expert reviewer working in a specific disciplinary framework upon being speed read), researchers modify their behaviour to adapt to perceived demands. This means they may eschew worthwhile kinds of work they are good at in order to conform. Public intellectualism, collaboration, and interdisciplinary, highly specialised and teaching-related research are devalued" (2008, p.12).
If the ranking of journals narrows the possibility for innovative research to be published and recognised, this situation may well be exacerbated by the uncertainty around new journals and emerging places of publication. The ERA seems unable to account for how new journals will be ranked, and arguably new journals are a place where new and innovative research might be published. Yet it takes a number of years for new journals even to be captured by the various metrical schemes in place. For instance, the ISI Social Science Citation Index has a three-year waiting period for all new journals, followed by a further three-year study period before any data on the journal’s impact is released (Adler & Harzing 2009, p.80). Even for journals ranked by alternative measures (such as Scopus), a reasonable period is required to gain sufficient data for the ranking of new journals. Such protracted timelines mean it is unlikely that researchers will gamble and place material in new journals. Equally, the incentives to start new journals are undercut by the same process. The unintended consequence of the ERA ranking scheme is to foreclose the possibility of new and creative research, and the outlets that could publish it.
Impact on career planning

Many early career researchers are currently seeking advice from senior colleagues on how to balance the tensions between the values of the ERA and their need to develop a standing in their field, especially in those disciplines and sub-disciplines which have not had their journals advantageously ranked. The kind of advice being offered ranges from ‘don’t do anything that doesn’t count in the ERA’ to convoluted advice on how to spread one’s research output across a range of outcomes which cover both ERA requirements and the traditional indicators of quality associated with one’s area of specialisation. Professor Sheil has herself offered advice to younger academics, stating in a recent interview that ‘You should get work published where you can and then aspire to better things’ (Rowbotham 2010). Within a year of the ERA process commencing we already see evidence of academics being deliberately encouraged to distort their research activity. McGilvray (2010) reports that scholars are being asked ‘to switch the field of research they publish under if it will help achieve a higher future ERA rating’. Journalism academics at the University of Queensland and the University of Sydney have already switched their research classification from journalism to other categories that contain more highly ranked journals. Similar examples are being cited in areas from cultural studies to psychology. Such practices both distort the work of the researcher and threaten to further marginalise any journals contained within the abandoned field. Given the degree of institutional pressure, it would be a brave researcher who followed the advice of the ARC’s chief executive, Margaret Sheil, to ‘focus on what you’re really good at regardless of where it is and that will win out’ (McGilvray 2010).
While some senior academics (including Professor Sheil) are encouraging early career researchers to go on as though the ERA isn’t happening, and maintain faith that audit techniques will adequately codify the ‘quality’ of their work, or at least retain confidence in the established practices of reputation and the power of the reference to secure career advancement, this remains a risky strategy. Others encourage a broader approach to publication, especially where a sub-discipline’s journals have been inaccurately ranked, and advocate re-framing research for publication in highly ranked journals in areas such as Education. A generation of early career researchers, then, are left to make ad hoc decisions about whether to value governmental indicators or the established practices of their field with little understanding of how this will impact on their future prospects of employment or promotion.
In her study of younger academics’ constructions of professional identity within UK universities, Archer (2008) noted a growing distance between older and newer generations of academics. Stark differences emerged in terms of expectations of productivity, what counted as quality research, whether managerial regimes ought to be resisted, and so on. Evidence of intergenerational misunderstanding was found (2008, p.271), and while talk of academic tradition or a ‘golden age’ prior to neo-liberalism was sometimes used to produce a boundary or place from which to resist managerialism, in many cases the discourse of older academics was resented or regarded as challenging the authenticity of younger researchers. Instead of the idea of research and scholarship as a culture to be reproduced, schemes such as the ERA threaten to drive a wedge between two very different academic subjectivities.
Performance management by ranking leaves individual academics in a situation where they must assiduously manage the narrowly defined value of their publication practice and history (Nkomo 2009; Redden 2008). When the 2010 ERA journal rankings were released, many academics woke up to discover that their status as researchers had been radically re-valued (see Eltham 2010 for a blogged response to this experience). Rather than contributing members of scholarly communities, individual researchers are now placed in direct competition with each other and must be prepared to give an account of their chosen publication venue in the context of performance management and university-level collation of data for the ERA. So too the journals, and the editors of journals, will strive to increase the ranking of their publications, necessarily at the cost of others in their field. As Redden points out, such a situation runs the risk of importing the limits and failures of the market into the public sector (2008, p.16), as any re-ranking of journals will have direct effects on people’s employment.
Lack of certainty about stability of rankings

While researchers are left to make ad hoc decisions about their immediate and future plans for research dissemination, and to ponder their ‘value’, they do so in an environment where there is no certainty about the stability of the current journal rankings. Given the long turnaround times of academic publishing, it is increasingly difficult for people to feel confident that the decisions they make today about where to send an article will prove to be the right ones by the time it reaches publication. Given the increase in submissions one expects A* and A ranked journals to receive, turnaround times are likely to increase rather than decrease with the introduction of the ERA. The erratic re-rankings that occurred between the last draft version of the journal rankings and the 2010 finalised list (where journals went from A* to C, with some disappearing altogether) have left many researchers uncertain as to whether current rankings will still apply in 2012 when their article comes out. No one (not the Deans of Arts, Social Sciences and Humanities, nor senior researchers or other discipline bodies) seems able to provide certainty about the stability of the rankings, although many suspect that the current list will be “tweaked” in coming years. Again this has implications for career planning as well as for internal accountability measures such as performance management; more importantly, it unnecessarily destabilises the research culture by introducing the flux of market forces to evaluate what was traditionally approached as an open-ended (or at least ‘life’ (career) long) endeavour (see Nussbaum 2010; Redden 2008).
What is quality anyway?

Perhaps the most significant impact of attempts to quantify quality via a system of audit such as the ERA is that it works counter to the historical and cultural practices for determining quality that exist in academia. While these practices are in no way perfectly formed or without error, they do inform, sustain and perpetuate the production and distribution of knowledge within the sector internationally. As Butler has observed, any attempt to quantify quality via an audit system runs inexorably into the problem of how to define quality. Linda Butler, a leading scholar of research policy and bibliometrics, points out that research quality is, in the end, determined by the usefulness of a scholar’s work to other scholars, and that ‘quality’ is a term given value socially (2007, p.568). She quotes Anthony van Raan, who argues: "Quality is a measure of the extent to which a group or an individual scientist contributes to the progress of our knowledge. In other words, the capacity to solve problems, to provide new insights into ‘reality’, or to make new technology possible. Ultimately, it is always the scientific community (‘the peers’, but now as a much broader group of colleague-scientists than only the peers in a review committee) who will have to decide in an inter-subjective way about quality" (van Raan (1996) in Butler, 2007, p.568).
The Australian Research Council, in defending the ERA journal ranking for the Humanities and Creative Arts Cluster, relied heavily on this understanding of quality, citing the review panels, expert groups and discipline representative bodies that were consulted in the determination of the rankings (ARC). Indeed, peer review and the sector’s involvement in determining what counts as ‘quality’ were central to Carr’s description of the ERA (Carr 2008). However, and somewhat ironically given the audit culture’s obsession with accountability, the lack of available information regarding the debates about quality and its constitution which occurred in the formation of the list disconnects the concept of ‘quality’ from its social, negotiated and debated context. As we have already noted, this lack of accountability does little to encourage academics to feel valued by the ERA process, nor does it support Australian academics in their existing practices of internationally networked research, where the prevailing idea of quality, and how it is identified and assessed, is communal, collegial and plural. There is now, and will continue to be, a significant and unnecessary rift developing between international understandings of quality in research and the Australian definition.
Conclusion

In the concluding chapter of The Audit Explosion, Michael Power diagnoses a key problem resulting from the rise of audit culture: ‘we seem to have lost an ability to be publicly sceptical about the fashion for audit and quality assurance; they appear as ‘natural’ solutions to the problems we face’ (1994, p.32). Many academics remain privately sceptical about research auditing schemes but are unwilling to openly challenge them. As Power observed sixteen years ago, we lack the language to voice concerns about the audit culture’s focus on quality and performance (1994, p.33), despite the fact that in the higher education sector we have very strong professional and disciplinary understandings of how these terms relate to the work we do, understandings which are already ‘benchmarked’ internationally.
In light of this, and the serious unintended outcomes which will stem from dysfunctional reactions to the ERA, we suggest that rather than lobby for small changes or tinker with the auditing mechanism (Academics Australia 2008; Australasian Association of Philosophy 2008; Deans of Arts, Social Sciences and Humanities 2008; Genoni & Haddow 2009), academics in the Humanities need to take ownership of their own positions and traditions around the ideas of professionalism and autonomy which inform existing understandings of research quality. Reclaiming these terms means not merely adopting a discourse of opposition or concern about the impact of procedures like the ERA (often placed alongside attempts to cooperate with the process) but adopting a stance that might more effectively contribute to the very outcomes of quality and innovation that ministers and governments (as well as academics) desire. Power’s suggestion is that ‘concepts of trust and autonomy will need to be partially rehabilitated into managerial languages in some way’ (1994, p.33), and we may well begin with a task such as this. As Osterloh and Frey (2009) demonstrate, if academics are permitted to work informed by their professional motivations – intrinsic curiosity, symbolic recognition via collegial networks, employment and promotion – governments will be more likely to find innovation and research that, in Kim Carr’s words, you could be ‘proud of’.
Simon Cooper teaches in the School of Humanities, Communications & Social Sciences and Anna Poletti teaches in the School of English, Communications & Performance Studies at Monash University, Victoria, Australia.

References

Academics Australia. (2010). The ERA Journal Rankings: Letter to the Honourable Kim Carr, Minister for Innovation, Science and Research, 11 August 2008. Retrieved on 2 March 2010 from http://www.academics-australia.org/AA/ERA/era.html
Adler, N. & Harzing, A. (2009). When Knowledge Wins: Transcending the sense and nonsense of academic rankings. Academy of Management Learning & Education, 8(1), pp. 72-85.
Apple, M. (2005). Education, markets and an audit culture. Critical Quarterly 47(1-2), pp. 11-29.
Archer, L. (2008). Younger academics’ constructions of ‘authenticity’, ‘success’ and professional identity. Studies in Higher Education, 33(4), pp. 385-403.
Australasian Association of Philosophy (2008). Cover letter response to Submission to the Australian Research Council, Excellence in Research for Australia (ERA) Initiative. Retrieved on 3 March 2010 from http://aap.org.au/publications/submissions.html
Australian Research Council. (2010). The Excellence in Research for Australia (ERA) Initiative. Retrieved on 4 July 2010 from http://www.arc.gov.au/era/default.htm
Butler, L. (2007). ‘Assessing university research: a plea for a balanced approach.’ Science and Public Policy, 34(8) pp. 565–574.
Carr, K. (2008). A new ERA for Australian research quality assessment. Retrieved on 3 July 2010 from http://minister.innovation.gov.au/carr/Pages/ANEWERAFORAUSTRALIANRESEARCHQUALITYASSESSMENT.aspx
Deans of Arts, Social Sciences and Humanities (2008). Submission to Excellence in Research for Australia (ERA). Retrieved on 14 June 2010 from http://www.dassh.edu.au/publications
Cooper, S. (2002). Post Intellectuality?: Universities and the Knowledge Industry, in Cooper, S., Hinkson, J. & Sharp, G. Scholars and Entrepreneurs: the University in Crisis. Fitzroy: Arena Publications, pp. 207-232.
Eltham, B. (2010). When your publication record disappears, A Cultural Policy Blog, Retrieved on 13 March 2010 from http://culturalpolicyreform.wordpress.com/2010/03/04/when-your-publication-record-disappears/
Genoni, P. & Haddow, G. (2009), ERA and the Ranking of Australian Humanities Journals, Australian Humanities Review, 46, pp 7-26.
Hamermesh, D. (2007). Replication in economics. IZA Discussion Paper No. 2760 Retrieved on 30 June 2010 from http://ssrn.com/abstract=984427
Haglund, L. & Olsson, P. (2008). The impact on university libraries of changes in information behavior among academic researchers: a multiple case study. Journal of Academic Librarianship, 34(1), pp. 52-59.
Hartley, J. (2009). Lament for a Lost Running Order? Obsolescence and Academic Journals. M/C Journal, 12(3). Retrieved on 3 March 2010 from http://journal.mediaculture.org.au/index.php/mcjournal/article/viewArticle/162
Kotiaho, J., Tomkins, J. & Simmons L. (1999). Unfamiliar citations breed mistakes. Correspondence. Nature, 400, p. 307.
Leys, C. (2003). Market-Driven Politics: Neoliberal Democracy and the Public Interest. Verso: New York.
Mahoney, M. (1977). Publication prejudices: An experimental study of confirmatory bias in the peer review system. Cognitive Therapy Research, 1(2), pp. 161-175.
McGilvray, A. (2010). Nervousness over research ratings. Campus Review, 27 September.
Moed, H. F. (2002). The impact factors debate: the ISI's uses and limits. Correspondence. Nature, 415, pp. 731-732.
Nkomo, S. (2009). The Seductive Power of Academic Journal Rankings: Challenges of Searching for the Otherwise. Academy of Management Learning & Education, 8(1), pp. 106–112.
Nussbaum, M. (2010). The Passion for Truth: There are too few Sir Kenneth Dovers. The New Republic, 1 April. Retrieved on 3 June 2010 from http://www.tnr.com/article/books-and-arts/passion-truth?utm_source=TNR+Books+&+Arts&utm_campaign=aff15dbfb8-TNR_BA_040110&utm_medium=email
Olssen, M. & Peters, M. (2005). Neoliberalism, higher education and the knowledge economy: from the free market to knowledge capitalism. Journal of Education Policy, 20(3), pp. 313-345.
Osterloh, M. & Frey, B. (2009). Research Governance in Academia: Are there Alternatives to Academic Rankings? Institute for Empirical Research in Economics, University of Zurich Working Paper Series, Working Paper no. 423. Retrieved on 30 June 2010 from http://www.iew.unizh.ch/wp/iewwp423.pdf
Oswald, A.J. (2007). An examination of the reliability of prestigious scholarly journals: Evidence and implications for decision-makers. Economica, 74, pp. 21-31.
Power, M. (1994). The Audit Explosion. Demos: London.
Redden, G. (2008). From RAE to ERA: research evaluation at work in the corporate university. Australian Humanities Review 45 pp. 7-26.
Rowbotham, J. (2010). Research assessment to remain unchanged for second round. The Australian Higher Education Supplement, 3 November. Retrieved on 3 November 2010 from http://www.theaustralian.com.au/higher-education/research-assessment-to-remain-unchanged-for-second-round/story-e6frgcjx-1225946924155
Shore, C. (2008). Audit Culture and Illiberal Governance. Anthropological Theory, 8 (3) pp. 278-298.
Shore, C. & Wright, S. (1999). Audit Culture and Anthropology: Neo-Liberalism in British Higher Education. The Journal of the Royal Anthropological Institute, 5(4), pp. 557-575.
Simkin, M. V. & Roychowdhury, V. P. (2005). Copied citations create renowned papers? Annals of Improbable Research, 11(1) pp. 24-27.
Starbuck, W.H. (2006). The Production of Knowledge: The Challenge of Social Science Research. Oxford University Press: New York.
Strathern, M. (1997). Improving ratings: audit in the British University system. European Review, 5 (3) pp. 305-321.
van Raan, A.F.J. (1996). Advanced Bibliometric Methods as Quantitative Core of Peer Review Based Evaluation and Foresight Exercises. Scientometrics 36, 397-420.
Virno, P. (2004). A Grammar of the Multitude: For an Analysis of Contemporary Forms of Life. Semiotext(e): New York.
