Formation Continue du Supérieur
17 August 2011

The Futility of Ranking Academic Journals

By Ian Wilhelm. The following is a guest post by Ellen Hazelkorn, vice president for research and enterprise and head of the Higher Education Policy Research Unit at the Dublin Institute of Technology. Her book Rankings and the Reshaping of Higher Education: The Battle for World-Class Excellence (Palgrave Macmillan) was published in March.
Ranking academic journals is one of the more contentious aspects of research assessment, and a foundation stone for university rankings. Because people’s careers and aspirations are on the line, it was only a matter of time before someone challenged the findings. The implications go far beyond recent events in Australia.

Thomson Reuters ISI Web of Science, Elsevier’s Scopus, and Google Scholar have become dominant players in a rapidly expanding and lucrative global intelligence-information business. The first of these has identified another opportunity, the Global Institute Profile Project: collecting institutional profile information and then monetizing it, selling it back to the institutions for strategic-planning purposes or on to third parties to underpin policy- and decision-making or classification systems – much as Bloomberg turned financial data into a commodity. The Times Higher Education (THE) has transformed itself from a purveyor of (objective) information about higher education into a promoter of global rankings. Along with Quacquarelli Symonds Ltd, THE organizes events around the world, marketing its deep knowledge of ranking methodologies to universities striving to reach the top of global rankings; there is even an iPhone app!
Ranking journals involves hierarchically categorizing scholarly journals according to their perceived quality. The ranking of scientific journals has been an implicit aspect of research assessment for years but has now become very explicit. Mind you, there is a critical issue about academic quality and productivity that the academy needs to respond to: simply writing the occasional article is arguably not sufficient evidence of scholarship. In response to this perceived lack of clarity, and/or academe’s reluctance to provide evidence, the process has become quite formalized in many countries. In addition to Australia, countries such as Denmark, Norway, France, Spain, the U.K., and Sweden assign points to different journals on the basis of citation impact or whether their influence and scope are local, national, or worldwide. More recently, the European Science Foundation has produced the next iteration of its European Reference Index for the Humanities. The practice benefits elite universities and their researchers, who dominate such publications. Others claim the process aids the visibility of newer disciplines – more likely, many have simply grinned and borne it.
Quality can be a subjective measurement; the fact that the ranking exercise is conducted by groups of noteworthy academics, usually in private, does not make it otherwise. Then there is the problem of the databases, which hold only a proportion of the more than 1.3 million articles published annually. The main beneficiaries are the physical, life, and medical sciences, thanks to their publishing habits. Other important sources and publication formats – books and conference proceedings, contributions to international standards or policy reports, electronic formats, open-source publications, and so on – are all ignored. The Shanghai Academic Ranking of World Universities, which has become the gold standard used by governments around the world, gives bonus points for publication in Nature or Science – but on what basis?
Nationally relevant research also loses out; this criticism usually refers to the humanities or social sciences, but it is equally relevant to the “hard” sciences. I was reminded of this when I met a group of women from developing countries pursuing their Ph.D.s. They came from Pakistan, the Philippines, and Nigeria, and were working on problems of water quality, flood control, and crop fertility – goal-oriented research of real relevance to their communities – which meant the language was not English and the publication outlets were nationally oriented. Faculty I interviewed in Japan during 2008 voiced similar concerns: international journals in English were more highly regarded than Japanese journals.
There is an over-reliance on peer review and citations as measures of quality and impact. But there may be many reasons for a high citation count: the field may be very popular, or the paper may be seriously contested; neither means high quality. This problem accounted for the controversially high ranking of the University of Alexandria, Egypt, in the Times Higher Education World University Rankings 2010.
While academe has questioned the shift away from curiosity-driven and toward application-focused research, publicly financed research – and researchers – carry a responsibility to demonstrate their value. Yet this is not what ranking journals measures. In other words, judged by policy’s own objectives, ranking journals simply measures what one academic has written and another has read, rather than its impact on and benefit for society. Where is the evidence that the research is helping resolve society’s major challenges or benefiting students?
Governments have adopted this practice because it appears to offer a scientific method for resource allocation. But given all the questions about its methodology, it is unlikely the results could withstand legal scrutiny. The implications are likely to be long-term. There is already evidence that the practice is distorting research focus and research management: encouraging academics to write journal articles rather than reflective books or policy papers, discouraging intellectual risk-taking, favoring particular disciplines in resource allocation, and informing hiring and firing.
Rather than quantification alone as a measure of quality, an E.U. Expert Group recommended a combination of qualitative and quantitative methodologies. This is because journals, their editors, and their reviewers can be extremely conservative; they act as gatekeepers and can discourage intellectual risk-taking at a time when society worldwide needs more, not fewer, critical voices.
See also on this blog: Are rankings driving university elitism?; Do rankings promote trickle-down knowledge? (by Ellen Hazelkorn); « Hit-parade des universités: la France stagne au huitième rang du classement de Shanghai »; « Nous récoltons les fruits des efforts enclenchés dans l'enseignement supérieur »; Les universités françaises à la peine dans le classement de Shanghai; New International Ranking System Has a DIY Twist; Les classements des chercheurs en question; Questions Abound as the College-Rankings Race Goes Global (by Ellen Hazelkorn); International Group Announces Audit of University Rankings.